Category Archives: assessment

why i hate writing learning objectives*

[*even when they’re labeled “course goals”]

I’ve been puzzling over the course goals etc etc for a new grad seminar in pedagogy I’m teaching this spring, and I think I finally pinpointed the single most frustrating aspect of the language of assessment for me, especially when it’s used to guide, direct, or evaluate instruction. It’s the reversal of priorities it seems to entail, when assessment drives pedagogical decision-making instead of the other way around.

Assessment, if it’s viewed as something that manages or directs pedagogy, threatens to take faculty away from all the stuff that we love and value in teaching (e.g., literature, disciplinary research, students, discussions, interactions etc) towards stuff that we may never love, or only barely value (e.g., quantification, social science notions of data and evidence, standardized teaching methodologies, bureaucratic protocols of compliance).

Yet even with these caveats, I still believe that these kinds of assessment exercises have the potential to improve our instruction, so long as they’re conceived as another form of feedback for faculty to use in the creation and revision of our courses. And I think that any course about pedagogy nowadays needs to introduce future teachers to the complex relation of assessment to one’s classroom practice.

Some of the best, most lucid discussions of these issues can be found in Erickson et al.’s book, Teaching First-Year College Students, which is designed to help instructors of first-year students understand the sheer difficulty and significance of this transition for students. But the book is comprehensive enough to help new teachers at any level understand the challenges of teaching and learning in contemporary universities.

So here’s the paragraph I was using to think about my own learning objectives/course goals, in Erickson, et al., p. 71:

[image: quoted paragraph from Erickson et al., p. 71]

The process of drawing up these course goals begins by moving the focus away from the person teaching the course to the students taking the course. In other words, we move from “the course will do X” to “students will be able to do Y.” This is a difficult but useful shift in perspective that I think most teachers would endorse.

What is truly counter-intuitive is the major shift identified in the quote: “indicate the behavior expected, not the state of mind students will be in” [emphasis mine]. In other words, what outward behavior or activities manifested by students would provide visible, or even measurable, evidence that students are indeed “knowing, understanding, or thinking” the content of your course? What kinds of evidence could you provide that would corroborate your intuition that student A knew, understood, or thought better than student B?

I believe that experienced teachers intuitively regard “student thinking” as something that they are able to engage with, understand, assess, or try to improve, even if our intuitions and experience can be shown to be fallible.

Redirecting teachers’ attention strictly to student behavior, however, takes us away from our perceptions of students’ thinking, and often focuses our attention on the lowest-level tasks and students’ demonstrated acts of compliance, which are of course the easiest parts of student activity to measure. The extent to which we demand that students “know, understand, or think” seems to vanish from this minimalist depiction of learning. And higher ed teachers are particularly baffled by this kind of goal displacement, when discussions of “critical thinking” or “higher-level learning” ignore disciplinary “ways of thinking” that remain tacit or opaque to outsiders.

Unless really ingenious methods of indirect observation are put into effect, this minimal, behaviorist picture of learning is where most assessments of the learner and learning remain. They essentially inform us of the number of students attending classes and the number of hours they filled seats and drew upon “resources,” meaning instructor time and possibly attention. In some sense, the “competency movement” represents the instructional model that this kind of assessment and its advocates would move towards, but there are real questions about whether it can be done credibly enough to compete with more traditional educational approaches. But the biggest difficulty for all these externally focused programs of assessment is that they are uninterested in the quality of the interactions or learning that would define an experience as “education” in our usual sense of the term. There is nothing transformative, or potentially transformative, in these kinds of experiences.

The most infuriating aspect of this situation, however, is when this behaviorist language of assessment, once it has rendered most higher-level work invisible, demands that something called “critical thinking” or “upper-level learning” be taught and assessed using the methods least suited to generating or observing them. In this scenario, true evaluations of success or failure are essentially irrelevant to the system being built, because that system is outside the control of the student or teacher to alter.

Having said all this, I agree with Erickson that an integration of pedagogy with assessment (via strategies like provisional, instructor-written course goals) remains a worthwhile activity, because it helps clarify to ourselves and our students what we’re attempting to do. In other words, this integration of pedagogy and assessment should be pursued to the extent that it improves our teaching or our students’ experiences of learning. Anything beyond that feels like a displacement of our genuine goals and values regarding teaching.

DM


rosenthal and heiland’s literary study, measurement, and the sublime, reviewed

Since we’re talking about the value of curricular discussions, even when these reveal fundamental disagreements, contested terms, and hidden curricula, I thought I would point out that Laura Rosenthal and Donna Heiland’s collection Literary Study, Measurement, and the Sublime has just been discussed (and reviewed) in the latest issue of Change.

The writer of this article, Pat Hutchings, endorses the approach of Rosenthal, Heiland and many of this volume’s contributors (including myself), which is to use assessment in all its varieties to inquire into what is singular and distinctive about literary studies in relation to other fields.

Of course, this approach contrasts strongly with how assessment has generally been understood and practiced for many years in American higher education, as Hutchings admits.  She observes that

assessment’s focus on cross-cutting outcomes makes perfect sense, but it has also meant that the assessment of students’ knowledge and abilities within particular fields, focused on what is distinctive to the field, has received less attention. And that’s too bad.

It’s too bad because we do, after all, value what our students know and can do in their major area of concentration and because students themselves typically care most about achievement in their chosen field of study. But it’s also too bad because anchoring assessment more firmly in the disciplines may be a route to addressing its most vexing and enduring challenge: engaging faculty in ways that lead to real improvement in teaching and learning.

What Rosenthal and Heiland’s volume shares with the threshold concepts framework Kathryn and I have been discussing is the fact that, as one researcher notes, “getting academics to think about what is critical to learn in their subject is easier than getting them to think about learning outcomes” (7). It also addresses a persistent problem in persuading literature faculty to adopt both assessment and student-centered learning: these discourses’ consistent “mortification of the teacherly self” (Cousins, 5), or “erasure of teacher expertise” (6).

Such a “restoration of dignity for academic teachers” (6) also helps to address the unsettling suspicion voiced by a number of contributors to the Rosenthal and Heiland volume: that the subjective, first-person experiences explored in literature courses would have to be “translated” via social science methodology into something more valued and “objective” before they could be taken seriously by audiences inside or outside the university community. Beginning, though not ending, with the disciplinary perspective of practitioners and experts seems like one of the best ways to address this disciplinary imbalance.

DM