why i hate writing learning objectives*

[*even when they’re labeled “course goals”]

I’ve been puzzling over the course goals, etc., for a new grad seminar in pedagogy I’m teaching this spring, and I think I’ve finally pinpointed the single most frustrating aspect of the language of assessment for me, especially when it’s used to guide, direct, or evaluate instruction. It’s the reversal of priorities it seems to entail, when assessment drives pedagogical decision-making instead of the other way around.

Assessment, if it’s viewed as something that manages or directs pedagogy, threatens to take faculty away from all the stuff that we love and value in teaching (e.g., literature, disciplinary research, students, discussions, interactions, etc.) towards stuff that we may never love, or only barely value (e.g., quantification, social science notions of data and evidence, standardized teaching methodologies, bureaucratic protocols of compliance).

Yet even with these caveats, I still believe that these kinds of assessment exercises have the potential to improve our instruction, so long as they’re conceived as another form of feedback for faculty to use in the creation and revision of our courses. And I think that any course about pedagogy nowadays needs to introduce future teachers to the complex relation of assessment to one’s classroom practice.

Some of the best, most lucid discussions of these issues can be found in Erickson et al.’s book, Teaching First-Year College Students, which is designed to help instructors of first-year students understand the sheer difficulty and significance of this transition for students. But the book is comprehensive enough to help new teachers at any level understand the challenges of teaching and learning in contemporary universities.

So here’s the paragraph I was using to think about my own learning objectives/course goals, in Erickson et al., p. 71:

[Erickson et al., Teaching First-Year College Students, p. 71]

The process of drawing up these course goals begins by shifting the focus from the person teaching the course to the students taking it. In other words, we move from “the course will do X” to “students will be able to do Y.” This is a difficult but useful shift in perspective that I think most teachers would endorse.

What is truly counter-intuitive is the major shift identified in the quote: “indicate the behavior expected, not the state of mind students will be in” [emphasis mine]. In other words, what outward behavior or activities manifested by students would provide visible, or even measurable, evidence that students are indeed “knowing, understanding, or thinking” the content of your course? What kinds of evidence can you provide that would corroborate your intuition that student A knew, understood, or thought better than student B?

I believe that experienced teachers intuitively regard “student thinking” as something that they are able to engage with, understand, assess, or try to improve, even if those intuitions and that experience can be shown to be fallible.

Redirecting teachers’ attention strictly to student behavior, however, takes us away from our perceptions of students’ thinking, and often forces our attention onto the lowest-level tasks and students’ demonstrated acts of compliance, which are of course the easiest parts of student activity to measure. The extent to which we demand that students “know, understand, or think” seems to vanish from this minimalist depiction of learning. And higher ed teachers are particularly baffled by this kind of goal displacement, when discussions of “critical thinking” or “higher-level learning” ignore disciplinary “ways of thinking” that remain tacit or opaque to outsiders.

Unless really ingenious methods of indirect observation are put into effect, the minimal, behaviorist picture of learning is where most assessments of the learner and learning remain. They essentially inform us of the number of students attending classes and the number of hours they filled seats and drew upon “resources,” meaning instructor time and possibly attention. In some sense, the “competency movement” represents the instructional model that this kind of assessment and its advocates would move towards, but there are real questions about whether it can be done credibly enough to compete with more traditional educational approaches. The biggest difficulty for all these externally-focused programs of assessment, though, is that they are uninterested in the quality of the interactions or learning that would define an experience as “education” in our usual sense of the term. There is nothing transformative, or even potentially transformative, in these kinds of experiences.

The most infuriating aspect of this situation, however, is when this behaviorist language of assessment, once it has rendered most higher-level work invisible, demands that something called “critical thinking” or “upper-level learning” be taught and assessed using the methods least suited to generating or observing them. In this scenario, true evaluations of success or failure are essentially irrelevant to the system being built, because neither student nor teacher has the power to alter it.

Having said all this, I agree with Erickson that an integration of pedagogy with assessment (via strategies like provisional, instructor-written course goals) remains a worthwhile activity, because it helps clarify to ourselves and our students what we’re attempting to do. In other words, this integration of pedagogy and assessment should be pursued to the extent that it improves our teaching or our students’ experiences of learning. Anything beyond that feels like a displacement of our genuine goals and values regarding teaching.


michael quinn patton unknowingly addresses the assessment debates in higher education, and tells us why accountability data is (almost) never used:

I’m having enormous fun with this classic argument about the hows and whys of program evaluation, which has lots of implications for higher education’s experience of “accountability”:

[Patton, Utilization-focused Evaluation, p. 88]

I’m impressed by the fact that Patton in 1978 is chiding fellow evaluators for ignoring the political and personal factors that help determine the shape, direction, and use of their studies. He argues that because evaluators would rather imagine themselves as “scientific researchers” than as participants in a political process, they engage in a process that wastes the time of everyone involved and ensures that no one uses the information gathered. Even if the present generation of evaluators has escaped this kind of scientism, however, it seems that many pundits, administrators, and especially politicians persist in this naive view of the role of “data” in “decision-making.”


mla round-up: learning from assessment

Since Eleanor asked, I’ll briefly report on the exchanges we had at Donna Heiland’s and Laura Rosenthal’s MLA panel, which was set up in conjunction with a Teagle Foundation collection of essays of the same title that they’re co-editing. Along with Laura R. and Donna, I participated with Laura Mandell and John C. Ottenhoff. These summaries are of course my own, so if the participants or audience members have any corrections, let me know, and I’ll fix them immediately.
