MLA 2012: When Assessment Goes Bad

[x-posted at http://assessmentforlearning101.wordpress.com/]

On the first day of MLA 2012 I attended “Assessing Assessment(s),” chaired by Jeanne A. Follansbee (Harvard), with talks by Donna Heiland (Teagle Foundation), John M. Ulrich (Mansfield University), and Eve Marie Wiederhold (George Mason). Reed Way Dasenbrock was unable to attend, which is a shame because I heard an excellent talk that he gave last year and was looking forward to his perspective on this issue. (I have also taught his essay from Falling into Theory in my “Critical Methods in Literary Study” class.)

All the papers were sharp and interesting, with Heiland considering the role of assessment in cultivating student learning, Ulrich reporting on the highs and lows of his institutional practice, and Wiederhold offering a vigorous critique.

But what really enlightened me at the panel was the Q&A, during which it became clear that there was a lot of really terrible assessment going on out there. One speaker described how an “assessment professional” had been hired at her institution to set the learning outcome goals for all the programs. Another reported that he regularly turned in a series of graphs charting student grades, much to the delight of local assessment administrators.

I had mostly assumed that everyone hated assessment because it is part of the paradigm shift described by Tagg and Barr from “Instruction” to “Learning” (a point discussed by Heiland), which pretty radically goes against the status quo and thus makes people anxious. (Maybe this goes back to Dave’s discussions of “threshold concepts.”) Further, I too hated it at first, as it seemed redundant and intrusive. Now, though, I see it as part of a potential change: away from counting credit hours (or, as my former provost used to say, “butts in seats”) or relying on student evaluations (or, as Roksa calls them, “student satisfaction surveys”), and toward new ways of emphasizing, appreciating, and thinking about learning itself as the goal, which in turn leads to thinking that there might be better ways to get there than counting things up, be they credit hours or survey scores. So while assessment has the reputation of bean counting, in fact we are currently wading through heaps of beans (credit hours; evaluation scores; grades; office hours; chairs bolted to the floor; multiple-choice tests) without even noticing them, as they have become so natural to our environment. In a true “culture of assessment,” there would be fewer beans.

It seems, though, that at some institutions assessment has not been part of a larger consideration of student learning but has instead become the evil bureaucratic exercise that many feared it would be.


4 responses to “MLA 2012: When Assessment Goes Bad”

  1. I don’t think any treatment of assessment can dismiss its danger of becoming an instrument of Weberian, bureaucratic rationalization. In the absence of faculty engagement, it almost always devolves into compliance-and-accountability regimes, since it doesn’t lead to better practice by the people doing the teaching. On the other hand, there are plenty of unreflective yet powerful economic and disciplinary forces in the academy that would argue against any kind of improvement in teaching practice. So any assessment should be able to answer the question, “how is this going to improve practice?”

    What I’ve decided about this topic is that assessment is only the first, information-gathering phase in a cycle that must include critique of existing practices and the creation and implementation of improvements, followed by further assessment of results. In and of itself, it does not necessarily represent “critical thinking” about instructional practices or institutional processes, though it must aspire to critical and reflective thinking if it is to become “assessment for learning.”

  2. Laura Rosenthal

    In fairness, every expert on assessment that I have read would say that assessment is itself about that critical process you describe, and that it doesn’t count as assessment unless it feeds back into some kind of thinking about improvement. In other words, for people like Peggy Maki or Linda Suskie, limiting your activities to the information-gathering phase is in fact not doing assessment, but something else. Even the accreditors who I have heard speak on this topic would agree. There seem, though, to be places where this is not happening.

  3. Sure. It’s called “compliance,” and it’s a standard bureaucratic response to top-down administrative or legislative initiatives, to minimize their effects and push changes to the edges of practice.

    What I would like to see, though, is more analysis of assessment that takes its institutional politics into account. Peggy Maki’s work does this, as does Adrianna Kezar’s, but I think the power differentials among administration, faculty, and assessment staff are really key for understanding how it does and doesn’t work.

  4. Laura Rosenthal

    Alas, “compliance.”
    You’re right that there needs to be more of that kind of analysis. I think a lot of us were trying to do that in the Teagle collection (including you), but it would be good to see more.