Perhaps because the semester is over and faculty are thinking about evaluations more generally, some interesting threads on assessment have appeared lately in In Socrates’ Wake, Easily Distracted, and Blogenspiel. One of the first things that struck me about these posts was how differently this language of assessment and accountability plays out depending on where you stand in the hierarchies of discipline and professional status: public universities vs. private colleges, Research 1 vs. everyone else, tenured faculty vs. tenure-track faculty, lecturers, or grad students. Assessment looks different to people in each of these locations, and offers different opportunities and risks. This shouldn’t surprise anyone, but it should make us bite our tongues and think a little harder before speaking out on behalf of others. After all, though assessment is usually framed as an insiders/outsiders debate, the interests of those “inside” are in fact multiple and may well conflict. Some quick takeaways:
1. Another Damn Medievalist notes that
all of the assessment of us as faculty is predicated on the idea that students are here to learn. It’s based on the idea that students do learn, and remember. Even though we like to cringe, because so much of assessment and accreditation seems to focus on whether students are getting value for money, it *is* also based on the idea we try to impress upon our students: what we teach is important, and a BA/BS is not just a piece of paper that means a better entry-level job. But what if that’s all the students want? What if they aren’t so worried about the experience, about savouring and remembering what we try to teach them? What if, as one of my students said to me, they just aren’t that into it and really just didn’t feel like getting the A they could have got, because all they needed to graduate was 70%? For most of my students at SLAC, that seems to be the rule.
So assessment gauges individual teacher performance on the basis of student performance: so far so good, but this mimetic approach may ignore how systemic disparities in student motivation and skills interfere with the model. As she notes, “All kidding aside, how do we assess places like SLAC in a way that is fair to the good faculty and to the good students? If a campus tries to push a reputation as ‘selective’, then how do we integrate the results for those students who came in on waivers?” My answer would be that such an assessment will not be “fair” unless these factors are somehow represented as well. But this inevitably demands that those being assessed work with those doing the assessing, to make sure the questions, criteria, etc. reflect local circumstances. And in my experience, those below a certain threshold of job security or status will never have a voice in the assessment process.
2. The ISW discussion was valuable, I think, because it introduced the distinction between standardized assessments (multiple-choice tests designed for all students of a particular discipline, for example) and quantitative assessments (rubrics and the like that would allow faculty to exercise qualitative judgments that could in turn be translated into quantitative measures). Since both philosophy and literature lack a definitive canon on which students could be universally tested, I think the assumption of standardization is the biggest problem with these kinds of proposals, which come most often from non-academics. It may be different in other disciplines, though. Those who have spent time in the sciences would know more about this than I do.
3. Finally, Tim Burke’s discussion advocates pretty convincingly for more transparency and demystification of higher ed, and argues against a “regulatory machine administering tests, enforcing rigorous common standards, hauling professionals up before a bureaucratic star chamber every four years.” Fair enough, but which do you think your state legislature would prefer?