michael quinn patton unknowingly addresses the assessment debates in higher education, and tells us why accountability data is (almost) never used:

I’m having enormous fun with this classic argument about the hows and whys of program evaluation, which has lots of implications for higher education’s experience of “accountability”:

[Patton, Utilization-focused Evaluation, p. 88]

I’m impressed by the fact that Patton in 1978 is chiding fellow evaluators for ignoring the political and personal factors that help determine the shape, direction, and use of their studies. He argues that because evaluators would rather imagine themselves as “scientific researchers” than as participants in a political process, they engage in a process that wastes the time of all involved and ensures that no one uses the information gathered. Even if the present generation of evaluators has escaped this kind of scientism, however, it seems that many pundits, administrators, and especially politicians persist in this naive view of the role of “data” in “decision-making.”


4 responses to “michael quinn patton unknowingly addresses the assessment debates in higher education, and tells us why accountability data is (almost) never used:”

  1. Laura Rosenthal

    The whole issue of data that Carey addressed in your link about data sharing is still very tricky. I know of at least one group where institutions have agreed to share data among themselves, mainly to learn from it. But probably anything we have now is too crude to be used for accountability purposes.

  2. I think that’s right. Institutions don’t share because at bottom their leaders consider themselves engaged in a competitive, market-driven enterprise rather than a knowledge-driven one. But one issue that Patton highlights is that there are lots of factors governing the use of information, which most accountability initiatives ignore: these are related to highly specific circumstances of personality and power. Patton quotes Keynes to the effect that “there is nothing a government hates more than to be well-informed; for it makes the process of arriving at decisions more complicated and difficult” (1978 edn., p. 54). From my own vantage point in the university, this rings very true, and it reveals a lot about the limits on deliberation within a hierarchical organization like a university.

    The broader point that Patton makes is something Texas accountability politics has made me highly aware of: the models of knowledge and learning, but also of rational deliberation and decision-making, posited in most accountability initiatives are utterly reductive and unworkable. They are basically an ideological fantasy of how learning takes place, or “should” take place. However, one of the best places to begin contesting this highly ideological representation of learning would be in concrete, empirical accounts of how it really happens, and what kinds of effects certain proposed “improvements” actually have. In the context of an accountability movement driven now by right-wing suspicions of “reality” and the bureaucratic rationalities (including, of course, the university) that help to constitute it, facts, evidence, and arguments are all we have to defend higher ed. To some extent, that necessarily aligns those working in higher ed with the imperatives of assessment (though not accountability), to gain better knowledge of learning and to put this knowledge to better use.

  3. Laura Rosenthal

    That’s a good point. The trick is to figure out how to persuade colleagues of their interest in the face of stunning misperceptions and some really bad practices. But I agree that offering empirical accounts is the best way to counter ideologically driven forms of accountability. Maybe, though, those empirical accounts would themselves be a form of accountability, although a more genuine one.

  4. Oh sure. Perhaps I’m falling back into the naivete Patton cautions against, with my own confidence that better data would help us construct better arguments against an ideology that travesties learning. Or perhaps even a carefully constructed, self-initiated account could be misused by a bad administrator or a right-wing think tank with an agenda. It’s hard to anticipate those kinds of political moves, and I’m not sure it’s always productive to communicate that defensively. I suppose I would concede that there are legitimate and illegitimate forms of accountability, which center on outside stakeholders’ understandable concerns about whether our publicly funded institutions are being run well. The problem is that this kind of scrutiny is political, selective, and, as we’ve seen, often motivated by a deep sense of the illegitimacy of the academic enterprise generally.