stalemate UPDATED

This morning I saw a post from Paul Corrigan about the assessment movement’s real impact, which amounted to “widely observed rituals of compliance” but little genuine change. The real focus of both the post and the Ted Marchese essay it took its title from was the continuing stalemate between assessment and accountability in higher ed. This is caused in part by everyone talking past one another. Assessment experts tend to regard their own activity as a scholarly enterprise that unaccountably gets abused by the administrators who implement it. Faculty hear most assessment talk as either meaningless College of Ed jargon or administrators’ pernicious attempts to micromanage the work conducted in their classrooms. Administrators regard it chiefly as something done to satisfy trustees or politicians, and try to think of it as little as possible otherwise. So yes, no one understands anyone else here, but that’s not why the stalemate has lasted almost as long as the assessment movement itself.

What Corrigan doesn’t seem to recognize is that these three groups do not have equal voice in this matter, because it is the administrators, as the folks who hire the assessment experts as staffers or consultants, and who “manage” the faculty, who have decided time and again to define and pursue assessment largely as accountability, standardization, and outward compliance. There is a political economy to the way that higher education evaluates itself, and I believe that both assessment experts and disciplinary faculty need to understand how assessment and accountability work within the emerging regimes of neoliberal management of public higher education.

Christopher Newfield, in the important piece I just linked to, spells out the strange imperviousness of administrators to the knowledge extracted by the accountability schemes they use to manage faculty and student interactions. Their imperviousness derives from their recent self-definition as managers rather than faculty members:

In contrast to professional authority, which is grounded in expertise and expert communities, managerial authority flows from its ties to owners and is formally independent of expertise.  Management obviously needs to be competent, but competence seems no longer to require either substantive expertise with the firm’s products or meaningful contact with employees.  The absence of contact with and substantive knowledge of core activities, in managerial culture, function as an operational strength.  In universities, faculty administrators lose effectiveness when they are seen as too close to the faculty to make tough decisions.

In the upside-down world of managerial culture and Christensen’s fantasies of “disruption,” paying too close attention to the information collected by others, or seeming too responsive to what it tells you about students or faculty, is a sign of weakness, not strength. And how else can we read the last 10 years of developments in public higher education, except as a demonstration of these principles in action?

So how might we redirect the discussion back toward improvements in learning, for both students and faculty?  One possibility suggested by Newfield is to tie improvement back to the notion of shared governance, and regard good governance and faculty communities of expertise as a necessary but not sufficient condition for improved teaching and learning.  And while we’re discussing research, I would love to see someone analyze the impact that governance has on teaching and learning.

DM

(NOTE: Those interested in discipline-specific approaches to assessment in literature departments should simply go to Laura Rosenthal and Donna Heiland’s Teagle Foundation collection to see a full range of responses to this problem)

(UPDATE, NOTE: Dr. Randi Gray Kristensen directed me to this article, which laid out a similar argument in 1999: Cris Shore and Susan Wright, “Audit Culture and Anthropology: Neo-Liberalism in British Higher Education,” The Journal of the Royal Anthropological Institute, Vol. 5, No. 4 (Dec. 1999), pp. 557-575; http://www.jstor.org/stable/2661148 )

(2nd UPDATE, NOTE: Also found this, an illuminating comparative, ethnographic discussion of “audit culture” and “neoliberalism” in various national contexts: Andrew B. Kipnis, “Audit cultures: Neoliberal governmentality, socialist legacy, or technologies of governing?” American Ethnologist, Vol. 35, No. 2 (May 2008), pp. 275–289; http://onlinelibrary.wiley.com./doi/10.1111/j.1548-1425.2008.00034.x/full)

9 responses to “stalemate UPDATED”

  1. Paul T. Corrigan

    The relationship between the asymmetric power structures in higher education, on one hand, and the quality of teaching and learning that happens at an institution, on the other, is tricky to untangle. I appreciate your comments on it. (Also, thanks for reading and citing my post.)

    • Dave Mazella

      Well, that’s a piece of research I’d be eager to read. And thanks for the provocation, which led me to Anne’s fine paper.

  2. Pingback: Some essays I would like to send my colleagues | coldhearted scientist وداد

  3. Dave Mazella

    Sample sentence from Shore/Wright: “The audit system appears to rely largely on fear, expectations of compliance and a lack of imagination regarding the possibility of alternatives. To seize the agenda requires an alternative semantics of accountability and a knowledge of power.”

  4. Dave Mazella

    Money quote from Kipnis: “The three audit cultures briefly depicted in this article share many features. First of all, they all attempt to devise numeric performance measures. In doing so, they all to a greater or lesser degree distort the phenomena they purport to measure. The number of newspaper subscriptions ordered says little about the “ideological consciousness” of cadres. The number of articles published in education journals or even the test scores of students does not directly reflect the quality of teaching, and, even in the case of measuring something as seemingly simple as processing check and credit card payments, the number of payments processed cannot directly indicate employee efficiency. In all of these cases, the distortions and irrationalities brought about by the false equivalence between what was measured and the qualities that were supposedly indicated by that measure led to dissatisfaction and complaints by those whose performance was measured. In all of these cases, employees took collusive measures with other employees to promote their chances of receiving positive reviews. The township cadres colluded with village cadres; the bank clerks colluded with each other; and the principal of the primary school, with the help of his teachers, students, and full-time audit specialists, put on a performance designed to sway the auditors more than educate the students. Ironically, and consequently, regardless of whether the measures were designed to individuate workers, they also always produced particular forms of sociality and related, nonindividuated forms of personhood.”

    • Paul T. Corrigan

      Wow. What a quote.

      In social and natural sciences, one often has to measure something other than what one wants to measure, using proxies, because the real object cannot be measured or cannot be measured easily. http://en.wikipedia.org/wiki/Proxy_(statistics) That’s all well and good as a methodology, especially when one can identify a close enough proxy. The problem comes in when the measurement of the proxy becomes all-important and when it is forgotten that it is a proxy. Then those being evaluated will put their efforts into improving the proxy, which often renders it useless as a proxy, i.e., the correlation erodes because the context in which it was useful has been manipulated by its use. I think that this is an example of Goodhart’s law in action: “When a measure becomes a target, it ceases to be a good measure.” http://en.wikipedia.org/wiki/Goodhart%27s_law

      Or, even better, Campbell’s law, applied thusly to educational assessment: “achievement tests may well be valuable indicators of general school achievement under conditions of normal teaching aimed at general competence. But when test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways.” http://en.wikipedia.org/wiki/Campbell%27s_Law
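
      To make the correlation-erosion point concrete, here is a minimal simulation sketch in Python. All the numbers (the noise levels, the 0.2 weight, the sample size) are invented for illustration and are not drawn from Goodhart’s or Campbell’s work; the sketch just restates the claim as arithmetic: once the score itself becomes the goal, it mostly reflects score-chasing rather than the competence it was meant to indicate.

      import random

      random.seed(0)

      def measured_score(competence, measure_is_target):
          """Hypothetical test score for one classroom (all parameters invented)."""
          if not measure_is_target:
              # Normal teaching aimed at general competence: the test samples it, plus noise.
              return competence + random.gauss(0, 0.5)
          # Once the score is the goal, most of it reflects test prep and score-chasing,
          # and only a little still tracks the competence it was supposed to indicate.
          return 0.2 * competence + random.gauss(0, 1.0)

      def pearson(xs, ys):
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
          sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
          sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
          return cov / (sx * sy)

      competence = [random.gauss(0, 1) for _ in range(500)]
      before = [measured_score(c, measure_is_target=False) for c in competence]
      after = [measured_score(c, measure_is_target=True) for c in competence]

      print("score vs. competence, measure not yet a target:", round(pearson(competence, before), 2))
      print("score vs. competence, measure now the target:  ", round(pearson(competence, after), 2))

      The first correlation comes out high and the second drops sharply, which is Campbell’s warning in miniature: the number still gets reported, but it no longer stands in for the learning it was supposed to measure.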

  5. Dave Mazella

    Oh, yeah. Goal displacement is the Achilles’ heel of the entire accountability project, since human agents have a feedback loop (rewards and punishments) that tells them how to identify and pursue the goal collusively as a target. All this seems entirely familiar and expected, except, for whatever reason, to the “naive” advocates of accountability or audit culture (e.g., Arne Duncan), who remain completely incapable of acknowledging the predictable problems with this model. But I haven’t seen nearly enough discussions of these problems in educational research, apart from people like Sara Goldrick-Rab. Or maybe they haven’t had any impact?

  6. Laura Rosenthal

    I have one less-theoretical observation about outcomes assessment that actually dovetails with the above conversation. Over the years I have noticed (and this is purely anecdotal, but based on conferences, contacts with different campuses, etc.) that the work of assessment itself has a tendency to be pushed further down the hierarchy. When this happens, TT faculty tend to think less of it and higher administrators turn it over to people who have little authority within the institution. I have seen some wonderful exceptions to this, but I have also observed an unfortunate tendency to find the cheapest way to get it done. The knee-jerk reaction at its introduction was always “unfunded mandate,” and as much as I still think that outcomes assessment differs significantly from NCLB (the reference of the previous phrase), the “unfunded” part is a crucial problem and it won’t be effective until resources get invested in reflection on student learning.

  7. Dave Mazella

    Yeah, I absolutely agree with this: you can tell a lot about the real priority of something by where the responsibility gets assigned. If the person at the top considers it important, then it will be a direct report. Assessment stuff tends to go in and out of its own little shop in universities, and units and their leaders tend not to want queries or results that could cause too much conflict or self-doubt.

    When it’s well done, though, it’s an inherently recursive process, with departments having at least some input into what info gets collected, and with some opportunity for the department to discuss a department-wide response. The get-it-done, rush-it-through approach is a good sign that the info will never get used.