Since Eleanor asked, I’ll briefly report on the exchanges at Donna Heiland’s and Laura Rosenthal’s MLA panel, which was set up in conjunction with a Teagle Foundation collection of essays of the same title that they’re editing together. Along with Laura R. and Donna, the panelists were Laura Mandell, John C. Ottenhoff, and myself. These summaries are of course my own, so if the participants or audience members have any corrections, let me know and I’ll fix them immediately.
Laura M. went first and discussed the challenges she had experienced aligning course objectives with overall departmental or program objectives. Most off-putting was a software program being developed at Miami that had the benefit of forcing instructors to select and focus upon their objectives, but that also had the potential to become an instrument of surveillance for administrators and others apart from the instructor.
John spoke next, discussing the dissatisfaction he often felt with the time he devoted to discussion in the classroom, and the extent to which he and other lit professors he observed simply assumed their discussions were having a pedagogical benefit. He was especially concerned about how we could assess our students’ reading comprehension, and whether our usual practice of discussing texts contributed in any way to improving it.
My talk was about the two aspects of assessment we see as faculty in the contemporary research university: the reflexive language of top-down “improvement” via standardization, and the actual desires of faculty and students to engage in institutional improvement and enhanced learning. My argument was that much of our assessment talk is necessarily rhetorical and tactical, done essentially to justify ourselves to the public, but that if conducted in a properly transparent and faculty-driven way, it could actually produce the kinds of surprises and insights that contribute to genuine faculty refinement of their teaching practices.
Laura R. talked about her experience at MD with a number of assessment projects at the departmental and college levels, and what it taught her about her own (our own) discipline of literary studies. What she appreciated most was the way that discussing literary studies with scholars from other fields forced her to articulate disciplinary frameworks and assumptions she otherwise would never have had to describe. She felt that this rhetorical work of explaining her own discipline to outside scholars made her own task in the classroom that much clearer, though she wished that scholars in literature could explain and justify their disciplinary activities, and the objects of literary study, more effectively to other scholars and to the general public.
Finally, Donna called attention to the excellent resources on assessment found on the Teagle website for those interested in further reflection on these problems, and summarized what she had learned as someone who had evaluated and funded many different assessment projects over the years. These included insights like making sure that assessment never becomes an end in itself but remains a means to an end; that quantitative study is the beginning of the discussion, not the end of the argument; that institutions should actually use the data they collect; and that this kind of work succeeds when it is initiated and seen through by the faculty.
Many of the audience responses focused on John’s talk and on how to evaluate the relation of discussion to reading comprehension. There was also some discussion of the MLA/Teagle report of a year or so ago, which some audience and panel members (myself included) thought was a little too focused on the traditional English department organized by historical period, without enough attention to the role of rhet/comp, creative writing, and the other kinds of scholars currently working within the majority of departments. It would be interesting to know, though, what percentage of departments nowadays corresponds to the Teagle/MLA template.
Finally, my own response to these very suggestive papers was that, as usual, we might have spent more time talking about the corporatization of the university that is driving such accountability-talk, and the extent to which we were internalizing those values in our own suggestions. Of course, focusing on this corporatization instantly fills everyone with deep despair about the future of higher education, so maybe that’s why we sidestepped the issue. (This is my ambivalence talking, so don’t mind me.)
One other thought, which occurred to me over the next day or so: the increasing importance of assessment for humanities departments argues either for some kind of retraining of in-house humanities people to help do this work inside departments and colleges, or for longer-term collaborative relationships with experts who could do this kind of qualitative work with better insight than the rather random arrangements you often find inside colleges now. This is an insight I gained from the wonderful Digital Collaborations talk I heard the following day, but that’s for another post.