mla round-up: learning from assessment

Since Eleanor asked, I’ll briefly report on the exchanges we had at Donna Heiland’s and Laura Rosenthal’s MLA panel, which was set up in conjunction with a Teagle Foundation collection of essays of the same title that they are editing together.  Along with Laura R. and Donna, the panelists were Laura Mandell, John C. Ottenhoff, and me.  These summaries are of course my own, so if the participants or audience members have any corrections, let me know, and I’ll fix them immediately.

Laura M. went first and discussed the challenges she had experienced aligning course objectives with overall departmental or program objectives.  Most off-putting was a software program being developed at Miami that had the benefit of forcing instructors to select and focus on their objectives, but that also had the potential to become an instrument of surveillance for administrators and others apart from the instructor.

John’s talk was next; he discussed the dissatisfaction he often felt with the time he devoted to discussion in the classroom, and the extent to which he and other lit professors he observed simply assumed their discussions were having a pedagogical benefit. He was especially concerned about how we could assess our students’ reading comprehension, and whether our usual practice of discussing texts contributed in any way to improving it.

My talk was about the two aspects of assessment we see as faculty in the contemporary research university: the reflexive language of top-down “improvement” via standardization, and the actual desires of faculty and students to engage in institutional improvement and enhanced learning.  My argument was that much of our assessment talk is necessarily rhetorical and tactical, done essentially to justify ourselves to the public, but that if conducted in a properly transparent and faculty-driven way, it could actually produce the kinds of surprises and insights that contribute to genuine faculty refinement of their teaching practices.

Laura R. talked about her experience at Maryland on a number of assessment projects at the departmental and college levels, and what it taught her about her own (our own) discipline of literary studies.  What she appreciated most about the experience was the way that discussing literary studies with other scholars forced her to articulate disciplinary frameworks and assumptions she otherwise would never have had to describe.  She felt that this kind of rhetorical work of explaining her own discipline to outside scholars made her own task in the classroom that much clearer, though she wished that scholars in literature could explain and justify their disciplinary activities, and the objects of literary study, more effectively to other scholars and to the general public.

Finally, Donna called attention to the excellent resources on assessment found on the Teagle website for those interested in further reflection on these problems, and summarized what she had learned as someone who had evaluated and funded many different assessment projects over the years.  These included insights like making sure that assessment never becomes an end in itself, but remains a means to an end; that quantitative study is the beginning of the discussion, not the end of the argument; that institutions should use the data they get; and that this kind of work succeeds when it is initiated and seen through by the faculty.

Many of the audience responses focused on John’s talk, and how to evaluate the relation of discussion to reading comprehension. There was also some discussion of the MLA/Teagle report of a year or so ago, which some audience and panel members (including me) thought was a little too focused on the traditional, historical-period-organized English department, without enough attention to the role of rhet/comp and creative writing and the other kinds of scholars currently working within the majority of departments.  It would be interesting to know, though, what percentage of departments nowadays corresponds to the Teagle/MLA template.

Finally, my own response to these very suggestive papers was that, as usual, we might have spent more time talking about the corporatization of the university that is driving such accountability-talk, and the extent to which we are internalizing those values in our own suggestions.  Of course, focusing on this corporatization instantly fills everyone with deep despair about the future of higher education, so maybe that’s why we sidestepped the issue.  (This is my ambivalence talking, so don’t mind me.)

One other thought, which occurred to me over the next day or so, is that the increasing importance of assessment for humanities departments argues either for some kind of retraining of in-house humanities people to help do this kind of work inside departments or colleges, or for longer-term collaborative relationships with people whose expertise would allow them to do this kind of qualitative work with better insight than the rather random arrangements you often find inside colleges.  This is an insight I gained from the wonderful Digital Collaborations talk I heard the following day, but that’s for another post.

DM


25 responses to “mla round-up: learning from assessment”

  1. Eleanor Shevlin

    Dave,

    Many thanks for a thorough account of this MLA session. The presentations cover and raise a rich array of issues about assessment, including its challenges and shortcomings. Miami’s requirement to use a standardized template of objectives dovetails, in my mind, with Dave’s concerns about the corporatization of the university. And if humanities scholars were not involved in constructing this menu of objectives (and I sense that they were either not involved or not well represented), then the resulting assessment process would seem to have little hope of making the data anything more than an end product. Both John’s and Laura R.’s presentations seem relevant to arguing against Miami’s practice.

    Dave’s call for more collaborative work is something that my institution seems to be fostering at the university, college, and department levels (and Laura R.’s comments about what she gained from having to explain our discipline to scholars in other fields suggest that Maryland has set up similar avenues for exchange and collaboration). Besides offering a wealth of resources, these meetings have emphasized what a department does with the data–that is, the action plans created in response to the data and aimed at “closing the loop” (the lingo for improving one’s courses and program).

    As for Dave’s remarks about too much focus on traditional periodization at the expense of rhet/comp/creative writing: we revamped our BA and BSEd programs in English in Fall 2007, and the revised program offers a choice of pursuing one of two tracks–“literatures” or “writings.” Moreover, each student takes 2 (if in the BA) or 3 (if in the BSEd) cross-over courses from the track he or she is not pursuing. Heading a team, I have been radically revising all our assessment tools for the new program; along the way, we have been involving the whole department in the process. The tools are being piloted this year (our first class under the new programs will graduate in 2011). Going forward, I have recommended that assessment tools be created as part of the proposal process whenever new programs, minors, or courses are devised.

    In any case, we have been working hard to devise program assessment instruments that do justice to the new writings track. We have based one of the tools on our three core courses that all majors take and that aim to prepare students for the intermediate and seminar courses regardless of which track they are following. It’s difficult work, but as Laura R. says, it offers insights and illuminations about our discipline and its practices. The multiple-choice instrument, one of three program assessment tools, is designed to assess, foremost, student proficiency in applying skills gained (or honed) in the Core courses. Some questions are aimed at assessing comprehension and will serve as a test of sorts of John’s concern about whether our discussion-based approaches help develop student reading abilities. The data we had been collecting previously really told us little and thus relegated assessment to busy work rather than yielding information that would be truly useful in determining what we are doing well and where improvements are needed.

    ES

  2. Dave Mazella

    Thanks, Eleanor. Laura M. can speak more knowledgeably about the situation at Miami than I ever could, but my impression is that everything there was done in good faith; it’s simply very difficult to create these kinds of instruments without favoring the quantitative disciplines or creating at least _potentially_ panoptic systems of accountability.

    As for my own talk, I _do_ think that assessment offers a very interesting possibility for cross-disciplinary collaboration among faculty-members, one that could encourage humanities people to overcome their fears of the quantitative, and vice versa. And I am completely convinced that those doing the work of teaching need to be in charge of the assessment, to make sure we really are measuring what’s important, and that the results are taken into account with further refinement to the curriculum and teaching practices.

  3. Eleanor Shevlin

    Thanks, Dave. I suspect that the standardization process at Miami was indeed a good-faith effort, and perhaps also initiated in an effort to make assessment less burdensome to faculty. Yet, from an outsider’s perspective this approach seems to resemble aspects of the business model of education increasingly being applied to higher ed.

    As for collaboration across disciplines and making assessment a faculty-driven process, I agree wholeheartedly with you, Dave. The idea for our multiple-choice instrument emerged from workshops I attended in which assessment plans from other departments were exchanged.

    • The Teagle website I mentioned in the post has some great resources, and one of the best is this essay by Peter Ewell, whose work was mentioned repeatedly at the panel. Ewell talks at length about the affinities and the contradictions between the “assessment as accountability” and “assessment as institutional improvement” agendas. Both require at least some degree of agreement, to the extent that both demand some consensus about what to look for and how to define it, but obviously the accountability-talk we’ve all heard is much more driven to impose uniformity on what people do. So the question is, how do you accomplish one version of assessment without getting coopted by the other? The challenge is to involve the various disciplines and approaches without turning it into some kind of mush, some kind of intellectual Esperanto without any application or any validity for anything in particular . . .

  4. Eleanor Shevlin

    Ewell’s excellent overview of the two strands that have informed assessment historically–accountability and improvement–is certainly accurate, but “accountability” is not necessarily inherently negative. Merriam-Webster defines “accountability” as “an obligation or willingness to accept responsibility or to account for one’s actions,” and this definition seems quite akin to what Laura R. perhaps appreciated most about engaging with others from across her campus on assessment:

    the way that discussing literary studies with other scholars forced her to articulate disciplinary frameworks and assumptions she otherwise would never have had to describe. She felt that this kind of rhetorical work of explaining her own discipline to outside scholars made her own task in the classroom that much clearer…

    If we think of accountability as a need to explain [not justify] our “actions” [the teaching of various subjects within the discipline of English] in order to discover ways to improve our practices and results, the two concepts can arguably be viewed as working in tandem rather than in opposition to each other.

    Ewell’s suggestion to begin assessment with a focus on prerequisite or foundational courses is sound and can easily be translated across disciplines. Indeed, the focus on these early, foundational courses in another department’s assessment plans is what led me to use our own three core courses as a basis for my department’s plan.

  5. Laura Rosenthal

    Hi All,

    A couple of points:
    Re Laura M’s discussion of what’s happening at Miami: I could be wrong, but I did not get the impression that all departments were being required to use a particular software program. I think she was introducing it to us as a potentially helpful tool, but not without flaws.

    Re Peter Ewell: I actually used this article as the foundation for the Delegate Assembly discussion of assessment a few years ago. To me, the most important point in the article for this discussion is that assessment to improve learning long predates new pressures for accountability and has its own history and its own experts. Assessment thus does not necessarily have anything to do with accountability, although it can certainly be used for this. My sense is that when many universities first started thinking about using assessment for the purposes of accreditation, there was a lot of concern about how the information would be used, etc. I think, though, that much of this is fading. Middle States (the accreditors for my region) does not, as far as I can tell, actually ask for the results of assessment projects. They only ask that institutions demonstrate that they have a mechanism for checking to see that they are doing what they say they do. This translates into: how do you know that your students are learning? So far, this has been the question, and not: give us proof that your students are learning, or (heaven forbid) demonstrate that your students learn more than students at other campuses. So right now the form of accountability demanded is not of the micromanaging kind, although one can certainly imagine opportunities to turn it into this–which is what everyone worries about. Thus I think the conflict between the desire to better understand program goals/student learning and accountability is perhaps often exaggerated.

    Finally, re my paper: I think the main point that I wanted to make was that not only does assessment force us to think about the foundations and assumptions of our discipline, but it does this in a way that could be useful outside of assessment. In other words, to put it bluntly, we have to stop seeing assessment as an attack on our discipline and instead recognize that it might be the only thing that can save it.

  6. Dave Mazella

    Hi Laura,

    I think the debate about the courseware at Miami is suggestive, because, as I recall the talk, the courseware was simply developed by individuals interested in creating a better, more integrated framework for teaching and learning, but that very same quality of integration is what makes some of us suspect that it could be “turned against us.”

    So even initiatives conceived as institutional self-improvement can be seen, rightly or wrongly, as top-down authoritarian moves of “accountability.” Both are about collecting information, but one is for the formative uses of teachers and the other is about the summative role of administrators and other public bodies.

    Ewell’s argument, as I understand it, is not just about distinguishing these two, but seeing them in some dialectical relation.

    As long as we consider the motivation of teachers important for the implementation of these measures, these concerns have to be addressed. Otherwise, these kinds of practices will remain unintegrated into our conception of what we do.

    In terms of the most intrusive, top-down micromanagement: in Texas our legislature passed something called HB 2504, which dictates that a lot of information, including syllabi and faculty teaching evaluations, eventually be posted on public university websites. This was the _compromise_ after significantly scarier initiatives were pushed forward by our governor. So there’s no shortage of attempts to control the content or delivery of courses in higher education, nor is the accountability movement going to go away any time soon, because its advocates have had such success framing the debate about public K-12 education. So, again, this is not just something we can decide internally among ourselves as a discipline, but something that we will have to contend with as a matter of public debate for some time to come.

  7. Eleanor Shevlin

    Laura: Although I did not highlight in my comments that you saw benefits to assessment for our discipline that go beyond the actual process of assessing programs, I did indeed recognize that you were making these arguments. I share your feeling about the ways in which working on assessment programs can help us articulate the value of our discipline to those in other academic disciplines and, more important perhaps, to lay audiences.
    My institution is under Middle States, too, and while we have been asked to post learning outcomes/objectives on our websites (and that is a good thing, because it is another way of articulating to a larger public audience the skills English majors should gain), we have not been subject to what Dave is experiencing in Texas.

    I had not read Ewell’s article in a while, but I am sorry if my comments misrepresented him. Based on my memory of his piece, he is indeed also concerned with the dialectical relationship between accountability and improvement.

    I don’t remember where I read a British piece that expressed concern that humanities projects must prove their commercial value or risk losing out in major ways to scientific endeavors in terms of funding, but accountability there, though not invoked in an assessment context, was clearly being used in top-down, authoritarian ways.

  8. Laura Rosenthal

    Hi Eleanor,
    No, I didn’t think you misrepresented Ewell. The reason I found his essay so useful for the MLA discussion was that he was pointing out the distinction between assessment and accountability at a time when, I think, they were pretty regularly conflated. This still happens, I think, because most faculty members’ experience with assessment comes from accountability demands. Most of us would probably never have heard of assessment if it weren’t for the accountability movement. But it’s good to be reminded that assessment once had a life outside of accountability. I also agree with you that it would be counterproductive (and maybe arrogant) to reject accountability out of hand.

    So to Dave’s point: yes, then, they are intertwined, maybe not necessarily inherently intertwined but certainly intertwined in practice. Yet I hope to maintain that it is possible nevertheless to acknowledge this but then move on to explore the potentially productive aspects of assessment. Thus a minor correction to your helpful account of my paper (which I will keep in mind as I develop the essay): it is not so much that assessment forced me to articulate the hidden frameworks of my discipline, but that I believe that multiple assessment projects in many different English Language and Literature programs will retheorize the discipline itself in better ways than the Teagle/MLA report could do from the perspective of one small (though distinguished) committee.

    Laura
    PS I think the ‘commercial value’ essay might have been in the London Review of Books, and it was indeed very disturbing.

  9. Eleanor Shevlin

    Thanks, Laura– and I look forward to seeing your essay on this topic at some point.

    The piece I was thinking about was indeed the one in the London Review of Books…

  10. Dave Mazella

    Laura, Eleanor,

    First of all, if there’s a link to the LRB piece you’re referring to, could you share it with us?

    Second, I think the accountability/improvement distinction is important, but the confusion is not caused simply by assessment-shy academics. There are genuine and deep misapprehensions about the purposes of higher education embedded within those initiatives, questionable assumptions about what we do and why, and so forth, and so the confusions, to my mind, are largely being perpetuated by the kinds of folks who are writing this kind of legislation or cheerleading for it in the press.

    So the inside strategy of addressing internal concerns will have to be matched with an outside strategy of addressing the value of our work to the public.

    Finally, I think your idea of theorizing the discipline through these kinds of “ethnographic” descriptions is really suggestive, and a fabulous use of documents that otherwise would go unread and unused. This, incidentally, is exactly the kind of thing the MLA could do. Looking forward to hearing more about it. I’m similarly revising my own thoughts about this as we speak.

    • Donna Heiland

      Hello, everyone. I too think Laura R’s idea of re-theorizing the discipline through the work of assessment is quite wonderful, and indeed, I think it ties in nicely with the one part of my remarks that Dave’s good summary didn’t mention, and that is the need for more good disciplinary assessment. If assessment has the potential to help us re-theorize the discipline of literary study, isn’t the reverse true as well? Doesn’t our discipline (like other disciplines) shape—or have the potential to shape—assessment work in various ways? There’s clearly a lot already happening at the disciplinary level, and yet my sense—maybe still more intuitive than anything else—is that there’s a lot of good work yet to be done in this area. I’d love to see us learn more about how to bring the learning and tools of a specific discipline to bear on assessing the learning that is at the heart of that discipline.

      That said, I should add that I am also cautious about re-inventing wheels. I don’t think we necessarily need to develop entirely new assessment technologies and methods to do good discipline-based assessment work, and we probably do want to collaborate with others as we do it (maybe assessment experts outside the discipline, though I agree with Dave that we do need to be training some good in-house assessment people in humanities). Still, there is room to work here.

  11. Eleanor Shevlin

    Dave,

    I can’t seem to find a link, but here is another piece that I had read on the same theme. As you will see, it is from the Times.

    And I agree with you about what I would deem the misuse of accountability in certain hands, and about the need to articulate the value of what we do to the general public. My earlier remarks were aimed at a narrower discussion, for I see the use of accountability-as-assessment in legislation and general public writing as one strand of a larger, disturbing trend that acts to diminish the value of the humanities.

  12. Laura Rosenthal

    Re MLA: I have also thought that the MLA could be a resource for assessment and even maintain an archive. Many professional organizations do things like this. But there is also a good case against it, given the kind of professional organization the MLA is and the various demands on its resources.

    Re inside and outside strategies: in the larger essay (in progress) from which the talk was taken, I hope to suggest that what you are describing as internal and external issues have the potential to be of a piece.

  13. Dave Mazella

    Hi Donna, thanks for coming by. I absolutely agree with your suggestion that our discipline may very well have particular kinds of strategies or concepts that could help us expand our practices of assessment. One of the first steps we might take would be restoring writing, narrative, and argument to their rightful place in our definitions of core learning activities to be assessed. Using rubrics to assess discrete student writings or entire portfolios is a well-known way to fold the qualitative back into the quantitative. But I have also been experimenting with semester-end self-assessment essays as a spur to both my own and my students’ metacognition.

    However, I also think that one of the important forms of knowledge we attempt to impart to literature students is stylistic and historical, or better yet, epochal (why is Johnson different from Milton? etc.), and yet I don’t know of any assessment scheme that would try to measure whether our efforts were successful. We spend huge amounts of time teaching periods, and all that that entails, and yet the purposes or value of something like periodization only very rarely get discussed at the undergrad level. Ditto, for the most part, the value of literary theory and literary criticism, whose relations often do not get articulated for students until graduate study.

    So we have certain kinds of sub-disciplinary structures, hierarchies, and practices, but we generally let students discover their rationale on their own. I don’t have an answer to this, but thought I’d throw it out.

  14. Eleanor Shevlin

    We’ve been assessing writing for about 10 years now–both for gen ed courses and the major. For the “old” major, students submitted a portfolio containing 4 different types of papers: 1) one from either the historical contexts or the theory (structuralist and postmodern/poststructuralist theories are highlighted) core courses; 2) one from American lit; 3) one from Brit lit; and 4) one from one of the three seminars they take. Students also write an introductory essay for the portfolio that not only explains the assignment/context for each paper but also offers reflection on and self-assessment of the papers in terms of the writing and content. We use a rubric for assessing these portfolios, and style issues are included as well as scores for historical and cultural familiarity.

    The new portfolio will concentrate on the seminar papers for several reasons, including that we received too many different types of papers for the American and British categories, and also because we’re developing (almost finished–it will be piloted late February) an assessment tool, a multiple-choice instrument, that measures student proficiency in addressing historical periods (including the strengths and weaknesses of labeling literary time periods and providing endpoints) and in applying and recognizing various theoretical methodologies. This instrument also poses questions about style and rhetorical strategies.

  15. Anna Battigelli

    Eleanor,
    The portfolio model seems admirably complete, but how do you enforce submission of the portfolio? And when do you distribute the assessment tool/proficiency exam?

  16. Eleanor Shevlin

    Anna,

    For the major, we list the portfolio and the exit survey as graduation requirements. When a student first enters the department, whether as a first-year student or an internal or external transfer student, he or she meets with an advisor, and they review the major and related information together. The portfolio and exit survey requirements are mentioned then as the student and advisor review the handbook. The assessment pieces are discussed in full in the handbook (roughly 60 pages; it is revised every summer and is distinct from the course catalog). Students are reminded throughout their careers to save clean copies of their papers for their portfolio. In addition, we have a dedicated Blackboard site for our undergraduate majors (as well as one for the dept., and another for grad students). This site not only has an electronic copy of the handbook, but also a separate link for portfolio/assessment info. Twice a semester I send an email to all majors stating that if they are graduating the following semester, they need to submit their portfolio at the end of the current semester. Often this message produces a flurry of panicked responses in which students ask if they can submit their portfolio at the start of their final semester, and I say, “That’s fine, but make sure I have it by the end of the first week of classes.” Students are also sent reminders to take the exit survey on Blackboard during their final year.

    Because our English Education students must submit a portfolio and receive a passing grade for it, or they cannot student teach (no exceptions), the majors in general see the portfolio requirement as serious business. Under the old major, only BA students had to submit a portfolio, but under the new major (the first class under the new program will graduate Spring 2011), all BA and BSEd students will submit a seminar portfolio instead of the 4-paper portfolio I described for the old major. This new portfolio will have the same introductory essay and two seminar papers (our students take three seminars near the end of their undergraduate career). We have a very high compliance rate for the portfolio (we have about 420 majors, split fairly equally between the BA and BSEd). We do not review all the portfolios, but instead randomly select about 30-40% to grade using the rubric. As is standard, we have three faculty reviewers read and score the portfolios.

    For our gen ed writing courses, the portfolio is part of the final assignment for both the WRT 120 class and the WRT 200 series.

    As for the multiple-choice instrument for majors, we are piloting this tool this semester. My plan is to administer it in the ENG 400 seminar classes using laptops from our mobile computer lab; we will need to see how that works in terms of the valuable class time it takes up (though it is a perfect task for a day when the professor will be away at a conference). The instrument will be on Blackboard as a SurveyMonkey link. It consists of 45-50 questions (we are still fine-tuning it) and is based on knowledge and skills acquired in the CORE courses and then honed in intermediate courses. The ENG 400 courses carry a prerequisite of the three CORE courses.

  17. Anna Battigelli

    Thanks. That sounds like a sensible and admirably efficient system. Putting the exit exam on Blackboard makes sense.

  18. Eleanor Shevlin

    Because I offered perhaps too much detail on our collection process, I did not really say much about the move away from the portfolio for assessing historical knowledge, etc. In reviewing the portfolios, it became increasingly obvious that the papers being submitted for the American and British literature categories were really not offering us a good sense of student proficiency in these areas. I will be very interested to see whether our turn to a multiple-choice instrument will enable us to obtain more valuable information about our program.

  19. Dave Mazella

    I’m curious about how something like this might be measured or assessed. Our departmental rubrics only go so far as to call for a “contextualized understanding of literature,” but this seems too vague to be helpful. I wonder if it would be better to treat periodization as part of one’s “critical thinking” and view it as part of the comparison and articulation of (say, stylistic or generic) differences between various kinds of sources?

  20. Eleanor Shevlin

    Dave,

    This vagueness was part of my dissatisfaction with what we were doing with the assessment of “knowledge of American” and “knowledge of British literature” through papers written for an American or British literature class. How much can a single paper written on Hawthorne or one on Frances Burney tell us about a student’s knowledge of American literature or British literature as whole entities? Plus, we received wildly different papers done for very different assignments.

    We are considering retaining the submission of a paper written for the historical contexts core class as well as one written for the contemporary theories core class for the new portfolio. These two classes often have paper assignments that require application and a meta-level engagement with course concepts. (For example, assignments often require that a preface be written as part of the paper, in which the student explains his or her rationale for choosing the theorists and sources used in the analysis that follows.) In the multiple-choice instrument, we have questions that ask which kinds of sources would likely best serve a particular topic, and ones that assess student understanding of the differences between primary and secondary sources within a given context (different from your very fine suggestion, Dave, but your comments made me think of these questions).

  21. Laura Rosenthal

    Eleanor,
    What you’re doing sounds really interesting. I would love to get a copy of your survey if you feel you can share it. We try to measure “appreciation” through an exit survey, but it might be interesting to measure their sense of historical difference this way as well. This is one of those things that I really think an English major should understand by the end of their program.

  22. Eleanor Shevlin

    Laura,

    I would be happy to share the instrument once the “draft” is “finalized”–which should be in early February. It has been a hard but interesting process, and I suspect we may need to revise the instrument further after we actually do the pilot administration later this semester. I would very much welcome feedback from everyone when I do circulate it.

    • Eleanor Shevlin

      PS–we have an exit survey and a multiple-choice instrument for the program review, and I think what you would like to see is the multiple-choice instrument.