Assessment Lady

I recently gave a plenary talk at the ADE East here at the University of Maryland on Outcomes Assessment, thus irrevocably outing myself as an assessment lady (put an outcome on it?). But one observation that David Laurence (MLA) and I shared was how much the climate on outcomes assessment had changed. A survey we conducted before the conference showed that 90% of respondents were currently engaged in an assessment project, and that 88% believed that assessment could contribute positively to student learning. That, of course, does not mean that the many problems that assessment is raising have gone away. It does, however, suggest that assessment may be with us for the long haul, and that many departments are actually finding it useful.

I wanted to share a few good ideas that came up in the discussion and in a workshop I facilitated the following day.

Faculty, quite reasonably, often object that they already have too much work to take on this unfunded mandate. One department chair, however, reported that he cancels classes for a day or two in order to allow faculty to assess student work. As he pointed out, K-12 schools do this regularly (“professional development days”). While we always struggle to accomplish everything we want in a course, most courses could probably spare one class meeting for this purpose. By doing this, the chair not only makes assessment a collaborative, valued, and department-wide project, but also addresses the main point of faculty vexation.

In the workshop, we also talked about faculty distress over having papers from their courses read by colleagues, who might draw unfavorable conclusions about each other’s teaching. We didn’t solve this one, but some good ideas were put on the table. One was to sample papers from early in the course, so that conclusions could be drawn only about the program (rather than about an individual instructor). Another was continuous collection (which I always recommend to departments), so that a variety of papers appear in the assessment pool. Still, I think that until there is full assurance that assessment will be used to improve learning and not to evaluate individual faculty members, this will continue to be a problem.


9 responses to “Assessment Lady”

  1. Dave Mazella

    Hey Laura,

    “Assessment Lady” is not a bad title, as long as it comes with a summer stipend.

    These issues sound familiar to me, from my experience with our institutional effectiveness requirements. I think the fears could be addressed by inviting faculty beforehand to help generate the rubrics by which the papers would be assessed. This was a major issue with an earlier version of our IE process, when people felt that the papers were being pushed through a process that didn’t address their own teaching or its priorities. The added benefit, assuming faculty involvement, would be that faculty would learn a great deal by generating such a rubric.

    I think it’s always necessary to stress the distinction between the program-level focus of assessment and the individual focus of grading. I suppose you could make the papers more anonymous by omitting assignments, but I think it’s better to see things within their contexts. But a sampling technique seems the best way to make sure no one feels singled out.

    DM

  2. Laura Rosenthal

    Having everyone develop rubrics together is a good idea, although I imagine that would be both revealing and chaotic.

    • Dave Mazella

      Courtesy of Carl of Dead Voles from a while ago, here’s an example of a department-generated rubric. The ADE Bulletin had a nice piece a while back about getting departments (or, to keep it manageable, subcommittees) together to generate rubrics like these. I must admit that I’ve never been able to convince my own department to undertake one of these (it may still happen), but I still think it would be worthwhile.

  3. Brilliant post. So happy to find another 18th century enthusiast.

  4. Laura Rosenthal

    Hi Leah,
    Thanks for stopping by and we look forward to your further engagement.

  5. Great post, Laura. I’m also known as an “assessment lady” (to go with my other nickname, “theoryhead”). My department has a standard rubric we developed for our core classes, and one that many faculty use in all of their courses (modified for lower- and upper-division courses). We developed the rubric as we reorganized the classes.
    Revealing and chaotic both, but the end result is very good, especially when we’re envisioning students walking out of the same course (multiple sections) with standard skill sets.

  6. Laura Rosenthal

    We are legion–but you usually don’t see “assessment lady” and “theoryhead” in the same sentence!

    Sounds like you have a good process.

  7. Jill Bradbury

    Laura – assessment is time-consuming, and my department has done a lot of it over the past few years. I am finding (as the designated assessment coordinator) that the amount of work has dropped significantly now that we have developed departmental outcomes, an assessment plan, and a rubric. For a couple of years, we got summer funding from the assessment office to run workshops at the end of the school year, where we ironed out a lot of kinks. But this year faculty did assessment on their own, and it was not too time-consuming. We assess at major entrance and exit points, and faculty assignments to those classes rotate, so no one feels singled out. I think that the culture of sharing and evaluating student work has benefited most of us, except for the few who need to do the most to raise the quality of their classes. We haven’t yet figured out how to address that problem (other than, ‘well, Prof. X will retire soon…’), but it has led to some uncomfortable moments. Have you had this problem also?

  8. Laura Rosenthal

    Hi Jill,
    Yes, I have also found that it gets easier over time. I’ve been working more at the college level than the department level, but I have indeed noticed that participation is uneven. Since it’s all so new, I’ve so far tried to concentrate on supporting those who are trying to get on board.