What I’m about to share with you is probably an incredibly dangerous thing to do with assessment coordinators, MEBD pathway specialists, administrators, and others watching. Deep breath. Here goes.
My name’s Jeremy Fiebig. I teach theatre in the Department of Performing and Fine Arts and I’m a recovering rubric addict.
I think rubrics serve a useful function in a variety of teaching and administrative settings. In the past couple of years, I’ve played a small role in looking at and developing rubrics for writing, for an information literacy project, and for more than a handful of other projects and assignments, big and small, in my classes. I find rubrics, in many cases, to be useful tools that help clarify for me and for my students exactly what the expectations for a project are, how a given student performed against those expectations, and what that student’s grade should be given their performance.

Rubrics save time for professors and administrators who need to score whatever’s being assessed in an efficient way. Rubrics provide some semblance of an objective standard, which is important. Rubrics enable you to really spend time considering what it is you’re looking for from students (or faculty members who are being assessed). Rubrics serve an important function in terms of assessment — they help provide a performance baseline so that grades, instead of just being grades, are able to be mined for meaning, for “closing the loop” in continuous improvement, and for providing evidence to administrators and legislators who need to see that we’re doing something.

But it is far past time to admit we have a rubric problem. A big one.
Yes, rubrics are useful. Yes, they do some things really, really well. But there are some things they don’t do. And there are some things they do very poorly. And there are some ways in which they lie to us. And there are some ways in which they are abused. And there are some ways in which they are letting faculty members, students, and administrators off the hook when it comes to some of the core values of our institution and of the academy as a whole. Here are some examples:
1) Rubrics lose information. Rubrics are almost by definition narrow. A really, really full rubric might measure 6 or 8 or 10 criteria, which don’t necessarily provide a complete picture of whatever it is that is being assessed. Not everything gets measured.
2) Rubrics give the illusion of objectivity. Here’s the thing: our brains tell us that if the standards and criteria are in a rubric, our measurement of student performance against that rubric is objective. And we can sleep better at night believing we’ve been fair and impartial. This is a lie. Just because the criteria are explicit and the standards are fair does not mean that professors or administrators are being explicit or fair about how they score things on the rubric.
3) Rubrics make us substitute parts for the whole. Let’s say you’ve just finished a meal and are thinking to yourself that you really, really liked it. Why? Because it tasted good? Because you were hungry? Because the plating of the food was exceptional? Because of the music playing in the background? Because of your dinner date? Because of your dietary preferences? Because it was a good value? Some combination of these things? If so, how much does the plating score in your mind versus how hungry you were? How does taste score against value? Does service get a score? And how does that figure in? When we think about our meal, we may be thinking of all of these things — some of them consciously, some unconsciously — but rarely do we bring a scorecard with us to dinner. When thinking about assessment, the rubric asks us to do precisely that: to predetermine whether and how we will evaluate the assignment. Not only do we potentially lose information here, but we substitute evaluation of parts of the assignment for the whole.
4) Rubrics don’t happen most places in the way they happen here. Rubrics do happen in real life. When I apply for a credit card or home loan, some computer or number cruncher somewhere scores my credit history against a rubric and assesses what kind of risk I am. Same with insurance. Same when the IRS determines whether I should be audited. There are hard-and-fast criteria and standards out there that we all have to deal with. And I can get a pretty good sense of what those criteria and standards are if I pay attention. Some of them are even legislated and regulated. But some criteria and standards are not so public. If I’m about to hire you as an employee, you need to have experience, you need to meet some basic qualifications, and your references need to vouch for you (all of these are criteria — and therefore ARE, essentially, rubrics). These are things I can evaluate — maybe even somewhat objectively — and that you will know are part of the deal.

But there are also a billion little assessments I make: your dress, your appearance, your hair, your vocal quality, your confidence, your spirit, your personal drama, your determination and drive, your language, your grammar, and so on. My grandfather, who was CFO of a company back in the 1980s, had a hidden criterion at all his interviews: if you salted your food before you tasted it, the interview was over (so to speak) and you weren’t getting the job. It can be argued that this was, in fact, an undisclosed rubric. Judgment is made on a host of things that are both consciously and subconsciously scored on rubrics — both those for which the standards are known and those which go on undisclosed, silent, or even unknown. In short, not everywhere we go on planet earth are the expectations clearly laid out before us — and when they are, our performance is not always evaluated in an open or objective way. In the face of these realities, we still are compelled to figure out what we can, to do our best, and to try.
In the classroom, and in assessment, the use of rubrics is often couched in language about being fair, clear, forthright, and objective. My question is simply this: by using rubrics, are we training students to expect that they will be graded or judged in a way that will always be clear, fair, and forthright? Are we making students who simply “perform to the rubric” while disregarding some of the intangible and intrinsic and unspoken criteria at play all around us? Are we removing, with some of our rubrics, the incentive for students to think creatively, to apply themselves, or to “figure it out on their own”?
5) Rubrics risk substituting measurement for judgment and instruments for expertise. We hire quality faculty because of their expertise, but then risk essentially pigeonholing that expertise when it comes to rubrics — making faculty become, in essence, extremely overqualified graders and not experts who are in tune with the nuances of performance. While outcomes assessment has drifted more and more to the use of rubrics in recent years, the academy is of at least two minds when it comes to the use of rubrics. On the one hand, when we are assessing students, learning outcomes, and programs, we use language about rubrics and assessment that suggests fairness, consistency, objectivity, and forthrightness. On the other hand, we shape our processes of tenure and promotion around experts sitting in a room making what we hope are informed judgments. Why do we treat the two forms of assessment differently? I don’t think the answer is necessarily sinister. We value both kinds of assessment — the role of instruments and the role of experts. Might we seek to strike this balance in the classroom, in program assessment, and in other arenas where the pendulum has swung far to the side of the rubric?
6) Rubrics do not always speak to value. Let’s go back to the meal analogy. The question is not whether the plate looked pretty or whether the flavors fused together in a balanced way. The question is: was it good? or am I satisfied? or would I recommend this to my friends? Our ability to value (or evaluate) performance is, fundamentally, about these kinds of value questions. Now, it will be (and should be) argued that, in our role as educators, we are not your run-of-the-mill restaurant patrons — we are professional chefs, food critics, and gastronomes — and this isn’t a restaurant, it’s a cooking school. And whoever argues that point will be correct. It is to these people, and to myself, that I say this: in our mad scramble to evaluate whether students can fry an egg or bake a soufflé in the hopes that they will someday know enough to become a chef, are we evaluating the goodness, the artfulness, the sensibility of their egg frying or soufflé making? Are we making people who can manage spreadsheets, but not account? Who can film commercials, but not engage in commerce? Who can paint canvases but not be artists? Who can memorize lines but not be actors? Do we — can we — value and evaluate things *besides* mere performance? What about evaluating a student’s disposition? Or their role as citizens?
7) Rubrics can be massaged. This is the story of a professor who looks at a project and says to himself or herself, “This seems like a C to me,” and then massages how he or she scores the project on the rubric to get at the C he or she feels the project deserves.
8) Rubrics can be abused in the same ways not using a rubric can be abused. This is the story of a professor who, up against a midterm deadline, brain-fried from several projects to grade, says, “The student who did this project seems to be with it and has been earning As so far. Rather than looking closely at the project or making substantive notes or giving worthwhile feedback, I’m just going to circle some things on this rubric that seem to make sense to me.”
9) Rubrics standardize assessment, which can standardize teaching and learning. I won’t rehash the debate of the last couple of decades over standardized testing. What I will suggest is that, regardless of what you think about standardized testing and its value, it risks pushing institutions, schools, professors, and programs toward standardized teaching and learning. In the face of scholarship about multiple intelligences and in light of thinking like that presented here (http://teachingatfsu.com/ken-robinson-changing-education-paradigms-a-discussion-starter/), I wonder if we might engage in an honest discussion about the implications of the rubrickization of our classrooms, outcomes assessment, and program assessment at our institution.
This is a starter list. As mentioned, I’m a recovering rubric addict — and one who is relapsing in several instances across several of my courses this semester. I wonder if you are, too, and if we might engage in a discussion about healthy rubric use, about making better rubrics where possible, and about exchanging rubrics for different kinds of assessment tools (and not just tests).
I’m sitting down now. Anyone else want a chance to talk?