It was not that long ago that I did a workshop where the staff from the Dodge Foundation (who were funding my work at the time) took me aside at the break because they were concerned about my constant use of a term they had never heard of – rubric. Those of us promoting their use over the past 20 years can now smile and take satisfaction in the fact that the term is now familiar and the use of rubrics is commonplace world-wide.
Alas, as I wrote in my last post, as with other good ideas, there has been some stupidification of this tool. I have seen unwise use of rubrics and countless poorly-written ones: invalid criteria, unclear descriptors, lack of parallelism across scores, etc.  But the most basic error is the use of rubrics without models. Without models to validate and ground them, rubrics are too vague and nowhere near as helpful to students as they might be.
Consider how a valid rubric is born. It summarizes what concrete works across a range of quality look like as reflections of a complex performance goal. Note two key words: complex and summarizes. All complex performance evaluation requires a judgment of quality in terms of one or more criteria, whether we are considering essays, diving, or wine. The rubric is a summary that generalizes from lots and lots of samples (sometimes called models, exemplars, or anchors) across the range of quality, in response to a performance demand. The rubric thus serves as a quick reminder of what all the specific samples of work look like across a range of quality.
Cast as a process, then, the rubric is not the first thing generated; it is one of the last things generated in the original anchoring process. Once the task has been given and the work collected, one or more judges sort the work into piles while working from some general criteria. In an essay, we care about such criteria as valid reasoning, appropriate facts, clarity, etc. So the judges sort each sample into growing piles that reflect a continuum of quality: this pile has the best essays in it; that pile contains work that does not quite meet the criteria as well as the top pile; and so on.
Once all the papers have been scored, the judge(s) then ask: OK, how do we describe each pile in summary form, to explain to students and other interested parties the differences in work quality across the piles and how each pile differs from the others? The answer is the rubric.
Huh? Grant, are you saying the assessment is made before there are rubrics? Isn’t that backward? No, not in the first assessment. Otherwise, how would there ever be a first assessment? It’s like the famous line from Justice Potter Stewart: I can’t define pornography, but I know it when I see it. That’s how it works in any judgment. The judgments come first; then we turn our somewhat inchoate judgments into fleshed-out descriptors – rules that rationalize the judgments – as part of a more general and valid system. Helpful rubrics offer rich descriptors that clarify for learners the qualities sought; poor rubrics amount to no more than saying that Excellent is better than Good, etc. (more on this in the next blog post).
Once we have the rubrics, of course, we can use them in future assessments of the same or similar performance. But here is where the trouble starts. A teacher borrows a rubric from a teacher who borrowed the rubric, etc. Neither the current teacher nor students know what the language of the rubric really means in the concrete because the rubric has become unmoored from the models that anchor and validate it. In a very real sense, then, neither teacher nor students can use the rubric to calibrate their work if there are no models to refer to.
Look at it from the kids’ point of view. How helpful is the following descriptor in letting me know exactly what I have to do to get the highest score? And how does excellence differ from merely adequate? (These two descriptors actually come from a state writing assessment):

5. This is an excellent piece of writing. The prompt is directly addressed, and the response is clearly adapted to audience and purpose. It is very well-developed, containing strong ideas, examples and details. The response, using a clearly evident organizational plan, engages the reader with a unified and coherent sequence and structure of ideas. The response consistently uses a variety of sentence structures, effective word choices and an engaging style.

3. This is an adequate piece of writing. Although the prompt is generally addressed and the response shows an awareness of audience and purpose, the response’s overall plan shows inconsistencies. Although the response contains ideas, examples and details, they are repetitive, unevenly developed and occasionally inappropriate. The response, using an acceptable organizational plan, presents the reader with a generally unified and coherent sequence and structure of ideas. The response occasionally uses a variety of sentence structures, appropriate word choices and an effective style.

Do you see the problem more clearly? Without the models I cannot be sure what, precisely and specifically, each of the key criteria – well-developed, strong ideas, clearly evident organizational plan, engages the reader, etc. – really means. I may now know the criteria, but without the models I don’t really know the performance standard; I don’t know how “strong” is strong enough, nor do I know whether my ideas are “inappropriate.” There is no way I can know without examples of strong vs. not strong and appropriate vs. inappropriate (with similar contrasts needed for each key criterion).
In fact, without the models, you might say that this paper is “well-developed” while I might say it is “unevenly developed.” That’s the role of models; that’s why we call them “anchors”: they anchor the criteria in terms of a specific performance standard.
Knowing the criteria is better than nothing, for sure, but it is nowhere near as helpful as having both rubric and models. This same argument applies to the Common Core Standards: we don’t know what they mean until we see work samples that meet vs. don’t meet the standards. It is thus a serious error that the samples for Writing sit in an Appendix to the Standards, where far too few teachers are likely to find them.
This explains why the AP program, the IB program, and state writing assessments show samples of student work – and often also provide commentary. That’s really the only way the inherently-general language of the rubric can be made fully transparent and understood by a user – and such transparency of goals is the true aim of rubrics.
This is why the most effective teachers not only purvey models but ask students to study and contrast them so as to better understand the performance standards and criteria in the concrete. In effect, by studying the models, the student simulates the original anchoring process and stands a far better chance of internalizing and thus independently meeting the standard.
But doesn’t the use of models inhibit creativity and foster drearily formulaic performance? This is a very common question in workshops. Indeed, it was posed in a recent workshop we ran in Prince George’s County (and it spawned the idea for this post). The answer? Not if you choose the right models! Even some fairly smart people in education seem confused on this point. As long as the models are varied, of genuine quality, and in their variety communicate that the goal is original thought, not formula, there is no reason why students should respond formulaically except out of fear or habit. If you don’t want 5-paragraph essays, don’t ask for them! If you don’t want to read yet another boring paper, specify via the examples and rubric descriptors that fresh thinking gets higher scores!
Bottom line: never give kids the excuse that “I didn’t really know what you wanted!” Always purvey models to make goals and supporting rubrics intelligible and to make good performance more likely. Thus, make sure that the variety of models and the rubrics reflect exactly what you are looking for (and what you are warning students to avoid).
In my next post, I’ll provide some helpful tips on how to design, critique, and refine rubrics; how to avoid creativity-killing rubrics; and other tips on how to implement them to optimize student performance. Meanwhile, if you have questions or concerns about rubric design and use, post a reply with your query and I’ll respond to them in the following post.

21 Responses

  1. If you had to choose between models with no rubric or a rubric with no models, what would it be? Easy choice or difficult?

    • Eh, false choice. You need both, but if you twist my arm, I lean toward models plus criteria plus discussion. If time is limited, then I would share excellent and weak papers, and work with students to flesh out a basic rubric that highlights the differences (see the reply here by another teacher on working with students to flesh out a rubric). Getting kids to see the differences is key – that requires the models – and the guide to that discussion can be the rubric (or at the very least the criteria).

  2. I understand what you are saying about requiring anchor papers before writing the first rubric. And I’m wondering…Can some preliminary rubric descriptors be written with the students?
    I’m thinking about a workshop-model classroom. Students are researching for a creative twist on biographies, using biographical picture books as mentor texts. As the teacher walks around to formatively assess the mini-lesson objective, he/she notices three or four students doing something well.
    Can’t the teacher stop, show a small piece of the work to the class and ask, “What do you notice about this [part]?” As a discussion follows, the class might create descriptors for what they see and make the descriptors part of the final rubric.
    When rubrics are organically created (at least in part) by both students and teachers, the students should be aware of the work expectations.

    • I think it is very wise to work with students to flesh out rubrics from basic criteria – as long as you are drawing from a sufficiently rich and demanding batch of samples.

  3. The annotated samples of student writing found in Appendix C of the Common Core ELA standards were, in my opinion, a great step forward in standards writing. As a teacher I find them very helpful because they give me a definite standard against which to base my judgment of, say, a 2nd- versus a 4th-grade student’s writing. But you seem to be saying that students should “study and contrast them” as well. Is that correct? Should teachers take the time to analyze these samples in Appendix C with students?

    • Indeed I am. Yes, it was a step forward, but it’s buried in the Appendix, alas. The key to concept attainment in all fields is to study examples and non-examples (think of a T-chart) so that you sharpen your understanding of the differences. Merely seeing a few models is not enough.
      However, keep in mind a big caveat: these samples in Appendix C were gathered BEFORE the standards were released and before the two test consortia started their work. It will be far more important to look at models and commentary that come from the consortia. In the meantime, you’ll find a much richer collection, with commentary, in the NAEP writing assessment results from over the years: http://nces.ed.gov/nationsreportcard/about/naeptools.asp

  4. I am delighted you revisited this topic. You do it in a clear, straightforward fashion – that also pleases me. Most teachers have not participated in “anchoring” and may not be well grounded in deconstructing “models.” It is likely few of them have read a “set” of responses from, say, more than 100 writers. This is a task which departments — and not just English departments — must adopt and do as this next group of tests begins to emerge.
    Was that first “basic text” named “Measuring Growth in English”? That was more than 20 years ago – the first time I heard the word “rubric.” I have never liked the way that word feels in my mouth.

  5. It is all in the expectation, isn’t it? I love the fact that we give students the expectation! We just want them to learn and grow. We aren’t trying to trick them, or make it impossible to succeed. They need to know what a weak example looks like, as well as average and great. Rubrics provide us with so much more than grades. I enjoyed your post! I know that I need to know more about rubrics and how to develop them.

  6. 1) Valid rubrics are born from the study of models.
    2) Rubrics are reminders of specific samples of work across a range of quality.
    3) Without models, rubrics are vague and unhelpful.
    Thank you.

  7. That is not the order in which rubrics for big standardized assessments are created, and I don’t know that they should be.
    Generally, well done big standardized assessments are created more thoughtfully than most school- or classroom-based assessments, methinks.
    After all, those creating these assessments are 100% dependent upon them for everything they know about a student, and are 100% dependent upon them for every direction or expectation that they can communicate to a student. That requires test developers to be much more clear in what they are creating.
    Furthermore, these test developers are working on these big standardized assessments full-time. They are not also working on lesson planning, lesson teaching, coaching, working with students, or any of the myriad other important tasks that school personnel must address.
    So, when done properly, the standards (or assessment targets) are established first; rubrics are created so that student work can be assessed **for the standards specified**; and items (i.e., questions and tasks) are created to assess the standards. The tasks are designed to elicit the kind of evidence that the rubric can be applied against – evidence of student knowledge, skills or abilities.
    Then comes field testing — the first time the tasks are assigned to test-takers — and the subsequent range-finding. This range-finding process generates examples, and it is a forum for revisiting and clarifying rubrics.
    When done properly, experienced educators with the proper training in assessment development review the work at each step. They sign off on the standards/assessment targets. They sign off on the rubric(s). They sign off on the items and tasks. And they take part in range-finding. Their experience should allow them to anticipate students’/test-takers’ responses, and the process should ensure that that anticipation does not substitute for actual verification.
    Note that rubrics are created BEFORE student work is elicited.

  8. Absolutely right on with my philosophy and way of thinking! I am the mother of a hands-on, bright 12-year-old, have 2 clever grandsons under 5, and am a busy tutor using rubrics. My experience began as early as age ten, babysitting and playing school – from rock school, hopscotch, and jumping rope to ABCs, 123s, and spelling names and words while jumping rope. So, after growing up loving to teach, babysitting all through college and presently, substitute teaching, and retiring after close to 30 years, “INSPIRED” is always, always, always how Mr. Wiggins’s work and research touches my heart…

  9. Thank you for yet another thought-provoking piece. A small correction: the Supreme Court justice mentioned in paragraph 6 was Potter (first name) Stewart (surname). I am honored to have known his brother, the late Zeph Stewart, Professor emeritus of the Classics at Harvard.

  10. YES YES YES!!!
    I can create a pretty good rubric for any given introductory-intermediate level programming assignment. Know why? Because I’ve seen literally thousands of student solutions. Can I do that for a brand new style of assignment I’ve never done before? Heck no!
    Crappy rubrics are one of my many pet peeves. Far too many educators treat the term “rubric” as if it is simply another word for marking guide.
    But perhaps even worse, professional Educationists (…you know, professors of education, many of whom have never actually taught anything other than “Education”. That would be like teaching programming without ever bothering to learn anything about the domains of the problems you are trying to solve with those programs. There are plenty who do that too.)…. Where was I? Oh yes, professional Educationists (read: Education faculty) are all of one mind it seems when it comes to the wonderfulness of rubrics. Problem is, many of them INSIST that every learning task (for some reason, “assignment” is not the right word for assignments anymore) for every course taught be assessed using a rubric. First time this work is being done? No matter, gotta use a rubric. I guess we’re supposed to just know all the different ways people will approach the task.
    Hmmm… maybe THAT’s why almost ALL of the Education grad-level seminar courses where I sometimes teach include a research style paper as the major assignment. It’s because they have the rubric!!!
    There’s a saying: “Nothing is more dangerous than a professor with a powerpoint.” (In case you’re wondering, it’s because far too often, this also means that the professor will follow that ppt, faithfully, regardless of the students’ needs.)
    Well, here’s one to add to it: “Nothing is more dangerous than an Educator with a rubric.” (I spent a LOT of time generating this rubric, of COURSE I’m going to give out assignments that fit my rubric. Until I retire.)

  11. Do you feel that every lesson should have a rubric? I have an administrative staff that expects every lesson to have a performance rubric (based on Robert Marzano’s research as dictated by Mark Rowleski). I think that rubrics can be used for some things, but in a basic lesson they seem forced. Also, in many good lessons geared toward the Common Core I should be touching on multiple standards, so my question about rubrics is: how do we keep our rubrics from becoming hard for students to follow?

    • A rubric for every lesson? I would say that this is overkill. However, if the point is that students as well as teachers should focus on some explicit success indicators, I’m OK with that. In other words, if we’re just saying: kids need to know what success and good behavior are for a given activity or assignment, that makes sense. Even something as commonly-said as “Turn and talk to your partner” is NOT sufficient information as to what constitutes ‘good’ vs ‘not good’ talk – i.e. what we should talk about and how we should talk about it to be successful. Only when students are extremely clear on what a process is and what quality work at the process entails can we dispense with being explicit about such criteria. But to demand a full-blown rubric for each such event seems like a lot of work. Sounds like there is a happy middle ground here, though.

  12. Do you have any tips on how to avoid a set of anchors becoming very voluminous? The rubrics I am developing are for 5 student, 4 ECTS projects, so a total of 500 hours to be spent on it, yielding about 150-page design works. If I am to provide multiple samples of works of different quality it will easily run up to 1,000 pages or more. Little chance anyone will read it, I’m afraid.

    • I can’t figure out the second sentence – some typos, I think – but I don’t see any benefit either from anchors that are so massive. You may have taken my notion too literally. Perhaps a simple way to make the point is this: what are the half dozen teaching points you want to make about the differences between the good and not so good works? What, then, are the most revealing sections or segments that will illustrate those teaching points? That’s the efficient way to proceed, I think.
