Let’s begin the new year with a nuts-and-bolts educational issue. (My New Year’s Resolution is to say less about hot-button political issues and make fewer needless enemies…). In this post I consider the place of final exams. In the next post I consider the place of lectures in teaching.
Exams vs. projects? UbD is agnostic about many educational practices, be they final exams or projects. Yet, we often get queries such as these two recent ones: what’s the official UbD position, if any, on final exams? Should we be doing more hands-on projects if we’re doing UbD? The glib answer: no technique is inherently sacred or profane; what matters is how exams and projects are shaped, timed, and assessed – mindful of course goals. As you’ll see below, I think we tend to fixate on the format instead of worrying about the key question: regardless of format, what evidence do we need and where can we find it?
There are really only 3 non-negotiables in UbD:
- There has to be a clear, constant, and prioritized focus on ‘understanding’ as an educational goal. Content mastery is NOT a sufficient goal in an understanding-based system; content mastery is a means, in the same way that decoding fluency is a means toward the real goal of reading – making meaning from texts through comprehension. This logic requires teacher-designers to be clear, therefore, about which uses of content have course priority, since understanding is about transfer and meaning-making via content.
- The assessments must align with the goals via ‘backward design’; and the goals, as mentioned, should highlight understanding. So, there can be quizzes of content mastery and exam questions about content, but in an understanding-based system the bulk of assessment questions and tasks cannot be mere recall of content. The issue is therefore not whether or not there are final exams but what kinds of questions/tasks make up any exams given, and whether those kinds of questions are in balance with the prioritized goals.
- The instructional practices must align with the goals. Again, that doesn’t mean content cannot be taught via lectures or that content-learning cannot be what lessons are sometimes about. But a course composed mainly of lectures cannot logically yield content use – any more than a series of lectures on history or literacy can yield high-performing historians or teachers of reading. The instructional methods must, as a suite, support performance with understanding.
In sum, UbD says: IF you use a method, THEN it should align with course goals. IF you use varied methods, THEN they should be used in proportion to the varied goals and their priority in the course. A method (and how much weight it is given) can only be justified by the goals, in other words, not by our comfort level or familiarity with the method. (There are other considerations about exams that reflect more general principles of learning and long-term recall that I will not address in this post.)
Alas, far too many final exam questions do not reflect higher-order, understanding-focused goals and only reflect habits, as countless studies using Bloom’s Taxonomy have shown. Yet, few teachers, when asked, say that ‘content mastery’ is their only goal. So, there is typically an unseen mismatch between assessment methods (and types of questions) and goals. That’s not an ideological critique but a logical one; it has nothing to do with whether we ‘like’ or ‘value’ content, process, multiple-choice questions or performance tasks. What matters is that the evidence we seek logically derive from what the goals demand.
In fact, it is smart design to think about “evidence needed, given the goal” and thus not think about assessment type until the end. We should thus be asking: if that’s the goal, what evidence do I need? Once I know, which format(s) might work best to obtain it?
This same critique applies to hands-on projects, not just blue-book exams. Many hands-on projects do not require much understanding of core content or transfer of core learning. (My pet peeve is the Rube Goldberg machine project in which no understanding of physical science is required or requested in the assessment.) Often, a good-faith student effort to do some teacher-guided inquiry and produce something fun is sufficient to get a good grade. That only makes the point more clearly: we should think about goals and their implied evidence, not format, first and foremost.
Getting clearer on evidence of understanding. So, what is evidence of understanding, regardless of assessment format? A frequent exercise in our workshops is to ask teachers to make a T-Chart in which they compare and contrast evidence of understanding vs. evidence of content mastery. Students who understand the content can… vs. Students who know the content can…
Every group quickly draws the appropriate contrast. Students who understand can
- justify a claim,
- connect discrete facts on their own,
- apply their learning in new contexts,
- adapt to new circumstances, purposes, or audiences,
- criticize arguments made by others,
- explain how and why something is the case, etc.
Students who know can (only) recall, repeat, perform as practiced or scripted, plug in, recognize, identify, etc. what they learned.
The logic is airtight: IF understanding is our aim, THEN the majority of the assessments (or the weighting of questions in one big assessment) must reflect one or more of the first batch of phrases.
The verbs in those phrases also signal that a kind of doing with content is required if understanding is assessed, regardless of test format. We need to see evidence of the ability to justify, explain, adapt, etc. while using core content, whether in a blue book or in presenting a project. Note, then, that a project alone – like the Rube Goldberg project – is rarely sufficient evidence of understanding as a product: we need the student’s commentary on what was learned, what principles underlie the product and its de-bugging, etc. if we are to honor those understanding verbs. Only by seeing how the student explains, justifies, extends, adapts, and critiques the design or any other project can we confidently infer understanding (or lack of it).
That is, of course, why PARCC and Smarter Balanced will be using such prompts; and why AP and IB have used them from their inception.
Clear and explicit course goals are key. The most important action to be taken, if one wants to create great assessments, is suggested by this backward logic of validity. In order for the assessments to be valid, the course goal statements have to be clear and suggestive of appropriate evidence for assessing whether goals have or have not been met. Each teacher, department, and subject should thus ensure that there are clear, explicit, and sound course goal statements for each course so that sound assessments can be more easily constructed (and critiqued).
“Students will understand the Pythagorean Theorem” or “the Civil War” or “the student will know how to subtract” are thus not really goal statements. Nor does it help to add more content to those sentences. Each still just says: know stuff and recall it on demand. They don’t tell us, for example, how that understanding will be manifested – the understanding verbs – and under what general circumstances or challenges the understanding or ability must be shown.
Thus, to rewrite the first goal in admittedly very wordy terms just to make the point:
Students will be able to solve non-routine real-world and theoretical problems that involve the Pythagorean Theorem, and justify their solution path as well as their answer. By ‘non-routine’ I mean that the term “Pythagorean Theorem” will not be found in the problem, nor will the problem look just like the simple exercises that involve only that theorem. Thus, some creative thinking is required to determine which math applies. By ‘problem’ I mean a puzzling and complex situation, not a plug-and-chug ‘exercise’ with one simple solution path. Rather, the student will have to infer all the geometry needed from a very spare-seeming set of givens. Thus, the Pythagorean relationship might not be visible at first glance, and it might be one of a few key relationships needed in the solution; e.g. de-composing and re-composing might be needed before one even sees that the theorem could apply. The student must thus judge whether the theorem applies; if it does, when to use it; and how to adapt their prior learning to the atypical circumstances at hand. The student ‘understands’ the theorem if they can envision and create a solution, and explain and justify it (in addition to calculating accurately based on the Theorem).
(Once we are clear on the terms ‘non-routine’, ‘problem vs. exercise’, and ‘multi-step inferencing’, we can reduce the goal statement to a briefer sentence.)
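To make ‘non-routine’ concrete, here is one hypothetical problem of the kind that goal statement describes (my illustration, not part of the original goal): “What is the longest straight rod that will fit inside a closed box measuring 3 ft by 4 ft by 12 ft?” The theorem is never named, and the student must first see that the answer is the box’s space diagonal, found by applying the theorem twice:

```latex
% Worked sketch of the hypothetical box problem above (illustrative only).
% Step 1: the diagonal of the 3 x 4 floor is the hypotenuse of a right triangle.
\[ d_{\text{floor}} = \sqrt{3^2 + 4^2} = \sqrt{25} = 5 \]
% Step 2: that floor diagonal and the 12 ft height form a second right triangle
% whose hypotenuse is the space diagonal -- the longest rod that fits.
\[ d_{\text{rod}} = \sqrt{d_{\text{floor}}^{\,2} + 12^2} = \sqrt{25 + 144} = 13 \text{ ft} \]
```

A student who merely knows a² + b² = c² can still miss that it applies here; a student who understands it can explain why de-composing the box into two right triangles makes the theorem usable twice – exactly the ‘understanding verbs’ at issue.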
Arguably one reason why so many local math, science, and history exams are not sufficiently rich and rigorous is that they haven’t been built and tested against such explicit goal statements. So, they end up testing discrete bits of content. Once teachers become explicit about priority long-term goals – e.g. ‘non-routine problem solving’ in math and ‘thinking like a historian’ in history – they quickly see that few current exam questions ever get at their most important purposes. Rather, the tests are testing what is easy and quick to test: do you know bits of content?
In sum, genuine goal statements, as I have long stated – and as Tyler argued 70 years ago – are not written primarily in terms of the content by itself. They are written in terms of uses of the content, contexts for the evidence, and/or changes in the learner as a result of having encountered the content. Here are a few helpful goal-writing prompts that show how this can make your goals better:
- Having learned ______________[the key content], what should students come away able to do with it?
- By the end of the course, what should students be better able to see and do on their own?
- How should learners be affected by this course? If I am successful, how will learners have grown or changed?
- If those are the skills, what is their purpose? What complex abilities – the core performances – should they enable?
- Even if the details are forgotten, in the end the students should leave seeing… able to…
- Having read these books, students should be better able to…
- What questions should students realize are important, and know how to address more effectively and autonomously by the end of the course?
With better goals in hand, here are three simple audits, in no particular order, that you can do to self-assess the validity of your tests (be they traditional exams or cool projects):
The first audit:
- List your course and unit goals and number them. Be sure to highlight those understanding-related goals that involve the ‘verbs’ mentioned above. Add more understanding-verb goals, as needed.
- Code your draft exam, question by question, against each numbered goal. Is every goal addressed? Are the goals addressed in terms of their relative importance? Is there sufficient demand of the understanding verbs? Or are you mostly testing what is easy to test? Adjust the exam, as necessary.
The second audit – ask these questions of your draft exam/project:
- Could a student do poorly on this exam/project, in good faith, but still understand and have provided other evidence of meeting my goals?
- Could a student do well on this exam/project with no real understanding of the course key content?
- Could a student earn a low score on the exam/project even though you know from other evidence that the score does not reflect their understanding and growth?
- Could a student have a high score on the exam/project merely by cramming or by just following teacher directions, with limited understanding of the subject (as perhaps reflected in other evidence)?
If YES, then the exam/project is likely not yet appropriate. (Note how the Rube Goldberg machine by itself fails this validity test, just as a 100-item multiple-choice test of the related physics also fails it.)
A third audit – forget the goals for a minute and just look at the exam/project:
- Honestly ask yourself, looking only at the exam/project: what might anyone reading this set of questions or prompts infer the course goals to be?
- If you think you may be deceiving yourself, ask a colleague to do the audit for you: “Here’s my exam/project. What, then, would you say I am testing for, i.e. what can you infer are my goals, given these questions and prompts?”
- Revise test questions or project directions, as needed, to include more goal-focused questions and evidence. Or, if need be, supplement your exam/project with other kinds of evidence.
- Next, consider point values: are your goals assessed in proportion to their priority, or is the question-weighting skewed toward certain goals or types of goals more than others? (This same question should be asked of any rubrics you use.)
- Adjust your questions, point values, and/or rubrics as needed to reflect your goals and their relative priority.
When you finally give the exam or do the project, code each question or project part in terms of which goal(s) that question/prompt addresses. (You might choose to keep the code ‘secure’ or make it transparent to students, depending upon your goals.) Once the results come back, there may be interesting goal-related patterns in student work that would otherwise remain obscured by looking only at the content of the question(s)/project.
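For readers who keep their exam blueprints in a spreadsheet, here is a minimal sketch of what the coding-and-weighting audit might look like if automated (in Python; every goal, question label, and point value below is invented purely for illustration):

```python
# A minimal, hypothetical sketch of the "code each question against a goal" audit.
# Goal names, question labels, and point values are invented for illustration.
from collections import defaultdict

# (question, goal it addresses, points) -- the teacher supplies this mapping.
exam_blueprint = [
    ("Q1", "Goal 1: recall of key terms",              5),
    ("Q2", "Goal 2: justify a claim with evidence",   15),
    ("Q3", "Goal 2: justify a claim with evidence",   10),
    ("Q4", "Goal 3: apply learning to a new context", 20),
    ("Q5", "Goal 1: recall of key terms",              5),
]

# Intended priority weighting of the goals (should sum to 1.0).
intended_weight = {
    "Goal 1: recall of key terms":             0.20,
    "Goal 2: justify a claim with evidence":   0.40,
    "Goal 3: apply learning to a new context": 0.40,
}

# Tally the points actually devoted to each goal.
points_per_goal = defaultdict(int)
for _question, goal, points in exam_blueprint:
    points_per_goal[goal] += points
total_points = sum(points_per_goal.values())

# Compare actual vs. intended weighting, flagging any goal with no questions at all.
print(f"{'Goal':<45}{'actual':>8}{'intended':>10}")
for goal, intended in intended_weight.items():
    actual = points_per_goal.get(goal, 0) / total_points
    flag = "  <-- unaddressed!" if points_per_goal.get(goal, 0) == 0 else ""
    print(f"{goal:<45}{actual:>8.0%}{intended:>10.0%}{flag}")
```

The point is not the tool but the habit: a simple table of question-to-goal codes and point values makes an imbalanced or goal-blind exam visible at a glance.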
PS: How often are typical blue-book finals used in college these days? How often do final exams ask only for recall of content? Far less often than many educators realize. Here are recent data from Harvard, followed by some illuminating writing on the subject of college exams:
Now, according to recent statistics, only 23% of undergraduate courses (259/1137) and 3% of graduate courses (14 of approximately 500) at Harvard had final exams.
Here is the original article in Harvard Magazine on the trend, the article that spawned a fairly significant debate in higher education.
38 Responses
The theory of this type of design is good, but I can’t shake the idea that we are STILL not teaching kids HOW to learn and HOW to create. We prod and coach them to complete projects and master tests that SHOW competence, but how do we instill in them a DESIRE to question… a DESIRE to learn? It is during “free times” that I am stunned by how much my students DO NOT know, and I always find myself begging them to NEVER quit questioning.
Lately, I’ve been really turned on by the counter-core-curriculum schools that have been teaching only reading/writing/arithmetic/rhetoric/classics to kids. The kids are actually learning Latin again. In the core curriculum, Latin has no substantial place, but I know it is Latin, and endlessly translating classics from Latin, that fueled my love of language and discourse. Please don’t fixate on the Latin, as it is only one example of things that kids don’t learn anymore that, as a thinking/feeling person, I strongly feel are hallmarks of a great education. Our kids are experiencing mental, multiple-choice atrophy as we keep revising and rewriting curriculum and tests. Maybe the problem is not with the curriculum or the tests in the first place.
But your lament is precisely my point: if one of our goals is to foster the desire to create, then we should hammer away at it. Alas, far too many teachers wish for creative work but don’t demand it, reward it, foster it, or make clear that it is a key goal.
Why question, for example, if nothing in the curriculum, instruction, and assessment demands, warrants, and incentivizes questioning? (See our book on Essential Questions and the last chapter on a culture of inquiry, related to this point.)
I think that’s a really great point, Grant. “Creative” work can be a key goal…”creativity” can be defined as a learning objective. We can teach kids how to be creative and it doesn’t mean just turning them loose to do “whatever” they come up with.
Absolutely! I never get why people say you can’t measure creativity. If you ask people to describe the features of creative vs. uncreative work, they easily come up with indicators and great examples.
Agreed. It’s one of my favorite examples to give in talks. I love to see the audience “come around” to agreeing that something that they think of as being abstract can be defined. Definitely an “aha” moment for those who have been taught to believe that the “really” important stuff, like creativity, can’t be defined or measured.
Thanks for the great blog, Grant. It’s one of my favorites.
Karen
I particularly appreciate the line about being “agnostic” about certain practices.
I think it is really important to look critically at all common practices that have a whiff of political correctness. Any technique is only justified by the goals. I have argued with the most hard-headed traditional lecturers and the most tender-hearted progressives on this issue, trying to get them to see that their stubborn commitment to an approach is blinding them to basic issues like validity or engagement or some other essential educational need. Of course, we can and should have strong views about the goals and their implications: it’s not at all obvious how much lecturing, or of what kind, should be done even in a course with lots of content. That’s, after all, what Eric Mazur’s work has shown: less lecturing, more targeted lectures in response to queries, and more interactive problem solving = better learning than all lecture.
Most wonderful.
Here’s to a ‘new’ year where we can get away with teaching like this… 🙂
Thank you.
Don’t know why you are name-dropping PARCC and Smarter Balanced – it’s not as if they are the end-all when it comes to assessment.
At any rate, I do think there is a significant difference between projects and these academic prompts. Most of the academic prompts were created to make assessment easier – not because they are a better type of assessment. These prompts tend to drift towards being too academic and not as tightly connected as projects.
oh, c’mon – that’s grumpy. It’s just a reminder to people at the local level that they need more rigorous assessments and that the new tests will be requiring more understanding-based prompts. It’s hardly name-dropping, nor is it a ringing endorsement. The facts are clear: local assessment for decades has been too soft in most schools. That’s the key leverage point in reform – local assessment.
Perhaps it is grumpy, fair enough. But you did bring those groups up for some reason. As to local assessment being soft, I would argue that getting the local community more involved in education (a la Mission Hill) is the leverage we need, not that of a national curriculum sent out from Washington, DC.
I’m all for community involvement, having helped a number of schools use locals as judges of performance and portfolios. But school staffs still do a weak job in general at ensuring the rigor and richness of local assessment. There is little oversight, and it’s arguably a key reason why standardized tests became as important as they did. (I worked with Deb Meier on this very issue at CPE, helped a number of Coalition schools develop alternative assessments, and developed and piloted 2 state portfolio assessments, so I know what can be done. Alas, many alternative assessment projects lack technical soundness, which makes it rough going when you lobby for such policies.)
Great post. As my semester exams have wrapped up, this gives me a lot to work with going forward on how to address weaknesses and re-think final exams for June.
One question — what to do when there is a huge range of ability in the same classroom? Is differentiation in exams ever advised? It’s hard to acknowledge that some students may never achieve that full understanding no matter how hard we try both in and out of class to support them. Am thinking of students in HS who already have ESL or big gaps in reading and writing and are years behind some of their peers…
My answer is boring: it depends upon your goals – and basic fairness to kids. First, let’s clarify a few terms. When we ‘differentiate’ a lesson or a test we offer VALID options. In other words, our goals are not sacrificed by the choices given. However, when we ‘accommodate’ we are basically setting up a different set of goals for those students – e.g. there is an official IEP or, informally, we’re cutting the kid a break because they are ESL in a ‘normal’ English class. Grading the students is a different matter entirely. I might not differentiate or accommodate but I might give the kid a grade based on their situation. (Obviously that’s problematic for the transcript and grade integrity, but that’s another issue for another day).
It’s especially problematic in HS because a kid 4 grades below grade level in a 10th gr English class cannot begin to read or write with sufficient ability to meet many course goals. However, some goals can be met – participation, multimedia exercises, etc. – and others can be attempted without accommodation. We do the best we can. However, I have often seen ‘differentiation’ not work out at all: the choices given to students are supposed to be ‘equal’ in terms of the challenge related to course goals, but they either sacrifice the goal in some choices – e.g. no real writing is required for the final project they choose – or the choices are just so different in rigor as to make the assessment not worth much (and thus make the grades screwy).
My own view is that the only long-term solution is to bring back an aspect of the 1-room schoolhouse. Instead of saying ‘all these kids are in English 10’ we say some kids in the room are in English 10a, others are in English 10b, and still others are in English 10c. Why penalize them just because the school is small, with no other teachers, and base placement on birthdays? Even in small international schools, there is placement in the language – at ASP there are 3 levels of French in each school division. Why not do that for English?
In any event, addressing the placement-fairness issue is the solution, I think, so that the teacher can have a more reasonable time of goal-setting and assessing.
Unfortunately, in this day of inclusion, placement or sorting will not happen. If we even mention sorting students based on abilities, we are looked upon as heretics. Some sorting occurs naturally in higher level courses because some students avoid advanced courses, but, other than that, we must be homogeneous. My own children went to high school in Germany for a year, and there, if you don’t test at a certain level, you go to a different (lower) school.
I think the pendulum is swinging back, and it must be so, since a standards-based diploma means that kids take the time needed to meet standards; and we place you not to dead-end you but to support you better to meet standards. Already happens in math and foreign language…
This is a timely post. I am the math department chair at an independent school that identifies itself – as most do – as a college preparatory school. I have requested that our curriculum council have a discussion about final exams. We say that we believe in final exams because our students will have to deal with them in college. However, many of our teachers, especially those of seniors, have triggers built in to excuse students from their exams. This says a couple of things to me. First, we are not in alignment with our school’s stated beliefs about exams. Second, many of our teachers think that they are either doing their students a favor (or, sadly, themselves a favor) by minimizing the number of final exams to process. I will look into the Harvard links to bring to the table as we discuss this issue in our January meeting. Thanks for the thought-provoking way to start the morning.
You’re welcome. I think the desire not to grade them is driving some of it, for sure, but I think there are also technical reasons to be wary of using them all the time and giving them high point values. And if achievement is really the point, the data are reasonably clear: more tests, more often, with fewer big-deal cumulative finals actually yields better results. (More feedback = more learning.) Math is the most common user of finals, BTW, in my explorations of the topic, in both college and secondary school. They are very rare in English, Philosophy, and the Social Sciences. As a prep school, you have a particular interest in getting some info on this from the colleges at which your kids matriculate.
Grant ~ thanks again for pushing us to consider the outcomes/goals and the indicators of our students’ understanding rather than the traditions that permeate our practice. The audits demand that we have integrity not only in what we say but in what we do. As we look at the feedback from our students provided by your survey, we must look in the mirror of our own practice and challenge ourselves to improve the learning opportunities we provide to our students.
Happy New Year and thanks for always providing the blueprints.
You’re welcome! And I’ll be interested to hear local thoughts on the student survey results.
Two things:
1. I agree that all written tests should be within the upper levels of Bloom’s Taxonomy. Simple recall is unimpressive (e.g., Jeopardy … most of the time). I teach 6-12 orchestra, and my written tests (beyond playing evaluations) assess students’ ability to decode, analyze, and synthesize musical elements. Simple recall will not help them, because I am requiring them to create something that is not explicitly in front of them.
2. I think the approach of some courses of having very minimal summative assessments (e.g., midterm and final exams) does a huge disservice to students – but this point has been discussed at great length by others. These final exams can often end up being the “catch-all” of the curricula. I have heard some students say, “We NEVER discussed that in class,” and the teacher respond, “Well… it was in the book.” This is clearly an example of the ‘means’ not justifying the ‘end’, which backward design helps remedy. Exams should not be a “GOTCHA!” I also believe “typical exams” are unnecessary. If various benchmark assessments are used at the ends of units throughout the course, the final exam can be used to demonstrate the upper tiers of Bloom, requiring the student to manipulate and synthesize knowledge in order to solve a content-specific, real-world problem.
I am personally not a big fan of big-deal ‘finals’ but that’s a function of my teaching of English and Philosophy, where they are pretty pointless. But I recognize that in some courses, such as math and foreign language, it is often necessary to double back on various discrete pieces.
The real reform comes when you do the course-goal exercises I alluded to, however. Then, many current quiz and test practices look misaligned. As I have often written in this blog, math is arguably the worst offender in failing to honor its mission to focus on problem solving vs. drills on discrete bits.
Kevin, I liked what you wrote in your second point and plan to use it in a talk I have to give on assessments/benchmarks tomorrow. Thanks! (and awesome post, Grant!)
Good stuff. Nothing to disagree with here. I would offer an insight though, maybe better labeled as an irony. I’m sure you’re familiar with andragogy and the educational values that undergird the theory; namely, that the only type of learning worth assessing is that which is problem-based, highly relevant to life, and sensitive to the experiences of the learner in his/her context. Alas, when the adults are the learners they demand that these values be honored when it comes to assessment, yet often don’t adhere to the same values when entrusted with the care of students. Assessments are often poor, too content-centered, and so on… (I think teachers’ psychological need to control – within an environment of isolation – is the culprit here).
Profound reflection on the part of educators must take place to capture this irony, usher it out into the open, and change teaching practice accordingly.
Thanks.. great post. I hope it’s okay to steal some of these questions for my curriculum team!
‘Steal’ away! Agreed on the irony. It’s a bitter irony in light of the uproar now over accountability testing as part of teacher scores since for 100 years the students were in the same boat, with no mouthpiece to be heard.
Continuing here for just a moment – the roadblock to reform here is two-fold: 1) teachers aren’t good at asking the kinds of questions you ask above, and 2) when encouraged to ask these types of questions, they aren’t given enough intellectual space to understand why they are important, let alone develop coherent answers. Instead, they are often expected to produce ‘data’ from each of their meetings to validate collaboration time spent for the school improvement plan. Struggle and confusion that are products of meetings are often not acceptable data (they should be!). This race to produce raw data creates a mindset of “What do we need to produce from this meeting to justify time spent?” This mindset often kills the natural curiosity necessary to pursue inquiry.
Here is where as a social studies teacher (philosophy and government) I go crazy because it just may take three sessions of thinking and processing to even UNDERSTAND THE PROBLEMS WITH OUR ASSESSMENTS, before we even think about embarking on solutions. It requires excruciating patience and humility to position oneself intellectually to make change in this area. So maybe the issue with education reform is that it is moving TOO quickly and needs to slow down. This is a profound leadership challenge. If only Philosopher Kings ran the schools!
Dan: You raise a few fascinating issues. I have long been puzzled as to WHY teachers aren’t good at asking these kinds of questions. Indeed, it has been the puzzle that guides all my work: ‘coverage’ can’t possibly work as education, but people do it all over the world; teachers should ask basic questions about goals and their implications, but they don’t. My work is really an attempt to inquire into this puzzle. I also agree that the word ‘data’ clouds the issue because it makes people think in reductionist rather than expansive terms (even though the idea was a good one, i.e. of getting teachers to be evidence-based instead of hunch-based.)
The problem is old, of course. You can read bitter criticism of educator thoughtlessness in Plato, Locke, Kant, Hegel, and Dewey. But there isn’t much that really explains its pervasiveness.
One fruitful approach for me the last few years has been to use more explicitly some of the design-thinking, creative-problem-solving, and Synectics approaches that have been around for decades – all designed to help people overcome thinking in ruts and poor group processes. Also, all of these approaches begin from a premise that you (and I) bemoan: jumping to solutions (or doing nothing) instead of asking: what’s the problem? What’s symptom vs. root cause?
Thoughtless and purposeless but well-meaning actions: the bane of our existence, in and beyond education. (See some of my posts on thoughtlessness if you haven’t already: this was actually my dissertation topic many moons ago.)
Since you know philosophy, a great place to begin is the Meno from Plato and Heidegger’s great little book What is Called Thinking?
Grant – Yes, yes, yes on all fronts. To say in a different way what was said above, my sense is that the thoughtlessness and purposelessness that pervade teacher-work are symptoms of the fact that sustained, deep thinking on any issue is a perceived waste of time WHEN COMPARED with spending time doing the other ‘stuff’ that is easier to quantify. This is because the little curriculum-team voice in our head is always saying, “How are you going to measure that? What percentage mastery are you looking for?” We are answering those questions as scientists and our thinking is very limited…
But now look at one of your questions…
Could a student do poorly on this exam/project, in good faith, but still understand and have provided other evidence of meeting my goals?
This question I would characterize as intellectual disrobing! It exposes the thinking of the teacher, the values, the decisions he/she made in how the lesson was structured. It forces the teacher to CONSIDER the possibility that one of HIS students could actually bomb an exam but still understand the material. It reveals the deep profound decisions that were made and calls them into question. This is critical thinking; this is philosophy; and it hurts.
Ultimately, the school leadership must model this critical thinking to shape the prevailing mindset in the building, which in turn shapes the format of the teacher-work and the conversations that ensue. When I first started teaching I thought administrators didn’t matter; now I think they are the only solution! I’m throwing out some of your questions tomorrow to see what happens.
thanks for the resources… I looked at the thoughtlessness post before but I’ll check it out again. I’d like to check out more of your earlier work too. It’s so refreshing to get this perspective…
Some very thoughtful musings and suggestions here. Where I am struggling is not just with final exams but with culminating assignments as assessment of learning. In reality, today’s students will not, in the future, learn in a solitary environment. Therefore, why do we artificially create assessments that generally discourage or flat-out deny collaboration as an integral part of the final product? Do you have any thoughts to share here?
There was nothing in my argument that precluded ‘finals’ from being collaborative efforts, open-book, or questions known in advance (so as to promote study groups). I think these kinds of practices are far more common in college now than many HS people realize.
On the other hand, we can’t lose sight of the validity issue re: the individual student. I need to justify any score/grade/mark/judgment about individual achievement, so I need some way to sort out the individual’s role in group tests. One method we used 2 decades ago in the NSF-funded math and science performance assessment in Connecticut was to have a solo parallel quiz go along with all group tasks, with the quiz involving the same content knowledge as was required for the task.
One other thought: what they always said at School Without Walls, the alternative school in Rochester: the final project is not the chair you built; it’s the learning about the chair you built.
Hi Grant,
Thanks for an excellent blog!
Just reread your “wordy”, but helpful, rewritten goal describing understanding of the Pythagorean Theorem. As a history teacher, I’d be curious to see what your description for understanding the Civil War would say.
I’ve sat in countless common assessment creation meetings with colleagues. Often, as Dan F described above, with an expectation that something ‘clean’ be produced as evidence that the meeting was productive.
What it means to understand topic x is rarely discussed. Instead, the discussion typically goes like this: Since we are assessing understanding of World War II (are we?), we need to ask students some questions about Hitler, about Pearl Harbor, about D-Day. Who has ‘good’ questions about these topics? Ok, let’s put those into a new word doc. Let’s write some questions about the atomic bomb.
A messy discussion about goals and attempts to articulate what it means to understand a topic, such as World War II, are much more common, in my world at least, on blogs or on twitter.
Joe: Thanks for this practical insight, namely, that folks need a protocol for having the right conversation in meetings about exams and courses. Clearly, having the proper goals in advance of such talk is a key part of the solution, but as you remind us, the talk quickly turns to content without some process for avoiding such a focus. I want to give this some thought; readers, do you have or know of a good process for making such discussions more focused on the issues of sound design?
I think this is an important point. Many of my peers have aced their multiple-choice Spanish exam (a language they couldn’t speak a word of) or failed an essay final in a math class. The tests should reflect the type of material taught, and should test mastery, not memorization!
It might be just me – but my students will often ask, “Is this going to be on the exam?” Or, “Is this assignment for a grade?” How do we get students to shift their focus to understanding?
Rest assured, it is NOT just you. We had this very conversation in 3 meetings today at the American School in Paris. While at some level it is a sensible question – how important is this assignment, really? – at some point we have to take a SCHOOL-WIDE stance on extrinsic motivation and narrow utilitarian thinking by kids. We have to act as one as a staff, supported by admins and by clear program and course goals that communicate the alternative: attitude matters; the desire to learn and be curious matters; parents need to help us in this; etc.
I have a vivid story on this front. At my last school, I had a student ask to meet with me after receiving a disappointing grade on an assessment. She started the conversation by asking what she could do to improve her grade. I asked her to reframe this conversation by asking what she could do to improve her understanding. Her response was “Colleges have no idea what I understand. They know what my grades are.”
And of course, she is correct. Because historically we have not viewed grades as anything more than a judgment in relation to a vague summary of content knowledge as reflected in the course title.
Interestingly, this issue would be moot if grades reflected understanding, wouldn’t it?
I’m puzzled. It seems to me that college application requirements drive what high school teachers emphasize in their teaching. High school graduation requirements, in turn, drive what is emphasized in middle school. Middle school content influences what is being taught in elementary school. I’m not sure if what I am observing is accurate. Feedback, anyone?