What is “authentic assessment”?
Almost 25 years ago, I wrote a widely read and much-discussed paper entitled “A True Test: Toward More Authentic and Equitable Assessment,” published in Phi Delta Kappan. Download it here: Wiggins.atruetest.kappan89 I believe the phrase was my coining, made when I worked with Ted Sizer at the Coalition of Essential Schools, as a way of describing “true” tests as opposed to merely academic and unrealistic school tests. I first used the phrase in print in an article for Educational Leadership entitled “Teaching to the (Authentic) Test” in the April 1989 issue. (My colleague from the Advisory Board of the Coalition of Essential Schools, Fred Newmann, was the first to use the phrase in a book: a 1988 NASSP pamphlet entitled Beyond Standardized Testing: Assessing Authentic Academic Achievement in Secondary Schools. His work in the Chicago public schools provides significant findings about the power of working this way – Authentic-Instruction-Assessment-BlueBook.)
So, it has been with some interest (and occasional eye-rolling, as befits an old guy who has been through this many times before) that I have followed a lengthy back-and-forth argument on social media recently about the meaning of “authentic” and, especially, the idea of “authentic assessment” in mathematics.
The debate – especially in math – has to do with a simple question: does “authentic” assessment mean the same thing as “hands-on” or “real-world” assessment? (I’ll speak to those terms momentarily.) In other words, in math, does the aim of so-called “authentic” assessment rule in or rule out the use of “pure” math problems in such assessments? A number of math teachers resist the idea of authentic assessment because, to them, it inherently excludes the assessment of pure mathematical ability. (Dan Meyer cheekily refers to “fake-world” math as a way of pushing the point effectively.) Put the other way around: many people define “authentic” as “hands-on” and practical, in which case pure math problems are ruled out.
My original argument. In the Kappan article I wrote as follows:

Authentic tests are representative challenges within a given discipline. They are designed to emphasize realistic (but fair) complexity; they stress depth more than breadth. In doing so, they must necessarily involve somewhat ambiguous, ill-structured tasks or problems.

Notice that I implicitly addressed mathematics here by referring to “ill-structured tasks or problems.” More generally, I referred to “representative challenges within a discipline.” And notice that I do not say that it must be hands-on or real-world work. It certainly CAN be hands-on but it need not be. This line of argument was intentional on my part, given the issue discussed above.
In short, I was writing already mindful of the critique I, too, had heard from teachers of mathematics, logic, language, cosmology and other “pure” as opposed to “applied” sciences in response to early drafts of my article. So, I crafted the definition deliberately to ensure that “authentic” was NOT conflated with “hands-on” or “real-world” tasks.
My favorite example of a “pure” HS math assessment task involves the Pythagorean Theorem:

We all know that A² + B² = C².  But think about the literal meaning for a minute: The area of the square on side A + the area of the square on side B = the area of the square on side C. So here’s the question: does the figure we draw on each side have to be a square? Might a more generalizable version of the theorem hold true? For example: Is it true or not that the area of the rhombus on side A + the area of the rhombus on side B = the area of the rhombus on side C? Experiment with this and other figures.

From your experiments, what can you generalize about a more general version of the theorem?

This is “doing” real mathematics: looking for more general/powerful/concise relationships and patterns – and using imagination and rigorous argument to do so, not just plug and chug. (There are some interesting and surprising answers to this task, by the way.)
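For readers who want a nudge toward one such answer, here is a minimal sketch of the underlying scaling argument, assuming the three figures erected on the sides are similar to one another: the area of any plane figure is proportional to the square of a chosen side, with the same shape constant k for all similar copies. Then, by the theorem itself,

$$\text{Area}_A + \text{Area}_B = kA^2 + kB^2 = k(A^2 + B^2) = kC^2 = \text{Area}_C.$$

On this assumption, squares are not special: rhombi, semicircles, even irregular figures all work, provided the three figures are scaled copies of one another.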
Real world and hands on defined. While I don’t think there are universally accepted definitions of “real-world” and “hands-on,” the similarities and differences seem straightforward enough to me. A “hands-on” task, as the phrase suggests, is to be distinguished from a merely paper-and-pencil, exam-like task. You build stuff; you create works; you get your hands dirty; you perform. (Note, therefore, that “performance assessment” is not quite the same as “authentic assessment,” because the performance could be inauthentic.) In robotics, life-saving, and business courses we regularly see students create things and use their learning as a demonstration of practical (as well as theoretical) understanding – transfer.
A “real-world” task is slightly different. It may involve mere writing or a hands-on element, but the assessment is meant to focus on the impact of one’s work in real or realistic contexts. A real-world task requires students to deal with the messiness of real or simulated settings, purposes, and audiences (as opposed to a simplified and “clean” academic task with no audience but the teacher-evaluator). So a real-world task might ask the student to apply for a real or simulated job, perform for the local community, raise funds and grow a business as part of a business class, make simulated travel reservations in French to a native French speaker on the phone, etc. Indeed, a real-world task for a budding mathematician would be to present original research to a panel of mathematicians.
Here is the (slightly edited) chart from the Educational Leadership article describing all the criteria that might bear on authentic assessment. It now seems unwieldy and off in places to me, but I think readers might benefit from pondering each element I proposed 25 years ago:
Authentic assessments –
A. Structure & Logistics

1. Are more appropriately public; involve an audience, panel, etc.

2. Do not rely on unrealistic and arbitrary time constraints

3. Offer known, not secret, questions or tasks.

4. Are not one-shot – more like portfolios or a season of games

5. Involve some collaboration with others

6. Recur – and are worth retaking

7. Make feedback to students so central that school structures and policies are modified to support it

B. Intellectual Design Features

1. Are “essential” – not contrived or arbitrary just to shake out a grade

2. Are enabling, pointing the student toward more sophisticated and important use of skills and knowledge

3. Are contextualized and complex, not atomized into isolated objectives

4. Involve the students’ own research

5. Assess student habits and repertoires, not mere recall or plug-in.

6. Are representative challenges of a field or subject

7. Are engaging and educational

8. Involve somewhat ambiguous (ill-structured) tasks or problems

C. Grading and Scoring

1. Involve criteria that assess essentials, not merely what is easily scored

2. Are not graded on a curve, but in reference to legitimate performance standards or benchmarks

3. Involve transparent, de-mystified expectations

4. Make self-assessment part of the assessment

5. Use a multi-faceted analytic trait scoring system instead of one holistic or aggregate grade

6. Reflect coherent and stable school standards

D. Fairness

1. Identify (perhaps hidden) strengths [not just reveal deficits]

2. Strike a balance between honoring achievement and remaining mindful of fortunate prior experience or training [that can make the assessment invalid]

3. Minimize needless, unfair, and demoralizing comparisons of students to one another

4. Allow appropriate room for student styles and interests [ – some element of choice]

5. Can be attempted by all students via available scaffolding or prompting as needed [with such prompting reflected in the ultimate scoring]

6. Have perceived value to the students being assessed.

I trust that this at least clarifies some of the ideas and resolves the current dispute, at least from my perspective. Happy to hear from those of you with questions, concerns, or counter-definitions and counter-examples.


36 Responses

  1. Great summary and much needed reminder, Grant! As a comrade who was there from the beginning of this discourse, I’ve been making the same points ever since. The Dan Meyer blog is the best ongoing discussion these days, but we need way more of it in other disciplines as the new testing era heats up old questions and arguments.

  2. Thanks for clarifying an often misused phrase: “authentic” assessment. You’re correct that many educators think it has to be real-world. Ideas that I will take with me from this post are:
    Intellectual Design Features:
    2. Are enabling, pointing the student toward more sophisticated and important use of skills and knowledge.
    My takeaway: Assessments (formative and summative) are part of the ongoing cycle of student learning.
    3. Are contextualized and complex, not atomized into isolated objectives.
    My takeaway: There is a place for assessing itemized objectives, such as multiplication facts, operations on fractions, etc. However, although assessing discrete topics may be necessary at times, don’t mistake that type of assessment for “authentic” assessment.
    5. Assess student habits and repertoires, not mere recall or plug-in.
    My takeaway: Assessing students’ facility with the 8 CCSSM Practice Standards should be an important part of assessment.
    Grading:
    1. Involve criteria that assess essentials, not merely what is easily scored.
    My takeaway: There goes that word “essential” again. I’ve always struggled with the phrase “essential question” because it has become such an educational catchphrase that most educators (including me) don’t fully understand, causing it to be formulaic. I prefer to use the word “essence”. The word “essential” signifies to me something that is required and necessary. The word “essence” signifies the underlying meaning or feel of something. I would rephrase the above to say “Involve criteria that assess the essence of the topic being assessed.” (Perhaps you can change my mind on this, but I have heard the phrase “essential question” thrown around so much with little understanding that I rebel against the phrase!)
    Scoring:
    2. Are not graded on a curve, but in reference to legitimate performance standards or benchmarks.
    3. Involve transparent, de-mystified expectations
    4. Make self-assessment part of the assessment
    5. Use a multi-faceted analytic trait scoring system instead of one holistic or aggregate grade
    My takeaway: The above 4 are intertwined. It should not be a mystery to students what is expected of them. A rubric in which they self-assess and which is also scored by the teacher creates a dialogue in which student growth can occur.
    Fairness:
    1. identify (perhaps hidden) strengths [not just reveal deficits]
    My takeaway: Identifying students’ strengths is so often overlooked. I think that doing this would go a long way toward moving students to more success. Comments from teachers such as “Your work on [fill in the blank] makes it evident that you are very solid in your understanding of [fill in the blank]. Your next step is to use that understanding to stretch your facility with [fill in the blank]” give students a pat on the back, let them know that the math concepts they learn are connected, and give them a focus on a specific concept.
    As always, your posts give me food for thought.

    • Great musings on each idea. (In the article I attached I also offered commentary such as this; you and I are aligned on almost all points.)
      1. Having somewhat coined the phrase ‘essential questions’ for the Coalition of Essential Schools (and having recently co-authored a book with Jay trying, once again, to undo the bad uses of the phrase), I feel your pain very deeply. But I think the problem is not the phrase but the failure of people to use it properly. Many teachers think that the ‘essence’ of their lessons is whatever they happen to stress, so your fix doesn’t help end thoughtless use of the phrase, alas. Until and unless people get thoughtful about the role of inquiry in achieving learning outcomes, they will simply fail to understand why the phrase is what it is. See my post on EQs.
      2. The Practice Standards are very likely to be completely lost unless people give grades against them. I wish the standards writers had made clearer what good design vis-à-vis the Standards looks like, as I have oft lamented. More generally, habits of mind should be assessed K-12 and reported out. A good place to start is the list from the Common Application teacher reference form for college. “Reaction to setbacks” is my favorite.
      3. There is indeed a place for discrete items, but students must learn – by the way tests are constructed and questions weighted – that genuine problem-solving performance on constructed-response questions is the ‘game’ and the quiz items are ‘drills’. (See my various math posts on this.)

      • In # 1 of your reply, you state, “Many teachers think that the ‘essence’ of their lessons is whatever they happen to stress, so your fix doesn’t help end thoughtless use of the phrase, alas. Until and unless people get thoughtful about the role of inquiry in achieving learning outcomes, they will simply fail to understand why the phrase is what it is. See my post on EQs.”
        Reading that statement gave me an aha moment, an “understanding”. Although I have read your books and your blog, I never really “understood” the importance of inquiry in student learning. As a former HS math teacher, I have to admit that I was certainly guilty of teaching students what I deemed they needed to know, based on standards that others had deemed important.
        I looked for a post on EQs, and the closest I came was your 9/12/11 post on “Working with Understandings”. The following paragraph hit me like a brick.
        “In other words, one of the great teacher misunderstandings is that we have to teach the understanding for understanding to occur. On the contrary, as mystery stories, movies, and (especially) video games reveal, the learner is not only perfectly capable of drawing appropriate inferences, such activity is key to increasing intellectual engagement and reducing the boredom of schooling. And ironically, the literature on student misconception reveals that in spite of clear “teaching” of big ideas, many students do not understand what they have been taught (even if they pass our quizzes).”
        I know from experience that I personally have to make sense of something in my own way before I “get it”. I “get” K-12 mathematics because I have had to understand it deeply in order to teach it. I have had to make the connections myself between the different strands of concepts and skills that build upon each other and inform each other. I continue to learn new connections every day, mainly by sorting through what I hear and read and fitting those new ideas, concepts, and skills into my framework of understanding. However, as a teacher, I always felt it was my responsibility to build that framework for them.
        This discussion has made me realize that I am incapable of building a framework of understanding for someone else. The intellectual engagement of students is built through getting them to think creatively. Thinking creatively only happens when a person has to find an answer to some nebulous, overarching inquiry question – a question in which the person has some stake, some interest in answering. Dialogue between students and teacher must regularly relate back to the focus of the inquiry, the essential question, as they work towards understanding. Duh!
        I’m going to pull out my Expanded 2nd Edition of Understanding By Design and give it another read. I now have a reason for wanting to put the pieces together, other than getting credit for a graduate course!

  3. What we grade or measure is what is deemed valuable. I seek to measure what I want to see the students achieve. I have to do this independently because the ‘gradebook’ measures only scores on homework (done or not, not accuracy) and quizzes/tests. My tracking is a much better way to inform my teaching. Getting the idea across to others who set policy and testing is the challenge here, but teachers must raise the cry that the current system is not adequate and that teaching to those assessments is more CYA than truly teaching students. That is the tension truly gifted and dedicated teachers grapple with every day.
    I agree with Elaine regarding the 8 practices. I look for growth/mastery in those categories. I believe those 8 practices stretch across all disciplines and if we would look to those goals, we wouldn’t need all those strands and standards! Thank you for your comments here. There is much to consider.

  4. Thank you for this post. I think it is clear what you mean by authenticity. I’m not sure everyone who uses the term shares your definition. There are a lot of ‘imagine you are a pool designer…’-type activities in maths. Indeed, many reform-oriented maths teachers often use the terms ‘real’ or ‘whole’ to describe a task, so this might be coming from a different place. It could be the conflation of pedagogy with epistemology that Kirschner, Sweller and Clark discuss in their 2006 paper: http://dspace.library.uu.nl/bitstream/handle/1874/16899/kirschner_06_minimal_guidance.pdf?sequence=1
    I am not personally convinced of the need for authenticity, even as you define it. I think that ‘inauthentic’ maths tests are an efficient way to assess a broad range of mathematical knowledge and skills. I don’t buy the ‘why would I care about this?’ argument. People perform a lot of tasks – such as completing crosswords – for sheer intellectual pleasure. Motivation resides elsewhere.

    • I think your point is well taken that motivation resides elsewhere. My argument on behalf of authenticity is more about epistemology and transparency: kids need to do what real adults do in each field, and complex performance is what they do, not exam questions. It’s the distinction between the game and the drill that I often make: far too many math tests are not tests of genuine ability in using math to solve problems. So it’s no wonder that kids often dislike math and do not see it as a place for inventiveness, for example. Sure, traditional tests are efficient; but they are not sufficiently valid as measures of the long-term purposes of mathematics education.
      Thanks for the link to the paper. I knew of it through Hattie’s summary of these findings only. I’ll review it closely. The devil is in the details of how much guidance to provide, and how to engage in gradual release of responsibility in a manner that leads to competent autonomy and transferable expertise.
      PS: in reviewing the paper it is clear that my slant fits, since the authors note that transfer is key but that unguided work doesn’t foster it.

      • I think that the main issue here is the conflation of pedagogy with epistemology. For instance, an authentic mathematics task may not necessarily be linked to a mundane usage – even if this is how it is often interpreted – but rather, it could be seen as something that a real-life mathematician might involve herself with.
        For me, this still reduces complex ideas to issues of employment and risks turning schools into solely a preparation for work rather than a preparation for making our students’ minds “an interesting place to spend the rest of their lives”. Mathematics is an abstract science, not simply ‘what mathematicians do’. However, this is a little philosophical and there are more practical concerns.
        In suggesting that the best way to learn mathematics is by adopting the processes of a mathematician, we are assuming that the best way to LEARN mathematics (pedagogy) is by employing the processes that an expert uses to DO mathematics (epistemology). In this regard, I find the argument of Kirschner, Sweller and Clark compelling; novices do not have enough domain specific knowledge in their long term memories to be able to solve problems in the way that experts can. Therefore, attempts to use this as a pedagogy will either lead to cognitive overload and confusion or will need to be mitigated by the teacher to a point far removed from “authenticity”.
        For instance, rehearsing multiplication tables is an unlikely thing to find a mathematician doing. Yet such automatised knowledge is important to building expertise. Once you simply “know” these tables, you no longer have to use working memory resources in order to work them out. This enables you to focus on the deeper structure of a problem. This is why, I believe, the direct instruction treatment in Project Follow Through – which emphasised such basic skills – not only showed the greatest gains in basic skills but also showed the greatest improvement in problem solving.
        I am not aware that you have ever suggested that multiplication tables should not be practised. However, you do seem to dislike drill, plug-and-chug, etc.; concepts that I would rather term ‘practice’. And I would also emphasise that these aspects need to be explicitly taught and explicitly assessed. I worry that well-meaning attempts to make mathematics more engaging by using ‘authentic’ tasks and assessments actually de-emphasise basic skills and therefore leave students badly served. Although only correlational, the decline in Canadian PISA scores since the introduction of a more inquiry-oriented form of mathematics in most provinces from 2003 onwards suggests that the issue is worthy of our concern. This is made particularly poignant when we realise that PISA maths is based upon the RME program in the Netherlands, which emphasises realistic uses for mathematics, in contrast to TIMSS, which asks standard academic questions (and in which Canadian provinces have also declined).

        • No, I have never said that practice is unwise. I have repeatedly referred people to athletics and music as realms in which we see the importance of practice. My beef is in the other direction, as I have often said: in far too many math classes it is ALL de-contextualized practice; it is all plug and chug, driven by discrete topics with no overall ideas or purposes. I have never advocated for “discovery learning” either. What our focus has been on for decades is aiming for transfer with conceptual understanding, based on a clear view of the ultimate performance target – the “real” game of learning and performing. And I agree with you that simply because, epistemologically, the nature of math performance and knowledge-building is what it is, it doesn’t follow that pedagogically one should mimic experts. That’s never been the issue. On the contrary, the aim is to help students realize the value of the times tables in the service of doing real math.
          In that sense, what I am advocating is no different from what Suzuki violin, soccer, reading, and painting teachers have done for decades: intermingle practice with meaningful performance tasks that provide purpose. No student – except the most gifted or persistent – can keep doing meaningless work for years, any more than faculty can in PD sessions that do not link to their concerns.
          Perhaps if you were to spend more time in art and PE classes, as I have, you would see that teaching in those areas has blended these concepts quite well – the balance I am discussing is far better there than in math classes. Strikingly – and not coincidentally – students universally report that art and PE are their favorite subjects in middle and high school, with math getting the lowest marks for interest value and “being made to feel stupid”. These are ancient criticisms of math instruction – they were made by Plato, Kant, and Hegel – and math people have rarely put their pedagogy to the test. Having visited classes and worked with teachers in every subject, I can say, unequivocally, that the best pedagogy is rarely found in math because of the way the march through discrete topics is typically conceived. And the poor results on our tests are, for me, the consequences. (The endless criticism of “inquiry-based learning” belies the fact that in most American classrooms it almost never exists, as the TIMSS videos and my own observations over decades reveal.)

  5. I’m going to take the easy way out and say that creating authentic assessments is a very tricky/difficult process fraught with disagreement. I do think that we need to look at how we would create authentic assessments for real-life jobs and try to apply that in the classroom. We ask for facts and memorization way too often, and we seem to go off on tangents just as often. What could we do to test those people that would really let us judge their mastery of that topic/subject?
    I do question how much multiple choice, true/false, and fill-in-the-blank tests are truly authentic. I don’t see how these show what someone knows and what they can do with that knowledge.
    I also wonder how much of our tests are set up to trick people (or even worse give them hints) and what is our motivation to use those types of questions.
    I do think we need to consider what is truly essential to know for a course and then how we can have students show mastery. I love the discussions and there are certainly no easy answers.
    Great post – lots to think about!

    • Gary, here’s the rub: multiple-choice tests are NOT – by definition – authentic. However, that doesn’t prevent them from being efficient and VALID proxies for authentic assessment. This is what so many educators who bash tests fail to understand. A test can be a valid but inauthentic measure of a stated goal; a test can be authentic but invalid as a measure of the stated goal. Simple example: as much as people dislike those analogy questions from the SATs, GREs, and LSATs, they are a very good measure of a student’s intellectual and academic ability; same with vocabulary tests. If student x in the 6th grade has the vocabulary of a 9th grader, then it is a pretty good bet that such a kid is very literate – a finding that can be made more “reliable” by adding more vocabulary questions and some reading-passage questions. These tests are nothing more than predictors and proxies, in the name of expediency of time and cost.
      Vice versa: just because a task is authentic doesn’t mean that it is a valid predictor of a specific outcome. Very often, in fact, an authentic task is messy and riddled with confounding variables. Thus, even though the task involves, say, “math problem solving” or “writing and speaking to an audience” and the use of some content, it doesn’t follow that we can infer, with confidence, the student’s level of achievement with those processes or that content – especially if the kid had help from peers and the teacher along the way. (Never mind that the result might be a one-time thing, hence “unreliable” as a score.)
      I wish more people understood this…

      • Great points – wow! Lots to think about. I guess I am looking for an authentic assessment to give me a more reliable predictor of success based on what is known. So an authentic assessment of a plumber/carpenter/etc. would hopefully tell me how well that person will do the job. They know their stuff and how/when to use it. Certainly not perfect and certainly not easy to come up with a measure other than work reviews (and that is open to lots of gaming of the system).
        I sort of agree with your M/C question points. I would certainly agree they may be part of authentic assessments – maybe showing basic knowledge. I really have a problem going any further than that. Having said that, I would not use those questions as my whole test. Part of the test – yes. This may be slightly off-task, but 360 math ( http://news.yahoo.com/stand-360-degree-math-revolutionizes-classrooms-220024511.html ) has the students standing up – sort of performing math problems for all to see. Maybe this type of demonstration is more authentic in many ways. I could also see this type of performance being used in other areas – maybe creating a timeline and leading a class discussion about the importance of the battles of 1066.
        I guess the real struggle is designing assessments that really matter. I wonder if interpretation of those results should also be more thoughtful…
        Thanks for another great post!

        • When I was involved with the great performance assessment work in CT, funded by NSF and led by Steve Leinwand, the rule was that for every major task there would be a parallel paper-and-pencil test of the content addressed in the task. This aided reliability and validity (by making sure there was info on each student, not just the group). Given the importance of the rule of thumb “triangulate the data,” it is important for educators to realize that ALL assessment has error and that using multiple measures and events is just sound assessment.

  6. Many of us in Iowa are engaged in authentic intellectual work, using a framework created through Fred Newmann’s work and with him personally. Our work is quite tight with most, if not all, of the points you made. Very satisfying to read and think about.

  7. What I hear from the field is that they do not have time to do “authentic assessment” because they have to cover the content for the prescribed test.
    How do we help people see that “authentic assessment” not only covers the content, but asks learners to get in and really know the content?

    • Well, this is the key question: how to get people to see that nervous ‘test prep’ has little chance of causing the greatest gains since the work will be superficial and unengaging. To answer your question: it takes calm and committed leadership to say that good teaching, learning, and assessing remains the best way to handle standards and tests.

  8. I was just part of a dialog about what makes for an authentic audience. This is a timely piece regarding authentic assessments that informs that conversation. Looking forward to its continuation.

    • To me it’s really plural: authentic audiences. Some are polite; others not. Some are large; some are small. Some are on your side; others are against you. Some are expert, some are peers, some are very naive and unsophisticated on your topic. Ideally, you are prepared for the range and types, so that you understand how to adapt your ‘message’ to the different contexts. (And all these audiences can be simulated, of course.)

      • One of the many points that stood out to me is the idea that authentic audiences equate to “messy” work. That makes sense when I consider that rarely have I seen any part of an adult project be crisp and free of bumps along the way.
        I can see the value of simulated experiences that replicate something beyond the classroom walls. Those simulated experiences that are well planned may be mere steps away from becoming truly authentic – and messy, of course 🙂

    • 25-year-old 20/20 hindsight would be nice in all aspects of life!
      Actually, it made great sense at the time because almost all tests were ‘unrealistic’ in schools.

  9. Reliability, validity, and triangulation are more difficult to achieve when scoring authentic tasks that are messy and multidimensional. If educators co-construct assessment criteria with students and provide exemplars, much better alignment occurs.
    Keeping a portfolio of student work, replete with records of formative, summative, and self-assessments against the rubric, would help to develop multiple sources of evidence by which a consistent level of performance may be discerned.
    The problem may be the sample of one.

    • I am in full agreement. It’s never a good idea to have a sample of one – whether we are talking blue-book exam or complex project. The portfolio reflects beautifully the idea of triangulation. That was supposed to be what happened in VT, but politics altered the selection process in a way that undermined its validity and reliability. But that experience should not stop us from exploring a sound portfolio process. We developed such a process for NC, which was piloted a decade ago in 7 districts. I thought it went well, and the feedback from districts was great. Alas, the push toward more testing of everything, in a conservative state, undid it. I then floated a modified version of it for NJ – which went nowhere beyond the groups that commissioned it (NJSPA and NJASCD). Time to try again?!

      • Although Vermont didn’t continue the portfolio system as a required form of assessment, many educators saw its value and continued to use it. Ross Brewer, who was instrumental in spearheading the portfolio movement in Vermont, founded the company Exemplars that continues to offer quality tasks with accompanying student work scored using the Exemplars rubric. There will always be politics that will derail good ideas on a large scale. Thankfully, there will also always be those educators who know a good idea when they see it and stick with it…not because it is required, but because it is good practice.

  10. Hi Grant, thanks for the nudge, here and on Twitter, back to this piece. I read it on Sunday and got muddled up in one place in particular:
    One, this post and its framework emphasize assessment, not curriculum. No doubt the two overlap in enough places to be useful, but I’m left muddled on the differences. In general, we don’t expect assessment to be the aspect of the learning trajectory that interests a student in a discipline, and we don’t expect assessment to be the aspect that teaches the discipline. In my series, I’m responding to people who assume that the only way to interest a student in learning mathematics is through tasks from the “real world.” (Maybe best described as the “material world.”)
    The questions, “What makes for authentic assessment of algebra?” and “What makes algebra interesting to a fourteen-year-old?” are different enough to matter, I think.

    • Hmm – I’m puzzled by your claim: I do, indeed, expect the assessment to make a difference to both the student’s interest and the way the curriculum is written to point toward the assessment. If the student knows that the grade depends upon solving cool problems, that will surely affect the student’s motives as well as the teacher’s design.
      That said, I strongly agree with your ‘fake world’ series. There are many fascinating puzzles, problems, and issues that are not immediately relevant to kids’ lives. I have always spoken of sports and games in this way: there is nothing ‘practical’ or ‘real world’ about improving one’s soccer or Halo ability. The same is clearly true of Harry Potter and The Matrix: imaginative and gripping fantasy-world stories are not ‘relevant’. Finding out what is truly intellectually engaging should be a felt obligation of all teachers. Indeed, our student survey answers are quite interesting in that regard: the question about the ‘most interesting’ learning activity yields fascinating answers. For example, 10 kids at one school identified the project of interviewing an addict for Health class; 12 kids at another school said the robotics project; dozens of kids said “dissection of animals” for science. The pattern is very clear: odd, raw, challenging, puzzling, competence-building experiences are motivating.

      • FWIW, these were the elements that caught my attention:
        3. Are contextualized and complex, not atomized into isolated objectives
        7. Are engaging and educational
        Declaring the task to be “engaging” seems to beg the question under discussion – “how do we make tasks engaging?” Declaring the task to be “contextualized” seems to cut against the conviction you express here that math tasks can still be fascinating and authentic with or without context.

        • Context here applies to issues of audience and purpose. That might still be important in math where the ‘audience’ is mathematicians or would-be employers. I think kids find such simulations/situations engaging.

      • Dan & Grant — In my content area, English, the assessment is often the aspect of learning trajectory that interests the students and is often the vehicle for teaching the discipline, too.
        Consider a case from my class of high school sophomores: in a unit on writing arguments, the final assessment was the students’ published editorials, presented in a New York Times “Room for Debate”-style format. Students worked in groups to generate guiding questions, and then wrote individual editorials in response to this question.
        The summative assessment was each student’s published editorial, which required several weeks of learning activities and formative assessments. These activities fueled student interest in the final products they eventually (and proudly) created.
        While this is only one case, I think that assessment should be tied closely enough to the curriculum so that an interesting assessment lends itself naturally to interesting curriculum.

  11. I enjoyed reading this post, as I enjoy reading all of Grant’s writings. He has inspired me to share my thoughts as well, even though I do not have an audience like such a well-followed writer/thinker does. My thoughts can be found at mooreperspective.wordpress.com
    Keep writing and sharing, Grant!

  12. Reblogged this on i-Biology | Reflections and commented:
    For a while I’ve been banging the drum about the importance of definitions, and I was reminded of this at the weekend as I took part in the #GAFESummit at CA and the whole-school PD session on Learning Principles. We have so much language to use in the educational context that it can get confusing as terms become popular and overlap.
    We need to define – and carefully use – terms on an institutional (or wider) level. What is inquiry? What does authentic really mean? How is it different to ‘real-world’ or ‘hands-on’? What do we really mean when we say ‘meaningful’ or ‘engagement’? Do we understand these terms in the context of someone else’s discipline?
    Read on as Grant Wiggins defines ‘authentic’ in the way that we should all understand it.
