Oh, you can't Measure THAT...
Mar 11, 2010
I hear this in almost every conversation about long-term educational goals. “Critical and creative thinking – oh, you can’t measure that.” Why are educators so quick to say such things? On its face the claim is odd: if we can measure AP art portfolios, the quality of an Olympic gymnastics routine, or music performance in New York State through its NYSSMA competitions, then we can surely measure critical and creative thinking – or any other goal typically found in school and district mission statements.

This past month I have been reading a great book that might be of interest to anyone who wants to get beyond a knee-jerk reaction about what can’t be measured. The book makes the point from the get-go, in its title: How to Measure Anything: Finding the Value of Intangibles in Business, by Douglas Hubbard. Don’t let the “in Business” part throw you off. Almost everything in the book speaks to educational outcomes.

So, where does he begin? By tackling our prejudices:

“For those who believe something to be immeasurable, the concept of measurement – or rather the misconception of measurement – is probably the most important obstacle to overcome.... The error is to assume that measure = certainty. The mere reduction of uncertainty will suffice for [most] measurements.”


Hubbard addresses the key misconception via his definition of measurement: “Measurement: a set of observations that reduce uncertainty, where the result is expressed as a quantity.”

The phrases “reduce uncertainty” and “expressed as a quantity” make clear that the point is improved precision and reduced uncertainty about how we’re doing in some important area. The goal is to get a useful measure of our key goals (critical thinking, creative thinking), not an infallible perfect measure. In fact, there are no infallible measures of anything worth measuring.

Hubbard offers a set of questions that everyone in educational reform and accountability ought to be considering when designing feedback mechanisms – but usually aren’t:

  1. What are you trying to measure? What is the real meaning of the alleged ‘intangible’?
  2. Why do you care? What’s the decision?
  3. How much do you know now? What ranges or probabilities represent your uncertainty about this?
  4. What is the value of the information? What are the consequences of being wrong and the chance of being wrong?
  5. Which information would confirm or eliminate different possibilities?
  6. How do you conduct the measurement to account for various types of avoidable but common errors?


He cites many examples of people who were consistently measuring the wrong things or trying to measure things that weren’t in the end worth measuring so complexly – once they addressed these questions. This is arguably the current reality of assessment in education!

The core tactic is to do what we always do in workshops: ask people to identify agreed-upon indicators of success or failure at the goal in question using a simple T-chart: what are example and non-examples of critical thinking? What are indicators of creative and uncreative thinking? The answers quickly make clear that the goal is measurable.

If we struggle to develop indicators and encounter people who persist in saying that something can’t be measured in quantitative terms, Hubbard offers us a devilishly clever, simple, yet effective strategy. Can you come up with your own 90% confidence interval for any measure? For example, if I asked you how confident you are about results from a multiple-choice test of creative thinking, you would likely retort: Oh, you can’t measure creative thinking with such questions! But I persist, show you a test, and ask: is there some number of questions answered incorrectly on this test that might convince you that the thinker is likely uncreative? You might haltingly respond that anyone who gets “most” wrong perhaps isn’t as likely to be as creative as someone who gets them all right.

What if I then say: how many wrong would it take before you are 90% sure? You might say: most = more than 20. But then you are acknowledging that the test measures creative thinking in some way. Again, the goal is reduced uncertainty in numerical terms, not perfection. (Note how this relates to all the quizzes we teachers give: what matters is not the raw score so much as the cut point at which we are confident we are distinguishing passing from failing work – not foolishly assuming that our point values are perfect and thus that 59 = failing.)
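Hubbard’s point – that a measurement only needs to shrink the range of plausible values, not eliminate uncertainty – can be sketched numerically. Here is a minimal illustration in Python (the 25-question test and the score of 20 are invented for the example, not from Hubbard) using a standard Wilson score interval: before any observation, a test-taker’s “true” proportion correct could be anywhere from 0 to 1; a single test score narrows that range considerably, which is exactly what Hubbard means by measurement.

```python
import math

def wilson_interval(correct, total, z=1.645):
    """90% Wilson score interval (z ~ 1.645) for the true proportion correct."""
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half, center + half

# Before the test, our uncertainty spans the whole range [0, 1].
# One observation -- 20 of 25 correct on a hypothetical test -- shrinks it:
lo, hi = wilson_interval(20, 25)
print(f"90% interval for true proportion correct: {lo:.2f} to {hi:.2f}")
```

The interval is still wide – roughly 0.64 to 0.90 here – but far narrower than “anywhere from 0 to 1.” That reduction in uncertainty, not a perfect score, is what qualifies the test as a measurement in Hubbard’s sense.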

Be forewarned: once you get into the middle and late chapters, a comfort level with statistical methods is helpful for understanding his arguments and examples. But the book pays dividends for those readers willing to persist. Highly recommended for anyone with a responsibility for educational results and/or measuring them. (Hint: that means all of us.)


Posted by: sean
Feb 27, 2011
This is a very helpful article. I think that most teachers would agree that they certainly can measure creativity and critical thinking. The issue for me is how to explain to a student's parents how we arrived at a particular score/grade on an assignment. Grades are super important given the competition for scholarships and acceptance into 'top' colleges and universities. The extent to which parents agree or disagree with a grade I assign to their child's work is based on their sense of what constitutes meeting an objective. They can argue my interpretation of objectives, for that matter. It is not unusual for me to be in the position of explaining to parents how I arrived at a particular grade as a measure of their child's work. It is my experience that parents feel much more comfortable with strictly objective grading procedures. They seem to accept a grade more readily when it is based on the answer key ('fact') in a textbook and not my 'opinion'.
Copyright © 2015 Authentic Education
P.O. Box 148, Hopewell, NJ 08525