I like Carol Burris and appreciate what she is doing to rally educators against what she sees as the errors being made by New York State officials and the implementers of Common Core Standards in general. As I have often said, it is up to every educator to be proactive about change, whether in their own classroom or nationally.
But a recent piece by her (and John Murphy), published in the Washington Post, does her cause no service at all. They argue that Common Core test questions are developmentally inappropriate. Alas, like too many other educators, she seems not to understand what Piaget meant by “concrete operational” thinking. Her complaint rests on the incorrect claim that Piaget showed pre-adolescents cannot engage in abstract and deductive thinking, and that the new Common Core tests therefore contain developmentally inappropriate questions.
First, here is the PARCC sample test item that she complains about:
Part B provides students with the following information:
The San Francisco Giants’ stadium has 41,915 seats, the Washington Nationals’ stadium has 41,888 seats, and the San Diego Padres’ stadium has 42,445 seats.
It then asks the following question:
Compare these statements from two students.
Jeff said, “I get the same number when I round all three numbers of seats in these stadiums.”
Sara said, “When I round them, I get the same number for two of the stadiums but a different number for the other stadium.”
Can Jeff and Sara both be correct? Explain how you know.
The authors report on an answer from a student they know:
This response by one of the 6th graders, an 11-year-old, provides insight into how this age group thinks: “No, I know this because they all round to 42,000.”
We know her response is typical for her age group (7-11) because of the work of one of the greatest childhood psychologists of all time, Jean Piaget. Piaget carefully and systematically studied the cognitive development of children. Before him, it was assumed that when it came to thinking, kids were not as adept as adults, but their thought processes were essentially the same.
Piaget disagreed. He discovered that the development of thinking is far more complex. He identified distinct stages of cognitive development that children go through as they mature, including, the ‘Concrete Operational’ stage (ages 7-11).
Students in this stage can engage in some inductive logic, but deductive logic, which is needed to solve problems such as the one described above, is beyond them. [emphasis added]
Alas, Burris and Murphy seem to have been taken in by the phrase “concrete operational” and thus seem to believe it means that young children cannot think abstractly and logically. On the contrary, “concrete operational” thinking is abstract and deductive; it refers to reversible mental operations like addition and subtraction. (“Formal operational” thinking, by contrast, involves extended conditional logic, e.g., algebraic thinking.)
For example, knowing the answer to “How many apples were eaten if the group started with 8 and ended with 2?” is a concrete operational deductive multi-step exercise, typical of math work in grades 3 and 4. In fact, arithmetic, chess, Sudoku, and inferential reading would all be impossible for 11-year-olds if the authors were correct about students at this age not being able to think abstractly and deductively.
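To make the point concrete: the disputed rounding item has a determinate answer that demands nothing beyond place-value rounding, exactly the kind of reversible operation Piaget describes. A quick sketch (in Python, purely for illustration; the seat counts are the ones quoted in the item above) shows why both Jeff and Sara can be correct:

```python
seats = {"Giants": 41915, "Nationals": 41888, "Padres": 42445}

def round_to(n, place):
    """Round n to the nearest multiple of `place` (e.g., 1000 or 100)."""
    return round(n / place) * place

# Jeff rounds to the nearest thousand: all three stadiums give 42,000.
jeff = {team: round_to(n, 1000) for team, n in seats.items()}
print(jeff)   # every value is 42000

# Sara rounds to the nearest hundred: two give 41,900, one gives 42,400.
sara = {team: round_to(n, 100) for team, n in seats.items()}
print(sara)   # Giants and Nationals: 41900; Padres: 42400
```

Jeff rounded to the nearest thousand; Sara rounded to the nearest hundred. Both are correct, and the item rewards students who see that “rounding” depends on the place value chosen.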
The authors then imply that because the test question and others like it are “developmentally inappropriate” children are being needlessly hurt by tests of the Common Core:
Many of the other [sample PARCC] tasks involve less abstraction, but are highly difficult. They are interesting questions that make adults stop and think. But as Piaget told us, children are not “mini-adults.” If a child is not developmentally ready, these problems will likely lead to frustration, discouragement and negative emotional reactions—which is exactly what parents are reporting.
But this is an odd argument. “If a student has to stop and think, then they will likely get discouraged and quit.” Really? Surely this underestimates the ability of children to accept an intellectual challenge, while letting teachers off the hook for not routinely challenging kids to think about what they learn. By their argument, any ELA question that asks students to draw inferences about a character’s mood or motives is equally inappropriate.
More generally, I find “developmentally appropriate” a very squishy phrase, used in too many cases to make learning less intellectually rich and challenging than it might be.
Let’s consider an alternative view, grounded in a class I just witnessed. Check out this picture:
The teacher is soliciting from her students the characteristics of a good discussion, prior to asking them to engage in self-sustaining conversation (a very abstract question). Students gave a bunch of good answers, as shown, and each was then assigned the job of safeguarding one of these rules via a slip of paper bearing its symbol (e.g., a lightbulb for ‘share ideas’, an ear for ‘listen respectfully’), symbols the students themselves proposed. The students then proceeded to consider the question “What’s similar and what’s different in this author’s 4 books that we have just finished?” (also abstract) for 20 minutes, without any teacher facilitation except a few comments at “half-time.”
Some of the student answers: they all had a moral or message; most involved a change to the main character; finding help from a friend was a common theme. Notably, there were some disagreements that had to be ironed out by the group. Most impressively, students in charge of the ‘compass rose’ symbol (navigation) proposed a few times that they thought the conversation was off topic – a highly abstract inference.
Age?
2nd graders. Doing an author study of Lionni.
We do our kids no service to shield them from intellectual difficulty. We too often wrongly think that most kids cannot think really hard. (The teacher began the discussion by saying “This will be hard, but I think you can do it!”) Indeed, as one of the girls said at the end of this lively and completely student-self-sustained and monitored discussion, “Can we please do this again? This was fun!” leading to approving murmurs from most of her classmates.
PS: Here are two released 2nd-grade tests that prove my point: even these younger kids are expected to do the kinds of problems I said above were appropriate for the tested grade, 4th grade:
California math test
Georgia math test
However, apropos of released tests, I am very disheartened to learn that Florida no longer releases its tests and item analyses, as it did for 10 years. (The links to those tests no longer work, either.) I will have more to say on test ‘security’ and fair accountability in my next post.
Meanwhile, here is an older test to show how releasing tests and item analysis can be so helpful – and should be done on moral as well as pedagogical grounds as I have long argued:
FL07_G6R_AK_Rel_WT_C003
PPS: And here is a way cool video of high expectations of little kids:
https://vimeo.com/38247060
PPPS: A few readers asked me for some references to Piaget to support my claim, above. A good source is one of Piaget’s early books, Judgment and Reasoning in the Child, which is far more readable than his later works which are laden with Boolean logic and fairly abstract analyses based on a lifetime of work. A second source is his paper on math education, one of the more important (but little-known) articles he wrote.
Here is what Piaget says about deductive ability in pre-formal children:
In formal thought the child reasons about pure possibility. For to reason formally is to take one’s premises as simply given, without inquiring whether they are well-founded or not. Belief in the conclusion will be motivated solely by the deduction…. Between the years of 7-8 and 11-12 there is certainly awareness of implications when reasoning rests upon beliefs and not assumptions, in other words, when it is founded on actual observation. But such deduction is still realistic, which means that the child cannot reason from premises without believing in them. [in The Essential Piaget, pp. 114-115.]
In fact, between the age of 7 and 11-12 years an important spontaneous development of deductive operations with their characteristics of conservation, reversibility, etc. can be observed. This allows the elaboration of elementary logic of classes and relations, the operational construction of the whole number series by the synthesis of the notions of inclusion and order, the construction of the notion of measurement, etc… Although there is considerable progress in the child’s logical thinking it is nonetheless fairly limited. At this level the child cannot as yet reason on pure hypotheses, expressed verbally, and in order to arrive at a coherent deduction, he needs to apply his reasoning to manipulatable objects in the real world or in his imagination. [from “Comments on A Mathematical Education” in The Essential Piaget pp. 729-730.]
PPPPS: Here are released items from the MCAS in Massachusetts (with % correct) that show conclusively that, while challenging, such items are not inappropriate even though they demand deductive reasoning of a few steps of logic (and two are similar to the disputed PARCC item in the Post article):
24 Responses
Philosophically, I agree with you completely. In reality, though, look at what happens when we test kids with these instruments that “challenge” them with “intellectual difficulty”: we use the fact that they may not be developmentally ready (on an individual level) against them. We are not using this test to individually assess the strengths and weaknesses of that student (not to a level that is helpful, anyway). We give a test that results in non-specific labels of the different test areas and the number of items missed in each area.

For example, my child took FCAT 2.0 last year in 9th grade. It is only the reading portion, since in high school we have math end-of-course exams. The report shows the parent that he achieved 6 out of the 7 possible points (state mean was 4) on vocabulary. He missed one item. But was this indicative of anything? Had he been tested on 20 words, maybe he would have missed 50%? Only testing 7 words doesn’t tell me much about his vocabulary.

And the only thing the school looks at is the bottom-line score. Now, they do plenty of item analysis, especially of the bottom 25th percentile. But that is item analysis on the diagnostic tests from the district. The state will not release the FCAT item results to the teachers. So, if the kids make a certain score, they pass. If not, they fail. No one says, “They did well on all the concrete operational questions and were more challenged on the more abstract ones.” Haha! No way. It doesn’t matter what you miss. If you score low, you are held back. Period.

So why should we hold kids accountable to anything “challenging”? Some are not ready. But maybe they can succeed in a class that challenges them, working hard and learning. A test that determines whether they move on to the next grade should be a measure of basic-level skills only.
I have long held that the tests are terrible as feedback mechanisms. See Educative Assessment and Assessing Student Performance, now 20 years old; and my article for Judah Schwartz’s volume on the immorality of test secrecy (referenced in earlier blog posts). All current large-scale tests can do is provide an audit. So, the paucity of information and the inability to see the items missed, etc. (never mind the states that forbid teachers from seeing the test!) is horrible. But that practice long pre-dates Common Core. It has been the norm in standardized testing for decades. And the reasons are purely money-related: they want to re-use the items – and, especially, the companies that make the tests want to protect their ‘intellectual property’.
In short, what educators should demand – and should have demanded decades ago – is that all tests be released as soon as they are given. This was the practice until recently in Massachusetts – not a coincidence, is it, that they have the highest performance in the country? – and has been true of the 100 years of the NY Regents exams. If teachers and schools are expected to improve based on test results, then the whole system has to be far more transparent.
PS: the way FCAT tests were released in the past was very transparent – the item, the % of kids who chose each answer, what the item was testing, and the cognitive demand of the item. Are they no longer doing that? That was a model for what I am discussing.
Are you kidding about FCAT being transparent? It has never been that way. They may have said they would be, but I have heard parents and teachers complaining and demanding for years and years that FCAT items be released so we can see what the children are doing, and they refuse, and always have. Maybe the first year they tried that? But I have never heard that before. And I’ve seen parents and teachers in the newspapers, in blogs, in letters, in political campaigns, etc., asking for transparency. We had mailings, phone-call campaigns, etc., all to stop this high-stakes nonsense. There is a huge group of parents and teachers in FL pushing hard.
So, I know you have written on the topic of assessment. But don’t you see, PARCC will assess this way too. And that is what the authors’ point was all about, imho. How can we hold kids accountable for things that they may or may not be developmentally prepared to do? They may not be articulating that so well, but that is what it comes down to.
This is incorrect. It is only true of FCAT 2.0, recently. I have all the tests from 2001–2011. Indeed, I often pulled from them to show student performance results in articles and blogs. This is a NEW decision by the current administration to save money; it should be protested. I have posted an old FCAT booklet to the current post.
Actually, I am so outraged by this move to tight test security, it will be the next post.
It has never been the case that teachers or parents can receive the tests back after they have been scored to see how the kids did. There are sample tests out to show examples of what is tested. There may be some way for a teacher to access past test items – not tests completed by a student, though. And this is protested. I have never been able to get my child’s test results. If you know how I can get those, I’m all ears. I have many, many parents who would love to request that. One particular thing that is especially bothersome is FCAT Writes. The way this is graded is so disturbing, and we cannot get the graded student essays back. How can we learn ANYTHING about how a student writes without seeing how they write?
Did you see what I posted at the end of this post? The test booklet, with both answers and %s for all questions. That was the norm through 2010.
I am investigating why they and a few other states stopped this practice.
I think we are miscommunicating. I know that sample test items (and it would seem past test items with the answers to the questions) were available to certain people – I don’t know if teachers were able to see this as they seem to say “no” on that.
What I am saying is that we cannot see Little Johnny’s graded test. Not the parent, not the teacher, not the school. And that is a problem. Sample test items and past test questions are meaningless as feedback about Little Johnny. We don’t know anything about my child’s vocabulary skills except that on one day in March he was able to identify 6 out of 7 vocabulary words correctly. Which? We have no idea. Now, maybe we can look back on 2013’s test and answers (although you are saying we cannot for 2.0). But that doesn’t help my child, his school, or his teacher. And it really doesn’t give us much to go on.
And I do not see that parents were given any opportunity to review actual test items and answers (besides what you post now). At least they were not immediately following that year’s assessment. I have never seen any note that we could review the test- and I’m fairly informed and involved.
It used to be the case that all the tests were posted within a year. That’s why I have them from 2005 – 2010. But that policy ended within the last few years. Little Johnny’s test cannot be seen, true.
Ok, just to make sure I wasn’t losing my mind, I posted a question to my FL teacher friends. I received 10 replies in about 10 minutes. I’m sure there will be more – I will let you know if any of them are any different. The only reply I am getting is that they do not get to see any test items, ever. This answer was the best one in terms of articulating the situation – all agreed: “We sign a security agreement that we will not read it, even if a student asks us a question about a question on the test. In the early years we were allowed to look at it and had a teacher comment page if we found anything wrong with any of the questions. The only time we can see a test is when they are no longer using that version and it is posted on the DOE website. We do not get an item analysis. I can see how they did by strand; i.e., geometry, measurement, etc.” In fact, 3 others mentioned having to sign security agreements. And many mentioned they have kids, and the report the parent and the teacher get is the same (i.e., 6/7 correct on the vocabulary subsection).
They don’t get to see the tests, Mr. Wiggins. They don’t get to see them before, during, or after – until long after. They can’t even assist a student when a question is flawed or the student is confused. They are told only to do the best they can.
I will let you know if I get any further answers that differ from this.
As I noted, this policy ended a few years ago. But they used to have access, and that’s why I have the tests and item analyses from 2005 – 2010.
And now one of the teachers I contacted says that Pearson is not allowing teachers or parents access to “Little Johnny’s” test answers due to “copyright infringement.” Apparently, there was some lawsuit.
It’s not right. I hope you’ll continue to explore the horrible way we are using tests in this state and in other states. Thank you.
That’s next on the posting!
Sent out a couple of tweets on this post. Fact is, I’ve been working with my grandkids on math lately. One is my 11-year-old granddaughter, because she struggles – or her teachers have made her feel like a struggler. When she was younger she loved math, and her teachers thought her exceptional; so did her grandpa, who is a math teacher. Then, she was a problem solver. Now she struggles because her teacher says there is only one way. Well, her teacher said that until I pointed out that it is an untruth. My granddaughter now likes math and is able to tell ME how Common Core math is different from her old math. Now that is cool!
And, my 6-year-old grandson is all over math concepts, looking at multiplication conceptually and searching for patterns within numbers by trying to understand the relationship of ages among family at a birthday party. And that, my friends, is without coaxing. We need to give our kids a better chance at being successful with math concepts than many of us had…because we were taught the one-way theory that dead-ended too many.
“‘If a student has to stop and think, then they will likely get discouraged and quit.’ … Surely this underestimates the ability of children to accept an intellectual challenge, while letting teachers off the hook for not routinely challenging kids to think about what they learn.”
Sitting for a high-stakes test (or a perceived high-stakes test) is not conducive to stopping and thinking, taking risks, or the give-and-take feedback that happens in a safe space.
Common Core has become synonymous with testing — too bad.
Although I agree with Grant on his take that we can raise the bar, the idea that 10-year-old students will look forward to the intellectual challenge of a standardized computerized test, in a silent room with no possibility of clarifying questions…something doesn’t seem OK with this scenario. Now, if the states could give elaborative group projects for assessing schools, that might inspire students.
Look, no one thinks tests are fun. The narrower question here, though, is whether they are valid indicators of what has been learned. I think in general they are, in the narrow sense of serving as proxies for level of ability. And it is also the case that when local assessment is more rigorous than the state’s assessments, kids do NOT find these tests stressful, just a tolerable burden. ‘That wasn’t so bad’ is a typical response by kids who come from demanding teachers.
Earlier this year, when my grandson was 5, he brought home something he made for MLK Day. He had cut out a large key with the word LOVE inside. My daughter asked him what this meant and he replied, “LOVE IS THE KEY.” Somewhat abstract, but nothing he does surprises me!
Your response to the WaPo article is cogent and important. Young children are capable of much higher thought processes than most believe. They certainly use deductive reasoning when they manipulate their parents!
On a more serious note we also know that judging higher standards by sample assessment items is dangerous and is often used as a means to discredit the standards themselves. Sorta’ like judging Affordable Health Care for all by pointing out everything wrong with healthcare.gov…
Thanks again for your insight and clarification of Piaget.
Great anecdote! Not to mention an important truth.
My last principal, and a long-ago math specialist who was REALLY into “brain-based” learning, had never heard of Piaget. Where I teach, anything is possible. I could tell you stories that would make you laugh and make you cry.
Well, at least anything is possible! That’s better than the opposite 🙂
Most people have never read Piaget; they have only heard the theory, and many of the 2nd hand accounts are inaccurate because the original is sometimes rough going.
These questions are quite boring and unrealistic. The first question about baseball stadiums is absurd and would never be discussed that way in any real-life situation.
I think the more academically-inclined kids will jump through the hoops and be able to answer these questions. Other kids, who might have other interests, may not. So is getting the question wrong due to a lack of intellectual development or a lack of motivation?
As an earlier commenter said, how fun is it sitting in an empty room taking a test? Surely, some kids are just looking to get done and not really worried about these artificial academic questions.
I am a 7th and 8th grade reading teacher in Colorado and appreciated your statement that with appropriately rigorous teaching “kids do NOT find these tests stressful, just a tolerable burden. ‘That wasn’t so bad’ is a typical response by kids who come from demanding teachers.” Finding that perfect level of challenge is a constantly adjusted goal for a classroom teacher. Learning to step up to a challenge, and how to deal with occasional frustration, are important lessons. Students gain confidence and academic skills by digging deep into a text, comparing the text to their own experiences and understandings, forming ideas and opinions about the text, and explaining their reasoning. Will they ever love high-stakes tests? Of course not. But they may love the challenge to show what they know.
I do, though, teach one class of students reading more than two years below grade level. An eighth grader reading at a fourth grade level finds most of the school day an ‘intolerable challenge’ and high stakes testing deadly frustrating. The best I can do is provide a lot of formative assessments and data to show each student the growth they are making. We look weekly, but still, they know they will NOT be ‘proficient’ on the days and days of state testing in the spring. That can be very, very hard for them. And middle-school students don’t deal well with frustration.
I understand. We are working in Prince George’s County, MD, a large mostly-poor county where a considerable number of students read well below grade level.
My only additional suggestion (besides your important one of measuring their growth as readers) is to do what the best 4th and 5th grade teachers do: provide plenty of opportunity to choose books on topics of interest and at their comfort level for some of the time. Without the urge to read there is little reason to persist as a struggling reader. I was not a strong reader in middle school; like many boys, I found reading many of the novels boring and girl-centric. It was only when I started reading baseball magazines and Alfred Hitchcock’s Mystery Magazines that I became interested in reading.
I have seen amazing things happen in the 5th-grade teacher’s classroom that I go to every few weeks locally. Four very reluctant readers now read so much that the teacher has to gently but firmly ask them to put their books down. I actually watched one boy crash into desks because he was reading while walking out of the classroom!
Getting kids to read and read well is the greatest challenge in education, I believe. Keep the faith!!
Just a story I can’t miss a chance to tell. My poor readers often come from pretty challenging homes and neighborhoods. They don’t feel much connection to happy-people-in-happy-places stories. Then I found a book by Gary Paulsen, “Paintings from the Cave”, written for children like these. Three novellas – I began reading a couple of chapters with the lights out to start the class each day. For the first time they began to believe in the reality of a character in a book, to care about a fictional character, to get angry and sad for that character. When a character talked about nightmares and fear in the night, a show of hands confirmed that 3/4 of my class knew how that felt. One tough, too-cool-for-school boy said, “and you can’t sleep again, because you can’t close your eyes, but it doesn’t matter ’cause you keep seeing it anyway in the dark.” His ability to share feeling afraid… magic. Yes, getting the right book in their hands makes all the difference.