Yes, we know: kids “don’t know much about history,” in the words of the immortal Sam Cooke. The weak NAEP results are basically unchanged since the test began – ironically, under Diane Ravitch. There might be another irony here: are the poor results perhaps related to history teachers talking too much – the question I raised in an earlier post, which generated more lengthy and thoughtful responses than any other?
What caught my eye, though, was one released item on the test:
[Screenshot of the released NAEP item: a song lyric that students must associate with the movement that used it.]
I am not confident we can infer that results on this question correlate with understanding of the 60s. As I noted in my series on the validity of questions and tasks, validity is tricky because “face validity” can mislead. However, it strikes me as likely that an 8th grader in 2015 could know a fair amount about 60s history and still get this wrong (which might call its validity as a question into doubt). All but the first answer are plausible responses IF you do not know this bit of song lyric (and there is no reason why you should).
Knowledge of the lyric is not what is being tested, of course: the challenge for the test-taker is to know, generally, that spirituals and mass singing were common to the Civil Rights era. Yet there is nothing specific enough in the lyric to link it easily to only that one choice. So, the item seems questionable to me – especially since we are talking about 8th graders here, not 11th graders.
So, I tweeted out my concern. Well, this is the beauty of the Internet: within an hour there were dozens upon dozens of responses, in a lively dialogue.
Let’s continue the dialogue here, shall we? Is this a sound test question or not?**
PS: I have more faith in NAEP than some of its critics. And many of the released items seemed just fine to me, for example:
[Screenshots of two other released NAEP items, one quoting the opening language of the Declaration of Independence.]
But this is why transparency in testing is so important. All tests should be released after having been given, as I have long argued and recently re-argued. Otherwise, we cannot have faith in the results – or demand better questions.


** I am fully aware that validity in a test is not actually measured this way. Individual questions and their results have to be correlated with many other results in order to determine if the question is a good one. In other words, what matters is not whether 44% of students got this question right, but which 44% got it right. If the students who got this right were also the same students who got other hard questions right, then validity can be established. Technically, the validity of the question is threatened only if the less able students got this one right and the strong students got it wrong.
This is why the famous pineapple test question in NY ELA could be a valid question even if a majority of kids were totally befuddled by it.
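
To make the footnote concrete, here is a minimal sketch (in Python, with invented data) of the kind of item analysis described above: correlate each student’s result on the item with their score on the rest of the test. Everything in it – the numbers, the tidy eight-student sample, the simple correlation – is hypothetical and far cruder than NAEP’s operational analysis; it is only meant to illustrate the distinction between how many students got a question right and which students got it right.

# A rough, hypothetical sketch of classical item analysis (a discrimination index).
# This is NOT how NAEP actually analyzes its items; it only illustrates the idea in
# the footnote: compare who got the item right with how those students did overall.

def item_discrimination(item_scores, total_scores):
    """Correlation between a 0/1 item score and each student's rest-of-test score."""
    n = len(item_scores)
    rest = [t - i for i, t in zip(item_scores, total_scores)]  # leave the item itself out
    mean_i = sum(item_scores) / n
    mean_r = sum(rest) / n
    cov = sum((i - mean_i) * (r - mean_r) for i, r in zip(item_scores, rest)) / n
    sd_i = (sum((i - mean_i) ** 2 for i in item_scores) / n) ** 0.5
    sd_r = (sum((r - mean_r) ** 2 for r in rest) / n) ** 0.5
    return cov / (sd_i * sd_r)

# Invented total scores for eight students, strongest to weakest.
totals = [18, 17, 15, 14, 9, 8, 7, 5]

# A healthy item: mostly the stronger students answered it correctly.
healthy = [1, 1, 0, 1, 0, 0, 1, 0]
# A suspect item: mostly the weaker students answered it correctly.
suspect = [0, 0, 1, 0, 1, 1, 0, 1]

print(round(item_discrimination(healthy, totals), 2))  # clearly positive
print(round(item_discrimination(suspect, totals), 2))  # negative: the footnote's red flag

A clearly positive value means the students who got the item right were, by and large, the stronger students; a value near zero or below is exactly the warning sign the footnote describes, no matter how many students answered correctly.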


10 Responses

  1. Thanks for sharing your perspective on this assessment. Our students took the NAEP this year, but we don’t see the results, so it is nice to have this analysis. I suspect you are right on the teachers talking too much. There is also a background knowledge deficit, as you suggest, which is attributed more to students’ lives outside of school.
    Regarding validity of standardized tests, I heard Dylan Wiliam speak earlier this year on the topic. One statement he made continues to spur thought: “There is no such thing as a valid assessment. It is our conclusions that are valid or invalid.” In other words, I took this statement to mean that tests test exactly what they are designed to test.
    Thoughts anyone? (Here is a link to the Storified tweets from his presentation: https://storify.com/MattRenwick/dylan-wiliam-what-assessment-can-and-cannot-do)

    • Dylan is correct – that’s why I put my asterisk in. Validity is about inference from results to some goal we care about measuring for. A question can yield valid inferences for 1 goal and invalid inferences for another goal. For example, you may be able to validly infer student recognition of the language from the Declaration of Independence in one of those other items I added, but you probably cannot validly infer that the student fully understands the Declaration of Independence, simply because they recognize the introduction to it. And that’s why I think the conclusion from the pineapple item in NY is a valid one: you could only get that right if you were not a literal 8th grade reader but, rather, understood the satire of it.

  2. Hi Grant,
    I think this is a great topic for a post (I’m biased as a social studies teacher/curriculum leader, of course). I agree on the issue of validity with the song and question structure. The way I see it, this is also indicative of a larger curriculum issue. I teach in Massachusetts, where the standards-based movement has been around for quite a while. The reality is, even here in the Commonwealth, that most students will not have had a sufficient experience with the civil rights movement until 10th or 11th grade. That is not to say that students don’t learn about U.S. history before high school. The reality is that social studies is quite peripheral during elementary and middle school grades. Especially with the strong accountability emphasis on reading and math scores, less and less time is devoted to social studies instruction. I think this would still be a problem even with highly valid test questions. You can have the most student-centered and engaging lessons on the topic, but if students are only getting history for 1-2 hours a week, they will not be successful on 8th grade NAEP questions such as the example you have provided.
    I think this is also indicative of the larger crisis regarding civic education. In Massachusetts, there is no requirement for civic education in high school, just a suggested government elective. While we do try our best to embed government and civics concepts within the U.S. history curriculum, the reality is that action-based civic learning is mostly absent from our high schools. I cringe when I hear the NAEP results for 11th grade history & civics each time they are released, but I am certainly not surprised.
    What is the issue here? Is it that we just don’t value social studies and civics as subjects the way we did 30 years ago? Is it that accountability subjects are sucking all the oxygen out of the room when it comes to instructional planning?
    It seems kind of ironic that we’ve just had this conversation about history being so lecture based and “old school,” when it’s quite possible that our students knew more about civics and government back when that instructional model was the norm than they do today.

    • Rob, the finding that kids do not know history is as old as the hills. It was decried in every decade from the 40s on – especially in the 60s. I don’t think lecturing or not has much to do with it. It has to do with a coverage mentality that flattens all information to the same level, with no priorities. It’s like a kid’s narrative: And then… and then… and then…
      I think your analysis about the reality of middle school history learning is more to the point – and germane to the NAEP findings.

      • Agreed. I think there’s a persistent belief out there, with regard to civics, that fluency has declined over the years, though the scores suggest a consistent flatline.
        I find it fascinating (and frustrating) that civics, and history as well, can be fantastic subjects for student-centered learning and improving literacy skills for students…and yet it is still compartmentalized as a curriculum subject.

  3. Back to the thought that history teachers talk too much… perhaps. There is still resistance from many teachers to moving to more student-centered learning and assessment. Students given the opportunity to review authentic sources, investigate scientific phenomena, or make inferences, rather than relying on rote learning, can transfer knowledge regardless of the assessment – authentic, norm-referenced, or competency-based. The reverse – memorization from lecture notes – may not be true. Memorization of dates and facts may not be deep enough learning to transfer to these types of questions. It seems this has been discussed for the past 20 years or more, yet the “sage on the stage” is still revered as a good teacher.

  4. The heading over the question that states what the question is supposed to be measuring (“Associate song with political movement”) suggests that the intent of the item is to see if students understand that a certain political movement used song to advance its cause – not that students can use the lyrics of a song to match the ideals they represent with those of a certain political movement. The women’s movement of the 70s does not have that association with song as a means of advancing its cause, so regardless of the lyrics, (D) is not a reasonable choice.
    (A) and (B) are not reasonable choices if only because soldiers in WWII and pioneers moving West were not political movements. Maybe that’s intentional, but I think the item would be strengthened – and better aligned with the stated intent – if (A) and (B) included political movements. That aside, do we want students to have a good sense of various political movements in our history and the “techniques” they used? If so, with revision, I think the spirit behind this item is a good one, and it wouldn’t require song lyrics with more obvious links to the Civil Rights movement.
    I’m not sure I agree that students could know a great deal about the 60s and not know that the Civil Rights movement used widespread singing and spirituals. I think it’s fair to say that many 8th grade U.S. History teachers find themselves challenged to even GET to the 60s. 🙂 For those that do get there, even in a “drive-by” fashion, the Civil Rights movement is probably the one thing they DO emphasize. Many textbook and video snippets of that time period do at least mention or provide snippets of such songs.
    It would be interesting to know if students who attend more racially diverse schools or majority-minority schools were more likely to answer this question correctly.

    • Keep in mind that many 8th grade students have not studied past the Civil War because that’s how the two years of US History are carved out (cf. NJ).
      Students would NOT see the heading during the test, just the question. You are right: that would be an important clue.

      • I know the students don’t see the heading; the heading gives us (as interpreters) a clue about the designers’ intent and what the item is trying to measure. I see somewhat of a mismatch between that intent and the item’s response options.
        If some states’ U.S. History curriculum only requires material through the Civil War at the point that students take the NAEP, then I guess all data on any items that require post-Civil War knowledge should be interpreted with that in mind, not just this question.

  5. I would say
    A. I agree with Grant that a student could know a lot about Civil Rights Movement but still get the question wrong.
    B. There are better (aka valid) multiple choice questions that could be asked to test students’ knowledge of the Civil Rights Movement.
    C. There are better choices for songs that could have been used in this question (We Shall Overcome).
    D. all of the above
    My answer: “D,” all of the above.
