In Part 1 of my reply to Willingham’s article on reading comprehension strategies, published recently in the Washington Post, I took issue with his reasoning and analogies. In Part 2, let me get right to the evidence. I contend that he has quoted very selectively and drawn questionable conclusions from the research he does cite.
In his Washington Post article, he doesn’t cite research directly; rather, he refers us to a paper he co-authored with Lovette, published in the Teachers College Record. Oddly, that article is almost identical to the Post article. The main differences are that the TC Record piece provides a paragraph of citations to support the claims he wishes to make about duration of intervention related to the strategies, and that it swaps the baseball analogy for golf.
Here is the key paragraph from his Washington Post article:

Gail Lovette and I (2014) found three quantitative reviews of RCS instruction in typically developing children and five reviews of studies of at-risk children or those with reading disabilities. All eight reviews reported that RCS instruction boosted reading comprehension, but NONE reported that practice of such instruction yielded further benefit.

Here are the two related paragraphs from the TC Record article; note the different, less sweeping conclusion:

RCS instruction has a serious limitation. Its success is not due to the slow‐but‐steady improvement of comprehension skills, but rather to the learning of a bag of tricks. The strategies are helpful but they are quickly learned and don’t require a lot of practice.

And there is actually plenty of data showing that extended practice of RCS instruction yields no benefit compared to briefer review. We know of eight quantitative reviews of RCS instruction, some summarizing studies of typically developing children (Fukkink & de Glopper, 1998; Rosenshine, Meister, & Chapman, 1996; Rosenshine & Meister, 1994) and some summarizing studies of at‐risk children or those identified with a learning disability (Berkeley, Scruggs, & Mastropieri, 2009; Elbaum, Vaughn, Tejero Hughes, & Watson Moody, 2000; Gajria, Jitendra, Sood, & Sacks, 2007; Suggate, 2010; Talbott, Lloyd, & Tankersley, 1994); none of these reviews show that more practice with a strategy provides an advantage. Ten sessions yield the same benefit as fifty sessions. The implication seems obvious; RCS instruction should be explicit and brief.

Thus, in his Washington Post article he actually overstates what he and Lovette claimed in the original article. There is no justification for his claim in the Post article that “All eight reviews reported that RCS instruction boosted reading comprehension, but NONE reported that practice of such instruction yielded further benefit.” In fact, even the original claim is overstated, as we shall see.
What the cited studies actually say. Here is what we learn when we go to the studies Willingham cites to make his point:

  • The Rosenshine, Meister, and Chapman study (1996) looked at only one strategy – generating questions about the text – not many reading strategies. Yet it appears that Willingham’s sweeping conclusion about all strategy instruction – that 10 sessions are as good as 50 – was drawn from this one analysis. Here are the relevant text and table from that study:

Length of training. The median length of training for studies that used each type of procedural prompt is shown in Table 4. We uncovered no relationship between length of training and significance of results. The training period ranged from 4 to 25 sessions for studies with significant results, and from 8 to 50 sessions for studies with nonsignificant results.

[Table 4 from Rosenshine, Meister, & Chapman (1996), showing the median length of training, in sessions, for studies using each type of procedural prompt]

  • Here, from a second study Willingham cites (Gajria et al., 2007), are the authors’ cautious comments on the amount of time spent on strategies:

Unfortunately, the limited database does not allow us to infer the capacity of strategy use to achieve maintenance or transfer. Also, more research is needed to draw conclusions about the duration and length of treatments needed to positively affect maintenance and transfer effects. Although the database is larger for treatment intensity than for maintenance and transfer effects, we cannot make persuasive conclusions about the potential relationship between these variables.

  • Willingham is clearly leaning on the research in Elbaum et al. (2000), since that study shows that duration of treatment is not as salient as we might think. (Suggate and Gajria et al. also quote from this study.) However, Willingham chose not to mention a critical distinction in that study that bears on his claim. Here is the salient section, followed by a brief sketch of the statistics it reports:

Intervention intensity was examined in two ways: by duration, coded as the number of weeks over which the intervention was carried out, and total instructional time, coded as the number of hours of instruction provided to each student. Information on the duration of the intervention was available for 30 samples of students; information on total instructional time was available for 27 samples. The interventions ranged in duration from 8 to 90 weeks and in total instructional time from 8 to 150 hr. Duration of the intervention was reliably associated with the variation in effect sizes, QB(1) = 7.9; interventions lasting up to 20 weeks had a mean weighted effect size of 0.65, compared with 0.37 for those lasting longer than 20 weeks. Total instructional time, however, was not reliably associated with effect size variation, QB(1) = 0.35. We further examined the relation between intervention duration and intensity. The mean instructional time for interventions lasting up to 20 weeks was 63 hr; the mean time for interventions lasting longer than 20 weeks was 61 hr. Duration and total instructional time did not significantly covary (r = .116, ns). This finding suggested that the same amount of instructional time, delivered more intensively, tends to have more powerful effects. [emphasis added]
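
For readers who want the statistics in that passage unpacked, here is a minimal sketch, with hypothetical numbers (not Elbaum et al.’s actual data), of how a meta-analysis computes the weighted mean effect sizes and the between-groups Q statistic quoted above:

```python
# Minimal sketch (hypothetical numbers, NOT Elbaum et al.'s data) of how a
# meta-analysis compares weighted mean effect sizes across groups of studies.

def weighted_mean(effects, variances):
    """Inverse-variance weighted mean effect size and total weight."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return mean, sum(weights)

# Hypothetical per-study effect sizes (d) and sampling variances
short_d, short_v = [0.70, 0.55, 0.68], [0.04, 0.05, 0.03]  # <= 20 weeks
long_d, long_v = [0.40, 0.30, 0.42], [0.04, 0.06, 0.05]    # > 20 weeks

d_short, w_short = weighted_mean(short_d, short_v)
d_long, w_long = weighted_mean(long_d, long_v)
d_all, _ = weighted_mean(short_d + long_d, short_v + long_v)

# Q-between: do the group means differ by more than sampling error predicts?
# It is compared against a chi-square distribution with (groups - 1) df.
q_between = w_short * (d_short - d_all) ** 2 + w_long * (d_long - d_all) ** 2

print(f"<= 20 weeks: mean weighted d = {d_short:.2f}")
print(f">  20 weeks: mean weighted d = {d_long:.2f}")
print(f"Q_B(1) = {q_between:.2f}")
```

Under the null hypothesis that the two groups share one true effect, Q_B is compared against a chi-square distribution with one degree of freedom (critical value 3.84 at p = .05); Elbaum et al.’s QB(1) = 7.9 for duration clears that bar, while the 0.35 for total instructional time does not.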

Furthermore, the Elbaum study focused exclusively on one-on-one tutoring in both phonics and strategies, not teacher instruction and student practice of strategies in class. Even so, most of the interventions were far longer than Willingham lets on. For example:

One study that contrasted a standard Reading Recovery program with a modified Reading Recovery program (Iversen & Tunmer, 1993) reported that students in the modified program were discontinued after an average of 41.75 lessons, compared with 57.31 lessons for students in the standard program. The effect size for students in the modified program was comparable to that of students in the standard program, suggesting that it is possible to achieve the same outcomes in a much shorter period of time by modifying the content of instruction. This finding suggests that efficiency, or the amount of progress over time, may be a useful variable to consider in conducting future studies.

That’s a far cry from “10 quick lessons” …
More disconcertingly, not once in either article does Willingham discuss the results of the five best-known and most-studied interventions using multiple strategies – PALS, POSSE, CSR, TSI, and CORI – all of which show significant gains through a significant investment of time, and many of which are highlighted in the various meta-analyses.
Here, for example, is the data on PALS:

  • In the study, 20 teachers implemented PALS for 15 weeks, and another 20 teachers did not. Students in the PALS classrooms demonstrated greater reading progress on all three measures of reading achievement used: words read correctly during a read-aloud, comprehension questions answered correctly, and missing words identified correctly in a cloze (maze) test – a format sketched below. The program was effective not only for students with learning disabilities but also for students without disabilities, including low and average achievers.
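
Since the maze format may be unfamiliar, here is a minimal sketch of how such an item set is typically built. The function name, the every-seventh-word interval, and the three-choice format are illustrative conventions, not details drawn from the PALS study itself:

```python
import random

def make_maze_items(passage, every=7, seed=0):
    """Build maze items from a passage: every Nth word becomes a
    three-choice item (the correct word plus two distractors drawn
    from elsewhere in the passage). Assumes the passage has enough
    distinct words to supply distractors."""
    rng = random.Random(seed)
    words = passage.split()
    items = []
    for i in range(every - 1, len(words), every):
        correct = words[i]
        pool = [w for w in words if w.lower() != correct.lower()]
        options = [correct] + rng.sample(pool, 2)
        rng.shuffle(options)
        items.append({"position": i, "answer": correct, "options": options})
    return items

sample = ("The small dinosaur moved quickly through the tall grass, "
          "watching carefully for any sign of a larger predator nearby.")
for item in make_maze_items(sample):
    print(item)
```

A student’s score is simply the number of items answered correctly, which is part of what makes the format quick to re-administer for progress monitoring.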

Michael Pressley, author of Reading Instruction That Works, arguably did more direct and indirect research on reading strategies than anyone, and his work is cited in almost every review of research. Here is what he says about duration and results:

  • As far as policymakers are concerned, however, the gold standard is that an educational intervention make a difference with respect to performance on standardized tests. What was striking in these validations was that a semester to a year of transactional strategies instruction made a definitive impact on standardized tests…

In light of this set of quotes, does the following Willingham conclusion seem warranted to you?

RCS instruction has a serious limitation. Its success is not due to the slow‐but‐steady improvement of comprehension skills, but rather to the learning of a bag of tricks. The strategies are helpful but they are quickly learned and don’t require a lot of practice.

“Tricks” and transfer. Willingham is clearly having some fun referring to the strategies as “tricks,” but he might have taken a page from the research he cites instead. Rosenshine, Meister, and Chapman say this about the strategies:

In contrast, reading comprehension, writing, and study skills are examples of less-structured tasks. Such a task cannot be broken down into a fixed sequence of subtasks or steps that consistently and unfailingly lead to the desired end result. Unlike well-structured tasks, less-structured tasks are not characterized by fixed sequences of subtasks, and one cannot develop algorithms that students can use to complete these tasks. Because less-structured tasks are generally more difficult, they have also been called higher-level tasks. However, it is possible to make these tasks more manageable by providing students with cognitive strategies and procedures.

A cognitive strategy is a heuristic. That is, a cognitive strategy is not a direct procedure or an algorithm to be followed precisely but rather a guide that serves to support learners as they develop internal procedures that enable them to perform higher-level operations. Generating questions about material that is read is an example of a cognitive strategy. Generating questions does not lead directly, in a step-by-step manner, to comprehension. Rather, in the process of generating questions, students need to search the text and combine information, and these processes help students comprehend what they read.

Such heuristic thinking is essential to transfer; it’s hardly a trick, as we know from all the research on how general ideas and schemas bridge seemingly unique experiences (cf. Chapter 3 in How People Learn). Yet Willingham does not mention transfer once, though almost every study he cites worries about it. Why worry? Because results on the experimental post-test, designed by the researchers to assess their intervention on specific strategies, are typically much higher than results on a later standardized test of reading comprehension, where no prompts or reminders about the particular intervention are provided – i.e., a test of transfer.
Here are two relevant quotes, the first from the paper by Gajria et al. cited by Willingham, and the second from Allington and McGill-Franzen in the Handbook of Research on Reading Comprehension, which I have quoted from before:

Unfortunately, the limited database does not allow us to infer the capacity of strategy use to achieve maintenance or transfer. Also, more research is needed to draw conclusions about the duration and length of treatments needed to positively affect maintenance and transfer effects (Gersten et al., 2001). Although the database is larger for treatment intensity than for maintenance and transfer effects, we cannot make persuasive conclusions about the potential relationship between these variables. Furthermore, few studies helped children develop a deep understanding of complex text by effectively processing structural elements of expository text (e.g., Bakken et al., 1997; Smith & Friend, 1986) or stressed the social aspect of collaborative learning (e.g., Englert & Mariage, 1991; Klingner et al., 2004; Lederer, 2000) that Gersten et al. (2001) noted is critical to mediate learning and transfer effects.

Improving performance is possible. However there is less evidence that comprehension focused interventions produce either autonomous use of comprehension strategies or longer-term improvements in comprehension proficiencies. The lack of evidence stems from the heavy reliance on smaller sample sizes and shorter-term intervention designs as well as limited attention to a gold standard of transfer of training to autonomous use.

Arguably, transfer can only be achieved through many interventions, a gradual-release model, and lots of practice of multiple strategies simultaneously over a long period of time – as the research repeatedly says and as common sense tells us.
Indeed, to close with one more sports analogy: the drills do not transfer easily to the game. It takes a full season of scrimmages, debriefings, and lots of practice applying the drills to game situations to make that transfer happen. Nor are the drills “tricks,” even though they ultimately fade into fluent, automatic performance. That is arguably a more apt analogy for what the research describes than Willingham’s discussion of sport and furniture-building.
 
PS: I neglected in the first post to include my comments on one of the other research studies that Willingham cites: Sheri Berkeley, Thomas E. Scruggs, and Margo A. Mastropieri (2010).
Here is what they say about intervention duration:

For criterion referenced measures, mean weighted treatment effect sizes were highest for treatments of medium duration (more than 1 week but less than 1 month). Differences among treatments of varying length were statistically different according to a homogeneity test, Q(2, N = 30) = 6.68, p = .04. However, differences on norm-referenced tests by study duration were not statistically significant (p = .83). That treatments of moderate length were associated with higher effect sizes than either shorter or longer treatments is not easily explained.
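
For reference, the Q reported here is the same between-groups homogeneity statistic sketched in Python after the Elbaum quote above, generalized to k groups (here, k = 3 duration categories). A minimal statement of the statistic, under the standard fixed-effect assumptions:

```latex
Q_B = \sum_{g=1}^{k} W_g \,(\bar{d}_g - \bar{d})^2,
\qquad
W_g = \sum_{i \in g} \frac{1}{v_i},
\qquad
Q_B \overset{H_0}{\sim} \chi^2_{k-1}
```

Here \(\bar{d}_g\) is the weighted mean effect size of group g, \(\bar{d}\) the grand weighted mean, and \(v_i\) study i’s sampling variance. Berkeley et al.’s Q(2, N = 30) = 6.68 with p = .04 thus marks a statistically reliable difference across the three duration bands on criterion-referenced measures; on norm-referenced tests the corresponding difference does not approach significance (p = .83).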

[Table from Berkeley, Scruggs, & Mastropieri (2010), showing mean weighted effect sizes by treatment duration]
Note that only three of the studies examined took place over more than 1 month, due to the parameters of their review (a focus on remedial education for special-needs students). As we have seen, many such studies exist for regular students, with strong effect sizes. Nor do these data quite support Willingham’s conclusion about the value of practice.

18 Responses

  1. Hi Grant,
    I always appreciate your posts. I am pretty sure there is a substantive typo in this post in the fourth paragraph which begins, “Here are two key paragraphs from the TC Record article, note the difference…” My bolded change to “note” is “not” in your original text. I thought you might want to know since this is a post that is likely to elicit a response.
    Best, Pam
    “The meaning of life is to find your gift. The purpose of life is to give it away.” ― Pablo Picasso

  2. The following is taken from the chapter, “The empirical support for direct instruction,” written by Barak Rosenshine in the 2009 book “Constructivist instruction: Success or failure?” edited by Tobias and Duffy:
    “Rosenshine et al. (1996) summarized the results for 26 studies in which students were taught to generate questions about their reading and then practiced using the procedure as they read. Students in the control group continued their normal classroom reading activities…the studies have a median effect of 0.82 when experimenter-developed comprehension tests were used … When standardised tests were used, the median effect size was 0.32…
    Palincsar and Brown (1984, 1989) suggested that reading comprehension might be improved if students learned and practiced four “comprehension-fostering” strategies: asking questions, summarizing, using the text to predict what might happen next, and attempting to clarify unclear words. The students were taught these four strategies and then worked in groups to practice the four strategies on new material using a method they called “reciprocal teaching”…. When experimenter-developed tests were used in these studies the results were usually statistically significant and the average effect was 0.88… (Rosenshine & Meister, 1994). When standardized tests were used, the average effect was 0.32…
    Hirsch (2006), however, noted that in the reciprocal-teaching studies reviewed by Rosenshine and Meister (1994) the number of instructional sessions ranged from 6 to 25, and teaching more sessions did not result in higher achievement gain (see Rosenshine & Meister, 1994, pp. 500, 506). And although 4 to 12 comprehension strategies were taught in the reciprocal-teaching studies, teaching 12 strategies was no more effective than teaching the two strategies of question generation and summarization (Rosenshine & Meister, 1994, pp. 495-496). Hirsch wrote that although teaching reading-comprehension strategies is useful, “Formal comprehension skills can only take students so far. Without broad knowledge, children’s reading comprehension will not improve and their scores on reading comprehension tests will not budge upwards (Hirsch, 2006, p. 8).”
    I think that this accurately summarises the research. I would also suggest that lots of practice of reading comprehension strategies would be excruciatingly dull, something that might be a fair trade if such strategies provided a big payoff. But they don’t seem to.
    The references to the papers quoted by Rosenshine are below:
    Rosenshine, B., Chapman, S., & Meister, C. (1996). Teaching students to generate questions: A review of the intervention studies. Review of Educational Research, 66, 181-221.
    Rosenshine, B., & Meister, C. (1994). Reciprocal teaching: A review of the research. Review of Educational Research, 64, 479-531.
    Hirsch, E. D., Jr. (2006). The case for bringing content into the language arts block and for a knowledge-rich curriculum core for all children. American Educator, 19, 4-13.
    The differences between results on experimenter-developed tests and standardised tests might not be due to transfer in the way that you suggest. Standardised tests are designed for a different purpose – to find student differences – and therefore do not record all of the learning that has taken place. Dylan Wiliam notes that:
    “It has long been known that teacher-constructed measures have tended to show greater effect sizes for experimental interventions than obtained with standardized tests, and this has sometimes been regarded as evidence of the invalidity of teacher-constructed measures. However, as has become clear in recent years, assessments vary greatly in their sensitivity to instruction—the extent to which they measure the things that educational processes change (Wiliam, 2007b). In particular, the way that standardized tests are constructed reduces their sensitivity to instruction. The reliability of a test can be increased by replacing items that do not discriminate between candidates with items that do, so items that all students answer correctly, or that all students answer incorrectly, are generally omitted. However, such systematic deletion of items can alter the construct being measured by the test, because items related to aspects of learning that are effectively taught by teachers are less likely to be included than items that are taught ineffectively.”
    The quote is from this blog-post: http://www.learningspy.co.uk/myths/things-know-effect-sizes/
    Again, I commend Willingham’s writing to those who wish to understand transfer: http://www.aft.org/periodical/american-educator/winter-2002/ask-cognitive-scientist

    • Interesting, but none of these quotes really addresses my critique. In fact, as I noted about the Rosenshine study, it focuses on only one strategy, in which 20 sessions make a difference.
      Look, it’s fine to disagree, but you simply fail to respond to what I wrote; you just keep recycling old lines that don’t bear on this critique. As I said to you on Twitter, there is plenty of evidence to show that Willingham’s sweeping conclusion about practice is not supported by the evidence.

      • I am interested in establishing the truth of the propositions that a small number of RC sessions are as effective as a large number and that a small number of strategies are as effective as a large number. I think this is highly relevant to the argument and that’s what my comment pertains to.

    • As a teacher, I’m not sure I can fully agree with this,
      “Without broad knowledge, children’s reading comprehension will not improve and their scores on reading comprehension tests will not budge upwards (Hirsch, 2006, p. 8).”
      First, this sounds a lot like the chicken and the egg – which came first? How do you broaden knowledge without reading? How do you read if you don’t have that broad knowledge? Sounds like a no-win situation. I do think background knowledge helps, but maybe more so for the connections and the interest to read it to begin with (or stay with it if it’s difficult). So if you are reading a book about particle physics, could you read it and understand it? Yes, but it would take a lot of work. Are students willing to put in that work, especially if it is uninteresting or deemed useless? Could I read a book (and when I say read, I mean read and understand) about musical notes and composing? Sure, even though I have very limited background in music, I could do it. However, I’m quite frankly not going to put forth the energy required for me to learn it, as I totally don’t understand notations, pitch, tone, beats, etc.
      I have personally seen many cases where a student simply did not use any reading strategy at all. I have yet to see a struggling reader routinely use comprehension strategies. Maybe someone exists somewhere, but I haven’t seen it outside of Special Education populations.
      I think we need some sort of study where students are given reading material about something they have background knowledge in and are capable of understanding the words of (on their reading level), and see what happens. If they do not use strategies, background knowledge goes out, and so does the reading level of the text. I’m going to bet that comprehension does not improve and that strategy usage is almost zero. I want to know why something didn’t trigger (set off a red flag) or why they ignored that trigger.

      • The research does not support Hirsch on this. By such an argument, no one could read about dinosaurs or volcanoes for the first time and learn from the experience. Many comprehension studies show that what you are saying is true: even when holding background knowledge equal, comprehension varies considerably in just the ways you cite. The key factor is whether the strategies are self-activated. Most struggling readers do not do so.

        • Yes. I think here that you are vehemently disagreeing with a position that Hirsch does not hold. He claims that you need to know 90-95% of a text in order to learn new things from it (at least in terms of vocabulary, but I’ve no reason to suspect that general background knowledge functions any differently).
          http://www.aft.org/sites/default/files/periodicals/Hirsch.pdf
          Again, I feel the need to point out that if you control an experiment for background knowledge then it is impossible to demonstrate anything about the relative value of background knowledge.

        • <<< The research does not support Hirsch on this. By such an argument, no one could read about dinosaurs or volcanoes for the first time and learn from the experience.
          This is WAY too narrow of a reading of Hirsch and goes a long way toward explaining to me your POV. Not every word in a piece about dinosaurs is, well, about dinosaurs. The text is the visible bit of the iceberg above the water line. Context is typically assumed (it's below the water line).
          If you encounter the sentence: "Many dinosaurs had deadly, knife-like protuberances that were excellent protection from being eaten" you can make meaning from it if you know not just the words but the context: what it means to be deadly, to have protection, (not just good but excellent), to not be eaten, etc. Replace the word "dinosaur" with "Jabberwock" and you can still make meaning — and learn about Jabberwocks. In this example, your ability to contextualize a dinosaur's defense mechanism also gives you exposure to an unusual word, "protuberances" that with repeated exposure over multiple contexts leads to vocabulary growth.
          In sum, your misreading of Hirsch — and narrow interpretation of knowledge — seems to be leading you to a binary way of thinking about an immensely nuanced thing. The question that really matters, which your posts on Willingham elide completely (no surprise given your hostility to Hirsch), is: what is the optimal use of ELA class time? Strategies or knowledge building — which we should call by its proper name: context creation.

  3. Hi Grant
    Thanks for reading my piece with care.
    As you know, my argument has two parts:
    1) When you look at cognitive models of how comprehension works (e.g., Kintsch), you would conclude that it’s impossible to offer general strategies that will support comprehension. That’s because the workhorse of comprehension is connecting ideas in the text, and the connections depend on particulars of the ideas. Thus you can say what the goal is (“connect the ideas”) but not how to do it.
    2) A hallmark of skill is a dosage effect: more practice yields more skill. There’s no evidence of a dosage effect in comprehension strategy instruction. It’s definitely effective, but more does not yield more benefit – and since a practice effect is a hallmark of skill, that lack is weird.
    I’m probably missing your point, but I don’t see how either of your posts really undercuts either 1 or 2…maybe you agree with both and it’s the conclusion I draw that you dispute.
    In post 1 there was a lot about analogies…okay, I probably don’t know enough about baseball, a point I’ll readily grant (nyuck, nyuck). My real point (relevant to #1 above) is that strategies for reading comprehension are necessarily meta-strategies. Meta-strategies help—in my book I actually said that the seemingly stupid IKEA instructions are actually helpful (as they would be in the chair example you gave). My point is that they don’t do you much good if you don’t have the assembly instructions. Likewise, meta-instructions don’t do the main work of comprehension…they don’t tell you how ideas in the text connect.
    I think maybe you’re suggesting that reading comprehension strategy instruction leads to the development of a general schematic skill…that all skills have this characteristic of generality. Bat-swinging is general in that it’s schematic, and the schema is flexible enough to be adjusted, depending on the particulars of the situation. I see what you’re saying, and I certainly agree that some readers are more generally successful and calling that “reading skill” makes sense.
    Where we differ is in our account of how they got that way (and also, I suspect, in the importance of that general skill). I think you’re suggesting they got that way via reading comprehension strategy instruction, plus lots of reading practice. I’m suggesting that if what you learned in RCS instruction contributed to a general skill, you’d see a dosage effect. As a side note, I’m also not that enthusiastic about the importance of general reading skill because, real as it is, it’s easily swamped by knowledge or lack of knowledge about the topic. That, plus the lack of evidence for dosage, is what makes me suggest that time for RCS instruction be limited.
    I didn’t discuss transfer partly because when it’s been measured, it’s pretty good, but then I would expect it to be, given the way it’s been measured. My guess is that the biggest transfer obstacle would be students remembering to invoke strategies when there is not some cue in the environment prompting them to do so. I don’t know of any research that explicitly addresses this question.
    Lots of detail in post 2, but all of it seems ultimately to support my point: there’s no evidence that more instruction improves things. The reason I consider that so important is that a practice effect with a negatively accelerating curve is considered a hallmark of skill; you see it for all types of skills (e.g., Lacroix & Cousineau) and it’s so pervasive it’s often called a law (originally by Newell & Rosenbloom). So it’s odd not to see a practice effect for reading strategy instruction. What’s especially odd is that the beginning of training is when the curve is steepest, so we’re in the part of the learning curve where it should be easiest to see improvement due to more practice.
    So that’s why I sought an alternative account of the effect of RCS instruction. I called it “trick” which was probably a mistake, because it sounds pejorative, and I don’t think of it that way. I was trying to capture the idea that it’s something that leads to an immediate, robust improvement but doesn’t require (or benefit from) continued instruction.

    • Dan: Thanks for taking the time to read my piece and to respond to it. I really appreciate this opportunity to have a genuine debate in education that doesn’t hinge on one’s politics or mere beliefs but evidence and reasoning on a matter of importance. I am sure the readers feel similarly, regardless of whose account they think is the better one.
      Yes, I think the schema matters. We know it is central to transfer – leaving aside whether or not the fidelity/implementation of all these interventions is adequate to cause transfer. (Pressley thinks not, and I tend to agree, having sat in for the last two years in a 2nd, 4th, and 5th grade classroom: no hope of transfer by the overly-prompted and superficial way it was approached.)
      You and I also agree on the fundamentals of comprehension: the active quest for logical coherence across the text (as opposed to the very common habit of reading word by word, noted by your co-author in the TC piece). You and I also agree that a great danger in the strategies – noted in some of my earlier blog posts – is that the teacher and readers will lose sight of the text. (The same is true generally in education and in sports: we easily become fixated on techniques, drills, worksheets, and lose sight of their role as means.)
      So much of the discussion hinges on what we mean, therefore, by skill and strategy. As I noted in my original post two years ago that started this line of inquiry for me, I think the reading “strategies” people do themselves a great disservice by not using the word strategy (and tactics and skill) more precisely. (Boy, did I hear it from them…) I take the word at its original military meaning: executive control over a repertoire in the face of a goal and constant challenges. Sports make the distinction between strategy, tactic, and skill much easier to see; I believe it has its correlate in reading and math problem solving in the face of novel non-routine challenges. So, I agree that these “strategies” are not direct sub-skills or components of comprehension per se or we would see gains over increasing time. But for me the problem of duration/effectiveness is somewhat different than how you envision it: I think failing to design backward from self-regulated comprehension – transfer – is the cause of both diminishing returns and lower scores in the transfer standardized test vs. the treatment test.
      I guess that I remain unconvinced by the conclusion in the second-to-last paragraph, therefore. Rosenshine’s data did not surprise me: 20 interventions on question-asking is probably enough. But, again, that is just 1 of the core “strategies” and that particular one helps build the schema of active questioning of the text/author (same with Beck and QtA). Thus, for me “practice” would include cold-testing of comprehension, debriefing the effort, discussing what approaches were used in the face of difficulties, etc., to get at real issues of strategy and schemas about reading. I don’t think it’s odd, by the way, that performance drops off early, since you are requiring the student to re-define reading as comprehension of an entire text instead of a more passive scan, word-to-word. Surely, if the student still thinks reading is a more passive scanning of words, more practice is going to help those particular students – until they get it. Then, we can downplay the strategies. It’s not unlike the slump batters go into when they change their stance.
      My own bottom-line view? Far too little is still understood about full-text comprehension and how to advance it. Why would we keep seeing poor performance all the way through college if this were not so? Why would NAEP scores be so flat for 30 years? And I can tell you from spending many hours in English classes and ELA blocks that the so-called “teaching” and “practice” of the reading strategies is pretty cursory – as Durkin reported 30 years ago and Pressley and Allington and others noted is still the case. The TCRWP approach is an outlier: far more time devoted to such strategies than most programs. But the effectiveness of the TCRWP is a debate for another day!!
      Again, thanks for the thoughtful reply. Always great to dialogue.

  4. What would you and Dr Willingham agree on for what we could do for our below grade level readers in upper elementary school?
    Also, how does background knowledge play into understanding fiction? What would Hirsch and Willingham say?

  5. “And there is actually plenty of data showing that extended practice of RCS instruction yields no benefit compared to briefer review. We know of eight quantitative reviews of RCS instruction, some summarizing studies of typically developing children (Fukkink & de Glopper, 1998; Rosenshine, Meister, & Chapman, 1996; Rosenshine & Meister, 1994) ”
    I’m going to agree with this part and take it literally. Looking at a lot of RCS instruction, I could see why: students would get an initial bump followed by a flat line (or slight increase), and it would indeed be a trick – like crossing out wrong answers – if teachers initially cover a strategy and then pretty much stop using it except by brief mention. The study I quoted showed that most middle school teachers and all of the high school teachers did not go over RCS at all. Those that did overwhelmingly chose text features as the only one covered (and it was very rarely used, by the way). Now I’m no rocket scientist, but I’ll bet most of those teachers would have said that they covered RCS in their classes. Sure, that helps, but telling a student to look at the pictures and highlighted words is only going to take a student so far. Just telling students to look at text features may not help if nobody is actively involved in helping them generate meaning from the text. RCS instruction should be scaffolded into the curriculum and applied – not just worked on in isolation or invoked by telling students to “use their strategies.” Those skills need to be scaffolded, modeled, and constantly reinforced with clear goals in mind. This also needs to occur grade after grade – not just be learned once in elementary school.
    The study I quoted earlier showed that RCS basically wasn’t used much in middle school and high school. Well, we all know that in math you have to constantly review the basics, so why wouldn’t we assume you have to constantly review reading comprehension skills in context? I also think that RCS is not a cure-all. I think some people believe that teachers can just throw a strategy out there and that it will magically cause comprehension. It takes a lot of work and feedback to make this progress. Once the RCS skills are mastered, they need to be applied to new content and new contexts. This still may require scaffolding, modeling, and lots of feedback. You might get diminishing marginal returns, but that is just a fact of life.
    What do you all think?

  6. Saddened by the loss of Grant Wiggins. I will renew my efforts to follow his teachings and continue to advocate for students and learning.

  7. I know you are resting in peace, Grant, and I just want to say you have inspired countless of us educators to expect better for our kids and to live up to your standard. I simply can’t express how much you’ve meant to me. My condolences to your family.

  8. I appreciate your detail and logic — and wonder how well I understood your metaphor where you suggest that it takes a full season of drills, etc, for the actual transfer of ability to take place? In my head I compare this to my old-days of teaching when I was given a full year to take my students through the act of composition, working slowly and methodically toward a final outcome — an outcome which only came to fruition after many drills and restarts and reviews. Nowadays, however, there is little patience with anything but producing instantaneous magic while teaching. So, really, very little is ultimately learned.
