My first grader read aloud to me from a decodable text they had selected from our book bin, their pace clipped and distracted. One page into the story, I noticed an alarming number of errors: they consistently misread the short vowel sounds in words. I glumly concluded that after weeks of structured phonics intervention, they hadn’t mastered short vowel sounds yet. Later that day, as I sat planning more explicit review and repetition with short vowels, I stopped to reflect on the other lessons and activities I’d done with this student over the past few weeks and wondered whether the evidence I had gathered from listening to them read that morning was enough to support this conclusion about their reading skills. I thought to myself: “Is it possible something else is going on? How do I know this student hasn’t mastered short vowel sounds?”

I recalled this salient classroom moment after reading a thoughtful and interesting essay by Carol Black (a great read on learning styles, evidence-based education, and scientific racism). It reminded me of how important evidence is in teaching and how we learn to interrogate the evidence we come across in articles, research, workshops, and even in our own classrooms. In thinking about evidence and my own teaching experience, I reflected on how curiosity led me to perhaps my favorite question. And if you read my last post about knowing our students as readers, you’ve likely noticed just how much I value asking questions. Surprise — I saved the best for last! One of the most powerful tools educators have is asking ourselves “How do I know?”

This question can guide us in determining whether our instructional approaches are sound and effective as well as in examining what we think we understand about our students’ learning. In asking this question we can start to see how intentional the work of teaching reading can be. We get to the heart of how our instruction can work better for all kinds of learners.

Teachers are inherently researchers, something I began to contemplate more explicitly after reading Glenn Whitman and Ian Kelleher’s excellent book, Neuroteach: Brain Science and the Future of Education. They write about educators: “They collect enormous amounts of data each day, and they rapidly evaluate and make decisions based on those data. Some of this is numerical, but much is qualitative. They may be second only to doctors in doing this” (p. 149). As educators, we have access to students every day, which allows us to practice the iterative nature of teaching — make a hypothesis, use a strategy, see what happens, then adjust our hypothesis and strategy to try again. So, what do we DO with all this information? And what does all this have to do with reading?

Often, classroom teachers are directed to implement a specific reading curriculum or informed about the “best” strategies and instructional approaches for teaching reading. All of this is well intentioned, but some of it may be wrong. At best, it may be inadequate or ineffective for many students. Certainly, it’s important to seek out and pay attention to research-based information, especially advances in neuroscience that apply to education and learning, as it can clarify or solidify our understanding of how students learn and specifically how they learn to read. However, it’s also incredibly important to think critically about the information we consume in our efforts to serve ALL our students. 

Where are the teacher workshops on considering the source of information or contemplating perspectives that take a different or opposing view (or show conflicting evidence)? Where is the PD day devoted to interrogating and discussing the clickbait headlines and article titles that often bear little resemblance to the detailed, multifaceted context or findings of the research itself? Where is the guidance on teachers as researchers who should trust their own instincts about students and use the huge amounts of data and evidence they can access daily in their classrooms?

Sometimes teachers’ personal experiences with and observations of their students are dismissed as anecdotal evidence, as though a conclusion a teacher bases on subjective observation, rather than on objective data like scientific research, is bound to be wrong. One example is the fact that homework is still widely seen as useful and necessary, despite little evidence from research that it is an effective practice. In contrast, the evidence teachers can collect on how students respond to instruction, and on what might be happening for different students during the learning process, can be incredibly powerful and accurate and should not be dismissed. The tricky part is consciously and continuously working to balance and integrate many important sources of information: to not just believe it when someone says a curriculum or instructional approach is right or best, but to ask ourselves “How do I know?” Perhaps because there are multiple studies showing its efficacy. Perhaps because I’ve collected evidence over time with my own students and can pinpoint what works and how I know it works. Ideally, both of these are true.

There are many brilliant ways I’ve seen teachers answer the question “How do I know?” when trying to determine what reading skills a student has mastered and what instruction they are ready for next. We often get to observe, or see on paper, what a student can do or demonstrate independently, as on a traditional test, and that leads us to make assumptions about what the student has learned. But sometimes those assumptions are not the complete picture. If we want to dig deeper into what a student KNOWS, not just what we see them do, it is key to ask ourselves how we know our conclusions about their reading skills are accurate.

With the first-grade student I wrote about at the beginning of this post, my first assumption was that they still needed more practice with short vowels. Asking “How do I know?” led me to seek out more evidence. I showed the student vowel letter cards and asked for their short sounds — my first grader was 100% accurate. I asked them to read three-letter words with short vowels in isolation — 100% accurate. I asked them to read short vowel words that also had beginning consonant clusters or common suffixes — their accuracy declined slightly. I asked them to read short vowel words in the context of a sentence, then two sentences — their accuracy declined further. All this evidence led me to a very different conclusion. This student wasn’t struggling with short vowel sounds specifically, even though it was easy to make that assumption at first. The breakdown was in integrating a number of skills to read words accurately at the sentence level. This drastically changed what I knew about this student as a reader as well as how I approached my instructional plan for supporting their growth.

Other teachers showed me how to collect and compare observations of student reading skills in whole-class settings, small-group times, and one-on-one instructional moments. Seeing what shifts or stays the same over time and across these different settings provides great insight and better helps answer “How do I know?” for each student. Learning how to use different assessments and note-taking templates to dive deeper into a student’s discrete reading skills did the same. Over time, I’ve seen teachers consider other factors affecting what we think we know about students’ reading, such as the size of a book’s font, how much text is on a page, the number of novel vocabulary words versus decodable words in a text, and whether a student is successfully reading a text based on a guided reading level or a Lexile level. All these things matter, and they change how many different kinds of learners demonstrate what they know as well as whether they can access appropriate and effective reading instruction.

When teachers systematically collect and analyze evidence, they are able to get a clearer picture of their students and their needs. As intentional researchers, they are empowered to be more nimble, differentiate more effectively, and more purposefully support student learning. And in doing so, teachers create space to question what they know and what they’ve been told. When we pause to ask ourselves “How do I know?” we give ourselves permission to get curious — leveraging our expertise while simultaneously creating space for new ideas, new solutions, and new approaches that see our students as individuals.
This is the third part of Emily’s six-part blog series. Emily’s fourth post, titled “Learning to look critically at our curriculum,” will be published in just a few weeks.
