Measuring the Pandemic’s Devastating Effect on Schoolchildren


Last week, one of the most highly regarded tests for measuring education outcomes in the United States was released, and the results were bleak. The National Assessment of Educational Progress (NAEP) showed huge drops in math and reading proficiency for nine-year-olds across the country during the pandemic. The scale was shocking: in both areas, students lost more ground than they had in decades. The declines differed by racial group, with Black students showing, on average, worse losses; across racial groups, students who were already struggling in school showed the steepest declines.

To understand how the test functions, and what the results mean, I recently spoke by phone with Susanna Loeb, the director of the Annenberg Institute for School Reform, at Brown University, and an education-policy researcher. During our conversation, which has been edited for length and clarity, we discussed the students who suffered most during the pandemic, the different types of learning loss that kids experienced, and why the pandemic should lead people to reimagine schooling.

What is your biggest takeaway from this report?

As a result of the pandemic, we’ve seen a big drop in the academic achievements of nine-year-olds. But this drop has been uneven—the biggest drop has been for the lowest-performing students, which isn’t necessarily surprising because the lowest-performing students were probably less engaged in school. When school is disrupted, they’re less likely to have the skills and inclinations to really delve in themselves, and they also may need additional support. It’s clear from this data that they suffered the most during the pandemic.

Can you talk a little bit about the NAEP, how it conducts research, and what this data is exactly for people who aren’t familiar with this field?

Every few years, the federal government does a sampling of students from across the country and gives them assessments in math, reading, and other subject areas. There are two main types. One of them is called the main NAEP, which covers quite a number of students. You can see the results separately by state, and you can see the results separately for some of the big urban districts. Then there is the long-term NAEP, which allows us to see trends over time. The recent data is from the trends over time. What’s interesting is that while we can’t see [the long-term data] in a way that says one state or one district has done better than another, we can really compare student achievement on a very similar test over a long time period, a trend for many decades.

So there is nothing here about which COVID policies did or didn’t work with students—it’s a test about how we did as a country on the whole?

That’s right. It lets us see how students are doing now, relative to how they were doing in the past. It’s not very good for saying one policy was better than another, but it does allow us to say how these nine-year-old students compare to prior years. And then you can break it up, so we can look at how the lowest-achieving kids compared to the lowest-achieving kids in prior years, or the highest-achieving kids, and we can do the same by gender, we can do the same by some racial and ethnic groups, and a few other things.

Can we tell from the data when the most severe learning loss during the pandemic occurred?

Not from this data, but trends that we’re seeing in the NAEP data were actually seen in some of the state data that we’ve already been looking at. We have data from districts and states around the country that give tests quite often, at least once a year, but often three times a year. That data also showed us that it was the low-performing students that lost the most or made the fewest gains during this time. So, this isn’t new. What’s new about the NAEP is that it’s really representative of the country. Sometimes we might say, “Here’s a district that has really good data, and we could see what has been going on in this district, but we didn’t really know whether that was what it was like in the rest of the country.”

The district and state data is sometimes used to figure out what to do in terms of policy, where to focus our efforts. And because of its use in practice, people might respond to it in ways that make it less accurate.

What do you mean by that?

If I’m held accountable for how my students do, and my salary is dependent on it, or other resources that I get are dependent on it, I might be overly focussed on increasing those test scores. But none of that happens with the NAEP test. It’s not used to evaluate anyone. It’s really used to see the trends. The fact that it’s representative and that it’s separate from any kind of accountability policy is really why it’s an interesting test to take a look at.

What can we learn about racial and/or gender disparities from this test in terms of the pandemic?

The biggest takeaway is the results by the percentiles where the lowest percentile lost the most. Sometimes we think, Oh, there are these big differences across racial and ethnic groups, or by gender; that means that the lowest-performing kids are just from the lowest-performing groups. That’s not true. Every group has a big distribution in it. And the differences across groups aren’t nearly as big as the differences across students, but in the current data we can see some differences for some of these groups that we think of as structurally getting different educational opportunities—by race, ethnicity, income, or something like that.

We see drops in mathematics for white, Hispanic, and Black students. All three of the particularly large racial or ethnic groups saw a drop. Of those three groups, the drop was substantially bigger for Black students, then for Hispanic students, and then white students—five points for white students, eight points for Hispanic students, and thirteen points for Black students. Not surprisingly, we see bigger drops for students who tend to have fewer educational opportunities anyway. In reading, the drops were quite similar in terms of points across those three racial or ethnic groups. But math is a subject that is taught primarily in school. And so it is often the one we see most affected by changes in schooling, more so than reading.

You’re saying that reading occurs outside of school as well as in school? And so if there’s a big disruption in school, it’s going to disrupt math more because kids aren’t doing math at home?

It tends to, yes. I mean, we see drops in both reading and math, but, in general, when we change schooling—and usually we’re changing schooling to try to improve outcomes—it tends to be easier to improve outcomes in math than it is to improve outcomes in reading. And so thinking about this in the opposite way, when other things change in schools, it tends to affect math more than reading.
