The DC Report Card and people here seem to put a lot of stock in the student growth metrics (Median Growth Percentile and Growth to Proficiency). I don't really get them and would love to hear others' thoughts on what, exactly, they mean and why they matter.
With respect to both metrics, there are several steps of calculation, some of them somewhat arbitrary (growth floor and target values?), that make it difficult to get a sense of what is actually being represented. See pp. 33-36 of the technical guide here: https://osse.dc.gov/sites/default/files/dc/sites/osse/page_content/attachments/2024%20Report%20Card%20Technical%20Guide%20-%20November%20Updates.pdf

At the elementary level, I also wonder how much these metrics are skewed by feeder patterns. Since the metrics require two years of student data, you're only capturing the current 4th and 5th grade classes. The demographics and prior-year enrollment of a 5th grade class can look very different depending on whether the elementary school has a desirable or undesirable feeder middle school.
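For what it's worth, here is a toy sketch of how a "growth to target" calculation of this kind generally works, just to make the floor/target idea concrete. The cut score, floor, and catch-up window below are invented for illustration; OSSE's actual rules and values are the ones in the technical guide linked above.

```python
# Hypothetical illustration of a "growth to target" calculation.
# The cut score, floor, and 3-year window are made up; OSSE's actual
# rules are in the technical guide linked above.

PROFICIENT_CUT = 750   # made-up scale score for "proficient"
GROWTH_FLOOR = 10      # made-up minimum target so goals aren't trivial
YEARS_TO_CATCH_UP = 3  # made-up window for reaching proficiency

def growth_target(prior_score: float) -> float:
    """Growth (in scale-score points) the student is expected to make this year."""
    gap = PROFICIENT_CUT - prior_score
    if gap <= 0:
        # Already proficient: just hold ground (the floor).
        return GROWTH_FLOOR
    # Below proficient: close an equal share of the gap each year,
    # but never less than the floor.
    return max(gap / YEARS_TO_CATCH_UP, GROWTH_FLOOR)

def met_target(prior_score: float, current_score: float) -> bool:
    return (current_score - prior_score) >= growth_target(prior_score)

# A school-level "growth to proficiency" figure is then something like the
# share of tested students who met their individual targets.
students = [(700, 725), (760, 762), (730, 735)]  # (Year 1, Year 2) score pairs
rate = sum(met_target(y1, y2) for y1, y2 in students) / len(students)
print(f"{rate:.0%} of students met their growth target")
```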
They are very much skewed by feeder patterns. Also, many schools have a small population in the testing grades, so the numbers aren't very meaningful.
Nevertheless, I do think it's important that we look at growth, and this seems to be the best metric we have. Schools here mainly come under criticism for poor performance and growth stats relative to their demographics.
My understanding is that they look at individual students, so the feeder pattern skew just means they have less/different info at some schools.
The growth score can tell you a lot about the quality of teaching. It's not that hard to keep an on-grade-level kid on grade level. It's much harder to have below-grade kids catch up, because they have to make more than a year's growth in a year's time. So if you look at two schools with the same proficiency stats but one has better growth, the one with better growth is obviously more impressive.
But what if one school has a kind of normal distribution of proficiency and the other has one that's more bimodal due to demographics?
Again, it's a year-to-year match for individual kids. Yes, a different skill is needed to appropriately grow high performers vs. mid performers vs. low performers, so an overall growth measure doesn't tell you everything, but the actual measure is derived from looking at how each individual kid did in Year 1 vs. Year 2, not some overall average mishmash.
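To make the individual-match point concrete, here is a rough sketch of how a growth percentile is typically built: each kid's Year 2 score is ranked only against other kids statewide who started from a similar Year 1 score, and the school's number is the median of those per-kid percentiles. This is a simplification, not OSSE's actual code; real student growth percentile models use quantile regression rather than crude score buckets, and the field names here are made up.

```python
# Simplified sketch of a median growth percentile (MGP) calculation.
# Assumes every school student also appears in the statewide list, so a
# peer group is never empty.

from statistics import median

def growth_percentile(student, peers):
    """Percent of same-prior-score peers whose Year 2 score this student
    met or exceeded."""
    beaten = sum(1 for p in peers if student["y2"] >= p["y2"])
    return 100 * beaten / len(peers)

def median_growth_percentile(school_students, state_students, bucket=10):
    percentiles = []
    for s in school_students:
        # Peer group: everyone statewide whose Year 1 score was similar.
        peers = [p for p in state_students if abs(p["y1"] - s["y1"]) <= bucket]
        percentiles.append(growth_percentile(s, peers))
    return median(percentiles)
```

The point is that every number feeding the median is already conditioned on where that particular kid started, which is why the school's overall demographics don't enter the calculation directly.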
It's based on two years of scores for individual students, but I don't think it matters where the student went to school in the baseline year.
The metrics do ultimately rely on medians and averages. I can appreciate that it's better for that to happen toward the end of the calculations, but I'm still not sure it produces anything that is meaningful on its own without the additional context of subscores by proficiency level, grade, and demographics. Since they're calculating the median growth percentiles for each level of proficiency in Year 1, why don't they present them at that level of granularity? It seems like that could be somewhat more insightful.
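Something like this is what I mean by presenting it at that level of granularity: the same per-kid growth percentiles, but with the median reported separately for each Year 1 proficiency level instead of pooled into one school-wide number. The field names and values below are made up for illustration.

```python
# Report the median growth percentile within each Year 1 proficiency level
# (1-5) rather than one pooled school-wide median. Data is invented.

from collections import defaultdict
from statistics import median

def mgp_by_prior_level(students):
    groups = defaultdict(list)
    for s in students:
        groups[s["prior_level"]].append(s["growth_percentile"])
    return {level: median(gps) for level, gps in sorted(groups.items())}

students = [
    {"prior_level": 2, "growth_percentile": 62},
    {"prior_level": 2, "growth_percentile": 48},
    {"prior_level": 4, "growth_percentile": 35},
    {"prior_level": 4, "growth_percentile": 41},
]
print(mgp_by_prior_level(students))  # {2: 55.0, 4: 38.0}
```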
It favors the poorly performing schools. When you are at the bottom, the only way to go is up.
When you are at the top, there is no more growth to show, and you either stagnate or go down. That is why high-performing schools with many more high-performing kids don't have as good growth scores. It doesn't mean kids are not learning or doing well. My kid scores in the 95-98th percentile on standardized math testing: highest percentile in the group, but terrible growth scores. No surprise there; not much room to improve. I don't look at growth scores at all. I look at the total percentage of kids above grade level to see if there is a cohort of such kids, and at the total percentage of kids on or above grade level to see if that's a majority (so at least grade-level content can be taught).
I agree generally that schools with lower overall test scores tend to post better growth scores. I would not describe these schools as "poorly performing." I would say they serve kids who are more likely to be below grade level for a variety of reasons. Often these are Title 1 schools with a larger percentage of at-risk kids. Schools with a large percentage of at-risk kids that are able to achieve large growth scores are high performing. Moving students from below proficiency to proficiency requires strong teaching no matter what, but the less support kids are receiving at home, the harder it is. So the student growth metric actually helps identify high-performing schools.

What you are actually pointing out is that at a school where the majority of kids begin at or above grade level, the student growth metric is not as useful, especially if your child is also already at or above grade level. However, even at those schools there will still be students who are below grade level, and the student growth metric measures how well the school facilitates growth in those students. If a school is not helping them achieve significant growth to proficiency, that is vital information for families evaluating the school for kids who are not already performing at or above grade level: it is an indication the school may not serve kids who are below grade level well. That doesn't mean the school doesn't do other things well, but it's a very important metric for many parents.
If you do not like this metric, you might like the empower DC BOLD schools one. It looks specifically at which schools are doing well in absolute terms with at-risk kids.
It actually doesn't conceptually or statistically favor poorly performing schools. It favors schools with the highest percentage of kids who are right at grade level and therefore best suited to the mandatory DCPS curriculum, especially in math, where there are stricter rules about adjusting assignments. Additionally, all the evidence is that it's hardest to grow the poorest performers, which makes sense: there are probably innate reasons they are performing poorly that are difficult to overcome (learning disabilities, for example). But in practice, lots of "good" schools (those with high-performing kids) have good growth metrics too. Take a look at the rich UNW schools. My kids are at a CH school with a bimodal distribution and lots of 5th grade flight. Our growth scores are still decent: great teachers, and the parents of the high achievers are supplementing.
None of the alternatives are good either, from an "easy to understand" perspective. Like, should you log-transform it? Use some sort of Poisson model based on clearing the "3" threshold?
But can you actually see that in the metric? At schools like this, doesn't the growth (or lack thereof) of the majority of students who are at grade level wash out what's happening with the minority of students who are not at grade level?
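To put rough, made-up numbers on that concern: if 80 of 100 tested kids are at grade level with middling growth and 20 are below grade level with weak growth, a single school-wide median barely registers the 20.

```python
# Toy illustration of the "wash out" concern: a school-wide median is
# dominated by whichever group is largest. All numbers are invented.

from statistics import median

at_grade = [55] * 80  # 80 kids at/above grade level, middling growth percentiles
below    = [20] * 20  # 20 kids below grade level, weak growth percentiles

print(median(at_grade + below))  # 55 -> school-wide number looks fine
print(median(below))             # 20 -> the below-grade subgroup's story
```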
I will also point out that a low-growth year is an indicator that the following year's performance may take a hit. Low growth can compound over time, and a few years of slightly below-par growth can pull a kid down from a 4 to a 3 or whatever. So if you see bad growth, expect trouble ahead.
Some people will just look at how kids who demographically match their kid are doing, and it's almost always non-at-risk white kids, because that's the only subgroup big enough to break out at most schools. They conclude that if that subgroup is doing fine, then the school is fine for their kid. I get that this is an attempt to control for demographics across schools, but I think it's a real mistake to look at it that way. A school that is not effective at serving a portion of its population will have to reallocate resources toward that portion, or the teachers will have to differentiate across an ever-widening range. Either way, it will affect everyone. It's much better to choose a school that's serving all its kids well.