Anonymous wrote:Anonymous wrote:See the "DC Report Card Technical Guide" here: https://osse.dc.gov/page/dc-school-report-card-resource-library
p. 53 Student Group Weights
"The accountability system calculates each metric for each student group present in the school. Student groups with fewer than 10 students for that metric are not included. In these cases, the student groups are dropped from the overall metric scores. After calculating the student group metric scores, they are aggregated based on the weights listed in Table 12 below to come up with a single metric score used in the accountability score calculation.
Table 12: Student Group Weights
Student Group: Percentage of Overall Score
All Students: 30%
Economically Disadvantaged: 40%
Race/Ethnicity: 15%
Students with Disabilities: 10%
English Learners: 5%"
WOAH. So the achievement of economically disadvantaged kids is not only double counted, it alone counts for more than the achievement of ALL kids??? (But a group doesn't count at all if it has fewer than 10 students. So the group that's hardest to grow just doesn't get counted at rich schools?) On top of that, what does "race/ethnicity" mean here? Does it mean Black? All of the other rows name a specific group of kids, whereas "race/ethnicity" is a category containing multiple groups, not a single group. That is, "race/ethnicity" has no single measure and so cannot be averaged in like the other categories. So it may be that, on top of everything else, they are counting the achievement of only one specific racial/ethnic group, but they won't even come out and say that. That is INSANE.
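For concreteness, here's a minimal sketch of how the Table 12 aggregation could work. The renormalization of the remaining weights when a group is dropped is my assumption (the guide quoted above doesn't spell that step out), and all group scores and sizes below are invented, not real school data:

```python
# Sketch of the Table 12 aggregation. Weights come from the quoted guide;
# the renormalization when a group is dropped is an assumption.
WEIGHTS = {
    "All Students": 0.30,
    "Economically Disadvantaged": 0.40,
    "Race/Ethnicity": 0.15,
    "Students with Disabilities": 0.10,
    "English Learners": 0.05,
}

def metric_score(group_scores, group_sizes, min_n=10):
    """Combine per-group metric scores into one weighted metric score.

    Groups with fewer than min_n students are dropped, and the weights
    of the remaining groups are rescaled to sum to 1 (an assumption).
    """
    kept = {g: s for g, s in group_scores.items()
            if group_sizes.get(g, 0) >= min_n}
    total_weight = sum(WEIGHTS[g] for g in kept)
    return sum(WEIGHTS[g] * s for g, s in kept.items()) / total_weight

# Invented example: a school whose English Learner group is under 10
# students, so that group is dropped and the other weights rescale.
scores = {"All Students": 70.0, "Economically Disadvantaged": 55.0,
          "Race/Ethnicity": 60.0, "Students with Disabilities": 50.0}
sizes = {"All Students": 300, "Economically Disadvantaged": 40,
         "Race/Ethnicity": 120, "Students with Disabilities": 25,
         "English Learners": 4}
print(round(metric_score(scores, sizes), 1))
```

Note how the 70% weight on the at-risk groups means the all-students number can be pulled well away from the final metric score.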
Anonymous wrote:Aspire DEFINITELY works this way. There are floors and ceilings for each subgroup, which leads to a complicated scoring system that emphasizes at-risk, SpEd, and ELL populations. Is this also true for the Report Card?
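If the Report Card does use Aspire-style floors and ceilings, the rescaling might look something like this sketch. The cut points below are hypothetical, purely to show the mechanism, not OSSE's actual values:

```python
def points(value, floor, ceiling, max_points):
    """Linearly rescale a metric between a floor and a ceiling.

    At or below the floor earns 0 points; at or above the ceiling earns
    the maximum. Floors and ceilings here are hypothetical examples.
    """
    if value <= floor:
        return 0.0
    if value >= ceiling:
        return float(max_points)
    return (value - floor) / (ceiling - floor) * max_points

# The same raw score can earn very different points depending on the
# band it is measured against, e.g. if each subgroup has its own band:
print(points(62, floor=30, ceiling=85, max_points=10))
print(points(62, floor=50, ceiling=70, max_points=10))
```

This is one mechanism that could make assigned points diverge from raw percentages: a school scored against a tighter band earns more points for the same raw value.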
Anonymous wrote:There's something screwy with the data, or the data is not actually showing overall achievement/growth but is somehow being measured relative to population, emphasizing the performance of certain subpopulations. If that's true, so that even "achievement" is not actually measuring achievement, then this Report Card is really not terribly valuable for UMC parents.
Here's an example:
L-T: 70.3% approaching/meets/exceeds for math; 52.6% meets/exceeds.
Watkins: 65.9% approaching/meets/exceeds for math; 37.3% meets/exceeds.
So, L-T is a little better on approaching/meets/exceeds for math (4.4 points better) and considerably better on meets/exceeds (15.3 points better). If I were a UMC parent looking for a cohort of high performers, L-T clearly has the much bigger one.
However, when you look at the scores the Report Card assigns toward the overall rating, L-T gets a 2.4 for approaching/meets/exceeds in math and Watkins gets a 3.9. So Watkins does a little worse on the raw number and scores more than 60% higher. That seems... wrong. But then we get to meets/exceeds, where L-T was MUCH better... and we see L-T get a 5.5 and Watkins get a 7.7! So despite L-T having a noticeably higher pass rate, Watkins again scores 40% higher.
It's also not some fluke of distribution being counted somehow, because for Watkins:
ELA: 8% 5s and 19% 1s.
Math: 10% 5s and 15% 1s.
For L-T:
ELA: 28% 5s (!) and 11% 1s.
Math: 16% 5s and 17% 1s.
So, at most, L-T has slightly more 1s in math; but way fewer 1s in ELA and WAY more 5s.
L-T does much better in ELA and does a bit better on points, so that at least makes sense.
Then let's look at growth. Going by raw numbers, the two schools appear to perform similarly, but somehow Watkins ends up with 34 points and L-T with 27.1. The most perplexing one is median growth percentile for ELA: L-T is at the 62nd percentile, which gets them 8 points, while Watkins is at the 57th percentile, which gets them 8.6 points. WHAT? Can anyone make sense of this?
At the end of the day, L-T has clearly better actual achievement numbers (especially in ELA, where they aren't close) and the schools have very similar growth and absentee numbers. L-T has more kids entering than exiting; Watkins has the reverse. The soft percentage factors I take it don't really count, but they are similar for the schools too... except that L-T is notably better in sense of belonging (76 v 69) and WAY better in safety (62 v 45). Everything else is within 1 point either way; 2 to L-T and 1 to Watkins.
And then we get to the overall scores and Watkins is at 76% and L-T is at 59%... even though the raw numbers clearly give L-T an advantage in achievement and basically a draw in everything else.
How is this possible? Does that mean the "scores" we're seeing for achievement and growth are themselves relative in some totally unexplained way? Or that some subpopulation scores count more? Because that makes this pretty useless for a family trying to use these Report Cards to make decisions.
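To see how the subgroup weighting alone could produce an inversion like this, here's a toy example using the Table 12 weights from the technical guide with entirely invented subgroup scores (not real L-T or Watkins data):

```python
# Toy illustration of how Table 12's weights can invert a ranking.
# All subgroup scores are invented; only the weights come from the guide.
WEIGHTS = {"All Students": 0.30, "Economically Disadvantaged": 0.40,
           "Race/Ethnicity": 0.15, "Students with Disabilities": 0.10,
           "English Learners": 0.05}

def weighted(scores):
    # Assumes every group meets the 10-student minimum.
    return sum(WEIGHTS[g] * s for g, s in scores.items())

school_a = {"All Students": 70, "Economically Disadvantaged": 35,
            "Race/Ethnicity": 45, "Students with Disabilities": 30,
            "English Learners": 40}
school_b = {"All Students": 60, "Economically Disadvantaged": 55,
            "Race/Ethnicity": 50, "Students with Disabilities": 45,
            "English Learners": 50}

# School A beats B on the all-students number (70 vs 60) yet loses the
# weighted score, because the other groups carry 70% of the weight.
print(weighted(school_a), weighted(school_b))
```

So if the Report Card applies these weights inside each metric, a school with the stronger all-students numbers can still come out behind, which would be consistent with what the raw-vs-assigned gap above looks like.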
Anonymous wrote:Anonymous wrote:I am surprised by how badly Eliot-Hine performed. There are a lot of threads on DCUM trying to convince me that EH is the equivalent of SH, which this data does not support in the least... SH lands at the 84th percentile (more or less equivalent to Hardy) while EH lands at the 21st.
EH was substantially behind SH in both scores (basically SH is +20% in every measure) and growth (SH above average for both; EH below for both).
EH also had 35%(!!!) of students chronically absent.
I genuinely do not mean this to bash EH, and I am glad it is getting increased neighborhood buy-in, but this Report Card presents a totally different reality than DCUM does. EH actually came out behind Jefferson, but those two are much closer, and the gap seems to be more about how you weight student achievement vs. growth.
I was also surprised by EH. Those growth scores are… not good. In the context of having a good chunk of kids with room to grow, it suggests the school is not doing a great job. Hopefully it’s just an anomaly.
Anonymous wrote:All of the movement is because the growth scores only measure change over one year, which is a really silly measure: a blip year can send you skyrocketing and then crashing, or vice versa. Since roughly two-thirds of the points end up based on CAPE scores from a single year, the results are always going to be wildly volatile and the percentages somewhat meaningless. Achievement and growth should both be three-year measures, perhaps with the most recent year double-weighted.
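The proposed fix can be sketched as a simple weighted mean. The (1, 1, 2) weighting and the sample values are illustrative, just to show how much a one-year spike gets damped:

```python
def three_year_score(oldest, middle, latest):
    """Three-year weighted mean with the latest year double-counted.

    Weights (1, 1, 2) implement the poster's "double weight the most
    recent year" suggestion; they are not an official OSSE formula.
    """
    return (oldest + middle + 2 * latest) / 4

# A one-year spike moves the smoothed score far less than a raw
# single-year measure would:
print(three_year_score(45, 47, 70))   # spike year -> 58.0
print(three_year_score(45, 47, 46))   # typical year -> 46.0
```

Under this scheme the spike year still moves the score (from 46.0 to 58.0 here), but by half the raw 24-point jump, so rankings would churn far less year to year.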