PARCC Comparison for MIDDLE SCHOOLS in MoCo and Upper Northwest

Anonymous
This post compares the PARCC performance at Pyle, Westland, Deal and Hardy. Unlike the analysis for elementary schools, where the demographic makeup of the schools was relatively similar, the demographics of the student bodies differ markedly at the middle school level. The differences are so extreme that any analysis that does not adjust for them is worthless.

Main conclusions after adjusting for demographics: Deal is the strongest of the four schools in English Language Arts and Pyle is the best for All Math [which includes both grade-level math and advanced math]. Looking just at grade-level math and reading, Hardy is the weakest, although in math it is not dramatically different from the MoCo schools. When we examine performance in Algebra 1/2 and Geometry [henceforth, Advanced Math], Hardy is the strongest school and Deal the weakest. This may surprise you, but it probably shouldn't. Advanced students at Hardy perform phenomenally; the issue is that, traditionally, there aren't many of them. Advanced math students may perform just as well at Deal, but Deal seems to place many more students (in percentage terms) into advanced math classes than the other schools. Some of these students appear unprepared and bring down the school's performance in advanced math. An odd note: multiple-race (two or more) advanced math students perform better at the MoCo schools than at the DC schools. This may confirm the fears of parents of mixed-race children that their children fall into the "wrong" peer group when surrounded by a more diverse student body. It is certainly something worth monitoring going forward. Lastly, Pyle and Westland are not equal: Pyle performs better than Westland in all subjects.


The general theme of the demographic differences is that Westland and, especially, Pyle are considerably whiter than Deal. Deal, itself, is considerably whiter than Hardy. Conversely, Hardy is much blacker and has a much higher FARMS rate than the other schools. (I do not adjust for FARMS rates.) Without getting into the reasons why black and Hispanic students traditionally perform worse on standardized tests, I calculate performance by racial subgroup for each school. These performance metrics get reweighted at Deal and Hardy to enable an apples-to-apples comparison with the MoCo schools. (Demographic data are at the end of this post.)

In summary, I calculate PARCC performance for each subgroup at each school and then construct a summary measure that is comparable across schools. The summary measure evaluates the PARCC performance at the schools as if they had the same demographics.
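The summary measure described above is a form of direct standardization: compute each school's pass rate by subgroup, then average those rates using a common set of demographic weights. A minimal sketch of that arithmetic (the school names, rates, and shares below are hypothetical placeholders, not the actual PARCC or demographic figures):

```python
# Sketch of the reweighting ("direct standardization") behind the summary
# measure. All numbers are made-up placeholders, NOT the actual PARCC or
# demographic figures; they only illustrate the arithmetic.

def reweighted_rate(subgroup_rates, reference_shares):
    """Average one school's subgroup pass rates using another school's
    demographic shares as weights (normalized, since small subgroups may
    be omitted and the shares need not sum to exactly 1)."""
    total = sum(reference_shares.values())
    return sum(subgroup_rates[g] * reference_shares[g]
               for g in reference_shares) / total

# Hypothetical share of students scoring 4 or 5, by subgroup, at School A
school_a_rates = {"white": 0.80, "black": 0.35, "asian": 0.85, "hispanic": 0.40}

# Demographic shares at School B, used as the reference demographics
school_b_shares = {"white": 0.73, "black": 0.02, "asian": 0.12, "hispanic": 0.08}

# "If School A had School B's demographics, A would have ..." scoring 4 or 5
print(round(reweighted_rate(school_a_rates, school_b_shares), 3))  # → 0.763
```

Applying the same reference weights to two schools makes their summary rates comparable: any remaining gap cannot be explained by differences in demographic mix across the listed subgroups.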



ALL MATH

Comparison to Deal:
If Deal had the same demographics as Pyle, Deal would have 56.9% of students scoring a 4 or 5 while Pyle would have 65.9%.

If Deal had the same demographics as Westland, Deal would have 52.1% of students scoring a 4 or 5 while Westland would have 48.8%.

Comparison to Hardy:
If Hardy had the same demographics as Pyle, Hardy would have 41.5% of students scoring a 4 or 5 while Pyle would have 65.6%.

If Hardy had the same demographics as Westland, Hardy would have 38% of students scoring a 4 or 5 while Westland would have 48.3%.

ADVANCED MATH (ALGEBRA AND GEOMETRY)

Comparison to Deal:
If Deal had the same demographics as Pyle, Deal would have 58.3% of students scoring a 4 or 5 while Pyle would have 91.6%.

If Deal had the same demographics as Westland, Deal would have 55.8% of students scoring a 4 or 5 while Westland would have 62.2%.

Comparison to Hardy:
If Hardy had the same demographics as Pyle, Hardy would have 93.9% of students scoring a 4 or 5 while Pyle would have 91.9%.

If Hardy had the same demographics as Westland, Hardy would have 89.7% of students scoring a 4 or 5 while Westland would have 62.2%.

Here it seems relevant to mention that Pyle and Westland place about 25% of their students in advanced math while Deal places over 45%. Hardy, by contrast, places only 7.3% of its students in advanced math. The performance data suggest to me that Deal is too generous in placing students into advanced math, while Hardy could stand to place more students there.


ENGLISH LANGUAGE ARTS

Comparison to Deal:
If Deal had the same demographics as Pyle, Deal would have 81.8% of students scoring a 4 or 5 while Pyle would have 72%.

If Deal had the same demographics as Westland, Deal would have 76.1% of students scoring a 4 or 5 while Westland would have 60.7%.

Comparison to Hardy:
If Hardy had the same demographics as Pyle, Hardy would have 37.5% of students scoring a 4 or 5 while Pyle would have 72.2%.

If Hardy had the same demographics as Westland, Hardy would have 34.7% of students scoring a 4 or 5 while Westland would have 60.0%.

It seems relevant to mention that Hardy appears to have about twice the percentage of English-Language Learners as the other three schools. (Westland is the next highest percentage.) I doubt that this accounts for much of Hardy’s weaker performance in English Language Arts, but I imagine it does account for some of it.

DEMOGRAPHIC DATA:
PYLE:
73% White
2% Black
12% Asian
8% Hispanic
<5% FARMS

WESTLAND:
63% White
11% Black
6% Asian
16% Hispanic
13% FARMS

DEAL:
45% White
31% Black
6% Asian
13% Hispanic
19% FARMS

HARDY:
13% White
59% Black
11% Asian
13% Hispanic
52% FARMS

References:
DCPS PARCC Data: http://osse.dc.gov/node/1120842
DCPS Demographic Data: http://profiles.dcps.dc.gov/ and PARCC results
MoCo Data: http://reportcard.msde.maryland.gov/rschool.aspx?K=15AAAA&WDATA=school

Anonymous
Interesting conclusions but I would say your methodology is more than a little shaky. There are proven statistical methods to examine y as a function of a series of explanatory variables (x) while isolating some unknown variable, here presumably "instruction". If that's over your head or if you don't have a large enough dataset, I don't see why creating some convoluted measure is any better than simply comparing subgroups. How well does each school do teaching a particular subgroup, such as all affluent students, or all white students?
Anonymous
Anonymous wrote:Interesting conclusions but I would say your methodology is more than a little shaky. There are proven statistical methods to examine y as a function of a series of explanatory variables (x) while isolating some unknown variable, here presumably "instruction". If that's over your head or if you don't have a large enough dataset, I don't see why creating some convoluted measure is any better than simply comparing subgroups. How well does each school do teaching a particular subgroup, such as all affluent students, or all white students?


No offense, but I have a PhD in economics and taught econometrics at an R1 research university. You can shove your patronizing tone up your rear end.

There is no need to do anything fancier. The methodology is both sound and appropriate; it is exactly what one would and should do upon first reflection. I have tried to avoid mucking up the discussion and talking over most readers' heads. In keeping the overview light, I may have lost you, dear reader.

Incidentally, comparing subgroups is exactly what is at the heart of the analysis. The subgroup results are then aggregated up in a straightforward manner, with the same weights used for each school. That's the summary measure that's reported.
Anonymous
Before anyone gets too excited about "mixed race" kids doing so much better in bcc and Whitman land, know that that typically means "white + Asian/ southeast Asian" in those parts. Carry on
Anonymous
Whoever you are OP, thanks.


Also, my real estate agent thanks you, too
Anonymous
Anonymous wrote:
Anonymous wrote:Interesting conclusions but I would say your methodology is more than a little shaky. There are proven statistical methods to examine y as a function of a series of explanatory variables (x) while isolating some unknown variable, here presumably "instruction". If that's over your head or if you don't have a large enough dataset, I don't see why creating some convoluted measure is any better than simply comparing subgroups. How well does each school do teaching a particular subgroup, such as all affluent students, or all white students?


No offense, but I have a PhD in economics and taught econometrics at an R1 research university. You can shove your patronizing tone up your rear end.

There is no need to do anything fancier. The methodology is both sound and appropriate; it is exactly what one would and should do upon first reflection. I have tried to avoid mucking up the discussion and talking over most readers' heads. In keeping the overview light, I may have lost you, dear reader.

Incidentally, comparing subgroups is exactly what is at the heart of the analysis. The subgroup results are then aggregated up in a straightforward manner, with the same weights used for each school. That's the summary measure that's reported.


First, thank you for this work. Can you compare DC to schools in MA?
Anonymous
Anonymous wrote:

Comparison to Hardy:
If Hardy had the same demographics as Pyle, Hardy would have 93.9% of students scoring a 4 or 5 while Pyle would have 91.9%.

If Hardy had the same demographics as Westland, Hardy would have 89.7% of students scoring a 4 or 5 while Westland would have 62.2%.




Is data available to tell us how Pyle or Westland would perform if they had the same demographics as Hardy? Certain kids will perform well wherever they go, but it would be much more impressive IMO if one of these schools is serving their minority/FARM populations better than the other.
Anonymous
Anonymous wrote:Before anyone gets too excited about "mixed race" kids doing so much better in bcc and Whitman land, know that that typically means "white + Asian/ southeast Asian" in those parts. Carry on


Mixed race parent with mixed race kid here. Agree with this.
Anonymous
Anonymous wrote:
Anonymous wrote:Interesting conclusions but I would say your methodology is more than a little shaky. There are proven statistical methods to examine y as a function of a series of explanatory variables (x) while isolating some unknown variable, here presumably "instruction". If that's over your head or if you don't have a large enough dataset, I don't see why creating some convoluted measure is any better than simply comparing subgroups. How well does each school do teaching a particular subgroup, such as all affluent students, or all white students?


No offense, but I have a PhD in economics and taught econometrics at an R1 research university. You can shove your patronizing tone up your rear end.

There is no need to do anything fancier. The methodology is both sound and appropriate; it is exactly what one would and should do upon first reflection. I have tried to avoid mucking up the discussion and talking over most readers' heads. In keeping the overview light, I may have lost you, dear reader.

Incidentally, comparing subgroups is exactly what is at the heart of the analysis. The subgroup results are then aggregated up in a straightforward manner, with the same weights used for each school. That's the summary measure that's reported.


Sick burn.

(And thanks, OP--your analysis is awesome.)
Anonymous
Can you compare DC to schools in MA?


I think you're on the wrong website.



Here are a couple of options:

http://bostonparentbloggers.com

http://www.bpsfamilies.org/parentuniversity

http://www.pplace.org
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Interesting conclusions but I would say your methodology is more than a little shaky. There are proven statistical methods to examine y as a function of a series of explanatory variables (x) while isolating some unknown variable, here presumably "instruction". If that's over your head or if you don't have a large enough dataset, I don't see why creating some convoluted measure is any better than simply comparing subgroups. How well does each school do teaching a particular subgroup, such as all affluent students, or all white students?


No offense, but I have a PhD in economics and taught econometrics at an R1 research university. You can shove your patronizing tone up your rear end.

There is no need to do anything fancier. The methodology is both sound and appropriate; it is exactly what one would and should do upon first reflection. I have tried to avoid mucking up the discussion and talking over most readers' heads. In keeping the overview light, I may have lost you, dear reader.

Incidentally, comparing subgroups is exactly what is at the heart of the analysis. The subgroup results are then aggregated up in a straightforward manner, with the same weights used for each school. That's the summary measure that's reported.


First, thank you for this work. Can you compare DC to schools in MA?


Yes, thank you for your work. Can you compare Ward 8 schools to schools in Liberia?
Anonymous
OP, thanks for this as well as the one yesterday. It's not just Deal and WOTP elementaries that do well in DC when "comparing demographics," as you say. You'll find that schools like Shepherd, Oyster, Ross, Brent, Maury, and Tyler do just as well. Showing only these schools perpetuates the assumption that you have to be WOTP for your kid to do well. There are also a handful or two of charters that do just as well as, if not better than, WOTP schools.
Anonymous
Anonymous wrote:
Anonymous wrote:Before anyone gets too excited about "mixed race" kids doing so much better in bcc and Whitman land, know that that typically means "white + Asian/ southeast Asian" in those parts. Carry on


Mixed race parent with mixed race kid here. Agree with this.


+1000 Exactly.
Anonymous
Op here.

Several replies: I don't know about the composition of mixed-race children in DC versus MoCo. Implicitly, I assumed it was the same. This could be wrong, but I have no data either way. If the anecdotes are correct, and I have no reason to doubt them, then to the extent that mixed race students in MoCo are from a better-testing composition, it suggests that we should exclude the category from comparisons. The effect would be to improve Deal's standing relative to MoCo.

There are plenty of good schools in DCPS. I acknowledged this yesterday when saying that I view 95 out of the 120 DCPS schools as abject failures. This leaves 25 functional schools, a number far larger than just those in upper northwest. They're not all equivalent, of course. I think Brent is pretty comparable to the schools discussed. Powell seems to be solid already and there is reason for optimism with some of the other Petworth schools (Barnard and Bruce Monroe), though they're not as far along as Powell.

I chose a limited number of schools to compare since this is a time-consuming process. I had some time to kill during my last few lunch breaks (while programs ran) so I figured I'd dig around. There's no grand agenda here.
Anonymous
Anonymous wrote:Op here.

Several replies: I don't know about the composition of mixed-race children in DC versus MoCo. Implicitly, I assumed it was the same. This could be wrong, but I have no data either way. If the anecdotes are correct, and I have no reason to doubt them, then to the extent that mixed race students in MoCo are from a better-testing composition, it suggests that we should exclude the category from comparisons. The effect would be to improve Deal's standing relative to MoCo.

There are plenty of good schools in DCPS. I acknowledged this yesterday when saying that I view 95 out of the 120 DCPS schools as abject failures. This leaves 25 functional schools, a number far larger than just those in upper northwest. They're not all equivalent, of course. I think Brent is pretty comparable to the schools discussed. Powell seems to be solid already and there is reason for optimism with some of the other Petworth schools (Barnard and Bruce Monroe), though they're not as far along as Powell.

I chose a limited number of schools to compare since this is a time-consuming process. I had some time to kill during my last few lunch breaks (while programs ran) so I figured I'd dig around. There's no grand agenda here.


While you may not have a "grand agenda" - I totally believe that; economists typically don't - you're the proven subject of someone else's grand agenda, one that tells us on an almost daily basis that tests are the be-all and end-all measure of school "success" and that there is some magic threshold at which a school should be deemed a "failure." Maybe during one of the lunch breaks you spend churning programs and data, you would be well advised to take a walk and pay a visit to one or another of those abject failures you so haughtily declare as realities. But, yes, carry on.