DC School Report Cards are up

Anonymous
Anonymous wrote:So what's the bottom line on the utility of these things for parents?

It seems like it's primarily designed to determine where to direct extra funds and not to determine the comparable quality of the various schools. I shudder to think how much DCPS spent on consultants to develop this.



Disagree. In a city with so many students at risk, I think having a way to measure student growth across both sectors and by student group is helpful (achievement was available before). I like being able to see teacher experience and student retention by subgroup too.

I think it should make people a little more open to their neighborhood elementary schools and a bit more wary of some of the DCUM-favored charters.

Also, DCPS didn’t develop these; OSSE did. And the weighting was largely dictated by the US Dept of Education (test scores had to comprise the bulk of the ratings for ES and MS, and for high schools the graduation rate was heavily factored in).
Anonymous
Anonymous wrote:

Disagree. In a city with so many students at risk, I think having a way to measure student growth across both sectors and by student group is helpful (achievement was available before). I like being able to see teacher experience and student retention by subgroup too.

I think it should make people a little more open to their neighborhood elementary schools and a bit more wary of some of the DCUM-favored charters.


I agree, but I think that is my issue. From a public policy standpoint it provides decent data on where finite resources should be deployed and helps identify schools that are slipping.

However, from a parental perspective the weighting of at-risk kids, test score growth, and demographic comparables makes it less useful for identifying the absolute quality of a particular school. In essence, the signal gets lost in the noise.

I do think that this makes some of the unfavored neighborhood schools look better, which will help counter the emotional factors driving people away (which is a good public policy goal) but worry that by making some of the favored schools look superficially worse it will have negative consequences.

What will the inevitable gaming of the stars lead to? In some cases it will lead to beneficial outcomes by encouraging admission of at-risk students and students with disabilities, but in others it will do the exact opposite. It will be interesting to see if there is an explosion of IEPs rolling over into the testing years.
Anonymous
Anonymous wrote:
Anonymous wrote:

Disagree. In a city with so many students at risk, I think having a way to measure student growth across both sectors and by student group is helpful (achievement was available before). I like being able to see teacher experience and student retention by subgroup too.

I think it should make people a little more open to their neighborhood elementary schools and a bit more wary of some of the DCUM-favored charters.


I agree, but I think that is my issue. From a public policy standpoint it provides decent data on where finite resources should be deployed and helps identify schools that are slipping.

However, from a parental perspective the weighting of at-risk kids, test score growth, and demographic comparables makes it less useful for identifying the absolute quality of a particular school. In essence, the signal gets lost in the noise.

I do think that this makes some of the unfavored neighborhood schools look better, which will help counter the emotional factors driving people away (which is a good public policy goal) but worry that by making some of the favored schools look superficially worse it will have negative consequences.

What will the inevitable gaming of the stars lead to? In some cases it will lead to beneficial outcomes by encouraging admission of at-risk students and students with disabilities, but in others it will do the exact opposite. It will be interesting to see if there is an explosion of IEPs rolling over into the testing years.


Yes. Schools with strong administrations will focus on how to get 5 stars. The things they focus on may or may not actually improve student learning. Schools with weak administrations won't be as strategic and may therefore be of better quality than their rating indicates.
Anonymous
Anonymous wrote:
Anonymous wrote:

Disagree. In a city with so many students at risk, I think having a way to measure student growth across both sectors and by student group is helpful (achievement was available before). I like being able to see teacher experience and student retention by subgroup too.

I think it should make people a little more open to their neighborhood elementary schools and a bit more wary of some of the DCUM-favored charters.


I agree, but I think that is my issue. From a public policy standpoint it provides decent data on where finite resources should be deployed and helps identify schools that are slipping.

However, from a parental perspective the weighting of at-risk kids, test score growth, and demographic comparables makes it less useful for identifying the absolute quality of a particular school. In essence, the signal gets lost in the noise.

I do think that this makes some of the unfavored neighborhood schools look better, which will help counter the emotional factors driving people away (which is a good public policy goal) but worry that by making some of the favored schools look superficially worse it will have negative consequences.

What will the inevitable gaming of the stars lead to? In some cases it will lead to beneficial outcomes by encouraging admission of at-risk students and students with disabilities, but in others it will do the exact opposite. It will be interesting to see if there is an explosion of IEPs rolling over into the testing years.


Yes. That's what I'm seeing, and also how it feels to me. I'm also not really hearing any buzz about these stars at our (charter) school, as opposed to the Tiers. It seems to have fallen flat, as parents do not really know what to make of this mess of numbers.

The extra weighting of at-risk and disabled students, as you said, might make sense from a policy perspective but feels very weird as a parent. And it might make schools put extra services into these groups (especially disabled students), but only around their PARCC scores... I'm not sure that's ideal. And the stars allow all the other kids to completely fall through the cracks.
Anonymous
The stars roughly shook out the same way the charter Tiers do, but with 5 levels instead of 3. So it wasn't really a change.

And DCPS has had its own categories (Reward, Rising, Focus, etc.) for a while.

Each used slightly different inputs but all of them centered around PARCC performance.

I think it is useful to be able to compare charters and DCPS head to head although if you think the test score data is meaningless it doesn't help you much.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:

Disagree. In a city with so many students at risk, I think having a way to measure student growth across both sectors and by student group is helpful (achievement was available before). I like being able to see teacher experience and student retention by subgroup too.

I think it should make people a little more open to their neighborhood elementary schools and a bit more wary of some of the DCUM-favored charters.


I agree, but I think that is my issue. From a public policy standpoint it provides decent data on where finite resources should be deployed and helps identify schools that are slipping.

However, from a parental perspective the weighting of at-risk kids, test score growth, and demographic comparables makes it less useful for identifying the absolute quality of a particular school. In essence, the signal gets lost in the noise.

I do think that this makes some of the unfavored neighborhood schools look better, which will help counter the emotional factors driving people away (which is a good public policy goal) but worry that by making some of the favored schools look superficially worse it will have negative consequences.

What will the inevitable gaming of the stars lead to? In some cases it will lead to beneficial outcomes by encouraging admission of at-risk students and students with disabilities, but in others it will do the exact opposite. It will be interesting to see if there is an explosion of IEPs rolling over into the testing years.


Yes. That's what I'm seeing, and also how it feels to me. I'm also not really hearing any buzz about these stars at our (charter) school, as opposed to the Tiers. It seems to have fallen flat, as parents do not really know what to make of this mess of numbers.

The extra weighting of at-risk and disabled students, as you said, might make sense from a policy perspective but feels very weird as a parent. And it might make schools put extra services into these groups (especially disabled students), but only around their PARCC scores... I'm not sure that's ideal. And the stars allow all the other kids to completely fall through the cracks.


The weighting system is deeply flawed. Only at-risk and disabled students matter?

I'd understand if some agency for the disabled used this system to assess schools and prioritize budgets, but as a tool to evaluate the general quality of schools it's frankly stupid and probably racist.
Anonymous
At-risk kids aren't weighted any more than the other student groups.

Students with disabilities are, but only by an extra 5%.

If you run the scores with the all-student metric for both performance and growth, rather than the student groups, the conclusions don't change (the Empower group did this over the weekend).

What is making these ratings look different in spots is that student growth is weighted more heavily than proficiency.
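The growth-vs.-proficiency point is just weighted-average arithmetic. Here is a minimal sketch of the idea; the function name, the weights, and the 20-point star bands are invented for illustration and are not OSSE's actual formula:

```python
# Hypothetical sketch: these weights and star bands are illustrative only,
# NOT OSSE's real framework values. It shows why weighting growth above
# proficiency can rank a high-growth school ahead of a high-proficiency one.

def star_rating(proficiency, growth, w_growth=0.6, w_prof=0.4):
    """Combine two 0-100 metric scores into a 1-5 star rating."""
    composite = w_growth * growth + w_prof * proficiency
    # Map the 0-100 composite onto 1-5 stars in 20-point bands.
    return min(5, int(composite // 20) + 1)

# High proficiency but low growth, versus the reverse:
legacy_favorite = star_rating(proficiency=90, growth=30)     # composite 54 -> 3 stars
high_growth_school = star_rating(proficiency=30, growth=90)  # composite 66 -> 4 stars
```

Flip the weights (w_growth=0.4, w_prof=0.6) and the two hypothetical schools swap ratings, which is exactly the trade-off debated in the public meetings.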
Anonymous
Anonymous wrote:At-risk kids aren't weighted any more than the other student groups.

Students with disabilities are, but only by an extra 5%.

If you run the scores with the all-student metric for both performance and growth, rather than the student groups, the conclusions don't change (the Empower group did this over the weekend).

What is making these ratings look different in spots is that student growth is weighted more heavily than proficiency.


Exactly: weighting growth this heavily penalizes proficiency. To a system administrator, growth is an important metric. But to a parent, proficiency is what matters.

It's the same thing with demographics (which can get wildly skewed by population size, depend on accurate sorting, and use race as a stand-in for harder-to-determine categories such as household income, parental education, etc.).
Anonymous
Anonymous wrote:
Anonymous wrote:At-risk kids aren't weighted any more than the other student groups.

Students with disabilities are, but only by an extra 5%.

If you run the scores with the all-student metric for both performance and growth, rather than the student groups, the conclusions don't change (the Empower group did this over the weekend).

What is making these ratings look different in spots is that student growth is weighted more heavily than proficiency.


Exactly: weighting growth this heavily penalizes proficiency. To a system administrator, growth is an important metric. But to a parent, proficiency is what matters.

It's the same thing with demographics (which can get wildly skewed by population size, depend on accurate sorting, and use race as a stand-in for harder-to-determine categories such as household income, parental education, etc.).


Um, proficiency is directly correlated with SES and at-risk levels.

Anonymous
There were literally dozens of public meetings with parents and experts debating the relative weighting of proficiency vs growth.

OSSE's first proposed rating didn't weight growth more than proficiency, and they changed it based on overwhelming feedback from most people who bothered to show up.
Anonymous
Anonymous wrote:

Um, proficiency is directly correlated with SES and at-risk levels.



Yes, of course, and (on a macro level) when it comes to directing resources/rewarding performance that is highly important. But (on a micro level) that doesn't help determine the comparable quality of an education from an individual school.

In other words, what is the intended audience/use of these scores?

A deep dive into the social constructs of these neighborhoods highlights the problems of an overreliance on top-level demographics. For instance: English learners at JKLM = diplomat/IFI kids. In other parts of the city it means first-generation immigrants. The backgrounds of those two population groups are vastly different.

By creating a system reliant on top-level, acontextual demographics, we develop a vicious cycle of circular data that illuminates far less than we think.
Anonymous
Question - for students who received 5s on PARCC for 2 years running, is their score penalized for growth?
Anonymous
Anonymous wrote:There were literally dozens of public meetings with parents and experts debating the relative weighting of proficiency vs growth.

OSSE's first proposed rating didn't weight growth more than proficiency, and they changed it based on overwhelming feedback from most people who bothered to show up.


That doesn't surprise me. I'm sure the universe of people who knew both that the meetings were happening and what issues were actually being discussed was quite small.

It's the classic conundrum of special interests versus the common good overlaid with lots of jargon/terms of art.
Anonymous
Anonymous wrote:Question - for students who received 5s on PARCC for 2 years running, is their score penalized for growth?


Not on the final level itself, but on growth within the score range for that level.
Anonymous
Anonymous wrote:
Anonymous wrote:At-risk kids aren't weighted any more than the other student groups.

Students with disabilities are, but only by an extra 5%.

If you run the scores with the all-student metric for both performance and growth, rather than the student groups, the conclusions don't change (the Empower group did this over the weekend).

What is making these ratings look different in spots is that student growth is weighted more heavily than proficiency.


Exactly: weighting growth this heavily penalizes proficiency. To a system administrator, growth is an important metric. But to a parent, proficiency is what matters.

It's the same thing with demographics (which can get wildly skewed by population size, depend on accurate sorting, and use race as a stand-in for harder-to-determine categories such as household income, parental education, etc.).


Proficiency is easier to understand; just look at PARCC scores. I don't need to see the same measure twice. As a parent of a struggling high-SES student, I'm glad to see the focus on growth. I will say that growth measures before PARCC are fudgeable, though. They should only be measuring from 3rd grade on, not whatever random measures schools choose to use on their own before 3rd.