Anonymous wrote:Anonymous wrote:At-risk kids aren't weighted any more than the other student groups.
Students with disabilities are - but only an extra 5%.
If you run the scores with the all-student metric for both performance and growth, and not student groups, the conclusions don't change (the empower group did this over the weekend).
What is making these ratings look different in spots is that student growth is weighed more heavily than proficiency.
Exactly, weighting growth so heavily penalizes proficiency. As a system administrator, growth is an important metric. But as a parent, proficiency is what matters.
It's the same thing with demographics (which can get widely skewed by population size, depend on accurate sorting, and use race as a stand-in for harder-to-determine categories such as household income, parental education, etc.).
Anonymous wrote:Question - for students who received 5s on PARCC for 2 years running, is their score penalized for growth?
Anonymous wrote:There were literally dozens of public meetings with parents and experts debating the relative weighting of proficiency vs growth.
OSSE's first proposed rating didn't weight growth more than proficiency, and they changed it based on overwhelming feedback from most people who bothered to show up.
Anonymous wrote:
Um, proficiency is directly correlated to SES and at-risk levels.
Anonymous wrote:Anonymous wrote:Anonymous wrote:
Disagree. In a city with so many students at risk, I think having a way to measure student growth across both sectors and by student group is helpful (achievement was available before). I like being able to see teacher experience and student retention by subgroup too.
I think it should make people a little more open to their neighborhood elementary schools and a bit more wary of some of the DCUM-favored charters.
I agree, but I think that is my issue. From a public policy standpoint it provides decent data on where finite resources should be deployed and helps identify schools that are slipping.
However, from a parental perspective the weighting of at-risk kids, test score growth, and demographic comparables makes it less useful for identifying the absolute quality of a particular school. In essence, the signal gets lost in the noise.
I do think this makes some of the unfavored neighborhood schools look better, which will help counter the emotional factors driving people away (a good public policy goal), but I worry that making some of the favored schools look superficially worse will have negative consequences.
What will the inevitable gaming of the stars lead to? In some cases it will lead to beneficial outcomes by encouraging admittance of at-risk students and students with disabilities, but in others it will do the exact opposite. It will be interesting to see if there is an explosion of IEPs rolling over into the testing years.
Yes. That's what I'm seeing, and also how it feels to me. I'm also not really hearing any buzz about these stars at our (charter) school, as opposed to the Tiers. It seems to have fallen flat, as parents do not really know what to make of this mess of numbers.
The extra weighting of at-risk and disabled students, as you said, might make sense from a policy perspective but feels very weird as a parent. And it might make schools put extra services into these groups (especially students with disabilities), but only around their PARCC scores... I'm not sure that's ideal. And the stars allow all the other kids to completely fall through the cracks.
Anonymous wrote:So what's the bottom line on the utility of these things for parents?
It seems like it's primarily designed to determine where to direct extra funds, not to determine the comparable quality of the various schools. I shudder to think how much DCPS spent on consultants to develop this.