PP again. Here's one more write-up of a useful study:
http://blogs.edweek.org/teachers/teaching_now/2016/03/bias.html "Published in the journal Economics of Education Review, the "Who Believes in Me?" study was compiled to investigate how teachers form expectations for students, whether those expectations are systematically biased, and whether they are affected by racial differences. The findings are based largely on data from the Educational Longitudinal Study of 2002, an ongoing study following 8,400 10th grade public school students. For the survey, two different math or reading teachers, who each taught the same student, were asked to guess how far that one student would go in school. The findings show that with white students, evaluations from both teachers were about the same. But for black students, white teachers had lower expectations than black teachers."
Great question. But didn't the county do the same with the magnet acceptance process? They took a lot of kids whom they unilaterally defined as having "strong potential" based on inconsistent test scores and home schools without a peer cohort.
Bottom line: recs are biased, and this should be based on objective criteria.
Do you know that they used inconsistent test scores, or is this just an assumption?
This data hasn't been released, so this is pure speculation and unfounded.
MCPS's own statistics show which groups have high test scores and which don't. Apply probability and statistics to the numbers: if the admittance breakdown at the magnets is out of whack with MCPS's own statistics, that means they are not accepting the top performers. And indeed, MCPS has admitted that it looks at peer cohort. There is no question about this fact.
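The "apply probability and statistics" step above can be sketched as a simple expected-versus-observed comparison. All numbers below are made up for illustration (MCPS has not released the real breakdown, as noted elsewhere in the thread), and the group names are placeholders:

```python
# Hypothetical illustration only: made-up numbers, not real MCPS data.
# Each group's share of top scorers on some districtwide metric, and the
# observed admits to a magnet program with a fixed number of seats.
top_scorer_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}
observed_admits = {"group_a": 30, "group_b": 40, "group_c": 30}
seats = 100

# If admission tracked top scores alone, expected admits would follow
# each group's share of top scorers.
expected_admits = {g: share * seats for g, share in top_scorer_share.items()}

for g in top_scorer_share:
    gap = observed_admits[g] - expected_admits[g]
    print(f"{g}: expected {expected_admits[g]:.0f}, "
          f"observed {observed_admits[g]}, gap {gap:+.0f}")
```

A large gap between the expected and observed counts is what "out of whack with MCPS's own statistics" would mean in practice; a chi-square goodness-of-fit test would formalize whether the gap is bigger than chance.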
Yes, this. No one is saying recs are perfect, but they should be part of an evaluation. If studies show there is racial bias, consider weighting them differently for students of color and work on addressing the bias. But ignoring the evaluation of the people we pay to evaluate our kids academically is not effective. Tests are biased too. Should we throw out tests? How do we evaluate? We need a variety of data and the wisdom to interpret it.
But a "group" is not an individual.
But that's not true if there truly is a deficit of seats compared to highly qualified students. If, for example, you have 100 Asian students evaluated and 20 score at whatever your target threshold is (99% or whatever raw score), and you have 100 students of some other race evaluated and only 5 score at that SAME threshold, but you have 10 seats in the program, you can offer those seats to the 5 students of the other race and to 5 of the Asian students, and everyone admitted will have met the same threshold for high performance. There will just be 15 Asian kids who also met the threshold but didn't get admitted. And if those 15 all go to the same school, they will hopefully form a peer cohort in class, encouraging each other to excel. Obviously, no real-life situation is as simple as a stripped-down example. But if there are not enough seats for all of the highly able students (which everyone seems to agree is the case), then it is entirely possible for the selected student body not to mirror the racial percentages of the entire pool (either of MCPS students, or of MCPS students who score X on any particular metric) while every admitted student is still eminently qualified, without eroding the quality of the program at all.
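The arithmetic in the stripped-down example above can be sketched in a few lines. These are the hypothetical numbers from the post, not real data, and the allocation rule shown is just one possible way to fill the seats:

```python
# Hypothetical numbers from the example above -- not real data.
# Both groups face the same fixed performance threshold.
evaluated = {"asian": 100, "other": 100}
met_threshold = {"asian": 20, "other": 5}
seats = 10

# One possible allocation: admit every qualifier from the smaller
# qualifying pool, then fill the remaining seats from the larger one.
offers = {"other": met_threshold["other"]}
offers["asian"] = seats - offers["other"]

qualified_but_rejected = met_threshold["asian"] - offers["asian"]

print(offers)                   # {'other': 5, 'asian': 5}
print(qualified_but_rejected)   # 15
```

Every admitted student met the same threshold, yet the 50/50 split of admits does not mirror the 20-to-5 split among qualifiers: exactly the point the post is making about pool mirroring versus individual qualification.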
Oh yes, I keep hearing this argument, but when it's pointed out that MCPS is looking at the "group," i.e., cohort, for the "individual" high-achieving student from the west side, well, then it's OK to use group statistics. You want to use group statistics to justify different admissions criteria for the magnets (i.e., not enough URM representation), but you don't want to use group statistics to show which group is statistically more likely to be represented in the magnets. In other words, you only use those statistics when they favor your side, not when they don't. You can't have it both ways.
? You just made the argument for me regarding "peer cohort" vs. individual performance, and that they are using location as a proxy for race.
Where did you get your information? Do you have a credible source inside the Central Office? |
This is crazytown.
The MS that were most affected by the "peer cohort" nonsense are the ones with low URM and FARMS rates. It's not really that hard to connect those dots. It's kind of like how the Rs removed the SALT deduction: it's not hard to connect the dots that they took it away to hurt the coastal blue states, which disproportionately have high state income and property taxes. I don't think MCPS did this deliberately to go after wealthy Asian/white students. However, it just so happens that many of these students are wealthy white/Asian. In this case, race and location are closely linked, like how high state and property taxes are closely linked with the liberal coastal elites.
Why did MCPS put most of the magnets in the lower-income (and predominantly URM) schools, if not because it used location (and race) as the reason? If MCPS can use location and race as a qualifier for placing magnet programs in those schools, why wouldn't it use location/race as a qualifier for admitting students to those magnets?