Montgomery County Public Schools (MCPS) » Reply to "5th graders taking 6th grade map m"
[quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous][quote=Anonymous]I’m guessing there will be a separate lottery for the CM students and the regular students, each being in the top 15th percentile of their own group? Otherwise it’s apples and oranges and wouldn’t make sense. This is pure speculation—I have no idea how this will affect the lottery.[/quote] Not really; NWEA RIT scores have percentiles based on grade level. I haven't seen data for 5th graders taking the 6th grade test, but it likely exists and can be applied. [/quote] NWEA doesn't differentiate percentiles based on which version of the test is given; it relies on the adaptive nature of the test to produce a continuum of RIT scores. That may work when considering averages across large populations or, to some degree, when looking at a longitudinal series of tests for an individual. However, there appears to be high [i]individual[/i] variation at single test points when higher-level questions are added to the mix (as in the grade 6+ version of the test).

This shows how poorly a single-point-in-time score from these tests can reflect underlying ability/achievement (beyond any concern about using single data points as litmus tests for such decisions) -- the adaptive algorithm might present something it considers, say, 7th-grade level, which the student might not know, and will then "shift down" even if the student knows other sub-subject content at that or a higher level, content which then never gets tested in the first place. Yes, I know that adaptive tests may throw in more than one question at a particular level to counter this tendency, but, given the relatively few questions asked and short time allocated for these "untimed" tests, spread across 4 sub-subject areas, MAP doesn't really achieve adequate statistical certainty for an individual. 
This is among the reasons that, when a testing period isn't being used for selection criteria, families shouldn't worry too much about a one-off MAP score that is lower than expected. At the same time, families that can coach their children on test-taking skills (which tend to optimize expression of mastered content) create a distinct advantage in selection.

It's not as if MAP is a terrible tool. It can be quite good as a guide for teaching when not relied upon in the absence of good classroom observation, and broad results (county-wide or school-wide averages, where that variation can be viewed through a proper stochastic lens) can help evaluate, say, curricular effectiveness. It's just a poor choice for placement decisions (especially absent other system-independent measures), as MCPS uses it for their magnets.

By the way, those kids taking Math 5/6? MCPS also does not take their more rigorous course of study into account when reviewing the grade litmus used for inclusion in the criteria-based Math/Science/Computer Science magnet middle school lottery pool (and local-school AIM placement). This is just another of the several things that make their approach fail to distinguish apples from oranges (and nectarines, and pears, and...). Presuming no change from last year (they won't have OSA review until this coming spring), a student needs to get an A this quarter whether taking Math 5 or Math 5/6, along with an A in Science, an ON/ABV report card reading level, and the required FARMS-rate-based, locally-normed MAP %ile, the currently used tables for which can be found here: resources.njgifted.org/wp-content/uploads/2021/05/2020-NWEA-Math-Norms.pdf There are adjustments MCPS makes on an individual basis for IEP, 504, EML and FARMS (collectively, "students receiving services"), but they do not reveal what these adjustments entail.[/quote] Exactly. The chart has grade levels and gives a percentile. 
So, for 5th graders who got score X, they are at percentile Y. [b]This would be different for 5th graders who took the 6th-grade test than for 5th graders who took the more straightforward test.[/b] [i]The percentile would fairly reflect their knowledge, so it seems perfectly fair.[/i][/quote] Please reconcile these dissonant statements with additional detail so we can more clearly understand your point. The chart shows RIT score percentiles vs. students who took the test [i]in a particular grade[/i], not students who took a particular grade's test.

How is it fair if a student in Math 5/6 took the MAP-M 6+ and scored 230, due to the noted adaptive-testing variability at that switchover, but would have scored a 245 on the MAP-M given to same-grade students in Math 5? (We've seen this -- just looking at spring scores from the prior year, which track closely to the following year's fall scores.) It's also not fair the other way, of course. Students in Math 5 had been exposed only to 4th-grade math (and a week or two of 5th) by the time they took the test that serves as a gate to the magnet. Since scores are highly driven by exposure, that puts [i]them[/i] at a disadvantage. MCPS using that kind of back-and-forth advantage to suggest some kind of overall fairness ("It'll come out in the wash!") is simply ludicrous and a systemic disservice to individuals.

They shouldn't be trying to wrangle MAP into serving a purpose for which it was never intended. They [i]had to[/i] do so for [i]one year[/i] at the beginning of the pandemic, since they couldn't administer CogAT and use the prior paradigm, and since the pandemic created such unreliable conditions from which to draw evaluations in the first place. Even then, they used test scores from multiple testing periods to try to counter the known downside of the approach. 
But then they saw it as a cost savings that might help achieve a demographic goal -- one that is yet to be seen and rife with uncertainty due to the high gameability of exposure-based testing. And they eliminated the cushion of considering scores across multiple testing periods for the sake of simplifying their work. They need either to adopt an entirely new paradigm or at least to introduce more robust (AND FULLY TRANSPARENT) adjustments for individual conditions. They have time to make the latter happen [i]this year.[/i] Or they can continue on their merry way, claiming victory but ignoring the faults, indifferent to the consequences for individual students...[/quote]
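The statistical-certainty point in the quoted thread can be sketched with a toy calculation. RIT is a Rasch-style scale, and for a Rasch model the standard error of an ability estimate is roughly 1/sqrt(total test information), with a well-targeted item contributing at most 0.25 information. The numbers below (0.25 information per item, 10 RIT per logit) are illustrative assumptions, not NWEA's actual algorithm or published SEMs:

```python
import math

# Toy Rasch-model sketch (assumed parameters; not NWEA's actual algorithm).
# SE of the ability estimate ~ 1 / sqrt(total information); an item matched
# to the student's level contributes at most p*(1-p) = 0.25 information.

def rit_standard_error(n_items, info_per_item=0.25, rit_per_logit=10):
    """Approximate RIT-scale standard error after n well-targeted items."""
    se_logits = 1 / math.sqrt(n_items * info_per_item)
    return se_logits * rit_per_logit

for n in (10, 20, 40):
    print(f"{n} items: about +/- {rit_standard_error(n):.1f} RIT (1 SE)")
```

Even under these generous assumptions, ~10 items per sub-subject leaves a one-sigma band of several RIT points, which is the same order as the 230-vs-245 gap discussed above.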
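The "percentiles by grade, not by test version" distinction can also be made concrete. A norms table maps a RIT score to a percentile [i]within a grade's tested population[/i], so the same raw score lands at different percentiles depending on which grade's column it is read against. The cutoff numbers below are made up for illustration; the real tables are in the linked 2020 NWEA norms PDF:

```python
import bisect

# Hypothetical grade-level norm tables (RIT cutoff -> percentile).
# Illustrative numbers only; see the linked 2020 NWEA norms for real data.
NORMS = {
    5: [(210, 50), (220, 70), (230, 85), (240, 95)],
    6: [(215, 50), (225, 70), (235, 85), (245, 95)],
}

def percentile(grade, rit):
    """Return the highest percentile whose RIT cutoff the score meets."""
    table = NORMS[grade]
    cutoffs = [r for r, _ in table]
    i = bisect.bisect_right(cutoffs, rit) - 1
    return table[i][1] if i >= 0 else 0

# The same 230 RIT reads as a different percentile against each
# grade's norms -- the thread's point about the chart.
print(percentile(5, 230), percentile(6, 230))
```

Note the lookup keys on the grade the student is enrolled in, not on which version of the test produced the score, which is exactly the asymmetry the thread is arguing about.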