5th graders taking 6th grade MAP-M

Anonymous
Anonymous wrote:Do you think there is a chance they are only considering students in compacted math for the math/science magnet?


No, stop it.
Kids in a 6th grade math class are taking the MAP 6+.
It doesn't take a Galaxy Brain to understand.
Anonymous
Anonymous wrote:
Anonymous wrote:I’m guessing there will be a separate lottery for the CM students and the regular students, each drawing from the top 15 percent of its own group? Otherwise it’s apples and oranges and wouldn’t make sense. This is pure speculation—I have no idea how this will affect the lottery.


Not really; NWEA RIT scores have percentiles based on grade level. I haven't seen norms for 5th graders taking the 6th grade test, but they likely exist and can be applied.


NWEA doesn't differentiate percentiles based on which version of the test is given; it relies on the adaptive nature of the test to produce a single continuum of RIT scores. That may work when considering averages across large populations or, to some degree, when looking at a longitudinal series of tests for an individual. However, there appears to be high individual variation at single test points when higher-level questions are added to the mix (as with the grade 6+ version of the test). This shows how poorly a single point-in-time score can reflect underlying ability/achievement (beyond any concern about using single data points as litmus tests for such decisions): the adaptive algorithm might present something it considers, say, 7th-grade level, which the student might not know, and then will "shift down" even if the student knows other sub-subject content at that or a higher level, which then doesn't get tested in the first place.

Yes, I know that adaptive tests may throw in more than one question at a particular level to counter this tendency, but, given the relatively few questions asked and the short time allocated for these "untimed" tests across the 4 sub-subject areas covered, MAP doesn't achieve adequate statistical certainty for an individual. This is among the reasons that, outside a testing period used for selection criteria, families shouldn't worry too much about a one-off MAP score that is lower than expected. At the same time, families that can coach their children on test-taking skills (which tend to optimize expression of mastered content) gain a distinct advantage in selection.
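To put rough numbers on that single-sitting noise, here's a quick simulation. The SEM of 3 RIT points and the cutoff score are illustrative assumptions, not MCPS's or NWEA's actual figures:

```python
import random

random.seed(0)

SEM = 3.0        # assumed standard error of measurement, in RIT points
CUTOFF = 231     # illustrative percentile-cutoff score
TRIALS = 100_000

# A student whose "true" score sits a couple of points above the cutoff
true_score = 233

# Count sittings where measurement noise alone drops them below the bar
misses = sum(
    1 for _ in range(TRIALS)
    if random.gauss(true_score, SEM) < CUTOFF
)

print(f"Chance a single sitting lands below the cutoff: {misses / TRIALS:.0%}")
```

Under those assumptions, a student genuinely above the bar misses it in roughly a quarter of sittings purely from measurement error, which is the point about single data points as litmus tests.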

It's not as if MAP is a terrible tool. It can be quite good as a guide for teaching when not relied upon in the absence of good classroom observation, and broad results (county-wide or school-wide averages, where that variation can be viewed through a proper stochastic lens) can help evaluate, say, curricular effectiveness. It's just a poor choice for placement decisions (especially absent other system-independent measures), as MCPS uses it for their magnets.

By the way, those kids taking Math 5/6? MCPS also does not take their more rigorous course of study into account when reviewing the grade litmus used for inclusion in the criteria-based Math/Science/Computer Science magnet middle school lottery pool (and local-school AIM placement). This is only another of the several things that contribute to their approach failing to distinguish apples from oranges (and nectarines, and pears, and...).

Presuming no change from last year (they won't have OSA review until this coming spring), a student needs to get an A this quarter whether taking Math 5 or Math 5/6. That's along with an A in Science, an ON/ABV report card reading level, and hitting the required FARMS-rate-based, locally-normed MAP %ile, the currently used tables for which can be found here:

resources.njgifted.org/wp-content/uploads/2021/05/2020-NWEA-Math-Norms.pdf

There are adjustments MCPS makes on an individual basis for IEP, 504, EML and FARMS (collectively, "students receiving services"), but they do not reveal what these adjustments entail.
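As a rough sketch of how those gates combine (the function, field values, and sample threshold are made up for illustration; the real thresholds come from the FARMS-rate-based tables, and the undisclosed services adjustments aren't modeled):

```python
def in_lottery_pool(math_grade: str, science_grade: str,
                    reading_level: str, map_percentile: float,
                    required_percentile: float) -> bool:
    """Rough model of the criteria described above: every gate must pass.

    required_percentile would come from the FARMS-rate-based, locally
    normed tables; it is a parameter here because it varies by school.
    """
    return (
        math_grade == "A"
        and science_grade == "A"
        and reading_level in ("ON", "ABV")
        and map_percentile >= required_percentile
    )

# One B, or one below-threshold score -- either alone keeps a student out.
print(in_lottery_pool("A", "A", "ABV", 92, 85))  # True
print(in_lottery_pool("A", "B", "ABV", 92, 85))  # False
print(in_lottery_pool("A", "A", "ABV", 80, 85))  # False
```

Note that it's pure pass/fail: a 99th-percentile score and an at-threshold score land in the same pool.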
Anonymous
Anonymous wrote:So there is one MAP test for fifth graders and another MAP test for 6th graders and up?

What does exposure-based test mean? Kids need to answer questions about lots of things that they haven’t seen yet?


It tests basic questions from the non-honors curriculum. It doesn't use hard or tricky problems that rely less on prior exposure. So, for a common example, a highly able kid might score lower because they have never seen the division symbol, even though they could solve a division word problem using their own strategy.
Anonymous
Anonymous wrote:
Anonymous wrote:Do you think there is a chance they are only considering students in compacted math for the math/science magnet?


No, stop it.
Kids in a 6th grade math class are taking the MAP 6+.
It doesn't take a Galaxy Brain to understand.


Except that those in Math 5/6 haven't yet encountered 6th grade math. They'll be covering 5th grade content through December.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:My child is in fifth grade and taking compacted math. He took the grade 6 map test this week. Does anyone know how this will affect placement into middle school magnets? How will they compare those who took the 6th grade test against those who took the 5th grade test?


This is all part of an MCPS conspiracy to help justify sending lower-performing students to the magnets over the high-scoring students. They know there is a correlation between affluence and standardized test scores, so this is one of the ways they can improve selection diversity.


+1. They even say in all of their meetings that their goal for the magnets is to achieve proportionality from a race/ethnicity perspective, meaning that the racial makeup of the students in the magnets mirrors the racial makeup of the students county-wide. But this ignores all of the data they present that clearly shows certain racial groups testing and performing well below grade level in aggregate. Until they intervene and remediate that reality at earlier grades, you do not have students prepared for these programs in ratios equal to their proportions in the system. And it’s incredible, too, because the process of selecting students is supposed to be race-blind! Their agenda isn’t to serve the biggest outliers or the kids without a cohort. Their agenda is to manipulate the admissions to demonstrate momentum toward racial proportionality. I would encourage them to actually work harder to improve their teaching methods in earlier grades, because clearly a student’s race or ethnicity is not determinative of giftedness or ability.


To this and the point above about statistical uncertainty for individual scores: the data that OSA will use to review the selection criteria and present to the BOE almost certainly will be exclusively aggregate, reducing the uncertainty across the whole test-taking population but not evidencing the underlying uncertainty (and likely injustices) of applying the selection methodology to individuals. If so, the review would tend to support continuation of the current paradigm more than it otherwise should.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Do you think there is a chance they are only considering students in compacted math for the math/science magnet?


No, stop it.
Kids in a 6th grade math class are taking the MAP 6+.
It doesn't take a Galaxy Brain to understand.



Except that those in Math 5/6 haven't yet encountered 6th grade math. They'll be covering 5th grade content through December.


It's a computer adaptive test.
There is massive overlap between the 5th and 6th grade math curricula.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:My child is in fifth grade and taking compacted math. He took the grade 6 map test this week. Does anyone know how this will affect placement into middle school magnets? How will they compare those who took the 6th grade test against those who took the 5th grade test?




This is all part of an MCPS conspiracy to help justify sending lower-performing students to the magnets over the high-scoring students. They know there is a correlation between affluence and standardized test scores, so this is one of the ways they can improve selection diversity.


+1. They even say in all of their meetings that their goal for the magnets is to achieve proportionality from a race/ethnicity perspective, meaning that the racial makeup of the students in the magnets mirrors the racial makeup of the students county-wide. But this ignores all of the data they present that clearly shows certain racial groups testing and performing well below grade level in aggregate. Until they intervene and remediate that reality at earlier grades, you do not have students prepared for these programs in ratios equal to their proportions in the system. And it’s incredible, too, because the process of selecting students is supposed to be race-blind! Their agenda isn’t to serve the biggest outliers or the kids without a cohort. Their agenda is to manipulate the admissions to demonstrate momentum toward racial proportionality. I would encourage them to actually work harder to improve their teaching methods in earlier grades, because clearly a student’s race or ethnicity is not determinative of giftedness or ability.


To this and the point above about statistical uncertainty for individual scores: the data that OSA will use to review the selection criteria and present to the BOE almost certainly will be exclusively aggregate, reducing the uncertainty across the whole test-taking population but not evidencing the underlying uncertainty (and likely injustices) of applying the selection methodology to individuals. If so, the review would tend to support continuation of the current paradigm more than it otherwise should.


The main problem with the current MAP-based paradigm is that the test content doesn't come anywhere near testing for capability or motivation for an enriched curriculum.

So the very low lottery bar adds more noise to an already noisy process. "Justice" was never on the table for a program that only admits a tiny group of kids and puts them on a long bus ride if they live far away.

The magnet's enhanced curriculum should be available to kids (self-serve) in every school, and they should run a one-day-a-week in-person or virtual pullout for enrichment topics for qualified kids if a school can't support a full dedicated class for its cohort.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I’m guessing there will be a separate lottery for the CM students and the regular students, each drawing from the top 15 percent of its own group? Otherwise it’s apples and oranges and wouldn’t make sense. This is pure speculation—I have no idea how this will affect the lottery.


Not really; NWEA RIT scores have percentiles based on grade level. I haven't seen norms for 5th graders taking the 6th grade test, but they likely exist and can be applied.


NWEA doesn't differentiate percentiles based on which version of the test is given; it relies on the adaptive nature of the test to produce a single continuum of RIT scores. That may work when considering averages across large populations or, to some degree, when looking at a longitudinal series of tests for an individual. However, there appears to be high individual variation at single test points when higher-level questions are added to the mix (as with the grade 6+ version of the test). This shows how poorly a single point-in-time score can reflect underlying ability/achievement (beyond any concern about using single data points as litmus tests for such decisions): the adaptive algorithm might present something it considers, say, 7th-grade level, which the student might not know, and then will "shift down" even if the student knows other sub-subject content at that or a higher level, which then doesn't get tested in the first place.

Yes, I know that adaptive tests may throw in more than one question at a particular level to counter this tendency, but, given the relatively few questions asked and the short time allocated for these "untimed" tests across the 4 sub-subject areas covered, MAP doesn't achieve adequate statistical certainty for an individual. This is among the reasons that, outside a testing period used for selection criteria, families shouldn't worry too much about a one-off MAP score that is lower than expected. At the same time, families that can coach their children on test-taking skills (which tend to optimize expression of mastered content) gain a distinct advantage in selection.

It's not as if MAP is a terrible tool. It can be quite good as a guide for teaching when not relied upon in the absence of good classroom observation, and broad results (county-wide or school-wide averages, where that variation can be viewed through a proper stochastic lens) can help evaluate, say, curricular effectiveness. It's just a poor choice for placement decisions (especially absent other system-independent measures), as MCPS uses it for their magnets.

By the way, those kids taking Math 5/6? MCPS also does not take their more rigorous course of study into account when reviewing the grade litmus used for inclusion in the criteria-based Math/Science/Computer Science magnet middle school lottery pool (and local-school AIM placement). This is only another of the several things that contribute to their approach failing to distinguish apples from oranges (and nectarines, and pears, and...).

Presuming no change from last year (they won't have OSA review until this coming spring), a student needs to get an A this quarter whether taking Math 5 or Math 5/6. That's along with an A in Science, an ON/ABV report card reading level, and hitting the required FARMS-rate-based, locally-normed MAP %ile, the currently used tables for which can be found here:

resources.njgifted.org/wp-content/uploads/2021/05/2020-NWEA-Math-Norms.pdf

There are adjustments MCPS makes on an individual basis for IEP, 504, EML and FARMS (collectively, "students receiving services"), but they do not reveal what these adjustments entail.


Exactly. The chart has grade level and gives a percentile: 5th graders who got score X are at percentile Y. This would be different for 5th graders who took the 6th-grade test than for 5th graders who took the more straightforward test. The percentile would fairly reflect their knowledge, so it seems perfectly fair.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Do you think there is a chance they are only considering students in compacted math for the math/science magnet?


No, stop it.
Kids in a 6th grade math class are taking the MAP 6+.
It doesn't take a Galaxy Brain to understand.



Except that those in Math 5/6 haven't yet encountered 6th grade math. They'll be covering 5th grade content through December.


It's a computer adaptive test.
There is massive overlap between the 5th and 6th grade math curricula.


Except that the response, above, was to the statement that,

"Kids in a 6th grade math class are taking the MAP 6+. It doesn't take a Galaxy Brain to understand."

They aren't in 6th grade math yet. Sure, there's overlap, but they shouldn't be taking MAP 6+ until they are, especially when there is evidence of a temporary decline in score with the shift in the grade level of the test and when that test, unlike the others for this year, is going to be used as an admissions gate to magnet programming.
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Do you think there is a chance they are only considering students in compacted math for the math/science magnet?


No, stop it.
Kids in a 6th grade math class are taking the MAP 6+.
It doesn't take a Galaxy Brain to understand.



Except that those in Math 5/6 haven't yet encountered 6th grade math. They'll be covering 5th grade content through December.


It's a computer adaptive test.
There is massive overlap between the 5th and 6th grade math curricula.


Except that the response, above, was to the statement that,

"Kids in a 6th grade math class are taking the MAP 6+. It doesn't take a Galaxy Brain to understand."

They aren't in 6th grade math yet. Sure, there's overlap, but they shouldn't be taking MAP 6+ until they are, especially when there is evidence of a temporary decline in score with the shift in the grade level of the test and when that test, unlike the others for this year, is going to be used as an admissions gate to magnet programming.


It doesn't matter. Magnet admissions is a massive lottery with few winners. Kids whose scores dropped were bumping against the ceiling, where the score was basically random (within the range) anyway. If you are that obsessed with getting a higher score, grab a middle school math book from the library.
Anonymous
Just curious. Does that mean they took Map M - 5 last year in 4th grade?
Anonymous
Anonymous wrote:Just curious. Does that mean they took Map M - 5 last year in 4th grade?


Well, it is MAP 3-5, then 6+

The test is adaptive, so by having kids currently in 5th grade take the 6+ test, they are effectively raising the ceiling on the content that those kids could be exposed to.
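A toy illustration of that ceiling effect (the RIT caps here are invented for illustration, not NWEA's actual item-pool limits):

```python
def observed_score(true_rit: float, test_ceiling: float) -> float:
    """A test can't report ability above the hardest content it contains."""
    return min(true_rit, test_ceiling)

# Hypothetical item-pool ceilings for the two test versions
LOWER_VERSION_CEIL = 240   # assumed cap for the lower-grades version
SIXPLUS_CEIL = 280         # assumed cap for the 6+ version

for true_rit in (225, 245, 260):
    print(true_rit,
          observed_score(true_rit, LOWER_VERSION_CEIL),
          observed_score(true_rit, SIXPLUS_CEIL))
```

Under those assumed caps, only students whose ability sits above the lower version's ceiling see a different result on the 6+ test; everyone else scores the same either way.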
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I’m guessing there will be a separate lottery for the CM students and the regular students, each drawing from the top 15 percent of its own group? Otherwise it’s apples and oranges and wouldn’t make sense. This is pure speculation—I have no idea how this will affect the lottery.


Not really; NWEA RIT scores have percentiles based on grade level. I haven't seen norms for 5th graders taking the 6th grade test, but they likely exist and can be applied.


NWEA doesn't differentiate percentiles based on which version of the test is given; it relies on the adaptive nature of the test to produce a single continuum of RIT scores. That may work when considering averages across large populations or, to some degree, when looking at a longitudinal series of tests for an individual. However, there appears to be high individual variation at single test points when higher-level questions are added to the mix (as with the grade 6+ version of the test). This shows how poorly a single point-in-time score can reflect underlying ability/achievement (beyond any concern about using single data points as litmus tests for such decisions): the adaptive algorithm might present something it considers, say, 7th-grade level, which the student might not know, and then will "shift down" even if the student knows other sub-subject content at that or a higher level, which then doesn't get tested in the first place.

Yes, I know that adaptive tests may throw in more than one question at a particular level to counter this tendency, but, given the relatively few questions asked and the short time allocated for these "untimed" tests across the 4 sub-subject areas covered, MAP doesn't achieve adequate statistical certainty for an individual. This is among the reasons that, outside a testing period used for selection criteria, families shouldn't worry too much about a one-off MAP score that is lower than expected. At the same time, families that can coach their children on test-taking skills (which tend to optimize expression of mastered content) gain a distinct advantage in selection.

It's not as if MAP is a terrible tool. It can be quite good as a guide for teaching when not relied upon in the absence of good classroom observation, and broad results (county-wide or school-wide averages, where that variation can be viewed through a proper stochastic lens) can help evaluate, say, curricular effectiveness. It's just a poor choice for placement decisions (especially absent other system-independent measures), as MCPS uses it for their magnets.

By the way, those kids taking Math 5/6? MCPS also does not take their more rigorous course of study into account when reviewing the grade litmus used for inclusion in the criteria-based Math/Science/Computer Science magnet middle school lottery pool (and local-school AIM placement). This is only another of the several things that contribute to their approach failing to distinguish apples from oranges (and nectarines, and pears, and...).

Presuming no change from last year (they won't have OSA review until this coming spring), a student needs to get an A this quarter whether taking Math 5 or Math 5/6. That's along with an A in Science, an ON/ABV report card reading level, and hitting the required FARMS-rate-based, locally-normed MAP %ile, the currently used tables for which can be found here:

resources.njgifted.org/wp-content/uploads/2021/05/2020-NWEA-Math-Norms.pdf

There are adjustments MCPS makes on an individual basis for IEP, 504, EML and FARMS (collectively, "students receiving services"), but they do not reveal what these adjustments entail.


Exactly. The chart has grade level and gives a percentile: 5th graders who got score X are at percentile Y. This would be different for 5th graders who took the 6th-grade test than for 5th graders who took the more straightforward test. The percentile would fairly reflect their knowledge, so it seems perfectly fair.


Please reconcile these dissonant statements with additional detail so we can more clearly understand your point.

The chart shows RIT score percentiles vs. students who took the test in a particular grade, not students who took a particular grade's test. How is it fair if a student in Math 5/6 took the MAP-M 6+ and scored 230 due to the noted adaptive-testing variability at that switchover, but would have scored a 245 on the MAP-M given to students in their same grade but in Math 5? (We've seen this just by looking at spring scores from the prior year, which track closely with the following year's fall scores.)

It's also not fair the other way, of course. Students in Math 5 had only been exposed to 4th-grade math (and a week or two of 5th) by the time they took the test that serves as a gate to the magnet. Since scores are highly driven by exposure, that puts them at a disadvantage. MCPS using that kind of back-and-forth advantage to suggest some kind of overall fairness ("It'll come out in the wash!") simply is ludicrous and a systemic disservice to individuals.

They shouldn't be trying to wrangle MAP into serving a purpose for which it was never intended. They had to do so for one year at the beginning of the pandemic, since they couldn't administer CogAT and use the prior paradigm, and since the pandemic created such unreliable conditions from which to draw evaluations in the first place. Even then, they used test scores from multiple testing periods to try to counter the known downside of the approach. But then they saw it as a cost savings that might help achieve a demographic goal, which is yet to be seen and is rife with uncertainty due to the high gameability of exposure-based testing. And they eliminated the cushion of considering scores across multiple testing periods for the sake of simplifying their work.

They need either to adopt an entirely new paradigm or at least to introduce more robust (AND FULLY TRANSPARENT) adjustments to individual conditions. They have time to make the latter happen this year. Or they can continue on their merry way, claiming victory but ignoring the faults, indifferent to the consequences for individual students...
Anonymous

The chart shows RIT score percentiles vs. students who took the test in a particular grade, not students who took a particular grade's test. How is it fair if a student in Math 5/6 took the MAP-M 6+ and scored 230 due to the noted adaptive-testing variability at that switchover, but would have scored a 245 on the MAP-M given to students in their same grade but in Math 5? (We've seen this just by looking at spring scores from the prior year, which track closely with the following year's fall scores.)


That child is still above the 85th percentile, however. There's no actual harm done, because everything above that percentile is a lottery.
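Put differently, under a pure above-the-bar lottery, any qualifying score is interchangeable. A toy sketch (the names, percentiles, cutoff, and seat count are all invented):

```python
import random

random.seed(1)

# Everyone at or above the cutoff percentile goes into one equal-odds draw
applicants = {"kid_230": 87, "kid_245": 99, "kid_210": 60}  # name -> %ile
CUTOFF = 85
SEATS = 1

pool = [name for name, pct in applicants.items() if pct >= CUTOFF]
winners = random.sample(pool, SEATS)

print(pool)     # both the 230 and the 245 scorer are in the pool
print(winners)  # each pool member had the same chance at the seat
```

So under this model the 230-vs-245 gap truly doesn't change anyone's odds, provided both scores clear the cutoff; the earlier objection is about students whose noisy sitting drops them below it.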

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:Do you think there is a chance they are only considering students in compacted math for the math/science magnet?


No, stop it.
Kids in a 6th grade math class are taking the MAP 6+.
It doesn't take a Galaxy Brain to understand.



Except that those in Math 5/6 haven't yet encountered 6th grade math. They'll be covering 5th grade content through December.


It's a computer adaptive test.
There is massive overlap between the 5th and 6th grade math curricula.


Except that the response, above, was to the statement that,

"Kids in a 6th grade math class are taking the MAP 6+. It doesn't take a Galaxy Brain to understand."

They aren't in 6th grade math yet. Sure, there's overlap, but they shouldn't be taking MAP 6+ until they are, especially when there is evidence of a temporary decline in score with the shift in the grade level of the test and when that test, unlike the others for this year, is going to be used as an admissions gate to magnet programming.


It doesn't matter. Magnet admissions is a massive lottery with few winners. Kids whose scores dropped were bumping against the ceiling, where the score was basically random (within the range) anyway. If you are that obsessed with getting a higher score, grab a middle school math book from the library.


Of course it matters. "It doesn't matter" doesn't follow logically from the fact that it's a lottery with few winners (an entirely separate problem). And there's the broader consequence of local placement in higher MS math courses, for which MCPS uses lottery inclusion (for those not selected for the magnet) as the basis for eligibility.

MCPS shouldn't be looking to identify need in their system based on who has accessed outside enrichment, whether library or paid prep/tutoring.