Anonymous wrote:Anonymous wrote:Anonymous wrote:I think this selection model sounds fairer than it used to be. Several years ago my son took the test when it was on paper, and he said that somewhere toward the end he got off by one number between the question booklet and the answer sheet, so he put the answer to #35 in the bubble for #36, and so on. Obviously he didn't get in, and I'll never know whether he would have, but it seems fairer to look at their overall test data through the years.
Excuses? You only hear them for those who did not get in.
Yes, it's odd but true that people who did get in do not provide reasons for why they did not get in.
Anonymous wrote:Do you think MCPS plans to expand the middle school magnets?
Anonymous wrote:Anonymous wrote:Anonymous wrote:Comparing the racial fractions of the applicant pool vs. the selected pool in the pilot schools' 2016 data: teachers identified many more Black students than Asian students as candidates, but more Asian students were selected. Can we conclude that teacher identification is unreliable? Expanding the pilot model to all schools doesn't make sense. I say testing all students, except those who opt out, is the way to go.
It wasn't teacher identification; selection was done by a centralized committee looking at student data. So if you are implying classroom teacher bias in applicant pool selection, that doesn't seem to be at play here.
The applicant pool is quite different from the selected pool. Something is wrong somewhere.
Anonymous wrote:Anonymous wrote:Anonymous wrote:My child went with 7 other kids from our neighborhood ES.
It is based on numbers at each school. We were told 6 max (3 girls, 3 boys) at our ES, which has 100 kids in 4th grade. Schools have between 40 and 150 kids per 3rd grade. That grade had plenty of smart girls too, but it didn't matter.
This is not true. In my child's grade, 5 kids were admitted (4 boys and 1 girl) out of more than 100 kids in the grade.
The pilot model works better than the previous model and is cheaper than testing all students. If it's not possible to test all students, then MCPS should use the pilot model.
Anonymous wrote:Is it just that those schools have more Black students?