It's not about participation trophies. This is how they differentiate relatively small differences in students' capabilities *with a shorter test.* The old test was a good deal longer and had more easy questions and more hard questions. The many easy questions did nothing to differentiate a 700 from an 800, because both the 700- and 800-scoring kids got them 100% correct. Same for the low-scoring kids: they got 75% of the hard questions incorrect (since they guessed and got about 25% of their guesses correct). By having the two paths, the test can give the 450-550 kids more easy questions to differentiate the 450 from the 480 from the 520 from the 550, and give the higher-scoring kids more hard questions to differentiate the 700 from the 730 from the 760 from the 800.
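The logic above can be sketched with a toy item-response model. This is a minimal illustration, not the College Board's actual scoring model: the difficulty values and the logistic-with-guessing-floor formula are assumptions chosen to show why easy items can't separate two strong students but hard items can, and why a weak student hovers near the 25% guessing floor on hard items.

```python
import math

def p_correct(ability, difficulty, guess=0.25):
    """Probability of a correct answer under an illustrative 3PL-style model:
    a logistic curve in (ability - difficulty) with a 25% guessing floor
    for four-choice items. Parameters are hypothetical."""
    return guess + (1 - guess) / (1 + math.exp(-(ability - difficulty)))

def expected_correct(ability, difficulties):
    """Expected number of items answered correctly on a module."""
    return sum(p_correct(ability, d) for d in difficulties)

easy_module = [-2.5] * 20   # hypothetical second-stage module of easy items
hard_module = [2.5] * 20    # hypothetical second-stage module of hard items

# Two strong students of slightly different ability: the easy module
# can't tell them apart (both near a perfect 20), the hard module can.
for a in (2.0, 3.0):
    print(f"ability {a}: easy {expected_correct(a, easy_module):.1f}, "
          f"hard {expected_correct(a, hard_module):.1f}")

# A weak student on the hard module sits near the 25% guessing floor,
# so those items tell you little about them either.
print(f"ability -1.0, hard module: {expected_correct(-1.0, hard_module):.1f}")
```

Running it shows the easy-module expectations nearly identical for the two strong students while the hard-module expectations differ by several items, which is exactly why routing each student to a module matched to their level recovers discrimination with fewer total questions.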
My kid says the more difficult Bluebook practice tests were pretty close to the real tests in terms of difficulty. Scored mostly 1600, with all tests 1580+, on the practice tests; then 1560 on the real deal in the spring and 1600 in August (we picked the same testing location to reduce stress). No prep other than the practice tests provided by College Board.
Which ones are the more difficult Bluebook tests?
It is a lot like sports, where performance in practice and in games can differ. Some people are gamers and handle pressure very well. Others don't, and they will almost always score lower than their practice tests.
Was this during digital testing? |
FWIW, I have two kids who took the SAT (one paper, one digital), and both scored exactly 10 points lower than their highest practice test. Both scored in the mid-1500s and were one-and-done. From my perspective, the practice tests are pretty accurate. But my kids had a track record of being good standardized test-takers across the board (MAP, ERB, etc.).
That said, both had at least one practice test where they scored a 1490, even after a lot of practice. So there definitely is variation from one version of the test to another.