|
How do they compare? Is your practice score indicative of the real thing if taken under test conditions?
These are the College Board practice tests. |
|
With the caveat that there is variability among actual tests, the practice tests tend to be easier than the actual tests, both in difficulty and in scoring.
My guess is that something goofy is going on with adaptivity, specifically whether a student gets the hard section 2. Students are reporting big increases and big decreases even with a short time between test dates, which shouldn't happen to this degree if the test is well standardized. Essentially, there seems to be some luck involved. My kid scored 180 points lower on the August test than on the practice tests. The test was slightly harder, but my kid thought they did okay, so we don't know what happened. Maybe they didn't get the hard section 2? A student posting on Reddit had a score similar to my kid's for August and a 160-point increase in September, so there is certainly hope. Multiple retakes will be the name of the game for this inconsistent digital test. |
That’s wild. Your kid must have been really disappointed. |
Yes. And then later told me that their friends who took August were also disappointed with their scores. We don't know why it worked out this way. Retaking is the only thing to do. I was formerly against lots of retakes, back in the days of the paper test; I'd say just don't take the real thing until your practice test scores are where you want them to be, because back then the practice tests were more representative. I'd wonder whether College Board is trying to use surprise difficulty to differentiate among top scorers, but the variability between test dates may point more to quality control in their equating or standardization processes. It's a shorter test, which may make consistent test construction more difficult for College Board. If multiple retakes now turn out to be beneficial, that is going to benefit students who are already advantaged, i.e., those who know to retake and those who can afford to do so. My kid is advantaged, but I can already hear the complaining to come. |
My friend’s kid got a 36 in English and a 28 in science on the ACT, and the following month a 20 in English and a 34 in science. It is absurd. |
Why would paper vs digital make any difference in consistency? |
Because there were fewer versions of the paper test. With the digital test, two students sitting in the same room on the same test date for the same module can get different questions, because the test pulls from a pool of potential questions. |
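The pooling idea above can be sketched in a few lines. This is a toy illustration only; the pool size, module length, and draw mechanics are invented for the example and are not College Board's actual assembly process.

```python
import random

# Hypothetical pool of 100 items; the real pool's size and contents are not public.
pool = [f"Q{i}" for i in range(1, 101)]

def build_module(seed: int, n_items: int = 27) -> list:
    """Draw one student's module from the shared pool (toy model)."""
    rng = random.Random(seed)  # each student effectively gets a different draw
    return rng.sample(pool, n_items)

# Two students in the same room, same date, same module slot:
student_a = build_module(seed=1)
student_b = build_module(seed=2)
# Different draws from the same pool -> almost surely different question sets.
```

The point of the sketch is just that a shared pool plus independent draws makes identical forms unlikely, which is why neighbors can legitimately report different questions.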
|
My kid’s first SAT was in line with practice tests, second SAT much higher.
Over 700 on both sections every time, so for DC it wasn’t about whether they got the harder second module. |
The test construction is completely different. With the paper test, each question carried the same weight: there was a scoring scale based on the total number correct, but particular questions were not weighted differently, so missing an easy one or a hard one did not matter. With the digital test, question difficulty comes into play. In addition, the digital test is adaptive: if you miss too many in section 1, you are given the easy section 2, and your score is capped somewhere in the high 600s. I am wondering what the heck happened with my kid's test and suspect the low score may have something to do with this aspect of the scoring system. |
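The routing-and-cap behavior described above can be made concrete with a tiny model. The threshold, module length, and the exact cap below are made-up placeholders; College Board does not publish its routing rules or scoring tables.

```python
# Toy two-stage ("multistage adaptive") model of the digital SAT's
# module structure. All numbers are hypothetical illustrations.

def route_module2(module1_correct: int, module1_total: int = 27,
                  threshold: float = 0.6) -> str:
    """Route to the hard or easy second module based on module 1 performance."""
    return "hard" if module1_correct / module1_total >= threshold else "easy"

def max_section_score(module2_form: str) -> int:
    """Illustrative cap: the easy second module limits the attainable score."""
    return 800 if module2_form == "hard" else 690  # "high 600s" cap, hypothetical

# A student who misses too many module 1 questions is routed to the easy
# module 2 and cannot reach the top of the scale, no matter how well they
# do on module 2 itself.
form = route_module2(module1_correct=13)
ceiling = max_section_score(form)
```

Under this model, a rough day in module 1 silently lowers the ceiling for the whole section, which is consistent with the kind of unexplained drop described above.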
Honestly, the more I think about it, the more it seems to me that the supposed adaptive aspect of the test introduces an element of unfairness and calls into question the quality of standardization. |
They are not as nervous in practice. |
My DS did worse on the actual test than in practice because he has anxiety. |
In that case, why are folks claiming that the August SAT was so difficult and the September one wasn’t? Surely, if it is based on a bank of questions, experiences will vary?
Experiences do vary, so one can never be completely certain of having gotten the same test as someone else, yet there seems to be a large group who got certain questions. It is more difficult to make these kinds of guesses about difficulty than it was with the paper test, when a limited number (say, 10-20) of test forms were in play on a given test date, with the vast majority of students getting a single form. The scoring and difficulty are far more obscure than they already were, quite the black box. |
|
I’d love to know whether August really was scored harshly.
My kid took a diagnostic test with a national company over the summer and scored a 1550. He then did a month of 1v1 tutoring and took August. Scored a 1480. He’s a senior and is just done with the test and is letting the chips fall where they may. |