Anonymous wrote:How do you explain students faking disabilities for extra time and doing better because of it? It's not merely college aptitude. Everyone would do better with more time, especially on the ACT. Things are not standardized, unfortunately, and colleges don't know who has had extra time. It tests parental aggressiveness and wealth: who can pay $7k for neurological testing from shady doctors.
Still way cheaper (and thus more accessible) than paying tutors to "help" your child with their homework throughout all four years of high school.
Anonymous wrote:Everyone wants their kid to be exceptional. Most aren't. High schools have relented to parents and made grades meaningless under this pressure; look at MCPS, where most kids who show up for class have 4.0+ GPAs. They mean almost nothing. The SAT is a highly accurate, hard-to-fake indicator of underlying g (general intelligence), geared toward a 17-year-old brain. For those who complain about test prep, there's a fairly simple response: only motivated kids actually do the preparation, and the preparation usually gets you a few percentage points out on the normal curve. A 1300 simply will not become a 1500. One more point: you do not want your 4.3 GPA / 1300 SAT kid at a place like UChicago. They will fail out and be miserable. There's a reason these tests exist; it's not just to make you feel bad.
Anonymous wrote:There are one-and-done kids with a 1600 or a perfect 36 who have nowhere to go, score-wise. Once the ceiling is hit, we have no way of measuring their actual capacity. Suggesting that an applicant with a 1600 / 36 and an applicant with a 1400 / 31 are viewed the same by anyone (other than the loved ones of the 1400 / 31 applicant) is laughably off the mark.
Anonymous wrote:How do you explain students faking disabilities for extra time and doing better because of it? It's not merely college aptitude. Everyone would do better with more time, especially on the ACT. Things are not standardized, unfortunately, and colleges don't know who has had extra time. It tests parental aggressiveness and wealth: who can pay $7k for neurological testing from shady doctors.
Knock it off. Doctors are not this unethical as a whole. No one wants their kid to have a diagnosis if they don't have a disability, and the number of people, including teachers, who have input in the evaluation process makes it highly unlikely that there are as many "cheaters" as you want to believe there are.
Anonymous wrote:I don't so much have an axe to grind about standardized tests per se, but I think both the format and the arms-race mentality are problematic. The MCQ format is a terrible way to assess a student pedagogically. It benefits only the test administrators, because it's fast and cheap to grade. And it's susceptible to being gamed: a student can improve dramatically by getting better at test-taking strategies that have nothing to do with understanding the underlying content.
As for the second point, standardized tests are best used as one data point to confirm that the applicant has the baseline knowledge and skills, not as a competition to get a perfect score. The average SAT score at Harvard in the early 1990s was UNDER 1400; now people on this board scoff at scores like that. Scores are not linear. In reality there is a minuscule difference between a 1400 and a 1600. There is a bigger difference between a 1200 and a 1300 than between a 1300 and a 1600.
Simply untrue. If you won’t consider the ceiling effect, you’ll never understand why the difference between a 1400 and a 1600 is far from minuscule.
College is not an SAT academy training professional SAT athletes. Being better at the SAT at the extremes is not meaningful. If you're looking to distinguish academic prowess at the 99+ percentile, you need a different test than the one that determines readiness for NVCC.
The SAT measures more than how well you can take the SAT. It measures cognitive ability.
Don't believe the Princeton Review ad; they're trying to get you to spend a lot of money on your mid kid.
So if a kid starts at 1480 and studies up to 1560, you believe he has increased his cognitive ability?
DP. They have increased their cognitive ability from the 97th to the 99th percentile, which is not implausible.
+1 that's basically a rounding error. lol
1480 to 1560 is a rounding error? Them’s fightin’ words on DCUM.
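The percentile arithmetic in this exchange is easy to sanity-check with a toy model. A minimal Python sketch, assuming total SAT scores are roughly normal with mean 1050 and standard deviation 210 (round numbers chosen for illustration, not official College Board norms):

```python
from math import erf, sqrt

# Assumed normal model of total SAT scores; real norms differ slightly.
MEAN, SD = 1050, 210

def percentile(score: int) -> float:
    """Percentile rank of a score under the assumed normal model."""
    z = (score - MEAN) / SD
    # Normal CDF via the error function, expressed as a percentage.
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))

for s in (1200, 1300, 1400, 1480, 1560, 1600):
    print(f"{s}: {percentile(s):.1f}th percentile")
```

Under this model, going from 1200 to 1300 moves a test taker roughly twelve percentile points, while going from 1480 to 1560 moves them only about one, which is the nonlinearity and ceiling compression the posters above are arguing over.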
Anonymous wrote:No question grades are inflated, but colleges use them much more than test scores at the moment. Even the schools going back to testing seem to view testing as a way to validate grades rather than as a standalone variable.
This is the vibe I’m getting too
If everyone has a 4.0, then the test score isn't validating another metric; it is the metric.