You are lying again. Kids don’t usually get a 100-200+ bump. Anecdotally, maybe. Statistically, the benefit from prepping is less than the standard deviation of the test. Since you probably don’t have a full grasp of these concepts, I’ll boil it down for you: prepping doesn’t help.
If your kids are well beyond ES, it may not be as good as you remember. Now 20% of all kids get in, and not all are subject to the same cutoffs due to more focus on local school norms, equity, etc. But the hardest thing for most kids in high-SES areas may well be getting in, hence the need to prep as a little extra insurance. 10-20 points may matter. Or not. Because the process now is shrouded in "holistic" mystery.
DP. The studies cited in the article are based on limited information, and their findings should not be assumed to hold for all students. Study limitations:

1) Students self-report whether they prepared for the SAT. Self-reporting is known to introduce many biases. Often students who do well don't want to acknowledge they prepared, and students who didn't do well may want to show that they tried.

2) There is no way to know the intensity of use. Some students may have purchased a book but opened it once. Some may have enrolled in a course and used it once. Other students may have prepared for hours with their materials. Thus a yes/no for preparation, with no indication of usage, is a poor proxy.

3) The studies look at the improvement from PSAT to SAT. But many students begin to prepare before the PSAT, so the test scores would miss the initial improvement coming into the PSAT.

Individual students are best placed to know if these courses work. They know their test scores before preparing and those afterward. Large-scale studies miss all the sub-details needed to make that assessment. The popularity of test preparation indicates that it is more useful than the article suggests. Families listen to friends' experiences; if they're shelling out money, it's because they have heard friends report that it works.
The kids from higher-SES schools and centers need higher scores because of prep. Parents were already prepping the kids, which is why their scores were so much higher than at other schools. The big difference between schools where 20% of the kids were scoring in the 140s and schools where 20% of the kids were scoring in the 130s is the amount of prep. The schools with a lot of 130s tend to have fewer parents prepping. Once you add in workbooks and/or classes, you get your 140s, because prep tends to lead to a 10-point bump.

Your schools with a lot of 130s tend to be middle-class and upper-middle-class families where parents have been reading to kids and doing things with them that reinforce reading, math, and curiosity. These are the kids who tend to show up to K knowing their letters, numbers, and sounds and being able to write or read a little bit. They do well on the NNAT and CogAT because they have been exposed to more. The kids at the lower-SES schools are at a disadvantage because their parents are less likely to read to them or expose them to basic math. They tend to show up to K not knowing their letters, numbers, shapes, colors, or sounds. That shows up in their test scores as well as their classroom performance.

Being upset because the County gets that the kids who are advanced at each school are going to be different based on their backgrounds is a bit silly. At the schools with a lot of 140s, the kids who are not in pool will still have peers in the gen ed classroom. The kids with 130s at a Title 1 school will not have peers in their classroom. Those kids need AAP at their school because they don’t have a peer base, even if that means they would not qualify for AAP at the higher-SES school. Recognizing that schools in the County end up with different types of programs based on the kids at those schools is common sense. Your 135 kid at a school with a ton of 140s is going to be fine in gen ed.
I don’t think you read the references carefully, or you don’t know what controlling for variables means. This reference is quite compelling: https://www.researchgate.net/profile/Benjamin-Domingue/publication/228337033_Using_Linear_Regression_and_Propensity_Score_Matching_to_Estimate_the_Effect_of_Coaching_on_the_SAT/links/5486ec840cf268d28f06a133/Using-Linear-Regression-and-Propensity-Score-Matching-to-Estimate-the-Effect-of-Coaching-on-the-SAT.pdf?origin=publication_detail For your own information, asking friends and acquaintances is also a form of self-reporting, likely far more prone to bias than anonymous self-reporting. The authors control for intensity of use or motivation using proxies such as intent to apply, requesting information from colleges, GPA, SES, etc. Contrary to what you claim, they didn’t look at improvement from PSAT to SAT; they evaluated the effect of coaching on students who did or didn’t take the PSAT, i.e., they controlled for another variable. But sure, let’s not believe the study done by a Stanford professor, an expert in educational testing, and let’s go by what auntie told you about the prep class her daughter took. Feel free to waste as much time and energy obsessing over who preps and whether it is cheating.
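For anyone curious what "controlling for variables" actually looks like, here is a minimal sketch of the two approaches named in that paper's title: a linear regression with covariates and propensity score matching. The toy data, the variable names, and the +15-point "true" effect are all made up for illustration; this is not the authors' code or their data.

```python
# Toy illustration of (1) regression with covariates and (2) propensity score
# matching for estimating a coaching effect. Everything here is simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000

# Simulated covariates: SES, GPA, intent to apply to a four-year college.
ses = rng.normal(0, 1, n)
gpa = rng.normal(3.0, 0.5, n)
intent = rng.binomial(1, 0.6, n)

# Higher-SES, higher-GPA, college-bound kids are more likely to get coaching.
# This is exactly the selection bias the two methods try to remove.
p_coach = 1 / (1 + np.exp(-(0.8 * ses + 0.5 * (gpa - 3.0) + 0.7 * intent - 1.0)))
coached = rng.binomial(1, p_coach)

# "True" coaching effect in this toy world: +15 points on math.
sat_math = (500 + 60 * ses + 40 * (gpa - 3.0) + 20 * intent
            + 15 * coached + rng.normal(0, 50, n))

df = pd.DataFrame(dict(ses=ses, gpa=gpa, intent=intent,
                       coached=coached, sat_math=sat_math))

# Naive comparison overstates the effect because coached kids differ to begin with.
naive = (df.loc[df.coached == 1, "sat_math"].mean()
         - df.loc[df.coached == 0, "sat_math"].mean())

# (1) Regression: the coaching effect is the coefficient on `coached`
# once the covariates are held fixed.
X = df[["ses", "gpa", "intent", "coached"]]
reg = LinearRegression().fit(X, df["sat_math"])
effect_reg = reg.coef_[list(X.columns).index("coached")]

# (2) Propensity score matching: model P(coached | covariates), then pair each
# coached student with the uncoached student whose propensity score is closest.
ps_model = LogisticRegression().fit(df[["ses", "gpa", "intent"]], df["coached"])
df["pscore"] = ps_model.predict_proba(df[["ses", "gpa", "intent"]])[:, 1]

treated = df[df.coached == 1]
control = df[df.coached == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]
effect_psm = (treated["sat_math"].values - matched["sat_math"].values).mean()

print(f"naive difference:            {naive:.1f}")
print(f"regression estimate:         {effect_reg:.1f}")
print(f"propensity-matched estimate: {effect_psm:.1f}")
```

The point of both approaches is the same: coached and uncoached kids differ before any coaching happens, so a naive score comparison mostly measures who signs up for coaching, not what coaching does.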
These studies are based on student self-reporting of test preparation, which introduces well-known biases. Generally, authors try to avoid self-reported data where possible. Here it is not possible, so they use the best data set they could access, the ELS. Starting from that point, the authors try to control for factors but can't capture nuances. They attempt to control for intensity of usage by using college intent, GPA, and SES. But students can look similar on paper by these criteria and still prepare very differently for standardized tests: some buy a book and never use it because they're too busy; others prepare extensively. One student's intensity of preparation can also vary over time depending on their other activities, whether pre-PSAT, pre-SAT, from the first SAT sitting to the second, or otherwise. That is why it is unwise to take a generalized result from these large-scale studies and argue that it applies to all students. If a student is willing to put in long hours preparing (whether through a paid service or a free one like Khan Academy), that is a critical factor.
Without even the most basic understanding of statistics and how testing works, you should refrain from participating in this conversation. If you don’t approve of self-reporting, how exactly do you propose collecting that data, by stealing personal information? Or should we just not parse the data and do the study at all, since your auntie already told you prepping works well and you can easily improve your score by 300 points? Basically you’re saying to disregard the statistical trends of thousands of students and look at the few outliers that spend long hours studying, never mind that you’re actually looking at a dose response, i.e., the more you study, the higher the score, because you’re actually learning the material that is tested. You really can’t be any dumber than this!
Very soon studying will be considered cheating! |
It’s not equitable to the ones that didn’t study. |
The name-calling and thread-policing are funny, but that line is the funniest. lol |
Thank you, thank you very much! Insulting dumb idiots on DCUM is my one guilty pleasure. |
No, dear, I was laughing at you. Not with you. |
Study results are only as good as the data used. These studies rely on self-reported, yes/no answers as to whether students prepared for the SAT with books, courses, etc. There is no objective confirmation that these responses are accurate, and no data, objective or subjective, on time spent preparing. View the study results accordingly.
Your comebacks are not as witty as you imagine them. |
It’s truly amazing how complete ignoramuses like yourself decide that they know better than experts in the field, who account for more than 30 different variables tracking math credits, hours of homework per week, AP coursework, parent involvement in SAT prep and college applications, etc. The paper even surveys five other studies, on different populations, with different analysis methods, and the score improvement due to prepping is less than 20 points on math, even less on verbal. That’s smaller than 5% of the test range and below the margin of error of the test. But, yeah, the DCUM lady with her vast expertise decides this self-reported data is not reliable, and instead relies on self-reported data from friends and family that supposedly shows score improvements of 100-200, even 300 points. The contradiction is completely lost on her, but no surprise there! To wrap it up: kids scoring higher than your child didn’t cheat; they simply work harder and have more aptitude and talent. Just accept this and don’t try to smear other kids’ accomplishments with silly accusations of cheating.
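For what it's worth, the "smaller than 5% of the range" arithmetic checks out. A quick sketch, assuming the 200-800 section scale and a roughly 30-point standard error of measurement per section (the ~30-point figure is my own assumption based on College Board's published reliability numbers, not something taken from the paper):

```python
# Back-of-the-envelope check of the "less than 5% of the test range" claim.
# The ~30-point SEM is an assumption, not a number from the paper.
coaching_gain = 20          # upper bound on the math gain cited above
section_range = 800 - 200   # SAT section scale runs 200-800
sem = 30                    # assumed standard error of measurement per section

print(coaching_gain / section_range)  # 0.033..., i.e. about 3% of the range
print(coaching_gain < sem)            # True: the gain sits within measurement noise
```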