Some classes are hard for almost everyone, regardless of who you are. Anecdotally, I think some students expect Princeton to be nurturing because it's small, the way SLACs are, and they are surprised to find it's not.
American and UK universities both have a track record of nurturing immense talent. Both also have a record of students who fail to thrive. It's sometimes hard to predict based on the high school record, and there is no crystal ball. But it's partly the responsibility of parents and students to find a good match as well. Don't just go to a place for prestige. The OP is absolutely right that just because you can get into a high-rigor school doesn't mean it's the best decision for you.
I'm an academic in STEM, and the above is just flat-out incorrect. You must be basing this on what grad admissions were like 25 years ago, before grade inflation.
There are some reliable predictors. The SAT is a good predictor, but it is not reliable on its own. GPA at a high-rigor high school combined with SAT scores is a much better predictor. Add in a USAMO/USAJMO (possibly even AIME) qualification, or a qualification in another science Olympiad, and you can pretty reliably predict positive academic outcomes.
I wouldn't consider math competitions. If that's the criterion, AI should go to college, not your kids.
Maybe so, and that's great if a kid does well at those; no knock against it. But that's a pretty narrow qualification. Most of the elite STEM academics I know never touched an olympiad in their lives.
I stopped equating prestige with competence a long time ago, speaking as a hiring manager.
I hope you are being facetious or snarky. The Math Olympiads are still a benchmark for the latest AI models, which cost billions in capex to set up and train. Even the bleeding-edge ones need millions of tokens, long chains of thought, test-time compute, and not-insignificant human interaction to approach gold-medal-level performance after running for hours (wall-clock time, not CPU/GPU time). High schoolers have to solve these problems in roughly 1.5 hours each. About 250 high schoolers in the US qualify for the USAMO (which is arguably at least as difficult as, if not more difficult than, the IMO), and about 150 or so get honorable mentions or medals. Most of these could medal at the IMO as well, were it not for the super-high competitiveness of the US IMO selection process. So while success in college (or in academia) does not require one to be good at Olympiad-level math, being good at it is a good indicator of college success.
Sure, except most people in these competitions aren't participating purely out of a love for math. Many use them as stepping stones to firms like Jane Street or Citadel. For them, it's more about strategic positioning than genuine curiosity; competition is for racehorses. Solving problems quickly isn't what drives humanity forward; deep, original thinking does. In engineering fields, a solid command of calculus is a reliable indicator that a student can handle the rigor of college-level work. For the liberal arts, however, qualities like critical thinking and creativity are far harder to quantify, and as an engineer, that's outside my area of expertise.
The SAT is the most objective standard. And you are off on the rest of your points. GPA is meaningful only if it comes from high-rigor courses at a high-rigor high school. Math competitions are irrelevant because many students don't participate in them.
And speaking of intelligence: if elite college professors can propose ways to measure AGI, as HLE does, they should also be able to articulate what kind of students they actually want to teach.
For reference, see Humanity's Last Exam from the Center for AI Safety: https://agi.safe.ai/ Intelligence isn't just about solving math problems quickly. Math is simply a convenient tool, not the full measure of human capability.
That's moving the goalposts by a lot. The point was whether Olympiads are a good predictor of college success, more so than the SAT, and objectively that is correct. Whether the xMO kids do it for strategic positioning toward quant jobs is not relevant. And there are reliable studies showing that IMO medalists end up publishing more impactful research than their non-medalist counterparts. Also, if you have met these kids, they are well beyond calculus in 9th/10th grade, so they can handle the rigor. And yes, calculus errors can cause a bridge to collapse, but then again, mucking up volatility in Black-Scholes can lead to an LTCM-style crash. Rigor is required in both domains.
Agree, it is very, very hard to quantify. Perhaps some of these competitions, like the bio or chem olympiads, may signal that you would ace the MCAT and do well in medical school. But when it comes to basic science, I cannot think of a simple signal for research potential. For example, I think the ISEF competition is a poor signal of scientific promise. I have nothing against it; it's a nice activity and a good way for kids to get some exposure to research, and an especially good motivator to beef up your presentation and scientific communication skills. But not everyone has the means to participate, and I hardly think doing well means you have more promise than someone who didn't do well, or someone who didn't even participate. I wish we could measure genuine curiosity, creativity, and passion, but we can't.
Over 95% of all colleges are test optional, and we are talking about math competitions now? If we can't even agree on the most reliable predictor, how is it realistic to use an ultra-niche one?