
Median SATs: I've occasionally seen those numbers for some schools (e.g., in magazine articles), but I don't know how they get them. Presumably it's self-reporting on the part of the school, which would explain why, whenever I've seen them, not every school has an entry! Re APs: after somebody mentioned that Sidwell dropped AP, I went to look at their online curriculum document. The approach they've taken is not to call any course "AP Whatever" but to indicate in the course description that a certain course would be good preparation for the AP Whatever test. Reading through the course list, you could come up with an AP list for Sidwell, but that would be offerings rather than the number of test takers or the distribution of scores. Obviously, Jay Mathews gets the latter data for public schools, but I think the schools provide it rather than the College Board releasing it on a school-by-school basis. At our school, parents do see grade-level medians for the school on the ERBs. Is that the norm? (BTW, the reason we know the NMSF cut-off scores by state is that there's a website that collects data from kids and schools who made and missed the cut. So sometimes individuals do get together to unravel some of these sorts of mysteries... but we probably don't have enough potential reporters here to establish the validity of the data.)
I might niggle at the margins (e.g., the 90th percentile at Whitman will be seen differently than the 90th percentile at Wilson), but I think this is a great question, and, accepting your premises, I suspect the difference then comes down to how your kid does in college (vs. college admissions). If the private school has, for example, helped your DC become a very good writer or accustomed your DC to a heavy reading load, college grades will reflect that. And that matters when you get to graduate admissions.
Here are my best ideas:

(1) The various studies of how many students attend which colleges. Yes, there is unwanted noise and error in those statistics because of legacy issues, but probably no more than in any other measure you might study.

(2) AP exam scores. I recall seeing a yearly report put out by the AP exam people that recognizes schools whose students achieved high scores on the AP exams. I recall that someone from NCS got the highest score in the nation on some French AP. Perhaps they give out other statistics that might be illuminating.

(3) Math competition scores. Here are two sites that give results of math competitions: http://www.math.umd.edu/highschool/mathcomp/ and http://www.unl.edu/amc/d-publication/d1-pubarchive/2006-7pub/2007-1012Summary.pdf. I'm sure there are others. You'd want to total up the numbers and somehow account for the varying class sizes at different schools to make any comparison meaningful.

(4) SAT scores. Doesn't the College Board publish some averaged statistics about SAT scores at schools, or is it only the schools that release that info? Alternatively, the Washington Examiner article that some people have linked provides SAT data for most schools.

(5) Diversity percentages. Almost all schools report these on their websites.

(6) Endowment. I'm not sure what this shows, but I'm brainstorming here.

(7) Sports success. It would not be hard to find and tally up the sports successes at each school and compare them to one another.

I think a lot depends on what aspects of the schools you want to compare. Do you really care how well the very top math students perform in competitions? Do you really care what win/loss record the soccer team had last year? Do you really care how many graduates went to Princeton?

I'm enjoying this thread as much as anyone, mainly because I enjoy the exercise of identifying and contrasting data sets. But is any of this really useful in real life? Is anyone here really trying to rank or tier these schools based on this data? I just don't think that's a very useful goal. Who really cares if one school is 1.37% "better" than another according to some metric? Indeed, if I challenged everyone here to rank the top 20 schools into five tiers of four schools each (and we all set aside our biases and answered honestly), I suspect most tierings would look very similar. And if we could magically obtain perfect data from all schools on all the metrics we care about, I suspect the schools would tier up pretty closely to our collective gut instinct. So we already sort of know all we need to about how the schools compare. Beyond setting those very broad tiers, which arguably might be semi-useful in targeting which schools you might want to research further for your own child, what benefit is there to ranking or tiering?

As I noted earlier in the thread, I do not yet have a child at any of these schools. So while I am interested in the results, I don't have any significant interest in how particular schools stack up against one another. Perhaps if my children start going to one of these schools, I will be more interested in ranking and tiering because I will feel competitive? I don't know. Are others on this thread motivated by some competitive spirit to prove your child's school? Is that what's driving all this? Sorry to distract from the data discussion with this meta-concern, but I'm just not understanding some of the motivation.
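Quick illustration of the PP's point (3) about class size: if you did want to compare competition results across schools of very different sizes, qualifiers per 100 students is a fairer yardstick than raw counts. This is only a sketch; the school names and numbers below are invented for illustration, not actual results from any site linked above.

# All names and counts here are made up; this just shows the normalization.
schools = {
    # name: (competition qualifiers, senior class size)
    "School A": (12, 120),
    "School B": (9, 60),
    "School C": (15, 250),
}

rates = {
    name: 100 * qualifiers / size
    for name, (qualifiers, size) in schools.items()
}

# Rank by the per-capita rate so a small school with a strong program
# isn't buried under a big school's raw totals.
for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {rate:.1f} qualifiers per 100 students")

On these made-up numbers, the smallest school comes out on top even though it has the fewest qualifiers, which is exactly the distortion the PP was warning about.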
My stake: I like numbers and I hate BS/gossip/assertions with nothing to back them up. From a self-interested perspective, I don't care about the rankings. My kid is getting a great education at a well-known school. She's been there for years, so I'm not at a stage where my opinion about the quality of the school is up for grabs based on the collective opinion of anonymous internet posters. LOL! At this point, DC's fate will be a function of her own efforts. No one's going to hold her school against her. No one's going to admit her to college just because she goes to that school. If her school ceases to meet her needs when she gets to HS, I'm delighted that I live in a place where there are attractive alternatives.
Well said, PP. Which goes back to the OP's point that you cannot rank these schools by which is "best." You can, however, develop a list/ranking/tiering of which schools are most in demand. As others have argued, that should not be used as a direct proxy for which schools are best, but it is useful data for parents and students deciding which schools to investigate further. Remember, there are plenty of parents who find their way to DCUM with no preconceived notions or prior knowledge of the schools in this area, so they often come here looking for a place to start. Without some basic data on which private schools tend to be most in demand, it would be like applying to colleges with no idea whatsoever of how, say, Brown compares to the University of Iowa. For this reason, I think understanding the relative demand for a school is a very useful metric for new parents, but we must guard against such data, or other discrete data points, being misused to suggest that a particular school is "best" for their child.
Here's the link to the Washington Examiner's top private school picks, which includes the median SAT for most schools. Looks like the Big 5 are all in the same general range (with SFS a little out ahead of the pack). STA does not report its score. http://www.washingtonexaminer.com/local/Regions-best-private-schools_02_26-40314672.html
Oh and one somewhat bizarre but interesting insight with IRL application:
Don't take the WPPSI seriously as a measure of your kid's intelligence! Given all the claims about everyone's DC being in the 99.9th percentile, you'd expect a helluva lot more top-1% (NMSF) scorers at the top privates! By age 16-17, there's much more differentiation than at age 3-4.
I think I agree with what you're saying here -- it's sort of like seeing that every other family with a minivan is buying a Honda Odyssey, so I can assume that minivan must be doing something right. My search for a new car might not end there, but it's a good starting point. I think what irks me most about this thread is the title: Big 5 = "most desired" schools, not necessarily "the best." It seems to reduce the analysis of top schools to a popularity contest, which I don't think is fair to the schools or to the parents looking at them. I suspect the reason certain schools are viewed as top-tier is that those schools offer a lot of benefits. They may be highly desired as well, but I suspect the desirability flows from the many benefits they offer and is ultimately secondary.
I agree. An economist or statistician could probably explain this better than I can, but given a large enough admissions data set, you should be able to ascertain which schools the majority of people have rationally concluded offer the benefits most in demand by the total pool of applicants. A large enough data set will control for individual bias (e.g., my DD is a top lacrosse player, so I'm only targeting the schools with the best lacrosse programs). Just as you can assume that most people choosing the Honda Odyssey are acting rationally because it offers the benefits that are most in demand, you can assume that large numbers of applicants are also acting rationally when deciding where to submit their applications, based on their own conclusions about the relative benefits.
I am not biased against private schools at all. In fact, my kids went to well-known DC private schools from pre-K through high school, had first-rate educations and enriching relationships, and got into top colleges. I'm simply saying that comparing private schools on the basis of National Merit/Ivy/SAT statistics is meaningless, because some schools assemble their student body based on certain factors (scores at entry points, counseling out certain kids as weaknesses emerge rather than working with them innovatively, weighing SSAT scores more heavily than personal strengths, favoring Ivy legacies, etc.) that will make these statistics look good and lead parents to naively assume they are better schools. Graduating more Merit Scholars does not necessarily mean the school is better at creating Merit Scholars or that your child would have a greater chance of becoming one there; it means the school is more focused on collecting kids with that potential. Similarly, graduating more Ivy students doesn't mean your kid would have a greater chance of going to an Ivy League school (in fact, her competition would be steeper); it may just mean there were more Ivy legacies there to begin with. A much better way to choose a private school is to look at what opportunities the child will have to pursue her interests (do only the stars get to play sports or act or write for the paper?) and to expand her sense of who she is in the world.
Not necessarily. The NMSF population consists of PSAT takers: college-bound kids, not dropouts, etc. The WPPSI is normed across the general population. There is also a geographic effect: you need a higher score to make the NMSF cutoff in DC than in most other places. So it is possible to be in the 99th percentile on the WPPSI and still fall below the NMSF cutoff. Of course, the WPPSI at age 3-4 might not be very accurate, but that is a separate question.
Yeah, but the comparison here is between kids who supposedly score in the top 0.1% on the WPPSI vs. the top 1% on the PSAT. So, in theory, what we're seeing is that kids who scored at a 1-in-1,000 level as preschoolers are producing only a couple dozen 1-in-100 scorers as teens.
Of course, it could just mean that people are lying about WPPSI scores, LOL! But, yeah, IQ tests at age 3-4 are generally unreliable (more reflective of home environment than native intelligence), and the WPPSI in particular is not a good test for sorting out differences among gifted kids.
No, they are not "1 in 100 teens." They are 1 in 100 among those who are college-bound in DC, which is a very different population. Besides, in the last several years I have met a bunch of people who claimed their kids were at the 99th percentile, and only a few who claimed 99.9, so it may not be as common as you think.
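Since this thread likes numbers: here's a rough back-of-the-envelope simulation of the regression effect the PPs are debating. It's only a sketch: the 0.5 correlation between a preschool IQ score and a teen PSAT score is an assumption I picked for illustration, not a figure from any study cited here, and it ignores the selection issues (college-bound pool, DC cutoffs) the previous poster raised.

import numpy as np

# Sketch only: assumes early (WPPSI-like) and later (PSAT-like) scores are
# standardized and correlate at r = 0.5. That correlation is an assumed
# placeholder, not data from this thread.
rng = np.random.default_rng(0)
n = 2_000_000
r = 0.5

latent = rng.standard_normal(n)
early = latent                                                   # preschool score
later = r * latent + np.sqrt(1 - r**2) * rng.standard_normal(n)  # teen score

early_cut = np.quantile(early, 0.999)   # top 0.1% as preschoolers
later_cut = np.quantile(later, 0.99)    # top 1% as teens

top_early = early >= early_cut
share = np.mean(later[top_early] >= later_cut)
print(f"Share of top-0.1% preschoolers who are also top-1% teens: {share:.0%}")

With those assumptions, only around a quarter of the 1-in-1,000 preschool scorers end up in the top 1% as teens, and the share drops further if the true correlation is lower. So even before you get to inflated WPPSI claims or the college-bound-only PSAT pool, you wouldn't expect the NMSF counts at the top privates to match the number of "99.9th percentile" toddlers.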
|
|