Big 5 = "most desired" schools, not necessarily "the best"

Anonymous
Anonymous wrote:
Anonymous wrote:Last year's "big 5" National Merit semifinalists

Sidwell 18
GDS 14
St Albans 10
NCS 9
Potomac 6
Maret 2

Drop in the bucket compared to TJ (or Blair which had 43, I think). Supporting the point that public magnet is probably the way to go if what you are looking for is a critical mass of high-performing kids with similar interests.

Also suggests that Big 3 makes more sense than the Big 5. But that's only if you're asking where the kids who do best on aptitude tests go/come from.

A PP just made an excellent point: you should expect the DC-based schools to have more Merit Scholars, since they are not competing for smart talent the way the privates in MD and VA are. As to the above, it sure seems that if TJ did not exist, Potomac would get more of the kids who are focused on the Merit scholarship, given that TJ had 150+ semifinalists. (By the way, the man who ran and built TJ into a top school is now running Potomac.)

Where are the 150+ "TJ-like" semifinalists in DC? Sidwell, GDS, STA, NCS, and Maret only have around 50 combined.

Shows the danger of trying to tier or rank schools as "best" based on a single criterion alone.



Not sure I fully understand how the semifinalist designation works. According to the posts above, the cutoff for a semifinalist in DC is 223, whereas in VA it is 217. Does that mean that 1% of the DC student population taking the test scored above 223, whereas 1% of the VA population scored above 217? If that is true, then all the DC students scoring 217-222 would have been semifinalists in VA but are not semifinalists in DC.

I don't know what the curve looks like between 217 and 222, but it is conceivable that you would have to double the DC numbers to make them comparable to the TJ semifinalist numbers. Does anyone know?
Anonymous
It seems one problem this thread (and similar threads) is having is that certain people push for an approach to analyzing the data that favors their "pet" schools.
Anonymous
Anonymous wrote:It seems one problem this thread (and similar threads) is having is that certain people push for an approach to analyzing the data that favors their "pet" schools.


I don't know that that is the case (I haven't posted before on this thread, but have an interest in all things to do with measurement). I think the problem is trying to do this in a thread format. The fact is, as already pointed out SOMEWHERE (seems we have a few threads going on this), measuring the 'goodness' of a school should be based upon a wide variety of measures. No one would send their child to a school in which they become proficient in math but learn nothing else. So there needs to be a variety of criteria to analyze the school's success.

Secondly, and most important in my book, is determining what exactly you are trying to measure. Are you trying to determine which schools have the best students or which schools create the best students? As someone pointed out, there is a reason to want to send your child to a school in which there is a critical mass of high thinkers. Then looking at TJ's number of NMS finalists (or some other metric) would be fine for simply knowing where your child might be able to befriend a bunch of smart kids. However, I think the more relevant question is what school can take an average kid, or a smart kid, or a struggling kid and achieve such and such. For that, you need to control for the student body at entry.

In any event, I appreciate the effort to at least look at these terms (Big 5, etc) with regard to actual results. Thanks to the people contributing.
Anonymous
9:21 - what you are seeing is that the score it takes to reach a particular threshold (commended, semi, finalist) depends on your state. When I was in school in MD, I remember a classmate whose legal residence was in WV made finalist with a lower score than the one that got me to semifinalist, and that score merely tied a friend of mine who only got commended. It is easier to reach these awards as a DC resident b/c of the lower average caliber of ALL students in DC (crappy publics) than VA and MD. Not sure whether they factor counties in as well, in which case Montgomery and Fairfax kids are really at a disadvantage.
Anonymous
10:06

I don't think that you can begin to answer those kinds of questions using the data we have. Also, the answers are so child-specific.

What these numbers can show you is (i) whether your child will be able to find others with similar ability and (ii) whether the administration is used to dealing with smart kids. Most of the specifics you won't know till you are in the school.
Anonymous
Anonymous wrote:9:21 - what you are seeing is that the score it takes to reach a particular threshold (commended, semi, finalist) depends on your state. When I was in school in MD, I remember a classmate whose legal residence was in WV made finalist with a lower score than the one that got me to semifinalist, and that score merely tied a friend of mine who only got commended. It is easier to reach these awards as a DC resident b/c of the lower average caliber of ALL students in DC (crappy publics) than VA and MD. Not sure whether they factor counties in as well, in which case Montgomery and Fairfax kids are really at a disadvantage.


Not disagreeing with the general argument, but I read it that the cutoff was HIGHER for DC. That you needed a 223, not a 217. So there must be an overabundance of high performing kids in DC, relative to MD and VA.
Anonymous
Hmmm - I have a hard time envisioning the average DC score as higher. Seems odd, but maybe that much has changed since I was in school and some combination of NCLB, charters, etc has meant sizeable improvements in DC. Or maybe the low performers in DC are so low they aren't even taking the test? Only other way I can see this making sense is if they are using different thresholds for publics vs. privates, which would definitely also explain the TJ numbers.
Anonymous
Anonymous wrote:Hmmm - I have a hard time envisioning the average DC score as higher. Seems odd, but maybe that much has changed since I was in school and some combination of NCLB, charters, etc has meant sizeable improvements in DC. Or maybe the low performers in DC are so low they aren't even taking the test? Only other way I can see this making sense is if they are using different thresholds for publics vs. privates, which would definitely also explain the TJ numbers.


I think they qualify based on their PSAT/NMSQT scores, and I think the kids have to arrange with their counselors to take the test. I can't imagine the very low-performing schools in the District administering the test to many students.
Anonymous
A few helpful tidbits from the National Merit website:

"Commended Students are named on the basis of a nationally applied Selection Index score that may vary from year to year and is below the level required for participants to be named Semifinalists in their respective states. "
"Semifinalists are designated on a state representational basis. They are the highest scoring entrants in each state. "
"To be considered for a National Merit® Scholarship, Semifinalists must advance to Finalist standing in the competition by meeting high academic standards and all other requirements explained in the materials provided to each Semifinalist."
Anonymous
Anonymous wrote:9:21 - what you are seeing is that the score it takes to reach a particular threshold (commended, semi, finalist) depends on your state. When I was in school in MD, I remember a classmate whose legal residence was in WV made finalist with a lower score than the one that got me to semifinalist, and that score merely tied a friend of mine who only got commended. It is easier to reach these awards as a DC resident b/c of the lower average caliber of ALL students in DC (crappy publics) than VA and MD. Not sure whether they factor counties in as well, in which case Montgomery and Fairfax kids are really at a disadvantage.


The cutoffs only apply to those taking the test --- that would be college-bound kids, or perhaps only the kids applying to selective colleges. So the cutoffs say nothing about the general population of students in those areas.
Anonymous
Anonymous wrote:
Anonymous wrote:9:21 - what you are seeing is that the score it takes to reach a particular threshold (commended, semi, finalist) depends on your state. When I was in school in MD, I remember a classmate whose legal residence was in WV made finalist with a lower score than the one that got me to semifinalist, and that score merely tied a friend of mine who only got commended. It is easier to reach these awards as a DC resident b/c of the lower average caliber of ALL students in DC (crappy publics) than VA and MD. Not sure whether they factor counties in as well, in which case Montgomery and Fairfax kids are really at a disadvantage.


The cutoffs only apply to those taking the test --- that would be college-bound kids, or perhaps only the kids applying to selective colleges. So the cutoffs say nothing about the general population of students in those areas.


That's what I figured (I wasn't the PP you quoted).

Anyway, I have to remark on how pleasant it is to have this kind of analytical discussion about schools. So much more helpful than the name-calling that has somehow become the fashion on the private school forum.
Anonymous
Anonymous wrote:Again, Blair has 700 graduates for this year. St. Albans has 80. I mention that because that's the one I know, but I know Sidwell is nowhere close to 225, which is where it would need to be to give the same ratio of NMS finalists as Blair. Why are you thinking the comparison needs to be about raw numbers and not percentages?


Two reasons I (not the quoted PP) would focus on numbers rather than percentages:

1. Scores at this level are probably more a function of intelligence than training (TJ would be the counterargument -- sounds like 1/3 of its kids are high-scorers and I suspect that demographics alone don't explain that outcome.)

2. From a "what's your kid's experience" perspective, absolute numbers are what matters -- not percentages. If there's only one other high-scoring kid (because you're in a tiny school), and that other high-scoring kid is a math kid and your kid is a verbal kid, your kid is alone. On the other hand, once you have a few dozen high-scoring kids (e.g. Blair), there's a real cohort.

Private schools with a little over a dozen high-scoring kids may compensate for the lack of a robust cohort by allowing some extremely small (2-4 kid) classes for advanced kids -- which probably explains why GDS and Sidwell have math classes beyond first-year college calculus. But it would be harder to justify that kind of decision for classes (e.g. some sciences) that require significant investments beyond teacher time, or even for many classes that involve nothing more than instructional time. In the humanities, course offerings strike me as less of an issue (and I'm a humanities person); there the issue is excellent teachers (and small enough class sizes) who can meet a kid at whatever level s/he's at. What you learn about doesn't change; how deeply and subtly you understand it is what's at stake.
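To make the absolute-vs-percentage distinction concrete, here is a quick back-of-the-envelope sketch using figures quoted earlier in the thread (Blair's "43, I think" and the class sizes are PPs' approximations, not verified numbers):

```python
# Semifinalist counts vs. per-capita rates, using approximate figures
# quoted in this thread (Blair's count and both class sizes are rough).
schools = {
    "Blair":      {"semifinalists": 43, "class_size": 700},
    "St. Albans": {"semifinalists": 10, "class_size": 80},
}

for name, s in schools.items():
    rate = s["semifinalists"] / s["class_size"]
    print(f"{name}: {s['semifinalists']} semifinalists ({rate:.1%} of the class)")
```

On these numbers St. Albans' rate (12.5%) is roughly double Blair's (6.1%), yet Blair's absolute cohort of 43 is what matters if the question is whether your kid will find a critical mass of similar peers.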
Anonymous
Anonymous wrote:Hmmm - I have a hard time envisioning the average DC score as higher. Seems odd, but maybe that much has changed since I was in school and some combination of NCLB, charters, etc has meant sizeable improvements in DC. Or maybe the low performers in DC are so low they aren't even taking the test? Only other way I can see this making sense is if they are using different thresholds for publics vs. privates, which would definitely also explain the TJ numbers.


The average DC score isn't higher. The DC cut-off point is set by the National Merit people to equal the most rigorous cut-off that emerges in any of the 50 states. The couple of years I checked, that state (i.e. the state with the highest top scores) was MA, so MD schools (not students) end up having a slightly lower cut-off than DC schools. And VA seems to be lower than MD. That said, the differences last year (which is the year for which I gave # of SF) are a point each -- the larger spread in cut-offs was the year before.

FWIW, it looks like the SF cutoff corresponds to kids who test in the top 1% (of college-bound juniors) on the PSAT, and "Commended" would be about the top 3%. Which raises the question of whether kids who scored in the top 10% would be just fine as a cohort; we don't (yet) have (standardized test) data to compare local schools on that basis.
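The state-by-state mechanism described above can be sketched as follows. NMSC's actual allocation formula is not public, so the quotas and scores below are made-up toy values, chosen only to reproduce the cutoffs mentioned in this thread (VA 217, DC 223):

```python
# Toy illustration of per-state semifinalist cutoffs. Each state gets a
# quota (in reality, roughly proportional to its graduating class), and
# its cutoff is the lowest Selection Index score inside that quota.
# DC's cutoff is then pegged to the toughest state cutoff.
def state_cutoff(scores, quota):
    """Lowest score that still falls within the state's quota."""
    return sorted(scores, reverse=True)[quota - 1]

state_scores = {  # hypothetical Selection Index scores per state
    "VA": [230, 225, 221, 217, 216, 210, 205, 198],
    "MD": [231, 228, 222, 219, 214, 209, 204, 197],
    "MA": [232, 229, 226, 223, 220, 215, 208, 200],
}
quotas = {"VA": 4, "MD": 3, "MA": 4}

cutoffs = {st: state_cutoff(sc, quotas[st]) for st, sc in state_scores.items()}
cutoffs["DC"] = max(cutoffs.values())  # pegged to the highest state cutoff
print(cutoffs)  # VA: 217, MD: 222, MA: 223, DC: 223
```

On this toy data a kid scoring 218 makes semifinalist in VA but not in DC, which is exactly the 217-222 gap an earlier poster asked about.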

Anonymous
Anonymous wrote:It seems one problem this thread (and similar threads) is having is that certain people push for an approach to analyzing the data that favors their "pet" schools.


So find some other data and argue it's a better measure than NMSFs. Otherwise this just looks like a gripe that objective data doesn't put your "pet" school where you think it rightfully belongs.
Anonymous
11:21:


That strikes me as about right. The question I am still struggling with is whether the small class sizes and general support that kids get in a small school outweigh the lack of appropriate classes. The problem is that it is very costly to reverse a school decision, especially if it involves relocating.