
A few months ago, I got sick of people making unsubstantiated claims about which schools have good or bad records of National Merit Semifinalists. So I did some research and plugged the results into a spreadsheet. I just recently cleaned it up, and I'm posting it here: http://spreadsheets.google.com/ccc?key=0AqmbmeXDw6gxdG91ZGVpdjRTaTU0SUtfSk5WLS1CY0E&hl=en.

I'm not interested in starting a discussion about how different schools stack up on NMSF data. My hope is only that others who want to argue about this stuff will at least be able to talk from a common set of facts, and will be less likely to make up false info. Also, if someone does start to make stuff up, people can call bullshit more easily by referencing this spreadsheet.

Let me be clear that I'm not suggesting NMSF data can be used to claim one school is better or worse than any other. At best, it's just one way of looking at schools, and there certainly are many other more significant factors for most families.
I gathered most of the NMSF numbers from newspaper listings or from the schools' own press releases. For a few assorted years, I used data from other websites that seemed reliable. To calculate the number of students in each class, I averaged three class years of data from NAEP. I'm pretty confident most of the data is valid. However, people should recognize that the sample sizes are so small that there likely is still a large amount of variability in the final numbers. If anyone has more years of data I can add to the spreadsheet, please post a link with the data source. I'm not going to add data just based on some anonymous person's say-so, but if you link to a credible source, I'd like to add the extra data. I hope this helps. |
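For anyone who wants to check the arithmetic behind the spreadsheet, here is a minimal Python sketch of the basic rate calculation, plus a rough 95% confidence interval that makes the small-sample caveat above concrete. The school names and counts are invented placeholders, not figures from the spreadsheet.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion.

    For small class sizes the interval is wide, which is exactly
    the year-to-year variability the OP warns about.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical figures, NOT taken from the spreadsheet.
schools = {
    "School A": (12, 110),   # (NMSFs in one class, class size)
    "School B": (25, 430),
}
for name, (nmsf, size) in schools.items():
    lo, hi = wilson_interval(nmsf, size)
    print(f"{name}: {nmsf}/{size} = {nmsf / size:.1%} "
          f"(95% CI roughly {lo:.1%} to {hi:.1%})")
```

With these made-up numbers, a 110-student class with a dozen NMSFs carries an interval of roughly 6% to 18%, so single-year rates shouldn't be taken too literally.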
This is great, especially given the discussion over on the Saint Albans thread recently. Thanks so much for your efforts, OP. Is similar info available for SAT and/or AP scores? |
This is an excellent chart which transforms piecemeal data into information. Can you add other schools? Specifically those in FCPS: Langley, McLean, Marshall, etc.? It would be great if you could update this chart and share it with a Yahoo group, FCAG [Fairfax County Association for the Gifted]. The PSAT results say a lot about the student body and the quality of instruction received in grades 7-10. Note that most of the TJ students come from GT centers in FCPS. |
OP here. I'm happy to add other schools. Just find and post links to NMSF info for those schools for me. The hardest part of this process is sifting through all the search results to find reliable data.
I don't think similar data is available for SAT & AP scores. Some sites post average SAT numbers, but it's hard to know whether those numbers are current or reliable. I've never seen anything useful on AP scores, but I think that would be excellent data. |
I was just talking with dh today about whether lifers at schools get the same number of National Merits as students who enter in 9th grade. Did you find any research about this? |
Probably depends a bit on the school's philosophy. At Sidwell they counsel people out at various points, so the lifer pool represents the stronger portion of the original entering class. I gather STA/NCS does the same thing at the 3rd-4th grade transition. Other schools probably counsel out fewer students than these two. |
A couple of links that will help you fill in some missing data from last year (I don't know whether you're calling that 2008, the testing/announcement year, or 2009, which is what NM seems to call it, i.e., the class of 2009 results):
VA list: http://chesterfield.k12.va.us/ccps/news/awards/09%20VA%20Semifinalists-NatlMeritProgram.pdf DC list: http://www.currentnewspapers.com/admin/uploadfiles/N0910080120%201.pdf p. 4 |
I don't know who you are, OP, but you rock. |
OP, you really do rock! One thing I notice is that the figures change a lot over the years: some years a school has no NMSFs, and other years the same school has tons. It looks like averaging over the past five years instead of the past ten would change the results. An argument can also be made that some of these schools have changed a lot in ten years, so the older results shouldn't be used. I'm not sure whether the last five years, or a window that includes the strong early 2000s at some schools, would be more appropriate. But then this year's results, when they arrive, could change everything again. |
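To put a number on the previous poster's point, here is a tiny sketch with invented yearly counts (not the spreadsheet's real figures) showing how much the choice of averaging window alone can move a school's result:

```python
# Invented yearly NMSF counts for one hypothetical school in decline;
# these are not data from the spreadsheet.
counts_by_year = {2000: 14, 2001: 11, 2002: 13, 2003: 9, 2004: 10,
                  2005: 6, 2006: 5, 2007: 7, 2008: 4, 2009: 5}

def window_average(counts, last_n_years):
    """Average the counts over the most recent last_n_years."""
    years = sorted(counts)[-last_n_years:]
    return sum(counts[y] for y in years) / len(years)

print("10-year average:", window_average(counts_by_year, 10))  # 8.4
print(" 5-year average:", window_average(counts_by_year, 5))   # 5.4
```

For a school trending the other way, the same window change would inflate rather than deflate the average, which is why the choice of era matters.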
Thanks, OP! The private school percentages really are stunning given the early admission test (WPPSI & WISC) scores of 99.99 that everyone seemed to have but my child. I was really worried. My son scored only in the high 80s on both and is an NMSF with high SAT scores, go figure. |
Great report, OP. Even if there are major changes in 2009 for certain schools, a few here really stand out, hands down. I would not have guessed such big gaps in some cases. Thanks for all the work. |
This is really great. Are you doing semifinalists only or commended students also or what? I have to admit I don't really understand all the different levels so forgive me if that's a dumb question! |
I agree with the "gentle" points made above about this work product. There is so much missing data, in different eras at different schools, that the calculated means or averages are really meaningless and detract from simple description. For example, the number of these finalists took a steep plunge at Sidwell and STA over the last 5 years, so the data propping up those averages come from the remote past. On the other hand, the finalist numbers at Blair and TJ (the magnet division is separate from the rest of the school, as is the IB program at RM) have been rising steadily over the last 5 years, and conversely, the lower numbers from the remote past drag their averages down.

A better analytic plan, despite the missing data and holes all over the place, is to recalculate by era, as this poster has suggested: take the last 5 contemporary years and redo the means, averages, and percentages. Will you arrive at the same conclusion? I'm neither a mathematician nor a statistician, but I suspect the findings will be reversed (diametrically opposed to what is presented here). Since Blair is a magnet program, and that is the issue at hand, a fairer analysis would use a denominator that reflects the numbers in the program (much like TJ), not the "sleight of hand" dilution with the rest of Blair, since the finalists emanate from the magnet cohort.

There are no important inferences that can be made from these calculations, for a number of reasons:
1. The source of the data is unclear (validity and completeness).
2. There are many holes and missing data points (how was this handled in the final calculations?).
3. The data is crunched over a 10-year span despite those holes.
4. The denominator used to calculate the Blair numbers is off many fold, so those figures are erroneous.
5. The same is probably true for Richard Montgomery.
6. The 10-year span is very misleading given the missing data, the glaring upward trend in finalists at the TJ and Blair magnets over the last 5 years, and the corresponding plunge in finalists at Sidwell and STA during the same period.

Thus, the way the numbers are analyzed is biased in favor of STA and Sidwell. This is the type of 10-year analysis of exmissions some schools present when they have had few students entering Ivies in recent years compared with the beginning of an era. Since prospective parents and students are more interested in a school's current era, not its remote past, a more meaningful approach is to take the last 5 years (as an example), which may have fewer holes and more accurate sources, and redo the numbers with the appropriate denominators. A contemporary, apples-to-apples comparison is more beneficial to prospective applicants.

None of this detracts from the poster's fine effort in gathering the data, which I commend. But at the end of the day, the integrity of the data and the way it's analyzed can lead to equal and opposite conclusions! Unfortunately, a gander at the data does not support the inference the poster may be trying to get across. |
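For what it's worth, the two adjustments this poster is asking for (restricting to a recent era and using the magnet cohort, not the whole school, as the denominator) are easy to express. The numbers below are hypothetical; real magnet cohort sizes would have to come from published sources.

```python
# Hypothetical five-year records for a magnet-within-a-school:
# (year, NMSF count, whole-school class size, magnet cohort size).
# These are illustrative values, not the spreadsheet's data.
records = [
    (2005, 10, 450, 100),
    (2006, 12, 460, 100),
    (2007, 15, 455, 105),
    (2008, 14, 470, 100),
    (2009, 18, 465, 110),
]

def nmsf_rate(records, use_magnet_denominator):
    """Pooled NMSF rate over the era, with a selectable denominator."""
    total_nmsf = sum(nmsf for _, nmsf, _, _ in records)
    total_students = sum(magnet if use_magnet_denominator else whole
                         for _, _, whole, magnet in records)
    return total_nmsf / total_students

print(f"Whole-school denominator:  {nmsf_rate(records, False):.1%}")  # 3.0%
print(f"Magnet-cohort denominator: {nmsf_rate(records, True):.1%}")   # 13.4%
```

Same students, same era, more than a fourfold difference in the headline rate, purely from the choice of denominator.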
When I look at data, I always ask about the source and whether the data collector introduced any potential biases. A couple of questions:
1. Was your original plan to look back 10+ years?
2. Did your original plan change when you saw the results for the last 5 contemporary years, so that you then decided to go further back in time?
3. Given the different eras, with missing data not evenly distributed across eras or institutions, what conclusions do you draw from the data presented?
4. Are you willing to draw any conclusions from the data you present? If you are, and are confident in the integrity of these data, please share them.
Thanks. I applaud your efforts in gathering the data. |
I'm confused: why do you need OP to answer all those questions? Didn't OP provide the raw data? Aren't all of you with the questions just as able to analyze it? And just as able to fill in missing information, if that's what you want to do? Why all the demands? |