Anonymous wrote:Jesus, the navel gazing on this thread is so gross.
Newsflash: you can do anything from any school. No one cares once you're in college, so long as you do well and work hard. Hardly anyone has heard of my big, no-name midwestern U, and it's certainly not on any meaningful ranking list. You know what? I've succeeded by every measure you people would use. And there are many people in my circle like me who didn't go to JHU, Duke, etc.
Anonymous wrote:Anonymous wrote:Duke at 45 should be enough to show how dumb this ranking is. This thread shouldn't be this long.
Duke in the top ten shows how dumb USNWR rankings are. The student body is more impressive than the departmental rankings.
Anonymous wrote:Anonymous wrote:Here is what people should know about the methodology. It is not a best-colleges ranking or even an ROI ranking. It is a subjective return-on-expected-earnings ranking. In the background, WSJ creates a score that is effectively subjective, because it compares colleges only to “peers” with somewhat similar student intakes. Whatever objective raw score exists is then made subjective by this segmentation, and all of the newly formed scores are compared against each other. I confirmed last year that this is what WSJ is actually doing. It is a flawed statistical method, and it would not pass a first-year college stats class if it were submitted for a grade.
Another poster likened it to a major golf tournament where an amateur can apply his handicap but a full pro cannot. Or, for baseball fans, think of it as WSJ saying the best Texas League baseball team should rank higher than a middling Major League team because the former is first in its class. No one would accept this. And yet, that is what WSJ is doing here.
Last year, when WSJ introduced the new methodology, I had a series of back-and-forth conversations with the two authors. They explained their methodology to me twice. I asked questions to make sure I understood it (I work in applied mathematics) and then challenged this objective-to-subjective scoring-and-ranking approach. I asked them to show me where my conclusions about their methodology were wrong. I was willing to learn something. I never heard back from them. In other words, I truly believe they realized that an actual statistician could easily expose the massive flaws in their methodology. For a so-called college ranking methodology to not even pass muster in an intro college stats class is rather absurd.
It is a shame. Attaching the previously honorable WSJ brand to this sloppy piece of work is a disservice to readers, and especially to parents of students trying to learn about colleges. The one thing the ranking is good for is as a lesson in how not to attempt statistical modeling and inference, and then try to cover your tracks.
So, you’re saying that you proved that they were wrong, but they doubled down and used the same methodology this year? I don’t believe you.
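To make the statistical objection concrete, here is a minimal sketch of the within-peer-group standardization described above. Every school, group, and score is invented for illustration, and the z-score normalization is an assumption standing in for whatever WSJ's actual (unpublished) peer adjustment is; this is not their data or code.

```python
# Invented example of the objection above: z-scoring an objective
# outcome within "peer groups" and then ranking the pooled z-scores
# can invert the raw ordering. Not WSJ's data or code.
from statistics import mean, stdev

# (school, peer_group, raw_outcome_score) -- all numbers made up
schools = [
    ("Elite A",    "selective",      92.0),
    ("Elite B",    "selective",      90.0),
    ("Elite C",    "selective",      88.0),
    ("Regional X", "less_selective", 70.0),
    ("Regional Y", "less_selective", 55.0),
    ("Regional Z", "less_selective", 52.0),
]

# Step 1: standardize each school against ONLY its own peer group.
groups = {}
for name, group, score in schools:
    groups.setdefault(group, []).append(score)

def peer_z(group, score):
    return (score - mean(groups[group])) / stdev(groups[group])

# Step 2: pool the group-relative scores and rank them head-to-head,
# as if a z-score earned in a weak group were comparable to one
# earned in a strong group.
ranked = sorted(((name, peer_z(g, s), s) for name, g, s in schools),
                key=lambda t: t[1], reverse=True)

for i, (name, z, raw) in enumerate(ranked, 1):
    print(f"{i}. {name:<11} peer z = {z:+.2f}  raw = {raw}")
# Regional X (raw 70) lands at #1, ahead of Elite A (raw 92):
# first place in a weak class beats mid-pack in a strong one.
```

The raw scores never change; only the segmentation does, which is exactly the golf-handicap complaint.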
Anonymous wrote:Anonymous wrote:Anonymous wrote:Anonymous wrote:Good for the WSJ. About time. Oh, and I'm loving the over-the-top wailing from the usual suspects!
That's sweet. Now find the Wall Street Journal reporters that went to UC-Merced, Cal-State Stanislaus, Dalton State, Martin Luther College, Kettering University, Benedictine College, and - this one's not f#cking around - The Masters University.
Best and brightest, no doubt.
I genuinely don't understand why the editors of the WSJ let this silliness go. There is a huge opportunity to take down US News, and this is what the Murdochs came up with? Clearly not ready for prime time. Stupid list. Not interesting. And not even worth talking about. Massive fail.
And yet, here you are! Clearly very interesting and very much worth talking about. Cheers to the WSJ.
Because I would like to see some genuine competition for US News, and I was hopeful about WSJ because they have the brand and the reach. Unfortunately, this is a very lame list that won't be taken seriously. There's this massive, very lucrative space for a major publication, and yet no one is reaching for it. Mystifying.
Anonymous wrote:Anonymous wrote:Anonymous wrote:Anonymous wrote:Initially I was puzzled by some of the ratings in the various categories, until I realized that they are based on actual data versus expectations: if the data outperforms the expectations, the school receives a higher rating. I was looking specifically at graduation rate and was initially surprised to see higher graduation ratings for some schools where I knew the actual rates were lower.
I actually like this balanced approach. I think looking at both this ranking and USNWR would be a good way to evaluate a school. I also like the New York Times tool, where you can set your own parameters. You can learn a lot from these three platforms.
No, it's like measuring your schwantz from the floor up. Does not give an accurate measurement of what it claims to. It's data, not information.
Sorry you are not happy that your school sank in the rankings. Signed, mom whose kid's school is in the top 20 for both USNWR and WSJ.
USN rankings mean nothing, and these WSJ rankings mean less than nothing. Not one human being on the planet will make a college selection based on them. Insults and ad hominems indicate you know your argument is weak tea.
Signed, parent of one Ivy kid with an undergrad degree who is currently getting a grad degree at a different Ivy, and a second kid with a NESCAC degree. But bragging is for aholes.
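For readers puzzled the same way, here is a toy version of the "actual versus expected" mechanic that post describes. The linear expectation function and every number are invented; the real expectation models are regression-based and not published in full.

```python
# Toy "actual vs. expected" graduation-rate rating: a school scores
# well by beating a prediction based on its intake, not by having a
# high rate in absolute terms. All numbers and the crude linear
# "expectation" below are invented for illustration.

# (school, median_intake_sat, actual_grad_rate_pct)
data = [
    ("School P", 1500, 94.0),
    ("School Q", 1150, 72.0),
    ("School R", 1050, 70.0),
]

def expected_grad_rate(sat):
    # crude stand-in for a regression on intake stats:
    # 55% baseline plus ~6 points per 100 SAT points above 1000
    return 55.0 + (sat - 1000) * 0.06

rated = sorted(
    ((name, actual - expected_grad_rate(sat), actual)
     for name, sat, actual in data),
    key=lambda t: t[1], reverse=True)

for name, delta, actual in rated:
    print(f"{name}: actual {actual:.0f}%, vs expected {delta:+.1f} pts")
# School R rates highest (+12.0) despite the lowest actual rate (70%),
# because it was only expected to graduate 58%. School P's 94% earns
# just +9.0 -- which is why a school you know has a lower raw rate
# can show a higher rating.
```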
Anonymous wrote:Anonymous wrote:Anonymous wrote:Initially I was puzzled by some of the ratings in the various categories, until I realized that they are based on actual data versus expectations: if the data outperforms the expectations, the school receives a higher rating. I was looking specifically at graduation rate and was initially surprised to see higher graduation ratings for some schools where I knew the actual rates were lower.
I actually like this balanced approach. I think looking at both this ranking and USNWR would be a good way to evaluate a school. I also like the New York Times tool, where you can set your own parameters. You can learn a lot from these three platforms.
No, it's like measuring your schwantz from the floor up. Does not give an accurate measurement of what it claims to. It's data, not information.
Sorry you are not happy that your school sank in the rankings. Signed, mom whose kid's school is in the top 20 for both USNWR and WSJ.
Anonymous wrote:You do not have to believe me. They may have made some minor tweaks. I have not spoken to them this year. But the great majority of the methodology is clearly the same, and I reckon I am not the only person who has had conversations with them about it.
People double down on poorly thought-out and poorly executed processes all the time. CEOs do it, coaches do it, professors do it. It is a flaw in many humans. And if you defended your job against heavy criticism with a “no, but we are right” posture, that would not make you unique.
Anonymous wrote:Anonymous wrote:Here is what people should know about the methodology. It is not a best-colleges ranking or even an ROI ranking. It is a subjective return-on-expected-earnings ranking. In the background, WSJ creates a score that is effectively subjective, because it compares colleges only to “peers” with somewhat similar student intakes. Whatever objective raw score exists is then made subjective by this segmentation, and all of the newly formed scores are compared against each other. I confirmed last year that this is what WSJ is actually doing. It is a flawed statistical method, and it would not pass a first-year college stats class if it were submitted for a grade.
Another poster likened it to a major golf tournament where an amateur can apply his handicap but a full pro cannot. Or, for baseball fans, think of it as WSJ saying the best Texas League baseball team should rank higher than a middling Major League team because the former is first in its class. No one would accept this. And yet, that is what WSJ is doing here.
Last year, when WSJ introduced the new methodology, I had a series of back-and-forth conversations with the two authors. They explained their methodology to me twice. I asked questions to make sure I understood it (I work in applied mathematics) and then challenged this objective-to-subjective scoring-and-ranking approach. I asked them to show me where my conclusions about their methodology were wrong. I was willing to learn something. I never heard back from them. In other words, I truly believe they realized that an actual statistician could easily expose the massive flaws in their methodology. For a so-called college ranking methodology to not even pass muster in an intro college stats class is rather absurd.
It is a shame. Attaching the previously honorable WSJ brand to this sloppy piece of work is a disservice to readers, and especially to parents of students trying to learn about colleges. The one thing the ranking is good for is as a lesson in how not to attempt statistical modeling and inference, and then try to cover your tracks.
After reading that, thousands of parents who had felt euphoric at seeing what they wanted to see in that piss-poor “ranking” are now on suicide watch.
Anonymous wrote:Anonymous wrote:Here is what people should know about the methodology. It is not a best-colleges ranking or even an ROI ranking. It is a subjective return-on-expected-earnings ranking. In the background, WSJ creates a score that is effectively subjective, because it compares colleges only to “peers” with somewhat similar student intakes. Whatever objective raw score exists is then made subjective by this segmentation, and all of the newly formed scores are compared against each other. I confirmed last year that this is what WSJ is actually doing. It is a flawed statistical method, and it would not pass a first-year college stats class if it were submitted for a grade.
Another poster likened it to a major golf tournament where an amateur can apply his handicap but a full pro cannot. Or, for baseball fans, think of it as WSJ saying the best Texas League baseball team should rank higher than a middling Major League team because the former is first in its class. No one would accept this. And yet, that is what WSJ is doing here.
Last year, when WSJ introduced the new methodology, I had a series of back-and-forth conversations with the two authors. They explained their methodology to me twice. I asked questions to make sure I understood it (I work in applied mathematics) and then challenged this objective-to-subjective scoring-and-ranking approach. I asked them to show me where my conclusions about their methodology were wrong. I was willing to learn something. I never heard back from them. In other words, I truly believe they realized that an actual statistician could easily expose the massive flaws in their methodology. For a so-called college ranking methodology to not even pass muster in an intro college stats class is rather absurd.
It is a shame. Attaching the previously honorable WSJ brand to this sloppy piece of work is a disservice to readers, and especially to parents of students trying to learn about colleges. The one thing the ranking is good for is as a lesson in how not to attempt statistical modeling and inference, and then try to cover your tracks.
I was searching for the best analogy to understand these rankings...and this is it.
It's not objectively saying University of Delaware kids on average have better absolute outcomes than kids from Brown. It's saying they punch above their weight given what the school is. So WSJ awards Delaware all kinds of additional points to reflect its particular circumstances compared to Brown, effectively applying a multiplier to its average salaries to make up for those deficiencies.
It's definitely an interesting take on college rankings, but they still don't make their methodology easy to understand. Perhaps they could publish the raw data, indicate the multiplier they determined, and then show the adjusted rankings.
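Taking the multiplier framing literally, the disclosure this poster is asking for might look like the toy table below. The salaries and multipliers are entirely made up, and WSJ does not actually publish an explicit multiplier; that opacity is the complaint.

```python
# Invented illustration of the disclosure suggested above: show the
# raw outcome, the peer adjustment, and the adjusted score side by
# side instead of only the final rank. Salaries and multipliers are
# made up; WSJ does not publish an explicit multiplier.

# (school, raw_median_salary, hypothetical_peer_multiplier)
rows = [
    ("Brown",    85_000, 1.00),  # strong intake: outcomes largely "expected"
    ("Delaware", 62_000, 1.45),  # weaker intake: outcomes earn extra credit
]

print(f"{'school':<10}{'raw $':>9}{'mult':>6}{'adjusted $':>12}")
for name, raw, mult in sorted(rows, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:<10}{raw:>9,}{mult:>6.2f}{raw * mult:>12,.0f}")
# Delaware's adjusted 89,900 now tops Brown's 85,000 even though its
# raw median salary is 23,000 lower. Publishing all three columns
# would at least let readers see where the inversion comes from.
```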
Anonymous wrote:I have a junior, so we haven't been through the college admissions process yet, but I find this list refreshing! It's time to look beyond the USNWR rankings, which have done nothing but cause endless stress for parents and students alike.