The New WSJ/Times Higher Education Rankings

Almost the moment I hit send on my last post about rankings, the inaugural Wall Street Journal/Times Higher Education rankings of US universities hit the stands. It didn't make a huge splash, mainly because the WSJ inexplicably decided to put the results behind their paywall (which is, you know, BANANAS), but it's worth looking at because I think it points the way to the future of rankings in many countries.

So the main idea behind these rankings is to try to do something different from the US News & World Report (USNWR) rankings, which are a lot like Maclean's rankings (hardly a surprise, since the latter was explicitly modelled on the former back in 1991). In part, the WSJ/THE went down the same road as Money Magazine by looking at output data: graduate outcomes like earnings and indebtedness, except that they were able to exploit the huge new database of institutional-level data on these things made available by the Obama administration. In addition to that, they went a little bit further and created their own student survey to get evidence about student satisfaction and engagement.

Now this last thing may seem like old hat in Canada: after all, the Globe and Mail ran a ranking based on student surveys from 2003 to 2012 (we at HESA were involved from 2006 onwards and ran the survey directly for the last couple of years). It's also old hat in Europe, where a high proportion of rankings depend at least in part on student surveys. But in the US, it's an absolute novelty. Surveys usually require institutional co-operation, and organizing this among more than a thousand institutions simply isn't easy: "top" institutions would refuse to participate, just as they won't do CLA, NSSE, AHELO or any measurement system which doesn't privilege money.

So what the Times Higher team did was effectively what the Globe did in Canada thirteen years ago: find students online, independent of their institutions, and survey them there.  The downside is that the minimum number of responses per institution is quite low (50, compared with the 210 we used to use at the Globe); the very big upside is that students’ voices are being heard and we get some data about engagement.  The result was more or less what you’d expect from the Canadian data: smaller colleges and religious institutions tend to do extremely well on engagement measures (the top three for Engagement were Dordt College, Brigham Young and Texas Christian).

So, I give the THE/WSJ team high marks for effort here. Sure, there are problems with the data. The "n" is low, and the resulting numbers have big error margins. The income figures are only for those who have student loans, and include both those who graduated and those who did not. But it's still a genuine attempt to shift rankings away from inputs and towards processes and outputs.
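To give a rough sense of how big those error margins are (this is my own back-of-the-envelope illustration, not anything from the rankings methodology), here is what the 95% margin of error looks like for a simple yes/no survey question under the usual worst-case assumption, at 50 respondents versus the 210 we used at the Globe:

```python
import math

# Rough illustration only: 95% margin of error for a yes/no survey item,
# assuming a simple random sample and the worst case p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 210):
    print(f"n = {n:3d}: +/- {margin_of_error(n) * 100:.1f} percentage points")

# n =  50: +/- 13.9 percentage points
# n = 210: +/- 6.8 percentage points
```

In other words, at an n of 50 a single engagement item can easily swing by a dozen points or more just through sampling noise, which is worth keeping in mind when comparing institutions that sit close together in the table.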

The problem? It's still the same institutions coming in at the top. Stanford, MIT, Columbia, Penn, Yale…heck, you don't even hit a public institution (Michigan) until 24th position. Even when you add all this process and outcome stuff, it's still the rich schools that dominate. And the reason for this is pretty simple: rich universities can stay relatively small (giving them an advantage on engagement) and take their pick of students, who then tend to have better outcomes. Just because you're not weighting resources at 100% of the ranking doesn't mean you're not weighting items strongly correlated with resources at 100%.

Is there a way around this? Yes, two, but neither is particularly easy. The first is to use some seriously contrarian indicators. The annual Washington Monthly rankings do this, measuring things like the percentage of students receiving Pell Grants, student participation in community service, and so on. The other way is to use indicators similar to those used by THE/WSJ, but to normalize them based on inputs like income and incoming SATs. The latter is relatively easy to do in the sense that the data (mostly) already exists in public, but frankly there's no market for it. Sure, wonks might like to know which institutions perform best on some kind of value-added measure, but parents are profoundly uninterested in this. Given a choice between sending their kid to a school that efficiently gets students from the 25th percentile up to the 75th percentile and sending their kid to a school with top students and lots of resources, finances permitting they're going to take the latter every time. In other words, this is a problem, but it's a problem much bigger than these particular rankings.
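For the wonks: the normalization itself is not conceptually hard. Here is a minimal sketch of what a value-added adjustment could look like, assuming institution-level data on median graduate earnings, incoming SATs and family income (the institution names and numbers below are made up for illustration, not THE/WSJ variables): regress the outcome on the inputs, then rank institutions by how much better or worse they do than their intake would predict.

```python
import numpy as np

# Hypothetical institution-level data: median graduate earnings (outcome),
# median incoming SAT score and median family income (inputs). Illustrative only.
names    = np.array(["Alpha U", "Beta College", "Gamma State", "Delta Tech"])
earnings = np.array([62000, 48000, 45000, 70000], dtype=float)
sat      = np.array([1450, 1150, 1100, 1500], dtype=float)
income   = np.array([120000, 60000, 55000, 140000], dtype=float)

# Ordinary least squares: earnings ~ intercept + SAT + family income.
X = np.column_stack([np.ones_like(sat), sat, income])
coef, *_ = np.linalg.lstsq(X, earnings, rcond=None)

# "Value added" = actual earnings minus what the inputs alone would predict.
residuals = earnings - X @ coef

# Rank institutions by residual, best performers first.
for name, r in sorted(zip(names, residuals), key=lambda t: -t[1]):
    print(f"{name:12s} {r:+8.0f}")
```

Swap in real Scorecard-style variables and a few hundred institutions and you would have a crude value-added ranking; the hard part, as noted above, is that almost nobody would buy it.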

My biggest quibble with these rankings? WSJ inexplicably put them behind a paywall, which did much to kill the buzz. After a lag of three weeks, THE made them public too, but it was too little, too late. A missed opportunity. Still, they point the way to the future, because a growing number of national-level rankings are starting to pay attention to outcomes (American rankings, remarkably, are not the pioneers here: the Bulgarian National Rankings got there several years ago, and with much better data). Unfortunately, because these kinds of outcomes data are not available everywhere, and are not entirely comparable even where they are, we aren't going to see these data sources inform international rankings any time soon. Which is why, mark my words, all the interesting work in rankings over the next couple of years is going to happen in national rankings, not international ones.
