Some of you have been calling and e-mailing over the last few weeks, asking me about the new global higher education rankings system called U-Multirank (full disclosure: I played a very minor “advisory” role in this project, in 2009). To save everyone else a call, I thought I’d give you the skinny, via this blog.
U-Multirank is a creation of the European Union. Stung by the THE and Shanghai rankings, which showed continental European (especially French) universities lagging badly, France used its EU Presidency in 2009 to announce that the EU would create a new rankings system – one which, in the words of the French Minister of Higher Education, would be a fairer global ranking, and would prove that French universities were the best. (Yes, seriously.)
The people to whom this project was entrusted are among the smartest people in all of higher education: namely, the folks at the Centre for Higher Education (CHE) in Germany and the Center for Higher Education Policy Studies (CHEPS) in the Netherlands. The system they have created is how rankings would look if universities had designed them themselves, rather than leaving the job to newspapers and magazines. It includes indicators on teaching and learning, research, knowledge transfer, international orientation, and regional engagement, and it presents the data on each of these separately – no summing across indicators to come up with a single league table with a single “winner”. Read the project’s feasibility report, here – it’s an important piece of work which fundamentally re-imagines the notion of global rankings.
It turns out, though, that not everyone likes the idea of rankings without winners. In one of the most cynical pieces of university politics I’ve ever seen, the League of European Research Universities (LERU) announced a couple of months ago that it would not participate – ostensibly because the rankings system is “ill-conceived and badly designed”, but really because its members don’t like rankings in which they don’t come out on top.
As you can tell, I’m a fan of the U-Multirank concept, but I’m cautious about its overall prospects for success. I think its ability to add non-European institutions will be limited, because many of its indicators are Euro-centric and will require non-European institutions to incur some cost in data collection. And I have my doubts about demand: after many years in this business, I’m increasingly convinced that, given the choice, consumers prefer the simplicity of league tables to the more accurate – but conceptually taxing – multi-dimensional rankings.
Still, if your institution is on the fence about participating, I urge you to give it a try. This is a good faith effort to improve rankings; failure to support it means losing your moral right to kvetch about rankings ever again.
I’m not at all sure that “rankings without winners” is actually what draws such scorn for U-Multirank. Many, if not most, higher education analysts embrace multi-dimensional measurement.
I have only the 2010 version of the U-Multirank questionnaire, but it includes such clunkers as “Please describe the specific profile of your institution in Business with regard to teaching & learning (max. 600 characters)”, and asks for a count of “Professors in the department Business offering lectures abroad”. I’m glad to know your role was very minor, but please tell me that the “smartest people in all of higher education” were not actually involved in the questionnaire design.
Yes, we need a multi-dimensional approach to measuring quality, but with U-Multirank we’re off to the worst possible start.