As I noted yesterday, the basic fight over rankings in higher education boils down to two questions: should institutions be judged as whole entities, or on the basis of their constituent parts? And should rankings give primacy to the existing hierarchy of values in higher education (i.e. research and publication), or to something else?
Let’s start with the first question. There’s absolutely nothing stopping us from ranking individual bits of the university, as opposed to the entire institution. We’ve had law school and MBA rankings for almost two decades now, and the same approach could – with a bit of modification – be applied to the rest of the academy as well. In Europe, it’s quite a common approach – Germany, Switzerland, Italy, the Netherlands, and the UK all have, or have had, rankings that make comparisons at the departmental level.
In Canada, the main thing stopping this kind of approach is a lack of data, though with the Globe and Mail we did manage to do some of it a few years ago in the Canadian University Report and its online version. If universities ever wanted to provide the data to move in this direction, it would be easy enough to do.
To the second question: why can’t rankings challenge the hierarchy of values in academia, rather than reinforce it? Well, the fact is that they can. Universities can be ranked on things like their commitment to sustainability, or their online offerings. In the US, Washington Monthly magazine includes metrics of social mobility and service in its analysis.
But those are all somewhat traditional in the sense that they still assume there is a single “best” institution. If we get rid of the notion that individual indicator scores need to be aggregated and summed, we can have what some people call “multi-dimensional” rankings. The new European “U-Multirank” system works along exactly these lines (as did the online version of the Globe’s Canadian University Report, which we at HESA developed).
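To make the difference concrete, here is a minimal sketch in Python, using entirely made-up institutions, indicators, and weights. It contrasts the league-table approach, which collapses everything into one weighted composite score and so forces a single winner, with a multi-dimensional approach, which reports each indicator separately and leaves the trade-offs to the reader. This is an illustration of the general idea, not the actual U-Multirank methodology.

```python
# Hypothetical indicator scores (0-100) for three invented institutions.
institutions = {
    "University A": {"research": 90, "teaching": 60, "sustainability": 40},
    "University B": {"research": 55, "teaching": 85, "sustainability": 70},
    "University C": {"research": 70, "teaching": 70, "sustainability": 90},
}

# League-table approach: aggregate and sum with (arbitrary) weights,
# producing one ordered list and a single "best" institution.
weights = {"research": 0.5, "teaching": 0.3, "sustainability": 0.2}
league_table = sorted(
    institutions,
    key=lambda name: sum(weights[k] * v for k, v in institutions[name].items()),
    reverse=True,
)
print("League table:", league_table)

# Multi-dimensional approach: no aggregation, no single winner --
# each indicator gets its own ranking, and users decide what matters.
for indicator in weights:
    ranked = sorted(institutions, key=lambda n: institutions[n][indicator], reverse=True)
    print(f"Ranked by {indicator}:", ranked)
```

Note how shifting the weights reshuffles the league table entirely, while the per-indicator rankings stay put: that sensitivity to arbitrary weighting is precisely what the multi-dimensional approach avoids.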
There is, in short, lots of scope to address essentially every single criticism of rankings. The issue is whether there’s a will to change. Universities tend to like “multi-dimensional” rankings (and so do I!) because they’re less judgmental and more balanced, but because they’re somewhat less intuitive than Maclean’s-style rankings, it’s not clear that students and parents, given the choice, actually prefer them. Nor is it clear that universities outside Europe dislike existing rankings sufficiently to do the work of providing the data that would make improved rankings possible.
In short, we’re at an equilibrium. Better rankings are possible, but neither consumers nor data providers seem to be using their influence to make them so. Expect stasis, and continued kvetching.