One of the great quests in higher education over the past two decades has been to make the sector more “transparent”. Higher education is a classic example of a “low-information” economy. As in medicine, consumers have very limited information about the quality of providers, and so “poor performers” cannot easily be identified. If only there were some way to provide individuals with better information, higher education would come closer to the ideal of “perfect information” (a key part of “perfect competition”), and poor performers would come under pressure from declining enrolments.
For many people, the arrival of university league table rankings held a lot of promise. At last, data tools with simple heuristics that could help students make distinctions with respect to quality! While some people still hold this view, others have become more circumspect, and have come to realize that most rankings simply replicate the existing prestige hierarchy because they rely on metrics like income and research intensity, which tend to be correlated with institutional age and size. Still, many hold out hope for other types of tools to provide this kind of information. In Europe, the big white hope is U-Multirank; in the UK, it’s the “Key Information Set”; and in Korea, it’s the Major Indicators System. In the US, of course, you see the same phenomenon at work with the White House’s proposed college ratings system.
What unites all of these efforts is a belief that people will respond to information, if the right type of information is put in front of them in a manner they can easily understand and manipulate. The arguments have tended to centre on what kind of information is useful and available, and on the right way to display and format the data. But a study out last month from the Higher Education Funding Council for England asked a much more profound question: is it possible that none of this stuff makes any difference at all?
Now, it’s not an empirical study of the use of information tools, so we shouldn’t get *too* excited about it. Rather, it’s a literature review, but an uncommonly good one, drawing significantly on the work of Daniel Kahneman and Herbert Simon. The two key findings (and I’m quoting from the press release here, because it’s way more succinct about this than I could be) are:
1) that the decision-making process is complex, personal and nuanced, involving different types of information, messengers and influences over a long time. This challenges the common assumption that people primarily make objective choices following a systematic analysis of all the information available to them at one time, and
2) that greater amounts of information do not necessarily mean that people will be better informed or be able to make better decisions.
Now, because HEFCE and the UK government are among those who believe deeply in the “better data leads to better universities via competition” model, the study doesn’t actually say “guys, your approach rests on some pretty whopping and likely incorrect assumptions” – but the report implies it pretty strongly.
It’s very much worth a read, if for no other reason than to remind oneself that even the best-designed, most well-meaning “interventions” won’t necessarily have their intended effects.