Last week, the UK Department for Business, Innovation and Skills (which is responsible for higher education) released a green paper on higher ed. It covered a lot of ground, most of which need not detain us here; I think I have a reasonable grasp of my readers’ interests, and my guess is that the number of you who have serious views about whether the Office for Fair Access should be merged into a new Office for Students, along with the Higher Education Funding Council for England, is vanishingly small (hi, Andrew!). But it’s worth a quick peek into this document because it puts a bit more meat on the bones of that intriguing notion of a Teaching Excellence Framework.
You may remember that back in the summer I reviewed the announcement of a “Teaching Excellence Framework” wherein institutions that did well on a series of teaching metrics would be rewarded with the ability to charge higher tuition fees. The question at the time was: what metrics would be used? Well, the green paper is meant to be a basis for consultation, so we shouldn’t take this as final, but for the moment the leading candidates for metrics seem to be: i) post-graduation employment; ii) retention rates; and, iii) student satisfaction indicators.
Ludicrous? Well, maybe. At the undergraduate level, satisfaction tends to correlate with engagement, which at some vague level correlates with retention, so there’s sort of a case here – or there would be, if they weren’t already measuring retention directly. Retention is not a silly outcome measure either, provided you can: a) control for entering grades (otherwise retention is simply a function of selectivity), and b) figure out how to handle transfer students. Unfortunately, it’s not clear from the document that either of these things has been thought through in any detail.
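To see why the entering-grades control matters, here is a toy sketch in Python. Every number below is invented for illustration; nothing comes from the green paper. Two hypothetical institutions retain students equally well conditional on ability, but the more selective one posts a much better raw retention rate; regressing out entry grades shrinks the apparent gap toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical institutions: A (inst == 0) is more selective than B.
inst = rng.integers(0, 2, n)
grades = rng.normal(70 - 10 * inst, 8, n)   # A admits at ~70, B at ~60

# Retention depends on entry grades ONLY; both institutions teach equally well.
p_retain = 1 / (1 + np.exp(-(grades - 60) / 10))
retained = (rng.random(n) < p_retain).astype(float)

raw_gap = retained[inst == 0].mean() - retained[inst == 1].mean()
print(f"raw retention gap, A minus B: {raw_gap:.3f}")   # sizeable

# Crude adjustment: regress retention on entry grades plus an institution
# dummy; the dummy's coefficient is the residual "institution effect".
X = np.column_stack([np.ones(n), grades, inst])
beta, *_ = np.linalg.lstsq(X, retained, rcond=None)
print(f"institution effect after controlling for grades: {beta[2]:.3f}")  # near zero
```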
And as for using post-graduation employment? Again, it’s not necessarily a terrible idea. However, first: the regional distribution of graduate destinations matters a lot in a country where the capital is so much richer than everywhere else. Second: the mantra that “what you study matters more than where you study” applies in the UK, too – measuring success by graduate incomes only makes sense if you control for the mix of degrees each institution offers. Third: the UK currently surveys graduate incomes just six months after graduation. Presumably a longer window is possible (Canada, for instance, surveys graduates at three years), but the only thing on the table at the moment is the current laughably short period.
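The first two objections are really the same statistical point, and a companion sketch (again, all numbers made up) shows how it plays out: a London-heavy institution with a lucrative subject mix enjoys a raw salary premium of several thousand pounds that evaporates once region and subject are held constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Institution 0 is London-heavy with a lucrative subject mix;
# institution 1 is regional. (All proportions are invented.)
inst = rng.integers(0, 2, n)
london = rng.random(n) < np.where(inst == 0, 0.7, 0.2)
high_pay_subject = rng.random(n) < np.where(inst == 0, 0.6, 0.3)

# Salaries are driven entirely by region and subject; there is
# no institution effect at all in this simulation.
salary = (22000 + 8000 * london + 6000 * high_pay_subject
          + rng.normal(0, 3000, n))

raw_gap = salary[inst == 0].mean() - salary[inst == 1].mean()
print(f"raw salary gap: £{raw_gap:,.0f}")            # roughly £5,800

# Control for region and subject and the premium collapses.
X = np.column_stack([np.ones(n), london, high_pay_subject, inst])
beta, *_ = np.linalg.lstsq(X, salary, rcond=None)
print(f"institution effect after controls: £{beta[3]:,.0f}")  # near £0
```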
So, there’s clearly a host of problems with the measures. But perhaps even more troubling is what is on offer to institutions that do “well” on them. The idea was that institutions would pay attention to “teaching” (or whatever the aforementioned load of indicators actually measures) if doing so allowed them to raise tuition above the current cap of £9,000. However, according to the green paper, the most an institution will be allowed to raise fees each year is the rate of inflation. Yet at the moment CPI is negative, which suggests this might not be much of an incentive. Even if inflation returns to 1% or so, that works out to an extra £90 per student per year on a £9,000 fee; one has a hard time imagining that being enough of a carrot for all institutions to play along.
In sum, this is not a genuine attempt to find ways to encourage better teaching; rather, it uses a grab-bag of indicators to try to differentiate the sector into “better” and “worse” actors, and in so doing to create more “signals of quality” to influence student decision-making. Why does the government want to do this? Because it desperately wants higher education to work like a “normal” market, and it is trying to rationalize some of its weirder ideas about how the system should be run in those terms (the green paper also devotes quite a bit of space to “market entry”, which is code for letting private providers become universities with less oversight, and to “market exit”, which is code for letting universities fail).
Though the idea of putting carrots in place to encourage better teaching has value, an effective policy would require a lot more hard thinking about metrics than the UK government appears willing to do. As it stands, this policy is a dud.
Yes. Yes. Yes.
Hi Alex; fascinating as always. Seems to me a good thing that there’s no significant carrot on the table, as one would likely provoke legitimate protests as to the validity of the measures (even as thinly sketched out as they are). Even if there were outcomes data on employment, by the time you controlled for program and region of employment, the numbers would likely be too small to be meaningful for many institutions. As well, surely many universities (especially in London) are more “destination” universities than others; graduates who return home afterwards would probably introduce a response bias, the extent of which would be impossible to measure.