Last month the British Council and NAFSA published an interesting pair of studies, which I had the good fortune to be involved with. Three colleagues – Janet Ilieva, Vangelis Tsiligiris and Pat Killingley – wrote the main report (which, among other things, focussed on differences within Europe) and I contributed a companion report on the Americas. The main report is interesting in a number of ways, notably its collection and collation of data on national research output (and the share thereof which involves international collaboration) and on national marketing budgets for international higher education efforts. While contributing to this report, I learned some interesting things about the creation and limitations of international policy comparison that are worth sharing.
The report was actually the fourth in the series, and so there was an established methodology for making comparisons across countries. What intrigued me about the project was not so much that it was comparing outcomes (in international education that’s not all that hard: just count people moving from point A to point B), but that it was actually trying to compare policies in a rigorous and quantitative way. We evaluated every country on 37 different questions, divided into three areas: “openness and mobility”, “quality assurance and degree recognition” and “access and sustainability”. Countries were scored on each of these areas, and the summed scores were reasonably consistent with what you’d expect: broadly speaking, countries which imported lots of students (e.g. Australia) had “good” policies and therefore scored well, while those that tended to export students (e.g. Mexico) scored poorly. There were a few surprises – Ireland did unexpectedly well despite not being thought of as a major player in international higher education – but in the main the scores were about what you’d expect.
Except. Except that Canada and the United States, which are quite big deals in the international higher education game, look mediocre at best. And understanding why they do provides some interesting insights into why international comparative policy analysis is so difficult to do well.
The problem, essentially, is that attempts like this to “score” policies inevitably have to attempt what I call “policy Esperanto”; that is, they have to assume there is some kind of universally understood policy language governing the field. From a European perspective (and this study was originally meant as an intra-European comparison) that means making a number of assumptions about who exactly drives policy (national governments) and what policy gets valued (universally inclusive policy). And what’s interesting about Canada and the US is that they simply don’t fit that mold. For us, it is institutions and provincial governments that drive policy, and the policies that get valued are ones that provide cheap and efficient solutions to governments rather than inclusive policies for citizens.
For example, when it comes to scoring policies on quality assurance, Europeans look for national quality assurance agencies. That doesn’t really apply on our side of the Atlantic; QA and other similar types of regulation are provincial or regional in nature and are not handled identically across the country. In Europe, there are national authorities which advise institutions on things like academic standards in foreign countries to aid in selection and to help guide foreign credential recognition. In North America, we let institutions make their own rules and we leave credential recognition essentially to the private sector (albeit usually the non-profit bits of it), so by comparison our systems look chaotic. More broadly, when Europeans – continental ones anyway, not so much the UK – think about international higher education policy, they think of governments as being in the driver’s seat, because institutions themselves don’t have much financial skin in the game. In North America (and perhaps the anglosphere more broadly), the monetary rewards available to institutions for increasing international enrolments make them far more independent, and thus more influential in the policy field.
It’s not so much that one approach is good and one approach is bad, even though the very idea of “scoring” countries’ policies inevitably makes it look that way. It is more that an exercise like this ends up measuring how clean and simple a country’s policies are rather than how effective they are. We take for granted our federal systems, our preference for market solutions, and the competitive nature of our institutions, but together they make our systems much harder for outsiders to understand. At times, institutions and governments can move faster because they do not need to co-ordinate their responses; the drawback is that our approach is chaotic, inconsistent and difficult to explain.
Clearly, the lack of European-style policies has not prevented Canada and the US from attracting a heck of a lot of international students. If all that matters is results, maybe the whole exercise is not that interesting. But if, over the long run, process matters, then results like these are at least worth pondering, partly because the differences are deeply structural and not amenable to policy change as we usually think of it (Canada is not going to amend section 93 of its constitution in order to look better on international surveys about higher education policy). In that sense, these comparisons are useful as “asterisks” in the analytical process (“oh right, Canada is constitutionally unable to run quality assurance the way the UK does”). But they are also a reminder that there are trade-offs to be made in policy-making, and that the North American preference for fast, decentralized policy-making comes at the cost of something more coherent and rational.
In short, don’t mistake good results for good policy, or vice-versa. It’s a lot more complicated than that.