It is a universally acknowledged truth that while nearly every higher education policy maker in the world is required to discuss The Imperative of Accessibility, almost no one defines or measures it. Because God forbid access policies, especially Canadian policies, be informed by evidence.
It’s not like it’s impossible to do. In the UK, the Universities and Colleges Admissions Service (UCAS) simply analyses the postal codes of applicants and students and uses them to track changes over time. Are entry rates going up or down for poor students transferring directly from secondary school? You don’t need to guess! You just look it up! The centralization of applications through UCAS makes this somewhat easier in the UK than elsewhere, but there is literally no reason each institution in Canada couldn’t hand over its application and acceptance data for central processing and analysis. There is no need to hand over identifying information; it can all be done quite anonymously. But no, we don’t do it. In fact, we don’t even do it in Ontario, where we already have centralized application data.
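To make the idea concrete, here is a minimal sketch (in Python, with made-up column names and toy numbers; this is not UCAS’s actual methodology) of what postal-code-based access tracking involves: link applicant records to a neighbourhood-income quintile via their postal code, then compute entry rates by quintile and year.

```python
import pandas as pd

# Hypothetical applicant records: one row per applicant, with an acceptance flag.
applicants = pd.DataFrame({
    "year":        [2016, 2016, 2016, 2017, 2017, 2017],
    "postal_code": ["K1A", "M5V", "K1A", "M5V", "K1A", "M5V"],
    "accepted":    [1, 1, 1, 0, 1, 1],
})

# Hypothetical lookup from postal code (or forward sortation area) to a
# neighbourhood-income quintile, e.g. built from census profiles.
area_quintile = pd.DataFrame({
    "postal_code": ["K1A", "M5V"],
    "income_quintile": [1, 5],  # 1 = lowest-income neighbourhoods
})

# Hypothetical count of 18-year-olds living in each quintile, by year.
cohort = pd.DataFrame({
    "year": [2016, 2016, 2017, 2017],
    "income_quintile": [1, 5, 1, 5],
    "population_18": [100, 100, 100, 100],
})

# Count entrants (accepted applicants) by year and quintile.
entrants = (applicants.merge(area_quintile, on="postal_code")
                      .query("accepted == 1")
                      .groupby(["year", "income_quintile"])
                      .size()
                      .rename("entrants")
                      .reset_index())

# UCAS-style "entry rate": entrants from an area group divided by the number
# of young people living there -- the statistic you would track over time.
entry_rates = entrants.merge(cohort, on=["year", "income_quintile"])
entry_rates["entry_rate"] = entry_rates["entrants"] / entry_rates["population_18"]
print(entry_rates)
```

The only inputs an institution would need to hand over are anonymized application records carrying a postal code and an acceptance flag.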
In fact, in the UK, it’s not only possible to look at system-wide entry rates: it’s possible to look at social mobility at the institutional level. Check out this great little paper put out by our friends over at the Higher Education Policy Institute, which ranks UK institutions by the Gini coefficient of each university’s intake (Hull is best, Cambridge is worst, if you’re wondering). Depending on which country you are in, there are other options for data collection. In the US, the brilliant Raj Chetty and colleagues, as part of their Opportunity Insights project, have used tax data to rank every 4-year college in the US by the percentage of students drawn from the top 1% and the top 60% of the income distribution. I doubt the stratification of the student body by institution in Canada is as concerning, for reasons I outlined back here, but it couldn’t possibly hurt to have similar data, and we could do it fairly easily using the postal code data described above.
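For the Gini-of-intake idea, a back-of-the-envelope version (again in Python; not necessarily HEPI’s exact method, and the numbers are invented) looks like this: treat each neighbourhood-income quintile as an equal share of the youth population and measure how unevenly an institution’s entrants are spread across them.

```python
def intake_gini(entrants_by_quintile):
    """Gini coefficient of entrant counts across equal-population groups."""
    total = sum(entrants_by_quintile)
    shares = sorted(e / total for e in entrants_by_quintile)
    n = len(shares)
    area = 0.0       # area under the Lorenz curve, by trapezoids
    cum_prev = 0.0
    for s in shares:
        cum = cum_prev + s
        area += (1.0 / n) * (cum + cum_prev)
        cum_prev = cum
    return 1.0 - area

# Hypothetical intakes, lowest to highest income quintile:
flat_intake = [200, 200, 200, 200, 200]   # perfectly representative
skewed_intake = [40, 80, 140, 240, 500]   # heavily tilted to the top

print(intake_gini(flat_intake))    # 0.0
print(intake_gini(skewed_intake))  # about 0.43
```

A perfectly representative intake scores 0 and an intake drawn entirely from one quintile approaches 1, so a single number lets you compare institutions.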
(In fact – slight digression – it was done once about 15 years ago in Ontario. Turned out there was almost no difference across institutions except for two – Nipissing, which was below the median, and Queen’s, which was well above it. Queen’s was mortified and so of course that data never saw the light of day. Because heaven forbid data on access embarrass an institution.)
Imagine if we had this data. We could, in an instant, answer the question that new governments in Ontario and New Brunswick are asking themselves; namely, do targeted free tuition programs matter a damn in terms of changing patterns of access? We could find out if more low-income youth are actually attending university or college, or if they are shifting from one to the other, or changing which programs or institutions they select. But no. Same in Alberta: has the Notley government’s expenditure of tens of millions of dollars affected access at all? Or is it just making sure the same group of students from backgrounds of predominantly above-average affluence gets larger public subsidies?
We could know this. But our governments don’t seem to care enough about evidence to collect the data. Hell, most of them can’t even be bothered to publish aggregate data on student aid and student grants, because such publication might make them look bad compared to some other province. And heaven forbid data on access embarrass a provincial government.
(If anyone in the New Brunswick or Ontario governments is reading this: it’s not too late to get that application and acceptance data from your institutions, going back to before the introduction of targeted free tuition. Such a study would cost a little bit of money, but it would lay to rest any number of questions about costs and benefits.)
I would genuinely love it if some institutions or some provinces simply began publishing this data on their own. I would be so happy, so proud, if someone – anyone – in this country could show that our societal commitment to evidence outweighs our evident desire to avoid accountability for results on access. I’m not holding my breath – I’ve been waiting for this for 25 years, and have seen very little evidence so far that the desire of data monopolists in governments and institutions to cover their asses is waning in any way, shape, or form (solitary exception: the Council of Ontario Universities’ decision to publish data on part-time faculty – that was pretty cool).
But goodness, how much better our system would be if we could just get over potential embarrassment and do the right thing on data.
You write that “it couldn’t possibly hurt to have similar data.” But it could, if only on the basis of Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. It might encourage institutions to game their acceptance and completion rates for the sake of avoiding embarrassment — even more so if the data is used as the basis for some kind of target.
I don’t think such gaming is particularly likely, and I’ve been encouraged by your reports on the lack of hierarchy among Canadian institutions, but we shouldn’t forget that a lot of data exists specifically to create and maintain hierarchies. Data can indeed do harm. There’s a reason the English called their first census the Domesday Book.