There have been some interesting recent developments with respect to measuring the contribution that universities make to social mobility. Not in Canada of course – that would require caring about education outcomes or having any capacity for measuring and reporting data on socio-economic mobility – but rather in the UK and the US. Let me take you on a quick tour of what’s going on and what is being learned.
Back in 2017, the John Bates Clark Medal-winning economist Raj Chetty and some colleagues started publishing what they called “Mobility Report Cards” for institutions, which I wrote about in more detail back here. The key metric is: what percentage of the incoming class came from the top 1% and the bottom 60%, respectively? And essentially the worst institution on that metric was Washington University in St. Louis (which would not be a surprise to anyone who has read The Broken Heart of America: St. Louis and the Violent History of the United States – good book, two thumbs up – or indeed just driven along St. Louis’ Delmar Boulevard). Basically, the insight was to demonstrate that all the really famous universities just admitted rich kids, so what was the point of them anyway? And this is true: while most of those universities claim to be “need blind”, they nearly all adopt a set of admissions criteria which, in practice, only rich kids can meet.
Then last year, the consulting firm CollegeNET developed a ranking to measure this stuff, which it called the Social Mobility Index. This ranking works off five variables: tuition, economic background of students, graduation rates of Pell Grant recipients (i.e. poorer students), early career salaries adjusted for student loan repayments (meaning if two schools have similar graduate salaries, the one with higher average debt levels will be scored lower because its students won’t equally enjoy the fruits of their higher incomes), and endowment (which if I understand correctly is meant to stand in for institutional capacity, meaning that if two institutions have similar scores, then the one with the larger endowment will be marked off as worse, because presumably it had more ability to invest in social mobility and is choosing not to do so). What they find is that all ten of the “top” institutions in the US belong either to the CUNY system or to the California State University system. The Ivies tend to bunch in positions around 1300 or so (out of 1449 institutions examined).
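To make the mechanics concrete, here is a minimal sketch of how a composite index like this might be assembled. The five variables match the ones described above, but the min-max normalization, the equal weights, and the sample data are all my assumptions – CollegeNET does not publish its exact formula.

```python
# Illustrative sketch only: the variable list follows the SMI description,
# but the normalization (min-max) and equal weights are assumptions.

def minmax(xs):
    """Rescale a list of numbers to the [0, 1] range."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def smi_sketch(schools):
    """schools: list of dicts with the five SMI-style variables.
    Returns a {name: score} dict; higher score = more mobility-friendly."""
    keys = ("tuition", "low_income_share", "pell_grad_rate",
            "net_salary", "endowment")
    cols = {k: minmax([s[k] for s in schools]) for k in keys}
    scores = {}
    for i, s in enumerate(schools):
        scores[s["name"]] = (
            (1 - cols["tuition"][i])        # cheaper is better
            + cols["low_income_share"][i]   # more low-income students is better
            + cols["pell_grad_rate"][i]     # Pell students graduating is better
            + cols["net_salary"][i]         # debt-adjusted salary, higher is better
            + (1 - cols["endowment"][i])    # a big endowment raises the bar
        ) / 5
    return scores

# Hypothetical data: a cheap access-focused school vs. a wealthy selective one.
schools = [
    {"name": "A", "tuition": 7000, "low_income_share": 0.55,
     "pell_grad_rate": 0.60, "net_salary": 48000, "endowment": 0.3e9},
    {"name": "B", "tuition": 55000, "low_income_share": 0.10,
     "pell_grad_rate": 0.85, "net_salary": 70000, "endowment": 40e9},
]
print(smi_sketch(schools))  # school A outscores school B
```

Note how the endowment term works exactly as described above: school B’s higher graduation rate and salaries are offset by its price tag, its thin low-income intake, and the penalty for sitting on a large endowment.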
Over in the UK, there was a response to all this from David Phoenix, the Vice-Chancellor of London South Bank University. The response was published by the good folks at the Higher Education Policy Institute (HEPI). It suggested a way to do something similar for England, using a similar set of indicators – access and continuation rates for students from the bottom two income quartiles, plus graduate outcomes as measured by salary data (the UK has something called Longitudinal Educational Outcomes data, which is similar to Statistics Canada’s new-ish Educational and Labour Market Longitudinal Platform, or ELMLP). Interestingly, this exercise turns up far more variety in “top” institutions than the one from the US: Imperial, King’s and Queen Mary – all London-based research-intensive institutions – crack the top ten (not wishing to embarrass his colleagues, Phoenix chooses only to show the top 40, so one cannot tell exactly how poorly some institutions do, but Oxford and Cambridge, intriguingly, both clear this bar).
Another approach appeared last month from the Postsecondary Value Commission, a group of notables convened by the Institute for Higher Education Policy and the Gates Foundation to focus on the question of “what is college worth?”. Their approach was to propose a series of tests with respect to return on investment, measured at the institutional level, the lowest being whether a student earns “at least as much as a secondary school graduate plus enough to recoup their total net price within ten years” and the highest being whether “students of colour and students from low-income backgrounds and women reach the level of wealth attained by their more privileged White, high-income or male peers”.
The full report – available here – details how to measure all of this (though as far as I can tell most of the stuff about wealth seems pie-in-the-sky for the moment), and examines various non-pecuniary benefits of higher education. It’s quite an interesting and worthy document, though as Brendan Cantwell of Michigan State University has pointed out, not all of its measures of income mobility actually make sense.
None of this would be technically impossible to do in Canada, of course. Institutions wanting to do so could look at access and continuation rates of lower-income students by linking application postal code data to the Canadian Index of Multiple Deprivation, the way Dr. Phoenix’s report does, and there is no technical reason we could not use ELMLP data to look at outcomes. But no institution will ever do so because no one wants to be held accountable, and Statscan colludes in this obscurantism by forbidding users of ELMLP to identify institutions in any analysis. The fact that it never occurs to anyone in Canada to set up something like the Postsecondary Value Commission should tell you all you need to know about the desire of our governments and institutions to hold themselves accountable on access and outcomes.
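For readers who want to see what that first step would actually involve, here is a hedged sketch of the linkage described above: map each applicant’s postal code to a deprivation quintile and compare intake shares against the population. The lookup table and field names are hypothetical; in practice the Canadian Index of Multiple Deprivation is published at the dissemination-area level, so postal codes would first need to be converted (e.g. via Statistics Canada’s Postal Code Conversion File).

```python
# Hypothetical illustration of linking applicant postal codes to the
# Canadian Index of Multiple Deprivation (CIMD). The lookup table below is
# invented for the example; real work would go through a postal-code
# conversion step to dissemination areas first.

from collections import Counter

# Invented forward-sortation-area -> CIMD quintile lookup
# (1 = least deprived, 5 = most deprived).
CIMD_QUINTILE = {"M5S": 2, "M3H": 4, "H3A": 1, "R2W": 5}

def intake_by_quintile(applicants):
    """applicants: list of (student_id, forward_sortation_area) tuples.
    Returns each deprivation quintile's share of the matched intake."""
    counts = Counter(CIMD_QUINTILE[fsa] for _, fsa in applicants
                     if fsa in CIMD_QUINTILE)
    total = sum(counts.values())
    return {q: counts.get(q, 0) / total for q in range(1, 6)}

applicants = [(1, "M5S"), (2, "M5S"), (3, "H3A"), (4, "R2W")]
print(intake_by_quintile(applicants))
```

If quintiles 4 and 5 together make up far less than 40% of an institution’s intake, access is skewed toward the well-off – that is the whole comparison Phoenix’s report runs, and nothing about it is technically out of reach here.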
That said, at a technical level, it’s quite possible that the specific types of analyses now occurring in the US and the UK might not show the same kinds of institutional variation here. Our institutional stratification is significantly lower than in either of those countries, and outcomes like graduate salaries seem to be so closely tied to local geography that it’s not clear to me that pursuing such an approach would necessarily tell us what we would most wish to know.
But wouldn’t it be great to at least have the opportunity to test such a hypothesis? Alas, not in this country.