Just a note that I am at the CAUBO Conference in Montreal today…if you are attending, do drop by my session today after lunch and say “hi.”
Yesterday I sketched out a possible research agenda for Canadian higher education. Today, I am going to sketch out how we can best achieve this.
What Needs to Be Done at the National Level
The most important thing we could do is replace the Youth in Transition Survey. This was a longitudinal survey that followed two cohorts of youth for 12 years from 2000 onwards. The younger of these two cohorts was also the first Canadian cohort to take the now-famous PISA exams, which meant that in addition to the usual data you get with a survey, this dataset also had a set of scores showing some sort of cognitive ability for each participant at age 15, which was pretty cool. I am not sure we'd be able to do that again, exactly, as my understanding is that Canada is starting to have difficulty achieving the necessary sample for PISA. I am not sure how much this has to do with StatsCan's ludicrously high cost base for surveys, nor am I sure that we actually need to follow young people for quite so long a period of time. But we do need an instrument that covers all youth from about age 15 to 23 so that we can learn more about transitions into and through post-secondary education. Right now, what we have is administrative data on students who actually reach post-secondary education, which is good! However, that data doesn't tell us anything about who does and does not go, or who drops out. An actual survey is required if we are going to get answers to the real questions about access and persistence, particularly the question of access by socio-economic background.
(To some extent, Statistics Canada could get at the "who gets access" question by linking its vaunted Education and Labour Market Longitudinal Platform to the Longitudinal Administrative Databank, which would allow researchers to identify individual students' parents' family income. That would help, though it would not quite get at the question of who does not attend post-secondary education.)
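The mechanics of that linkage are not exotic, by the way. Here is a minimal sketch in Python/pandas of the kind of join involved, with hypothetical file and column names (the real ELMLP and LAD extracts, which live inside Statistics Canada, look different):

```python
import pandas as pd

# Hypothetical extract files and column names, for illustration only;
# the real ELMLP and LAD layouts and linkage keys differ.
elmlp = pd.read_csv("elmlp_extract.csv")      # person_key, institution, entry_year
lad = pd.read_csv("lad_parents_extract.csv")  # person_key, parent_family_income

# Link enrolment records to parents' family income on an anonymized key.
# Note: this only covers people who enrolled; non-attendees never appear here,
# which is exactly the limitation noted above.
linked = elmlp.merge(lad, on="person_key", how="left")

# Access gradient: share of entrants by parental income quintile
linked["income_quintile"] = pd.qcut(linked["parent_family_income"], 5,
                                    labels=[1, 2, 3, 4, 5])
print(linked["income_quintile"].value_counts(normalize=True).sort_index())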
Nationally, we could do a much better job of following graduates for a longer period of time to look at their outcomes. I am pretty sure, for instance, that there is no reason we cannot link the last couple of National Graduates Surveys to tax databases like the T1FF and track labour market outcomes essentially forever. That would be great, because then we could stop focusing exclusively on 2-year outcomes and instead focus on what happens 10, 15 and 20 years out, which is much more interesting. At the same time, we should probably start asking graduates some more interesting questions about how they view their education years after the fact: basically, checking in on the degree to which post-secondary education contributed to overall life satisfaction. I wouldn't recommend doing this through graduate surveys, but I can certainly see tacking a couple of good modules onto the General Social Survey.
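To make the long-horizon point concrete: once survey records are linked to annual tax files, tabulating outcomes at any horizon is trivial. A sketch, again with a hypothetical linked file and column names:

```python
import pandas as pd

# Hypothetical linked NGS-T1FF panel: one row per graduate per tax year.
panel = pd.read_csv("ngs_t1ff_panel.csv")  # grad_id, grad_year, tax_year, earnings

# Look past the 2-year snapshot to longer horizons
panel["years_out"] = panel["tax_year"] - panel["grad_year"]
horizons = panel[panel["years_out"].isin([2, 10, 15, 20])]
print(horizons.groupby("years_out")["earnings"].median())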
Similar questions could and should also be added to provincial graduate surveys. I know some provincial officials think it's really important to use these surveys to look at employment rates, but in an era which is as close to full employment as makes no odds, who really cares whether institution X has a 96% or a 97% graduate employment rate? Use the money to ask better questions.
With respect to data on faculty, the absolute best thing Statistics Canada could do, the next time it runs a faculty time-use survey, is ask for each researcher's ORCID iD and then use a platform like OpenAlex to measure research output (it really should also be able to provide better breakdowns by institutional type: "university" is too broad to be useful).
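OpenAlex makes this sort of thing almost embarrassingly easy. As an illustration, using its public REST API (the ORCID iD below is the sample one from ORCID's own documentation; a real exercise would use survey respondents' iDs and more sophisticated, field-normalized measures):

```python
import requests

# Look up an author in OpenAlex by ORCID iD via the public REST API
orcid = "https://orcid.org/0000-0002-1825-0097"  # ORCID's documentation example
resp = requests.get(f"https://api.openalex.org/authors/{orcid}", timeout=30)
resp.raise_for_status()
author = resp.json()

# Crude output measures for this researcher
print(author["display_name"], author["works_count"], author["cited_by_count"])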
And, finally, as I said back on Monday, Statistics Canada needs to find a way to link provincial administrative data on student loans to data on graduate outcomes, to help us understand the effects of student debt. This isn't rocket science; it's completely doable. It's just that no one has bothered.
What Needs to Be Done at the Provincial Level
The issue of data on program quality really needs to be addressed by provinces, which are, despite appearances, actually responsible for regulating it. My view is that the real problem is that we approach quality as a summative measure of institutional accountability (and thus try to measure programs at the level of the institution, which is difficult because of small numbers) rather than as a formative exercise that can help all institutions learn how to produce better graduates. Just imagine: a province-wide survey of employers of engineering graduates, asking not "are you satisfied with the performance of recent graduate X from institution Y?" but rather questions like "are the engineers you hire more or less able to cope with the tasks you assign them than they were five years ago?", "what skills do you find are most lacking in new graduates?" or "what skills do you think you will be hiring for in five years' time that you don't hire for now?" Different institutions will obviously put the information to use in different ways, but at a much lower cost than the summative approach, an enormous amount of valuable information about employers' needs and pain points can be put at the fingertips of program directors and curriculum specialists. Every province should do this.
What Needs to Be Done Collectively by Institutions
Pretty much everything else on yesterday's list (data on the student experience, on technology transfer, and on institutional efficiency) consists of things institutions can do themselves. But—and I can't stress this enough—each institution trying to do this work on its own would be massively inefficient. Institutions can and should work together to come up with joint research agendas and research processes. If five similar institutions have five different hypotheses about a particular student-service intervention and they use common research instruments to look at effects and share results, each institution quintuples its knowledge return! Or, if they are all interested in the same intervention, they can quintuple the sample size instead (more on that arithmetic below). Or, if a number of institutions have questions about how to maximize real estate income, they can form consortia to produce research (to some extent they do this already, but it needs to be more common). You get the idea.
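The arithmetic behind "quintuple the sample size" is worth spelling out: standard errors shrink with the square root of the sample, so pooling five institutions' data makes the estimate of an intervention's effect roughly 2.2 times more precise. A quick illustration with made-up numbers:

```python
import math

# Illustrative numbers: an intervention effect measured on a 0/1 outcome
n_single = 400           # participants available at one institution
n_pooled = 5 * n_single  # five institutions running the same instrument
p = 0.5                  # worst-case variance for a proportion

se_single = math.sqrt(p * (1 - p) / n_single)
se_pooled = math.sqrt(p * (1 - p) / n_pooled)

print(f"SE alone:  {se_single:.4f}")  # ~0.025
print(f"SE pooled: {se_pooled:.4f}")  # ~0.011, i.e. sqrt(5) = ~2.2x tighter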
But the one area where this needs to happen more than anywhere else is evaluating and improving teaching. I know many people are fixated on the idea that student evaluations of teaching come with a raft of sex and ethnic biases, but it is interesting to note that almost none of the studies showing these effects actually publish the survey instrument that produced the results. My view is that there is a good chance much of this can be remedied simply by asking better questions, and that even more can be achieved by statistically normalizing results by equity-deserving group. Or, another possibility: we could experiment with asking students questions about teaching at the level of the program rather than at the level of the individual professor. Bonus: the amount of information available to the people responsible for revising program curricula would increase a hundredfold.
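For the normalization idea, here is a minimal sketch of what I have in mind, assuming a hypothetical evaluation file (which grouping variables to use, and whether to normalize at all, are precisely the things to research):

```python
import pandas as pd

# Hypothetical evaluation file: one row per course-section rating
ratings = pd.read_csv("teaching_evals.csv")  # instructor_id, group, score

# Z-score each rating within its equity group, so instructors are compared
# against the score distribution for similarly-situated instructors
ratings["score_z"] = ratings.groupby("group")["score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)

print(ratings.groupby("instructor_id")["score_z"].mean().head())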
That’s it: that’s the agenda. If anyone is interested in working on any part of it, whether individually or in a consortium, drop me a line at President at higheredstrategy dot com. I’d be happy to put people in touch with one another.