A Research Agenda for Canadian Higher Education, Part 1

There is so much about Canadian higher education that we don’t know. Today, a list of the unknowns I think are most important, organized by broad topic.

Who Gets Into Post-Secondary Education: Also, Where and Why and For How Long?

Our data on who gets into post-secondary education have gotten somewhat better in recent years. For instance, check out this interesting piece by Tomasz Handler, Aneta Bonikowska and Marc Frenette, which looks at Bachelor’s degree access and completion by ethnicity. But you know what we still have very little idea about? Access by family income. It’s kind of astonishing to me that we have nothing on this, actually, since it is the heart of the access/affordability debate.

When we get that data, we will also want to look at the pattern of attendance: that is, where students from different ethnic and economic backgrounds attend post-secondary. If we are like pretty much every other country in the world, we will see that students from more advantaged backgrounds cluster a) in specific fields of study, like medicine, or b) in specific high-prestige schools. Access matters, but it also matters what is being accessed. We should know whether or not the children of the elite are monopolizing the best spots.

Also, given how ubiquitous Bachelor’s degrees have become, it’s probably time to start looking at access to Master’s degrees. It’s quite possible that inequity is dropping at the level of access to some post-secondary education and even bachelor’s degrees, but widening at the Master’s level. This is the kind of thing we should check.

What is the Student Experience?

We could do a lot more work than we currently do on the student experience. We have very little idea about student standards of living, and how these interact with things like part-time work, work-integrated learning, extra-curricular activities or service, or outbound international mobility. There are an awful lot of things we do in these areas either on the basis of simple correlational analysis or because “everyone knows” it’s A Good Thing. Heading into an era of restricted budgets, we might want to actually know what effects institutions and their activities have on students if we want to preserve the highest-impact practices.

The biggest area of student experience, though, is what happens in class. I will deal with the issue of teaching quality below, but all institutions should be trying a lot harder than they currently are to work out how to improve the delivery of hybrid programs. Putting more courses online is about the only thing that is going to let institutions get through the coming demographic wave: but how to do it without compromising program quality? I think there are some basic principles that institutions can and should follow, but I see precisely none of them putting these lessons into practice at scale. Which is a tiny bit of a problem.

Graduate Outcomes

We have a lot of data on graduate outcomes. But the data we collect just aren’t all that useful, or all that accurate. We have a pretty good idea of what happens to people in the first 18-24 months after they finish school through the national graduate survey, but we have a pretty lousy idea of what happens after that. We need longer follow-ups, and we need follow-ups that get at important issues like career progression (i.e. how does the relationship between job and education change over time?) and satisfaction with education. For more on this, do listen to the excellent (and thoughtful) Zakiya Ellis on the subject of measurement of post-graduation outcomes: it’s way ahead of most anything we’re doing in Canada.

Faculty

We have some decent high-level data on time-use of faculty in Canadian universities (see here), but it is at such an aggregated level that it doesn’t actually tell us very much. Finding ways to disaggregate data in ways that tell us helpful things about faculty productivity is really important. Colleges have some standardized ways of measuring workload; universities on the whole do not, and they certainly don’t have a way of linking work to research outputs.

We are also getting actively worse at trying to measure or capture teaching quality. Faculty unions have seized on some studies purporting to show gender and racial biases in student teaching evaluations (some of which no doubt exist) to claim that any student evaluation of teaching is evil and should not be used for evaluative purposes. I am unpersuaded that the problems of interpreting such data in a fair way are insurmountable, but it is telling how little effort faculty unions and universities have put into developing alternative, fairer methods of measurement, whether or not those methods include student feedback. It’s not a good look, to be honest: it basically says we can and should ignore all forms of feedback because students have a tendency to be sexist and racist. Work in this area would be welcome.

Technology Transfer + Community Benefits

It’s been a decade or so since Statistics Canada killed the Survey of Intellectual Property Commercialization in the Higher Education Sector. Which was not, in truth, a very good survey: it was focused too narrowly (IMHO) on offices of technology transfer and patents rather than on wider issues of knowledge transfer into the community. Universities could use a lot more information about what works and what does not in this area. It would also behoove governments to have more data to help them distinguish between wild and invariably inflated claims of institutional effects on GDP, and what institutions are actually achieving in the community.

Institutional Efficiency

As we head into a new era of tighter cost controls for higher education, a lot of institutions need a strategy for reducing costs. A few big, rich institutions have bought into the Nous Group’s Uniforum benchmarking process (and I do mean rich—these are often eight-figure contracts), which some at least have told me is quite helpful in starting to think about how to staff more efficiently. But most institutions can’t afford those services (and even if they could, I have my doubts Uniforum could help them—everything I said about the deficiencies of Nous’ approach to Laurentian University back here still pertains to pretty much any institution with under 15,000 students), so there’s a lot of room for them to work together to come up with some common solutions to right-sizing administrative staff.

Two other points here. First: the big way that American institutions have been reducing costs in the past decade or so is through investment in technology solutions. Canadian institutions have been slow to do that: people who understand this stuff better than I do suggest that Canadian institutions are maybe a decade behind their American counterparts in using technology in areas like student services. Some work on how to roll new technology out quickly and efficiently would be welcome to many.

And then there is the question of maximizing revenues. In general, I don’t think anyone should put too much stock in chasing revenue-side solutions (that’s what got us into trouble with international students). But for many small institutions, the need to learn about generating income is pressing; how to better monetize real estate, for instance, is a pretty big issue for most and deserves some attention.

Program Quality

Last but definitely not least: we actually don’t have a good sense of how well programs are performing. And part of the reason for this is that to the extent we measure programming, we do so at the level of individual programs at individual institutions: a level so microscopic that it’s hard to get good data. You can ask employers about how well graduates are doing but managers don’t necessarily know where all their employees came from and so can’t answer all these questions especially well. They do, however, have a pretty good sense of how their new intakes of young employees are doing compared to previous cohorts of graduates. Greater understanding of where new cohorts of graduates are meeting, exceeding, and falling short of employer standards could be an important source of information for how all schools could improve in the face of changing technology and labour market demands.

I think those are the biggest areas where research is needed right now. If you disagree, or have other ideas, please drop a line in the comments or write to me at president at higheredstrategy dot com. I’d be interested to hear from you. Tomorrow, I will talk about how to execute this research agenda, quickly and collaboratively.


3 responses to “A Research Agenda for Canadian Higher Education, Part 1”

  1. Seems too little very late in the game. Where is the coverage of this very important issue in the major media outlets? If it is hard to get good data and governance is evidence based, what has been going on? Yes, the hint is lack of coordinated and collaborative efforts. We need leaders with vision and fewer politicians both in and outside of academia. If you look across Canada at unis and colleges related to current boards of governance, we need to assess whether a decent talent pool exists and, if not, what corrective action is needed. The system will continue to flounder and unfortunately slip into crisis after crisis, so I’m pleased to read a beginning discussion on the topic. Funding at the provincial level which is incentive based related to reporting, governance, and bold collaborations might be a solution. Some of these issues are deeply embedded in the institutional cultures and will be challenging to redress. Good for you Alex in the call for action. I hope someone is listening.

  2. Eight Figures!!!??? Are you serious?
    I find this most reassuring.
    Obviously, there is no such thing as a financial crisis in those institutions.
    Makes me rethink my approach to collective bargaining 🙂

  3. “We also are getting actively worse at trying to measure or capture teaching quality. Faculty unions have seized on some studies purporting to show gender and racial biases in student teaching evaluations (some of which no doubt exists) to claim that any student evaluation of teaching is evil and should not be used for evaluative purposes.”

    I think you’re missing the strong argument against these measures: if they’re subject to gender and racial bias, then they aren’t really objective. One uses these sorts of survey questions in (say) the social sciences, to measure general trends at the population level, but not to judge individual people. It’s like replacing juries at trial with Yelp! reviews.

    And faculty are both highly qualified and deeply conscientious, as witnessed by everyone moving online and then back in person. We deserve more respect than to be so judged.
