HESA

Higher Education Strategy Associates

October 27

It Was 20 Years Ago Today*

…that the Report of the Commission of Inquiry on Canadian University Education was released.

In 1990, in the midst of deficit crises, national unity crises, etc., AUCC members decided that the only way to focus public attention on education was to appoint an independent commissioner, Dr. Stuart Smith, to shine a spotlight on their own activities. It worked, but probably not in the way they intended.

The first few pages of the report deal in the banalities used by every university president since Jesus was born: essentially, “the system is strong and healthy but could use more money.” That taken care of, Smith then took a vicious left turn from the script and laid into universities for neglecting their teaching mission and spending too much time on scientific research.

To say university presidents felt betrayed would be an understatement. They were not amused by the rather strong implication that their research mission was interfering with their teaching mission (now where have we heard that before?), and weren’t shy about saying so.

Reading the report today, one is struck both by what has changed and what hasn’t. It’s hard not to read the recommendations around credit transfer, the lack of data on faculty teaching loads or the imbalance of incentives around teaching and research and think “plus ça change.” But on the other hand, one can also read the recommendations around access, student assistance, teaching-track faculty, research into higher education and performance indicators and think, “actually, we’ve come a really long way.”

(My favourite recommendation is the one suggesting that all institutions be required to publish the percentage of their budget devoted to helping faculty improve teaching or fund curricular innovation. Yeah it’s unworkable in practice, but it would be deliciously cruel – and probably highly motivational – to have institutions publish numbers that need to be measured in hundredths of percentage points.)

So, lots of progress, but frankly not enough. No one can read the section on teaching and learning and seriously think that the situation has improved in the last twenty years. It’s fair to say that Smith wasn’t providing a balanced picture of universities and their activities in his report. But I think it’s equally fair to say that wasn’t his brief.

Many people speak on behalf of research. Distressingly few, including student leaders, speak to the substance of education itself. The Smith Commission was by some distance the best manifesto for undergraduate education this country has ever produced. We could use another one like it soon.

*I think. It’s hard to tell about things that came out in the pre-Internet era.

October 26

Oh for Heaven’s Sake (Western Canadian Edition)

You may have seen some reporting recently – say, here, here and here – to the effect that I’ve authored a report saying that the intellectual centre of gravity in Canada is moving westward at a rapid rate. You may also have seen me quoted saying things to the effect that it’s a result of sustained funding increases over the past decade in the west, while in Ontario even the major increases seen in McGuinty’s first term were barely able to cope with increased demand, let alone reverse the effects of decades of underfunding.

I do believe all of this (and can argue this point at length, if any of you want to start me off with a beer), but I do feel that I should make it clear that no such report exists.

Here’s what happened: an Ottawa Citizen reporter asked me what the major issues of the Ontario election were in post-secondary education and I pointed to the fact that none of the three parties had promised any increase in PSE funding during the next five years. When asked to describe the possible effects of this, I pointed to the general relative decline of Ontario universities compared to those in the three Western provinces. This last bit, somehow, became the focus of the article.

“Toronto consultant says west is best” is one of those headlines that are hard to resist on the Prairies, so the Calgary Herald and others picked up the story. Then one student newspaper got the wrong end of the stick and assumed that since stories were being published around comments from a think-tank president on a subject, said comments must have originated in a report of some kind. This led to a somewhat surreal CUP podcast in which two journalists discussed a non-existent report.

I thought this was pretty harmless until this new version started getting picked up by places like Globe Campus, at which point I thought “enough is enough.”

So, to be clear: there are a number of Western Canadian universities that rock pretty hard; most eastern universities are struggling to keep up and will continue to do so as they get smacked with the effects of deficit-cutting measures; the difference between east and west is partly money, partly demographics (strong universities are a lagging indicator of economic success, as any academic from China or India could tell you) and in a couple of specific instances it’s about exceptional leadership; and for all these reasons it’s quite fair to call this a shifting of the country’s intellectual centre of gravity.

Just don’t go looking for the report because there isn’t one. Sorry.

October 25

Maslow v. Durkheim in the Canadian University Report

For those of you interested in student ratings of Canadian universities, the Globe and Mail’s Canadian University Report – for which we at HESA do the data work – is out today. I’m not going to recount all the gory details here – they’re available both in the magazine which accompanies today’s paper and online. What I’m going to do instead is outline briefly how the data can be used not just to compare institutions but to answer more profound questions about student experiences.

Most of the literature around “satisfaction” in universities traces back, via the literature on student engagement, to the literature on student retention, and on back to Durkheim (true story: if you just take Durkheim’s work, cross out the words “commit suicide” and write in “drop out of university,” you’re about 80% of the way to summarizing modern student retention literature). But the thing about most retention and engagement studies is that after you run them through the wringer they all tend to end up with an R-squared of about 0.4. That’s not nothing in social science by any means, but it suggests there’s a lot of other stuff going on that Durkheim can’t explain.

One theory bouncing around the HESA offices is that “belonging” is overrated as an explanation for engagement and satisfaction, and that self-actualization is more important. That, in effect, we need to be looking much more towards Abraham Maslow than to Durkheim for inspiration.

Examining data provided by the 33,870 students who responded to our Globe survey is a great way to test these hypotheses because of the enormous sample size, the ability to control for all sorts of institutional factors, and (of course) the very detailed information about satisfaction. The survey contains a battery of Durkheim-esque questions about belonging, and also some questions that hint at a Maslowian explanation for satisfaction, notably the one which asks students if their institution has the “right balance between work and fun.”

Using both sets of questions as independent variables vis-à-vis satisfaction and comparing the results can help us gain insights into the Maslow/Durkheim debate – and where we don’t get clear answers, we can use our monthly CanEd student panel to gather further data (wait till you see next month’s survey!).
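In practical terms, that comparison amounts to fitting two regression models against the same satisfaction measure and seeing which set of items explains more of the variance (recall that retention models typically top out around an R-squared of 0.4). Here’s a minimal sketch of the idea in Python; the column names are hypothetical stand-ins, not the actual survey items:

```python
# Sketch: compare the explanatory power of Durkheim-style "belonging"
# items against Maslow-style "self-actualization" items.
# All column names below are hypothetical stand-ins for real survey items.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("globe_survey.csv")  # hypothetical extract of responses

# Durkheim-style model: belonging items predicting overall satisfaction
durkheim = smf.ols(
    "satisfaction ~ sense_of_belonging + campus_community", data=df
).fit()

# Maslow-style model: e.g. the "right balance between work and fun" item
maslow = smf.ols(
    "satisfaction ~ work_fun_balance + personal_growth", data=df
).fit()

print(f"Durkheim R-squared: {durkheim.rsquared:.3f}")
print(f"Maslow R-squared:   {maslow.rsquared:.3f}")
```

Whichever model explains more of the variance is, in this admittedly crude sense, the better story about what drives satisfaction.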

Over the long term, we think we can build up a pretty good picture of what makes different types of students tick, which will allow us not only to answer questions such as “Why are Toronto Students so Friggin’ Miserable,” but also to answer more profound questions about the sources of student satisfaction.

Over the next few weeks, we’ll be sharing some of this data with you. Stay tuned.

October 24

The Central Canadian Jock Windfall

The other day I was flipping through public policy maven Charles Clotfelter’s new book Big Time Sports in American Universities (you can get the gist via this interview on YouTube). It reminded me to check up a bit on a subject which intrigued me a few years ago, namely the evolution of sports scholarships in Canadian universities.

Fifteen years ago, Canadian athletic scholarships were still both small and rare: even in 2001-02, CIS schools were distributing just $3.4 million in scholarships to 2,439 athletes. By 2009-10, those figures had risen substantially: now, just over 4,000 students per year receive a combined $10 million in athletic scholarships (all data from CIS’s statistics page).

But what’s really interesting is where the increases have occurred. Since 2001-02, scholarships at prestige schools like U of T, UBC and Alberta haven’t increased that much. The Atlantic schools for the most part have kept their increases below the average in the rest of the country (the exceptions being Dalhousie, up 395% to $414,000, and Acadia, up 477% to $550,000), though it is nevertheless significant that this tiny region, home to fewer than 10% of Canadian students, accounts for 32% of all scholarship spending.

No, the real change in Canadian athletics is happening in big central Canadian universities – basically, the OUA plus the Anglophone Quebec universities. Check out some of these eye-popping percentage increases: McGill up 664% to $222,000, Concordia up 886% to $224,000, Carleton up 1138% to $265,560. And those are only the ones for whom it makes a modicum of sense to express change in percentage terms, i.e., who were spending more than $20,000 to begin with. We could get into others: Queen’s, up from $18,000 to $177,000, McMaster, up from $5,000 to $218,000, and Wilfrid Laurier, up from $12,000 to $299,000.
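For anyone who wants to check the arithmetic, each reported percentage increase plus the 2009-10 total implies a 2001-02 baseline of final / (1 + X/100). A quick sketch, assuming “up X%” means an increase of X% over the baseline rather than a multiple:

```python
# Back out each school's implied 2001-02 baseline from its reported
# percentage increase and 2009-10 total: base = final / (1 + pct / 100).
schools = {
    "Dalhousie": (395, 414_000),
    "Acadia": (477, 550_000),
    "McGill": (664, 222_000),
    "Concordia": (886, 224_000),
    "Carleton": (1138, 265_560),
}

for name, (pct, final) in schools.items():
    base = final / (1 + pct / 100)
    print(f"{name}: implied 2001-02 spending of about ${base:,.0f}")
```

Every implied baseline comes out above $20,000, consistent with the cut-off mentioned above (Carleton’s, for instance, works out to roughly $21,500).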

Is this really where these universities want to be seen to be spending money on the eve of a serious downturn in public funding? Even assuming most of this is from alumni donations (which I suspect is not in fact universally the case), what’s the point? Have these universities become noticeably better at inter-university sport? Or has a scholarships arms race just created windfall benefits for a particular group of students?

Sport is about performance. It would be nice if we thought of university budgets the same way.

October 21

Why are Toronto Students so Friggin’ Miserable? (Part Two)

Today we revisit the issue of why student dissatisfaction in Canada seems to be concentrated in Toronto, aka the Centre of the Universe. We’ll try to answer the simple question – do Toronto schools fare poorly because a disproportionate number of Toronto students live in their parents’ basements?

Our data source today is the HESA-administered survey that fuels the satisfaction results in The Globe and Mail’s Canadian University Report, in which students are asked to express satisfaction on a scale ranging from one (Very Dissatisfied) to nine (Very Satisfied). In practice, students only rarely use the bottom half of the scale, so all institutions receive mean scores greater than five.

Living at home is indeed associated with lower satisfaction (Figure 1). Those who manage to escape their parents’ city entirely are the most satisfied. The difference is small but not insignificant – just over 0.4 points (out of 9) on average between the at-home and away-from-family groups. And Toronto certainly has plenty more kids living at home (Figure 2) – 57% of our Toronto sample lives at home, compared to 33% elsewhere.

Figure 1: Overall Satisfaction with Institution, by Living Arrangement


Figure 2: Living Arrangement by Location

So have we found our answer, then? Well, no. As Figure 3 shows, it’s not quite that simple.

Figure 3: Overall Satisfaction with Institution, by Living Arrangement and Location

While Toronto students who are stuck under the ever-vigilant eyes of their parents are indeed the least satisfied, there remains a large (0.6 points out of 9), significant and unexplained satisfaction gap between this group and those who live at home in other cities. Moreover, there’s still clearly a location effect: students who go away to university in Toronto are less happy than students who stay at home elsewhere in Canada.
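For anyone curious how that breakdown is built, it’s simply a pivot of mean satisfaction over living arrangement and location. A minimal sketch, again with hypothetical column and category names:

```python
# Sketch of the two-way breakdown behind Figure 3: mean satisfaction
# (1 = Very Dissatisfied ... 9 = Very Satisfied) by living arrangement
# and location. Column and category names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("globe_survey.csv")

table = df.pivot_table(
    values="satisfaction",
    index="living_arrangement",  # e.g. "at home" vs. "away from family"
    columns="location",          # e.g. "Toronto" vs. "elsewhere"
    aggfunc="mean",
)
print(table.round(2))

# The unexplained gap: at-home students elsewhere vs. at-home students in Toronto
gap = table.loc["at home", "elsewhere"] - table.loc["at home", "Toronto"]
print(f"At-home gap (elsewhere minus Toronto): {gap:.2f} points out of 9")
```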

So, living at home is clearly part of the answer, but it’s a long way from answering the question of why Toronto students are so friggin’ miserable. Next week, once we have some data from the new Globe survey (new CUR out on the 25th!), we’ll be delving into issues around institutional size and students’ perceptions of institutional mission.

October 20

What is Research, Anyway?

As we’ve seen repeatedly over the past few weeks, there’s a constituency out there that wants to see greater differentiation of institutions in terms of research-intensiveness. In the vernacular, this comes across as advocating “teaching institutions” to complement “research institutions,” something which occasionally gets incorporated into government policy as it did in British Columbia vis-à-vis the new universities.

This kind of talk, of course, makes much of the professoriate go bananas. And they fire back with good stuff, as Stephen Saideman did, arguing that universities aren’t about research vs. teaching, they’re about research and teaching.

But here’s the thing: do we really think both sides mean the same thing when they use the word “research”?

When professors pull out the “my life as a scholar means nothing without research” line, they aren’t necessarily trying to say they all need large research budgets and hordes of grad students and tri-council grants or their lives will be meaningless (well, some might be saying that, but they’re a minority). What they are saying is that research as a process of searching for new knowledge or construction of new meaning – which can be done through low-budget activities like editing journals, writing reviews, etc. – is inherent in the notion of being a scholar, and that institutions where the teaching isn’t done by scholarly people aren’t worthy of being allowed to grant degrees. Fair enough.

On the flip side, when governments say “we want teaching-only institutions,” they’re not saying they wish to ban professors from doing scholarly reading or engaging with colleagues at colloquia, etc. No one’s going to tell professors to give back their SSHRC grants or to stop writing articles. What they are saying is (a) that they don’t want to stump up big bucks for research infrastructure and (b) that they would prefer a system that more closely resembles the U.S. public university system, where professors at flagship institutions essentially teach two courses a semester but everywhere else teach four. Also fair enough – unless one is prepared to argue that every non-flagship U.S. institution isn’t a “real university” because it doesn’t focus enough on research.

“Research” encompasses a wide variety of activities of varying intensities and time commitments. If we’re going to talk more about the balance between teaching and research, we need to stop making absolute statements about research and start treating the subject with the nuance it deserves.

October 19

Ducking the Issue

Man, did last week’s Globe editorial on reforming higher education get the bien pensants’ knickers in a knot, or what?

Constance Adamson of OCUFA took the predictable “everything would be fine if only there was more money” line. Over at Maclean’s, Todd Pettigrew made a passionate defence of research and teaching being inextricably entwined, largely echoing a piece from the previous week by McGill’s Stephen Saideman, who argued that universities aren’t teaching vs. research but teaching and research.

Methinks some people doth protest too much.

Let’s take it as read that universities are intrinsically about both teaching and research; there’s still an enormous amount of room for discussion about their relative importance. It may be cute to say that choosing between the two is a false dichotomy but in the real world profs make trade-offs: when they increase their research activity, they tend to spend less time teaching. This shouldn’t be controversial. It’s just math.

Unfortunately, obfuscating the trade-offs between research and teaching is a stock-in-trade of academia. My particular favourite is the old chestnut about research vs. teaching being a false dichotomy because “the best teachers are often the best researchers.” I’m being restrained when I say that this, as an argument, is a bunch of roadapples. As research has consistently shown, the correlation between the two is essentially zero: being a good researcher has no effect on the likelihood of being a good teacher, and vice versa.

Look, there’s lots to quibble with in the Globe editorial, not least of which is the ludicrous insouciance with which it treats the concept of quality measurement. But most of its basic points are factually correct: by and large, parents and taxpayers think the main purpose of universities is to teach undergraduates and prepare them for careers (broadly defined). Canadian academics are, in fact, the most highly paid in the world outside the Ivy League and Saudi Arabia. They are also demonstrably doing less teaching than they used to, ostensibly in order to produce more research.

Anyone who can’t understand why that combination of facts might provoke at least some questioning about value for money really needs to get out more.

One of the sources of miscommunication here is that the seemingly simple word “research” is actually a heavily contested term that means enormously different things to different people. More on this tomorrow.

October 18

Well, That Was Interesting

The Report of the Expert Panel on R&D, that is. It’s an intriguing and well-written piece of work (kudos to Peter Nicholson), at least as much for what it doesn’t say as for what it does.

There are three things this report does extremely well: i) it explains the mind-boggling number of tiny programs the federal government supports, ii) it graphically shows how the Scientific Research and Experimental Development program massively overshadows all other programs combined and iii) it amusingly tells the government in no uncertain terms that the bit in the panel’s mandate about evaluating the relative effectiveness of programs was a crock, because no data exists to allow such a comparison.

(Seriously – read the first three pages of chapter 5, because they set a whole new standard in expert panels rejecting the premise of their terms of reference.)

Those pieces of fabulousness aside, the panel came up with six main recommendations and a bunch of subsidiary ones, which you can find in short form here. Some of them are completely innocuous, such as “encouraging more collaboration” and “simplifying forms”; some are only mildly innocuous, like “having a national dialogue about innovation.” Some of them are neither innocuous nor radical but simply overdue, especially the recommendations on improving venture capital funding.

Others are more radical, such as the creation of a one-stop shop known as the Industrial Research and Innovation Council. The effectiveness of such bodies has to be taken somewhat on faith; single windows have their advantages, but they depend on effective management, which not all arm’s-length organizations have. The most intriguing proposed mandate given to this body is the creation of a national “business innovation talent strategy.” It’s a fascinating idea, but one which depends on a great deal of co-operation between several ministries across two levels of government, plus educational institutions. There’s some potential for crashing and burning here – but potential for real innovation breakthroughs as well.

The headline recommendation, though, has to be the call to “transform” the National Research Council, through a process which skeptics might call “asset stripping.” Basically, all 17 of its institutes are to be either incorporated into a government agency, turned into a non-profit or integrated with a university or universities. University government relations offices should have a ball with that last one: let the lobbying begin!

But most amazing of all – there is nothing in the recommendations about any of the NSERC or tri-council programs that were included in the review. Not. One. Word.

How did that happen? My guess is we’ll never find out. But I’ll bet it’s a really good story.

October 17

The Future of Canadian R&D – Round One

The Mowat Institute showed some canny timing by releasing its paper, Canada’s Innovation Underperformance: Whose Policy Problem Is It?, on the Friday before the federal government’s Research and Development Review Panel reports. It was a real master-class in media management.

The report, authored by Tijs Creutzberg, doesn’t break a lot of new ground; in many ways it’s just a lit review, albeit a very nicely written one. Basically, it argues two things: i) that our government innovation strategies are overly biased towards tax credits and make insufficient use of direct cash support and ii) that there is too much overlap between federal and provincial policy instruments.

Though the first issue got the lion’s share of the media attention Friday, it’s actually the place where the report is thinnest. The report’s “evidence” basically consists of one graph which shows Canada as a policy outlier in its reliance on tax credits (not news if you’ve been keeping up with the OECD literature), and two paper citations on the benefits of direct subsidies over tax credits (one of which, if you bother to look it up, actually says nothing at all about the relative efficacy of direct support versus tax credits). Creutzberg may well be right about this, but on the evidence presented, it’s hard to tell.

On the second issue – that of carving out more rational policy roles for the federal and provincial governments – Creutzberg oozes good sense about the importance of place and regions in research, and then comes up with an eminently logical way of dividing up policy responsibilities between the two. The problem is that some of the recommendations come off sounding a tad too idealistic. However sensible it might be to get the federal government out of direct cluster-specific subsidies, or for provinces to abjure sector-specific tax credits, it’s really, really hard to imagine either ever happening. Forget theories of federalism – those programs win votes, and politicians don’t give up vote-winners easily.

Untouched in Creutzberg’s paper is the issue of how all those federal billions that go to university research play into our research and innovation system. That is likely going to be the centrepiece of today’s paper from the Expert Panel. There’s a serious air of anticipation about this report; despite rumours of a divided panel, not a single leak has taken place, which in Ottawa is about as rare as a Senators’ playoff run. It should be interesting.

Tune in tomorrow for more.

October 14

Teaching, Testing, Grading

In the last couple of months, some very interesting memes have started to take shape around the role of the professoriate.

Grade inflation – or grade compression as some would have it – is of course quite real. Theories for it vary; there’s the “leave-us-alone-so-we-can-do-research theory,” and also the “professors-are-spineless-in-the-face-of-demanding-students theory.” Regardless of the cause, the agent is clear: professors. They simply haven’t held a consistent standard over time, and that’s a problem.

About two months ago, the Chronicle put together a very interesting article on Western Governors University and how they’ve managed to avoid grade inflation. Simply put: they don’t let teachers grade. Rather, they leave that job to cohorts of assessors, all of whom possess at least a Master’s degree in the subject they are grading, and who are quite separate from the instructors.

This kind of makes sense: teachers are subject matter experts, but they aren’t expert assessors, so why not bring in people who are? Unlike professors, who have to put up with course evaluations, independent assessors have no incentive to skew grades.

One could take this further. Not only are professors not experts at grading, they aren’t necessarily experts at devising tests, either. Solution? Step forward Arnold Kling of George Mason University, who recommends improving testing by having outside professionals take a professor’s lecture notes and course readings and fashion a test on the basis of them.

Are there good reasons to try these ideas? On grading, the gains might be in quality rather than cost. Informally, TAs do a lot of the grading in large classrooms anyway, so it’s not as if we aren’t already quasi-outsourcing this stuff. But TAs have no more expertise than professors in terms of assessment, so professionalizing the whole thing might be beneficial. On testing, you might not get a cost advantage unless you had some economies of scale (i.e., you’d need multiple participating institutions to make it worthwhile), though again there may be quality advantages.

Of course, to get any cost savings at all on either of these, you’d need to get professors to explicitly trade their testing and marking responsibilities in these areas for greater class loads. Have them teach three courses a term instead of two, but do less in each of them. It’s hard to say if anyone would bite on that one; but given coming funding crunches, it might be worth somebody at least trial-ballooning these ideas during their next collective bargaining round.
