Higher Education Strategy Associates

Category Archives: teaching

July 07

How to Measure Teaching Quality

One of the main struggles with measuring performance in higher education – whether of departments, faculties, or institutions – is how to measure the quality of teaching.

Teaching does not go entirely unmeasured in higher education.  Individual courses are rated by students through course evaluation surveys at the end of each semester.  The results of these evaluations do have some bearing on hiring, pay, and promotion (though how much varies significantly from place to place), but these data are never aggregated to allow comparisons of quality of instruction across departments or institutions.  That’s partly because faculty unions are wary of using individual professors’ performance data as an input for anything other than pay and promotion decisions, but it also suits the interests of research-intensive universities, which do not wish to see the creation of a metric that would put them at a disadvantage vis-à-vis their less research-intensive brethren (which is also why course evaluations differ from one institution to the next).

Some people try to get around the comparability issue by asking students about teaching generally at their institution.  In European rankings (and Canada’s old Globe and Mail rankings), many of which have a survey component, students are simply asked questions about the quality of courses they are in.  This gets around the issue of using course evaluation data, but it doesn’t address a more fundamental problem, which is that a large proportion of academic staff essentially believes the whole process is inherently flawed because students are incapable of knowing quality teaching when they see it.  There is a bit of truth here: it has been established, for instance, that teachers who grade more leniently tend to get better course satisfaction scores.  But this is hardly a lethal argument.  Just control for average class grade before reporting the score.
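To make the point concrete, the grade adjustment is not hard to do.  Here is a minimal sketch in Python with entirely made-up course data (the numbers and the simple linear model are my illustration, not anyone’s actual methodology): regress satisfaction on average class grade across courses, then report each course’s score net of the leniency effect.

```python
# Made-up data: (average class grade, mean satisfaction out of 5)
# for six hypothetical courses.  Purely illustrative numbers.
courses = [(68.0, 3.2), (72.0, 3.5), (75.0, 3.6),
           (78.0, 4.0), (82.0, 4.3), (85.0, 4.5)]

grades = [g for g, _ in courses]
scores = [s for _, s in courses]
n = len(courses)
g_bar = sum(grades) / n
s_bar = sum(scores) / n

# Ordinary least-squares slope of satisfaction on average grade.
slope = (sum((g - g_bar) * (s - s_bar) for g, s in courses)
         / sum((g - g_bar) ** 2 for g in grades))

# Adjusted score = overall mean + residual: what the course scored
# relative to what its grading leniency alone would predict.
for g, s in courses:
    predicted = s_bar + slope * (g - g_bar)
    adjusted = s_bar + (s - predicted)
    print(f"grade {g:.0f}: raw {s:.2f} -> adjusted {adjusted:.2f}")
```

Lenient graders lose their bump, tough graders get it back, and the adjusted scores still average out to the same overall mean, so they remain comparable across courses.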

It’s not as though there isn’t a broad consensus on what makes for good teaching.  Is the teacher clear about goals and expectations?  Does he or she communicate ideas effectively?  Is he or she available to students when needed?  Are students challenged to learn new material and apply this knowledge effectively?  Ask students those kinds of questions and you can get valid, comparable responses.  The results are more complicated to report than a simple satisfaction score, sure – but it’s not impossible to do so.  And because of that, it’s worth doing.

And even the simple questions like “was this a good course” might be more indicative than we think.  The typical push-back is “but you can’t really judge effectiveness until years later”.  Well, OK – let’s test a proposition.  Why not just ask students about a course they took a few years ago, and compare it with the answers they gave in a course evaluation at the time?  If they’re completely different, we can indeed start ignoring satisfaction types of questions.  But we might find that a good result today is in fact a pretty good proxy for results in a few years, and therefore we would be perfectly justified in using it as a measure of teaching quality.

Students may be inexperienced, but they’re not dumb.  We should keep that in mind when dismissing the results of teaching quality surveys.

June 09

Teaching Load Versus Workload

I often get into discussions that go like this:

Me: Over time, the number of classes each professor teaches has gone down.  Places where people used to teach 3/2 (three classes one term, two the other) now teach 2/1.  Places where 4/3 or even 4/4 were common are now 3/2.   This has been one of the main things making higher education more expensive in Canada.

Someone else (usually a prof): Yeah, but classes are so much larger now than they used to be.

Me: Do you not think that teaching fewer classes may be the cause of higher average class sizes?  Do you think that if everyone taught more classes, average class size would fall?

(nota bene: This isn’t the whole story, obviously.  Student-staff ratios have gone up to such a degree that even if profs were teaching the same number of courses, numbers would still be up a bit.  Though how much is hard to say, because of the changing use of sessional lecturers.)

Someone else: Does it matter?  Same number of students, same amount of work.

Me: Is it?  Are three classes of fifty students actually the same amount of work as five classes of thirty?  Doesn’t the reduction in class prep time more than make up for the increase in marking?

Someone else: Um, well, yeah.  Probably.  But we’re still doing lots of committee work!  And tenure requirements have become much more punishing than they used to be!  And those teaching loads don’t count graduate student supervisions.

Me: No doubt, committee work can take up a lot of time – though much of it exists simply to make the university less effective.  But that research one – that’s not distributed equally across the university, is it? I mean, we know that the pace of publication falls pretty quickly after tenure is granted (see figure 3 of this PPP article by Herb Emery).  And not all university research is of the same quality: Well over 10% of all Canadian faculty (24% in the humanities) have never had a publication cited by anyone else (HESA research, which we demonstrated back here).

Someone else:  And graduate supervision?

Me: Fair point.  But graduate supervision is all over the place.  Supervising a PhD in Science tends to be more intensive than in Arts.  And course-based Master’s students are increasingly more like undergraduates than doctoral students in the loads they bring.  Hard to measure.

Someone else: But shouldn’t all this be measured?

Me: Of course.  But notice how Canadian university Collective Bargaining Agreements avoid the question of overall workload, even though they often get really specific about teaching loads.  Universities don’t want to measure this stuff because it would expose how many profs are working way too hard, and unions don’t want to measure this stuff because it would expose how many profs aren’t.  Look how hard both sides worked to discredit the HEQCO paper on professorial productivity, which posed exactly that question.

Someone else: is this ever going to change?

Me: Governments could put pressure on institutions to actually enforce the bits of the CBAs that require faculty to actually do the hard-to-measure stuff (committee work, research).  Junior staff could make more of a fuss within the unions to start ensuring equal treatment of workloads within the bargaining unit.  Short of that, no.

Someone else: Aren’t you a bit cynical?

Me: Around here, hard not to be.

March 13

Teaching Loads, Fairness, and Productivity

It’s been a long time since I’ve been as disappointed by an article on higher education as I was by the Star’s coverage of the release of the new HEQCO paper on teaching and research productivity.  A really long time.

If you haven’t read the HEQCO paper yet, do so.  It’s great.  Using departmental websites, the authors (Linda Jonker and Martin Hicks) got a list of people teaching in Economics, Chemistry, and Philosophy at ten Ontario universities.  From course calendars, Google Scholar, and tri-council grant databases, they were able to work out each professor’s course load, and whether or not they were “research active” (i.e. whether they had either published something or received a tri-council grant in the past three years).  On this basis, they could work out the teaching loads of profs who were research-active vs. those who were not (except in Philosophy, where they reckoned they couldn’t publish the data because there simply weren’t that many profs who met their definition of being research-active).  Here’s what they found:

Annual Course Load by Research Active Status
To be clear, one course here is actually a half course.  So the finding that “non-research-active” professors teach less than one course extra means that there are, in fact, a heck of a lot of non-research-active profs who teach no extra courses, and who teach exactly the same amount as professors who are research active.

For reasons of fairness as much as productivity, this seems like a result worth discussing, no?  And yet – here’s where the disappointment comes in – that doesn’t appear to be where the main actors in this little drama want to go with the story.  Rather, they appear to want to make irrelevant asides about the study itself.

Now I say “appear” because it’s possible they have more nuanced views on the subject, and the Star just turned the story into a he-said/she-said.  I want to give them the benefit of the doubt, because the objections printed by the Star are frankly ludicrous.  They amount to the following:

1) Teaching involves more than classroom time: it’s preparation, grading, etc.  True, but so what?  The question is whether profs who don’t produce research should be asked to teach more.  The question of what “teaching” consists of is irrelevant.

2) Number of courses taught is irrelevant – what matters is the number of students taught.  This is a slightly better argument, though I think most profs would say that the number of courses is a bigger factor in workload than the number of students (4 classes of 30 students is significantly harder than 3 of 40).  But for this to be a relevant argument, you’d need to prove that the profs without a research profile were actually teaching systematically larger classes than their research-active counterparts.  There’s no evidence either way on this point, though I personally would lay money against it.

Here’s the deal: you can quibble with the HEQCO data, but it needs to be acknowledged: i) that the data could be better, but that it is institutions themselves who hold the data and are preventing this question from being examined in greater depth; and, ii) that this is one of the best studies ever conducted on this topic in Canada.  Kvetching about definitions is only acceptable from those actively working to improve the data and make it public.  Anyone who’s kvetching, and not doing that, quite frankly deserves to be richly ignored.

March 10

Could We Eliminate Sessionals if We Wanted To?

Last week, when I was writing about sessionals, I made the following statement:

“Had pay levels stayed constant in real terms over the last 15 years, and the surplus gone into hiring, the need for sessionals in Arts & Science would be practically nil”.

A number of you wrote to me, basically calling BS on my statement.  So I thought it would be worthwhile to show the math on this.

In 2001-02, there were 28,643 profs without administrative duties in Canada, collectively making $2.37 billion, excluding benefits.  In 2009-10, there were 37,266 profs making $4.29 billion, also excluding benefits.  Adjusting for inflation, that’s a 56% increase in total compensation – but, of course, much of that is taken up by having more profs.  If we also control for the increase in the number of professors, what we have left is an increase of 18.8%, or $679 million (in 2009 dollars).

How many new hires could you make with that?  Well, the average assistant prof in 2009 made $90,000.  So, simple math would suggest that 7,544 new assistant profs could have been hired for that amount.  That means that had professors’ salaries stayed even in real terms, universities could have hired 16,347 new staff in that decade, instead of the 8,803 they actually did.

(Okay, I’m oversimplifying a bit.  There are transaction costs to landing new professors.  And hiring that many young profs all at once would just be storing up financial chaos 5-15 years down the road, as they gain in seniority.  So $679 million probably wouldn’t buy you that many new profs.  But on the other hand, if you were doing some hiring, you’d spend less money on sessionals, too, so it’s probably not far off.)

Would that number of new hires have eliminated the need for sessionals?  Hard to say, since we have no data either on the number of sessionals, or the number of courses they collectively teach.  What we can say is that if 7,500 professors had been hired, the student:faculty ratio would have fallen from 25:1 to 22:1, instead of rising – as, in fact, it did – to 27:1. That’s a pretty significant change no matter how you slice it.
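The arithmetic above is easy enough to check.  Here’s a quick sketch in Python; the CPI factor is my own round-number assumption (roughly 17% inflation between 2001-02 and 2009-10), everything else comes from the figures in the text, and the results land within rounding distance of the numbers quoted.

```python
# Figures from the text (salaries exclude benefits).
profs_2001, pay_2001 = 28_643, 2.37e9   # 2001-02, nominal dollars
profs_2009, pay_2009 = 37_266, 4.29e9   # 2009-10

CPI = 1.17  # assumed 2001->2009 inflation factor (my estimate)

# Real 2001 payroll in 2009 dollars, scaled up to 2009 staff numbers:
# what the 2009 payroll would have been with no real wage growth.
counterfactual = pay_2001 * CPI * (profs_2009 / profs_2001)

surplus = pay_2009 - counterfactual   # roughly $679 million
new_hires = surplus / 90_000          # roughly 7,544 asst profs at $90k

print(f"surplus: ${surplus / 1e6:.0f}M, potential hires: {new_hires:,.0f}")

# Effect on the student:faculty ratio (27:1 actual in 2009-10).
students = 27 * profs_2009
print(f"ratio with those hires: {students / (profs_2009 + new_hires):.0f}:1")
```

Push the CPI assumption around by a point or two and the surplus moves by a few tens of millions, but the basic conclusion – seven-thousand-odd forgone hires, and a student:faculty ratio around 22:1 instead of 27:1 – is robust.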

(The question remains, though: would you want to give up sessionals, even if you could?  As I pointed out last week, in many programs sessionals perform a vital role of imparting practical, real-world experience to students.  And even where that’s not their primary function, they act as swing labour, helping institutions cope with sudden surges of students in particular fields of study.  They have their uses, you know.)

Now, I’m not suggesting that professors should have foregone all real wage increases over a decade in order to increase the size of the professoriate.  But I am suggesting that universities have made choices in terms of pay settlements that have affected their ability to hire enough staff to teach all the students they’ve taken on.  The consequence – as I noted before – is more sessionals.  But it very definitely did not need to be that way.

March 06


The plight of sessional lecturers (or, as they call them in the US, “adjuncts”) is possibly the only issue in higher education that generates even more overblown rhetoric than tuition fees.  Any time people start invoking slavery as a metaphor, you know perspective has flown the coop.

Though data on sessional numbers in Canada are non-existent, no one disputes that their numbers are rising, and that they are becoming an increasingly central part of major universities’ staffing plans.  In large Ontario universities, it’s not uncommon for certain faculties to have 40-50% of their total credit hours taught by sessionals.  Wage data is scarce, too, though last year University Affairs produced a worthwhile survey on sessionals’ working conditions.  The numbers vary from place to place, but let’s just say that relying solely on sessional wages must be pretty challenging.

A problem in generalizing about sessionals is that they come in two distinct varieties.  First are the mid/late-career professionals who already make good money from full-time employment elsewhere, and who help provide relevant, up-to-date content based on practical experience in programs like Law and Nursing.  For them, sessional teaching is a way to pick up an extra cheque, and maybe have some fun doing it.  Outside Arts & Science, this is the dominant model of sessionals, and universities are much the better for their presence.

In Arts & Science, on the other hand, sessionals are much more likely to be recent PhD graduates looking to get a foothold on the academic ladder.  Unable for the moment to land a tenure-track position, they take multiple sessional gigs to stay within the university system – but the teaching load prevents them from doing what they (and indeed the entire higher ed system) value most: research.  As a result, being a sessional can sometimes take one further from the tenure track, rather than closer to it.  The sessional “crisis”, needless to say, focuses on this latter group, rather than on the professionals.

What’s truly bizarre about the discourse on sessionals is the frankly conspiratorial view of the cause of the “crisis”.  But there’s no mystery here: universities, for the most part, get paid by governments and students according to how much teaching they do; despite this, they pay their academic staff to spend roughly half their time doing stuff other than teaching.  Unsurprisingly, this results in there being more teaching duties than available teaching time.  Hence the need for sessionals (a need that has only grown larger as research has increased in importance).

And why is their pay so low?  Partly, it’s a free market and there’s a heck of a lot of people willing to do academic work for very little pay.  But partly it’s because institutions have made a conscious choice to prioritize pay rises for existing full-time staff (gotta pay more for research excellence!) over hiring new full-time staff.  Had pay levels stayed constant in real terms over the last 15 years, and the surplus gone into hiring, the need for sessionals in Arts & Science would be practically nil.

Basically, no one “decided” to create an academic underclass of sessionals.  Rather, they are an emergent property of a system where universities mostly earn money for teaching, but spend a hell of a lot of it doing research.

November 06

Teach for Canada: Attack of the Kielburger Colonialists

I see the Globe has given some laudatory coverage to something called “Teach for Canada”.  The brain-child of a couple of Bay Street types (who have never themselves taught a class), the idea here is to shamelessly rip off Teach for America (TFA) and apply its methods to the problem of low achievement among the country’s Aboriginal youth.

This is a terrible idea.  And here’s why:

TFA recruits top university graduates right out of their undergraduate program, to do two years of teaching in some of the country’s poorest communities.  The idea is that bright, energetic, idealistic grads can succeed in teaching underprivileged youth, where regular, salaried teachers cannot.  And indeed, there’s some significant evidence that the program does work in terms of raising Math scores – such as this new study from the US Department of Education.

There is, however, no reason to think that this approach would have a similar effect if deployed in Canada among Aboriginal youth.

The reason TFA delivers some modest results is not that its brief training stint and alternative certification are as effective as teachers college; rather, it’s that the quality of teachers in US public schools is so patchy.  Teaching isn’t a valued profession in the US, and doesn’t attract top students; the teacher training itself is pretty weak by international standards (see Amanda Ripley’s The Smartest Kids in the World for a decent summary on this).  Also, schools serving the poorest students tend to get weaker teachers, because funding is local and their tax base can’t support high teacher pay.  Canada isn’t completely free of these problems, but they’re nowhere near as severe here as they are in the US.

Ah, you say, but what about on reserves?  Doesn’t the argument hold there?

Well, the pay argument certainly does.  But let’s be clear: TFA was designed for urban environments.  TFA staff get ongoing training and mentorship.  TFA staff, for the most part, still get to live in (or close to) hip urban areas.  TFA does not go to reserves in fly-in communities, in part because the number of volunteers would be pretty low, but also because the model itself simply wouldn’t work.

More importantly, perhaps: the idea that what First Nations need is a lot of well-meaning but inexperienced white kids showing up in their communities saying, “we’re here to help!” is plain ludicrous.  There’s no doubt that education for First Nations, particularly those from more remote communities, is in a desperate state, and deserving of vastly more money and policy attention than it currently receives.  But youthful enthusiasm just isn’t a substitute for money and teaching experience.

Teach for Canada is pure do-gooding Kielburger-style colonialism.  It’s an idea that deserves a quick death.

February 12

How to Compare Salaries

One of the things that keeps popping up in labour relations is the salary comparison: a union at one institution says, “we deserve what professors at the University of X get”.  It’s a reasonable tactic, but making useful and accurate comparisons at the institutional level is much harder than it looks, and one needs to be alert to the possibility of cherry-picking comparisons.

Academic salaries in Canada are, for the most part, based on three things: rank, years of service, and field of study.  The greater the proportion of staff with full professorships, the older the average faculty age, and the more professional programs a school has, the higher faculty salaries will be.  The last is especially pernicious: comparing the averages at, say, Winnipeg and Manitoba will lead to all sorts of distortion, due to the presence of Law, Medicine, Dentistry, and Engineering at the latter.  My advice: ignore anyone who tries to sell you something based on those types of comparisons.

But even within institutions of similar size and scope, there’s still plenty of potential for bad comparisons.  Take, for example, this close comparator set of institutions from the Maritimes: St. Thomas University (STU), Mount Allison University (Mt.A), St. Francis Xavier University (St. FX), Mount St. Vincent University (MSV), and Acadia University.

Figure 1 – Distribution of Faculty by Rank, Selected Small Maritime Institutions, 2009-10
As Figure 1 shows, these institutions have quite different rank structures.  St. FX has a lot more junior faculty than the others, with 40% of its staff at the assistant level; at MSV and STU, the proportion of assistant professors is half that of St. FX.  Given the salary gap between full and assistant professors, this has a non-trivial effect on overall average salaries; if St. FX had MSV’s rank structure, its average salary would rise by about 6%.
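The reweighting behind that kind of counterfactual works like this.  A sketch in Python with invented salary figures and rank mixes (the real numbers are in the charts, which I can’t reproduce here): hold one institution’s salaries-by-rank fixed and apply another institution’s rank distribution.

```python
# Hypothetical salaries by rank at "Institution A" – all numbers
# are made up for illustration, not taken from any real institution.
salaries_A = {"full": 115_000, "associate": 95_000, "assistant": 78_000}

dist_A = {"full": 0.30, "associate": 0.30, "assistant": 0.40}  # A's own mix
dist_B = {"full": 0.45, "associate": 0.35, "assistant": 0.20}  # top-heavier mix

def avg_salary(salaries, dist):
    """Weighted average salary under a given rank distribution."""
    return sum(salaries[rank] * share for rank, share in dist.items())

own = avg_salary(salaries_A, dist_A)
counterfactual = avg_salary(salaries_A, dist_B)

print(f"own mix: ${own:,.0f}; with B's rank structure: ${counterfactual:,.0f}")
print(f"change: {100 * (counterfactual / own - 1):.1f}%")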

To get closer to an apples-to-apples comparison, one needs to look at the actual average salaries by rank, as in Figure 2. 

Figure 2 – Salaries by Rank, Selected Small Maritime Institutions, 2009-10
According to this data (and yes, it’s a little old, but this is a free email, and you get what you pay for), St. FX has the lowest salaries across the board, while Mt.A has (mostly) the highest.

But even this might not be an entirely accurate comparison.  If the average years-since-promotion at one institution is higher than at another, even the comparisons within each salary band may be off a bit, because of the effects of rising through the ranks (worth 2-3% per average year of difference, at the moment).  That’s probably not enough to explain the entire gap between St. FX and the others, but it may explain some of it.

Finally, of course, all of these comparisons are suspect without comparing workloads.  But that’s another story altogether.

October 20

What is Research, Anyway?

As we’ve seen repeatedly over the past few weeks, there’s a constituency out there that wants to see greater differentiation of institutions in terms of research-intensiveness. In the vernacular, this comes across as advocating “teaching institutions” to complement “research institutions,” something which occasionally gets incorporated into government policy as it did in British Columbia vis-à-vis the new universities.

This kind of talk, of course, makes much of the professoriate go bananas.  And they fire back with good stuff, as Stephen Saideman did, saying that universities aren’t about research vs. teaching, they’re about research and teaching.

But here’s the thing: do we really think both sides mean the same thing when they use the word “research”?

When professors pull out the “my life as a scholar means nothing without research” line, they aren’t necessarily trying to say they all need large research budgets and hordes of grad students and tri-council grants or their lives will be meaningless (well, some might be saying that, but they’re a minority). What they are saying is that research as a process of searching for new knowledge or construction of new meaning – which can be done through low-budget activities like editing journals, writing reviews, etc. – is inherent in the notion of being a scholar, and that institutions where the teaching isn’t done by scholarly people aren’t worthy of being allowed to grant degrees. Fair enough.

On the flip side, when governments say “we want teaching-only institutions,” they’re not saying they wish to ban professors from doing scholarly reading or engaging with colleagues at colloquia, etc. No one’s going to tell professors to give back their SSHRC grants or to stop writing articles. What they are saying is (a) that they don’t want to stump up big bucks for research infrastructure and (b) they would prefer a system that more closely resembles the U.S. public university system where at flagship institutions, professors essentially teach two courses a semester but everywhere else, they teach four. Also fair enough – unless one is prepared to argue that every non-flagship U.S. institution isn’t a “real university” because they don’t focus enough on research.

“Research” encompasses a wide variety of activities of varying intensities and time commitments. If we’re going to talk more about the balance between teaching and research, we need to stop making absolute statements about research and start treating the subject with the nuance it deserves.

October 19

Ducking the Issue

Man, did last week’s Globe editorial on reforming higher education get the bien pensants’ knickers in a knot, or what?

Constance Adamson of OCUFA took the predictable “everything would be fine if only there was more money” line. Over at Maclean’s, Todd Pettigrew made a passionate defence of research and teaching being inextricably entwined, largely echoing a piece from the previous week by McGill’s Stephen Saideman, who argued that universities aren’t teaching vs. research but teaching and research.

Methinks some people doth protest too much.

Let’s take it as read that universities are intrinsically about both teaching and research; there’s still an enormous amount of room for discussion about their relative importance. It may be cute to say that choosing between the two is a false dichotomy but in the real world profs make trade-offs: when they increase their research activity, they tend to spend less time teaching. This shouldn’t be controversial. It’s just math.

Unfortunately, obfuscating the trade-offs between research and teaching is a stock-in-trade of academia.  My particular favourite is the old chestnut about research vs. teaching being a false dichotomy because “the best teachers are often the best researchers.”  I’m being restrained when I say that this, as an argument, is a bunch of roadapples.  As research has consistently shown, the correlation between the two is essentially zero: being a good researcher has no effect on the likelihood of being a good teacher, and vice versa.

Look, there’s lots to quibble with in the Globe editorial, not least of which is the ludicrous insouciance with which it treats the concept of quality measurement. But most of its basic points are factually correct: by and large, parents and taxpayers think the main purpose of universities is to teach undergraduates and prepare them for careers (broadly defined). Canadian academics are, in fact, the most highly paid in the world outside the Ivy League and Saudi Arabia. They are also demonstrably doing less teaching than they used to, ostensibly in order to produce more research.

Anyone who can’t understand why that combination of facts might provoke at least some questioning about value for money really needs to get out more.

One of the sources of miscommunication here is that the seemingly simple term “research” is actually a very contested term which means enormously different things to different people. More on this tomorrow.

September 28

Differentiating University Missions (Part Three)

Here’s an important question. Why don’t Canadian governments act as if outputs matter when it comes to funding universities and colleges?

There’s nowhere in Canada where the overwhelming majority of operating funding isn’t essentially determined by enrolments (OK, you get goofy exceptions like Nova Scotia, where the funding formula is based on what enrolment was in 2003, but apart from that…).  But this creates no incentive other than to try to increase market share, which is essentially a zero-sum game.  It’s also really dull.

If we want to shake things up and get institutions to pursue differentiation, we need to go in a radically different direction. And in this respect, I’m a big proponent of the methods of the X Prize Foundation. Put a carrot out there big enough for institutions to pursue and institutions will change their behavior.

Interested in emphasizing good teaching? Why not offer $50 million annually to the institution that comes top on teaching quality in the next Globe and Mail satisfaction exercise? I guarantee that dozens of institutions will snap to it in terms of emphasizing teaching.

(Yes, yes, I know it’s an imperfect measure of teaching.  But do it once and it’s an absolute certainty that institutions will come up with a better measurement method the next year, so why not, you know?)

One could do the same kind of thing in terms of all sorts of outputs. The institution with the greatest impact on local economies? $40 million every five years. The institution that does the most to improve graduate employability? $80 million every five years. The amounts don’t actually matter that much, as long as they are big enough to drive institutional behaviour.

Where quantitative data can’t quite provide a definitive answer, adjudication can be done entirely by academics themselves (though preferably ones from out-of-province or from other countries) – by all means, let’s keep the principle of peer review. If nothing else, it will make institutions pay attention to their own outputs a lot more assiduously, which would be a good in and of itself.

As we saw yesterday, academia left to itself won’t provide diversity. You can try to tie institutions down to particular missions, but that’s likely to meet with resistance. So why not put down the stick and try some carrots instead? Considering how badly we’ve done at incentivizing diversity to date, the downside seems pretty minimal.
