Higher Education Strategy Associates

Category Archives: teaching

November 24

Class Size, Teaching Loads, and that Curious CUDO Data Redux

You may recall that last week I posted some curious data from CUDO, which suggested that the ratio of undergraduate “classes” (we’re not entirely sure what this means) to full-time professors in Ontario was an amazingly-low 2.4 to 1.  Three quick follow-ups to that piece.

1.  In the previous post, I offered space on the blog to anyone involved with CUDO who could clear up the mystery of why undergraduate teaching loads appeared to be so low.  No one has taken me up on this offer.  Poor show, but it’s not too late; I hereby repeat the offer in the hope that someone will step forward with something convincing.

2.  I had a couple of people – both in Arts faculties at different medium-sized non-U15 Ontario universities – try to explain the 2.4 number as follows: teaching loads *are* in fact 4 courses per year (2/2), they said.  It’s just that once you count sabbaticals, maternity leaves, high enrolment (profs sometimes get a reduced load if one of their classes is particularly large), leaves for administrative duty, and “buyouts” (i.e. a prof pays to have a sessional teach the class so he/she can do research), you come down to around 2.5.

This is sort of fascinating.  I mean, if this were generally true, it would essentially mean that universities are managing their staff on the assumption that, at any given time, 35-40% of their theoretically available teaching capacity will not actually be used for teaching.  Now, obviously all industries overstaff to some extent: sick leaves and maternity leaves happen everywhere.  But 40%?  That sounds extremely high.  It does not speak particularly well of an institution that gets its money primarily for the purpose of teaching.  Again, it would be useful if someone in an institution could confirm/deny, but it’s a heck of a stat.

3.  Turns out there’s actually a way to check this, because at least one university – give it up for Carleton, everyone – actually makes statistics about sessional professors public!  Like, on their website, for everyone to see.  Mirabile dictu.

Anyways, what Carleton says is that in 2014-15, 1,397 “course sections” were taught by contract or retired faculty, which translates into 756.3 “credits”.  At the same time, the university says it has 850 academic staff (actually, 878, but I’m excluding the librarians here).  Assuming they are all meant to teach 2/2, this would be 3,400 “classes” per year.  Now, it’s not entirely clear to me whether the definition of “classes” is closer to “credits” or “course sections”; I kind of think it is somewhere in between.  If it’s the former, then contract/retired faculty are teaching 22.2% of all undergraduate classes; if it’s the latter, then it’s 41.1%.  That’s a wide range, but probably about right.  And since Carleton is a pretty typical Canadian university, my guess is these numbers roughly hold throughout the system.
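For anyone who wants to check that arithmetic, here is a quick sketch in Python.  The figures are the ones cited above; the assumption that every full-time academic nominally carries a 2/2 load (four classes a year) is mine, not Carleton’s.

```python
# Rough bounds on the share of undergraduate teaching done by contract
# or retired faculty at Carleton, 2014-15, using the figures cited above.
sessional_course_sections = 1397   # course sections taught by contract/retired faculty
sessional_credits = 756.3          # the same teaching, expressed as "credits"
full_time_academic_staff = 850     # full-time academic staff, excluding librarians

# Assumption (mine): every full-timer is nominally on a 2/2 load,
# i.e. four "classes" per year.
nominal_classes = full_time_academic_staff * 4   # 3,400

low = sessional_credits / nominal_classes            # if "classes" ~ credits
high = sessional_course_sections / nominal_classes   # if "classes" ~ course sections
print(f"{low:.1%} to {high:.1%}")                    # roughly 22.2% to 41.1%
```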

However, what this doesn’t tell you is what percentage of credit hours are taught by sessionals – if the undergraduate classes taught by these academics are larger, on average, than those taught by full-timers, then the proportion will be even higher than this.  I’ve had numerous conversations with people in a position to know who indicate that in many Ontario Arts faculties, the percentage of undergraduate credit hours taught by sessional faculty is roughly 50%. Elsewhere, of course, mileage may vary, but my guess is that with the possible exception of the Atlantic, this is the case pretty much everywhere.

I could be wrong, of course.  As with my CUDO offer, anyone who wants to step forward with actual data to show how I am wrong is welcome to take over the blog for a couple of days to present the evidence.

November 17

Curious Data on Teaching Loads in Ontario

Back in 2006, university Presidents got so mad at Maclean’s that they stopped providing data to the publication.  Recognizing that this might create the impression that they had something to hide, they developed something called “Common University Data Ontario” (CUDO) to provide the public with a number of important quantitative descriptors of each university.  In theory, this data is of better quality and more reliable than the stuff they used to give Maclean’s.

One of the data elements in CUDO has to do with teaching and class size.  There’s a table for each university, which shows the distribution of class sizes in each “year” (1st, 2nd, 3rd, 4th): below 30, 31-60, 61-90, 91-150, 151-250, and over 250.  The table is done twice: once counting just “classes”, and once, with slightly different cut-points, counting “subsections” (things like laboratories and tutorials) as well.  I was picking through this data when I realised it could be used to take a crude look at teaching loads, because the same CUDO data also provides a handy count of full-time professors at each institution.  Basically, instead of looking at the distribution of classes, all you have to do is add up the actual number of undergraduate classes offered, divide it by the number of professors, and you get the number of courses per professor.  That’s not a teaching load per se, because many courses are taught by sessionals, and hell will freeze over before institutions release data on that subject.  Thus, any “courses per professor” figure derived from this exercise is going to overstate the amount of undergraduate teaching being done by full-time profs.
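To make the mechanics concrete, here is a minimal sketch of that calculation.  The class counts and professor headcount below are invented for illustration – they are not actual CUDO figures for any institution – but they show how the ratio is derived.

```python
# Illustrative only: add up undergraduate classes from a CUDO-style
# class-size distribution and divide by full-time professors.
# All numbers below are invented for the sake of the example.
classes_by_size_band = {
    "30 or fewer": 900,
    "31-60": 500,
    "61-90": 200,
    "91-150": 120,
    "151-250": 60,
    "over 250": 20,
}
full_time_professors = 750

total_classes = sum(classes_by_size_band.values())   # 1,800
ratio = total_classes / full_time_professors
print(f"{ratio:.1f} undergraduate classes per full-time professor per year")  # 2.4
# Remember: some of those classes are taught by sessionals, so even this
# figure overstates the undergraduate teaching done by full-time profs.
```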

Below is a list of Ontario universities, arranged in ascending order of the number of undergraduate courses per full-time professor.  It also shows the number of courses per professor if all subsections are included.  Of course, at most institutions most subsections are not handled by full-time professors, but some are; so, assuming the underlying numbers are real, a “true” measure of courses per professor would sit somewhere in between the two figures.  And remember, these are classes per year, not per term.

Classes Per Professor, Ontario, 2013

Yes, you’re reading that right.  According to universities’ own data, on average, professors are teaching just under two and a half classes per year, or a little over one course per semester.  At Toronto, McMaster, and Windsor, the average is less than one course per semester.  If you include subsections, the figure rises to three courses per semester, but of course as we know subsections aren’t usually led by professors.   And, let me just say this again, because we are not accounting for classes taught by sessionals, these are all overstatements of course loads.

Now these would be pretty scandalous numbers if they were measuring something real.  But I think it’s pretty clear that they are not.  Teaching loads at Nipissing are not five times higher than they are at Windsor; they are not three and a half times higher at Guelph than at Toronto.  They’re just not.  And nor is the use of sessional faculty quite so different from one institution to another as to produce these anomalies.  The only other explanation is that there is something wrong with the data.

The problem is: this is a pretty simple ratio; it’s just professors and classes.  The numbers of professors reported by each institution look about right to me, so there must be something odd about the way that most institutions – Trent, Lakehead, Guelph, and Nipissing perhaps excepted – are counting classes.  To put that another way, although it’s labelled “common data”, it probably isn’t.  Certainly, I know of at least one university where the class-size data used within the institution explicitly rejects the CUDO definitions (that is, they produce one set of figures for CUDO and another for internal use because senior management thinks the CUDO definitions are nonsense).

Basically, you have to pick an interpretation here: either teaching loads are much, much lower than we thought, or there is something seriously wrong with the CUDO data used to show class sizes.  For what it’s worth, my money is on it being more column B than column A.  But that’s scarcely better: if there is a problem with this data, what other CUDO data might be similarly problematic?  What’s the point of CUDO if the data is not in fact common?

It would be good if someone associated with the CUDO project could clear this up.  If anyone wants to try, I can give them this space for a day to offer a response.  But it had better be good, because this data is deeply, deeply weird.

November 10

An Update on England’s Teaching Excellence Framework

Last week, the UK’s Department for Business, Innovation and Skills (which is responsible for higher education) released a green paper on higher ed.  It covered a lot of ground, most of which need not detain us here; I think I have a reasonable grasp of my readers’ interests, and my guess is that the number of you who have serious views about whether the Office for Fair Access should be merged into a new Office for Students, along with the Higher Education Funding Council for England, is vanishingly small (hi, Andrew!).  But it’s worth a quick peek into this document because it puts a bit more meat on the bones of that intriguing notion of a Teaching Excellence Framework.

You may remember that back in the summer I reviewed the announcement of a “Teaching Excellence Framework” wherein institutions that did well on a series of teaching metrics would be rewarded with the ability to charge higher tuition fees.  The question at the time was: what metrics would be used?  Well, the green paper is meant to be a basis for consultation, so we shouldn’t take this as final, but for the moment the leading candidates for metrics seem to be: i) post-graduation employment; ii) retention rates; and, iii) student satisfaction indicators.

Ludicrous?  Well, maybe.  At the undergraduate level, satisfaction tends to correlate with engagement, which at some vague level correlates with retention, so there’s sort of a case here – or would be if they weren’t already measuring retention.  Retention is not a silly outcome measure either, provided you can: a) control for entering grades (otherwise retention is simply a function of selectivity), and b) figure out how to handle transfer students.  Unfortunately, it’s not clear from the document that either of these things has been thought through in any detail.

And as for using post-graduation employment?  Again, it’s not necessarily a terrible idea.  However, first: the regional distribution of graduate destinations matters a lot in a country where the capital city is so much richer than the rest of the country.  Second: the mantra that “what you study matters more than where you study” works in the UK, too – measuring success by graduate incomes only makes sense if you control for the types of degrees offered by each institution.  Third: the UK only looks at graduate incomes six months after graduation.  Presumably, a longer survey period is possible (Canada does it at three years, for instance), but the only thing on the table at the moment is the current laughably-short period.

So, there’s clearly a host of problems with the measures.  But perhaps even more troubling is what is on offer to institutions that do “well” on these measures.  The idea was that institutions would pay attention to “teaching” (or whatever the aforementioned load of indicators actually measures) if doing so allowed them to raise tuition above the current cap of £9,000.  However, according to the green paper, the most an institution will be allowed to raise its fees each year is the rate of inflation.  Yet at the moment CPI is negative, which suggests this might not be much of an incentive.  Even if inflation returns to 1% or so, one has a hard time imagining this being enough of a carrot for all institutions to play along.

In sum, this is not a genuine attempt to find ways to encourage better teaching; rather, it uses a grab-bag of indicators to try to differentiate the sector into “better” and “worse” actors, and in so doing to create more “signals of quality” to influence student decision-making.  Why does the government want to do this?  Because it desperately wants higher education to work like a “normal” market, and this kind of differentiation helps it rationalize some of its weirder ideas about how the system should be run (the green paper also devotes quite a bit of space to market entry, which is code for letting private providers become universities with less oversight, as well as market exit, which is code for letting universities fail).

Though the idea of putting carrots in place to encourage better teaching has value, an effective policy would require a lot more hard thinking about metrics than the UK government appears willing to do.  As it stands, this policy is a dud.

August 04

Summer Updates from Abroad (2): The UK Teaching Excellence Framework

The weirdest – but also possibly most globally consequential – story from this year’s higher education silly season comes from England.  It’s about something called a “Teaching Excellence Framework”.

Now, news of nationally-specific higher education accountability mechanisms doesn’t often travel.  Because, honestly, who cares?  It’s enough trouble keeping track of accountability arrangements in one’s own country.  But there are few in academia, anywhere, who have not heard about the UK’s Research Excellence Framework (or its nearly-indistinguishable predecessor, the Research Assessment Exercise).  There is scarcely a living British academic who has travelled abroad in the last two decades without regaling foreign colleagues with tales of this legendary process, usually using words like “vast”, “bureaucratic”, “walls full of filing cabinets”, etc.  So news that the country may be looking at creating a second such framework, related to teaching, is sure to strike many as some sort of Orwellian joke.

But no, this government is serious.  It’s fair to say that the government was somewhat disappointed that its de-regulation of tuition fees did not force institutions to focus more on teaching quality.  With the market having failed in that task, they seem to be retreating to good old-fashioned regulation, mixed with financial incentives.

The idea – and, at the moment, it’s still just a pretty rough idea – is rather simple: institutions should be rated on the quality of their teaching.  But there are two catches: first, how do you measure it?  And second, what are the rewards for doing well?

The first of these seems to be up in the air.  Although the government has committed to the principle of assessing teaching at the institutional level, it genuinely seems not to have thought through in the least how it intends to achieve this.  There are a lot of options here: one could simply look at use of resources and presence of qualifications: student/teacher ratios, number of profs who have actually sought teaching qualifications, etc.  One could go the survey route, and ask students how they feel about teaching; one could also go the peer assessment route, and have profs rate each other’s teaching.  Or there’s the “learning gain” model, used by the Collegiate Learning Assessment, which was part of the AHELO system (from which, by the way, the UK has now officially withdrawn).  Of course, everyone knows that most of these measurements are either untested or can be gamed, so there’s some fear that what the government really wants to do is rely on what might generously be called lowest-common-denominator statistics: namely, employment and income data.

Why might they want to do something this bell-ended, when everyone knows income is tied most closely to fields of study?  Well, the clue is in the rewards.  British universities have – as universities do – recently been clamouring for more money.  But according to this government, there is no more money to be had; in fact, at about the same time they announced the new excellence framework, they also announced a £150 million cut to the basic teaching grant, spread over two years.  So the proposed reward for good teaching is the ability to charge higher fees (so much for de-regulation…).  But as I explained a couple of weeks back, raising tuition doesn’t help much because, thanks to high debt and a generous loan forgiveness system, somewhere between 60 and 80% of any extra charges at the margin will end up on the public books circa 2048, anyway.

But… if you only increase tuition at schools where income is the highest, the likelihood is that you will get a higher proportion of graduates earning enough to pay back their loans, over time.  And hence less money will need to be forgiven.  And hence this might not actually cost so much.  Which is why there is an incentive for government to do the wrong thing here.

Still, on the off-chance the government gets this initiative at least partially right, the impact could be global.  Governments all over the world are trying to get institutions to pay more attention to teaching; expect a lot of imitators if the results of this exercise look even half-promising.  Stay tuned.

April 08

ATMs and the Future of Education

I recently came across a fascinating, counterintuitive piece of trivia in Timothy Taylor’s Conversable Economist blog.  In 1980, when ATMs were beginning to spread through the American banking system, there were half a million bank tellers in the country.  How many were there 30 years later, in 2010?  Answer: roughly 600,000.  Don’t believe me?  See the data here.

Most people to whom I’ve told this story get confused by it.  ATMs are one of the classic examples of how technology destroys “good middle class jobs”.  And so the first instinct many people have when confronted with this information is to try and defend the standard narrative – usually with something like “ah, but population growth, so they still took away jobs that could have existed”.  This is wrong, though.  When we look at manufacturing, we see absolute declines in jobs due to (among other things) automation.  With ATMs, however, all we see is a change in the rate of growth.

The key thing to grasp here is that the machines did not put the tellers out of business; rather, they modified the nature of bank telling.  To quote Taylor, “tellers evolved from being people who put checks in one drawer and handed out cash from another drawer to people who solved a variety of financial problems for customers”.

There’s an important truth here about the way skill-use evolves in the economy.  When most people think about technological change and its impacts on skills, they initially tend to presume “more machines → high tech → more tech skills needed → more STEM”.  But actually this is, at best, half the story.  Yes, new job categories are springing up in technical areas that require new forms of training.  But the more important news is that older job categories evolve into new ones with different requirements and a different skill set.  And in most cases, those new skills are – as in our bank teller example – about problem-solving.

Now, as a society, every time we see job requirements changing, our instinct is to keep kids in school longer.  But: a) pretty soon cost constraints put a ceiling on that strategy; and, b) this approach is of limited usefulness if all you’re doing is teaching the same old things for longer.

At a generic level, it’s not hard to teach in such a way that you’re giving students the skills they need to thrive in the future labour market.  Most programs, at some level, teach problem-solving (identifying a problem, synthesizing data about it, generating possible solutions, evaluating them, and settling on one), although not all of them test for these skills explicitly, or explain to students how they are likely to be applied later on.  More could be done with respect to encouraging teamwork and interpersonal skills, but these aren’t difficult to add (although having the will to add them is something different).

The more difficult problem has to do with understanding where technology is likely to replace jobs and where it is likely to modify them.  What do driverless cars mean for the delivery business?  At a guess, it means an expanded market for the delivery of personalized services during commuting time.  Improved automatic diagnostic technology or robot pharmacists?  More demand for health professionals to dispense lifestyle and general health counselling.  Increased automation in legal affairs?  Less time on research means more time for, and emphasis on, negotiation.

I could go on, but I won’t.  The point, as Tyler Cowen argues in Average is Over (a book whose implications for higher education have been criminally under-examined), is that the future in many fields belongs to people who can best blend human creativity with the power of computers.  And so the relevant question for universities is: to what extent are you monitoring technology trends and thinking about how they will change what you teach, how you teach it, and how you evaluate it?  Or, put differently: to what extent are your curricula “future-ready”?

In too many cases, the answers to these questions land somewhere between “not very much” and “not at all”.  As a sector, there is some homework to be done here.

February 11

Who Owns Courses?

After the preposterous CAUT report on the University of Manitoba’s Economics Department was released, President David Barnard offered a wonderfully robust and thought-provoking refutation of CAUT’s accusations.

One of the most interesting observations Barnard makes relates to a specific incident from the report, namely the request by a departmental council to review an existing Health Economics course after having approved a new Economic Determinants of Health course taught by the same professor.  CAUT viewed this as a violation of the professor’s academic freedom (basically: she/he can teach whatever she/he likes).

In an age when we are all intensely aware of intellectual property rights issues, we have, over time, come to focus on the professor’s role as a creator of content.  And this is absolutely right.  The way in which Economics Macro 300 or Organizational Behaviour 250 gets taught is a reflection of a professor’s lifetime of scholarship, and many hundreds of hours of hard work in creating a pedagogy and syllabus that conveys the necessary information to students.  The idea that this “belongs” to anyone other than the professor is ridiculous – which is why there have been such fierce battles over the terms of universities’ involvement with private for-profit companies, like Coursera, with respect to online education.

Barnard responds to this line of thinking by reminding us of a very important truth: Macro 300 and OB 250 exist independently of the professors who currently teach them.  When they are approved by Senate, they become the property of the university as a whole (with the department in which the course is situated taking special responsibility).  After the incumbent of a particular course retires or leaves, someone else will be asked to take over.  The course, in this sense, is eternal and communal.  It does not “belong” to the professor.

There’s an obvious tension here between the way a course gets taught (owned by the prof) and the course objectives and outcomes (owned by the university).  Usually – at least in Canada and the United States – we solve the problem by always leaning in favour of the professor.  Which is certainly the easier option.  However, this attitude, which gives total sovereignty to professors at the level of the individual course, inevitably leads to programs becoming disjointed – especially in Arts and Sciences.  Students end up missing key pieces of knowledge, or having to learn and re-learn the same material two or three times.

Universities own courses in the sense that a course is a building block towards a degree (which the university very definitely owns – its entire existence is predicated on being a monopoly provider of degrees).  As a result, course objectives, how a course fits into the overall program goals, course assessment guidelines, and course delivery mechanisms (online, blended, or in-person) are all legitimately in the hands of the university and its academic decision-making bodies.  The actual syllabus – that is, what material gets taught in pursuit of the objectives – and the pedagogical methods used are what belong to the professor.

The problem here is that, in Arts and Science at least (less so elsewhere), our smorgasbord thinking about curriculum makes us prone to assuming that courses stand alone, and do not contribute to a larger programmatic structure.  Hence the widespread fallacy that professors “own” courses, when the reality is that courses are a shared enterprise.

January 23

Classroom Economics (The End)

So we spent Monday looking at the economic basics of classroom and teaching loads, and Tuesday looking at how difficult it is to improve the situation by increases in tuition or government grants.  Wednesday we saw that reducing average academic compensation (presumably via increasing the proportion of credits taught by adjuncts) can be quite effective in reducing teaching loads, while on Thursday we saw how trying to achieve a similar effect through attacking costs other than academic compensation would require enormously painful – and probably unrealistic – cuts.

What can we conclude from all this?

There is no silver bullet here.  You can’t solve everything on the revenue side because governments: i) aren’t going to fork over the stonking huge amounts of money required to change things; ii) aren’t going to permit large tuition increases; and, iii) at some point are going to put limits on the extent to which universities can escape domestic fiscal problems by becoming finishing schools for the Asian middle class.  At the same time, you can’t solve everything by decreasing average academic wages because: i) tenure; ii) unions; and, iii) casualization can’t go on indefinitely.  Finally, you can’t solve everything by cutting “fat” on the non-academic side because the size of the bloodletting would simply be too big.

So, realistically, the solution to keeping teaching loads (and hence class sizes) manageable is to work at the margins on all three, at once.  The income one is probably the easiest: even if government does not have more money, it could (as I argued back here) allow tuition to rise without students being unduly affected if it simply reformed student aid to make it more efficient and transparent.

On non-academic costs, vigilance is key.  Costs need to be kept in check.  There is a need to continually become more efficient – which probably means looking more seriously at outsourcing certain functions. Bits of IT come to mind, as do bookshops.

On academic salaries, there’s no big secret about what needs to be done.  Every time wages increase, universities either have to get more income, or increase the number of sessionals, or raise teaching loads.  That’s simple arithmetic.  To the extent an institution can keep enrolments up and get a little bit more money per student, on average, the situation can stay relatively stable indefinitely (though it isn’t going to get any better).

Where this gets tricky is where student numbers – and hence income – start to fall.  We didn’t explore that this week because our equation – X = aϒ/(b+c) – assumes that there is budget balance.  But when enrolment drops, expenditure has to drop in the medium term because the lack of students means you can’t release the pressure by increasing teaching loads.

So when you see the number of applicants to an institution drop by, say, 20% (as first-choice applications have now done at Windsor) over two years, you start to worry.  Without the option to increase loads, expenditures have to fall, and as we’ve seen, the least disruptive way to do that is to increase sessionals.  But since tenure exists and you can’t force out a professor and replace them with a sessional, that’s a marginal solution at best.  Academic compensation will have to fall: either through wage freezes, pension changes, or a reduction in the number of academic positions.  Either that or the institution will close.

There’s no sinister conspiracy here, no evil administrative plots.  It’s just math.  More people should pay attention to it.

January 21

Classroom Economics (Part 3)

(If you’re just tuning in today, you may want to catch up on Part 1 and Part 2)

Back to our equation: X = aϒ/(b+c), where “X” is the total number of credit hours a professor must teach each year (a credit hour here meaning one student sitting in one course for one term), “ϒ” is average compensation per professor, “a” is the overhead required to support each professor, “b” is the government grant per student credit hour, and “c” is the tuition revenue per credit hour.

I noted in Part 1 of this series that most profs don’t actually teach the 235 credit hours our formula implied. Partly that’s because teaching loads aren’t distributed equally.  Imagine a department of ten people, which would need to teach 2350 credit hours in order to cover its costs.  If just two people teach the big intro courses and take on 500 credit hours apiece, the other 8 will be teaching a much more manageable 169 credit hours (5 classes of under 35 students for those teaching 3/2).

Now, while I’m talking about class size, you’ll notice that this concept isn’t actually a factor in our equation – only the total number of credit hours required to be taught.  You can divide ‘em up how you want.  Want to teach 5 courses a year?  Great.  Average class size will be 47.  Want to teach four courses?  No sweat, just take 59 students per class instead.  It’s up to you.
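Here is a quick sketch of that arithmetic, using the department example and the 235-credit-hour figure from the paragraphs above.

```python
# A ten-person department that must cover 2,350 credit hours a year.
dept_credit_hours = 2350
dept_size = 10

# Two people take the big intro courses at 500 credit hours apiece...
intro_profs, intro_load = 2, 500
everyone_else = (dept_credit_hours - intro_profs * intro_load) / (dept_size - intro_profs)
print(f"everyone else: {everyone_else:.0f} credit hours")   # ~169

# ...and class size is just credit hours divided by courses taught.
per_prof_load = 235   # the per-professor figure implied by the formula
for courses in (5, 4):
    print(f"{courses} courses a year -> {per_prof_load / courses:.0f} students per class")
# 5 courses -> 47 students per class; 4 courses -> 59 students per class
```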

When you hear professors complain about increased class sizes, this is partly what’s going on.  As universities have reduced professors’ teaching loads (to support research, natch) without reducing the number of students, the average number of students per class has risen.  That has nothing to do with underfunding or perfidious administrators; it’s just straight arithmetic.

But there is a way to get around this.  Let’s say a university lowers its normal teaching load from 3/2 to 2/2, as many Canadian institutions have done in the last two decades.  As I note above, there is no necessary financial cost to this: just offer fewer, larger courses.  Problem is, no university that has gone down this path has actually reduced its course offerings by the necessary 20% to make this work.  Somehow, they’re still offering those courses.

That “somehow” is sessional lecturers, or adjuncts if you prefer.  They’ll teach a course for roughly a third of what a full-time prof will.  So their net effect on our equation is to lower the average price of academic labour.  Watch what happens when we reduce teaching loads from 3/2 to 2/2, and give that increment of classes over to adjuncts.

(0.8*150,000) + (0.2*50,000) = $130,000

X = 2.27($150,000)/($600+$850) = 235

X = 2.27($130,000)/($600+$850) = 204

The alert among you will probably note that the fixed cost nature of “a” means that it would likely rise somewhat as ϒ falls, so this is probably overstating the fall in teaching loads a bit.  But still, this result is pretty awesome.  If you reduce your faculty teaching load, and hand over the difference to lower-paid sessionals, not only do you get more research, but the average teaching load also falls significantly.  Everyone wins!  Well, maybe not the sessionals, but you get what I mean.
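For anyone who wants to fiddle with the mix themselves, here is a minimal sketch of that calculation.  The $50,000 sessional figure is just the “roughly a third” of the $150,000 average used throughout this series, and the 20% share is the increment created by moving from 3/2 to 2/2.

```python
# Blended average cost of academic labour when 20% of courses shift
# from full-time professors to sessionals (figures from the post).
full_time_comp = 150_000
sessional_comp = 50_000     # roughly a third of the full-time figure
sessional_share = 0.20      # the courses freed up by going from 3/2 to 2/2

blended_comp = (1 - sessional_share) * full_time_comp + sessional_share * sessional_comp
print(f"{blended_comp:,.0f}")                # 130,000

# Plug the blended figure back into X = a*Y/(b+c), holding a, b and c constant.
a, b, c = 2.27, 600, 850
print(round(a * full_time_comp / (b + c)))   # ~235 credit hours before the shift
print(round(a * blended_comp / (b + c)))     # ~204 credit hours after it
```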

This underlines something pretty serious: our financial problems lie much more on the cost side of the equation (aϒ) than on the revenue side (b+c).  However much you think professors deserve to be paid, there’s an iron triangle of institutional income, salaries, and credit hours that cannot be escaped.  If you can’t increase tuition, and more government money isn’t forthcoming, then you either have to accept higher teaching loads or lower average salaries.  And if wage rollbacks among full-time staff aren’t in the cards, then average costs are going to be reduced through increased casualization.  Period.

Or almost, anyway. To date we’ve focused just on ϒ – but what about “a”?  Can’t we make that coefficient smaller somehow?

Good question.  More tomorrow.

January 20

Classroom Economics (Part 2)

Yesterday, I introduced the equation X = aϒ/(b+c) as a way of setting overall teaching loads. Let’s now use this to understand how funding parameters drive overall teaching loads.

Assume the following starting parameters:

ϒ (average compensation per professor) = $150,000
a (overhead multiplier applied to compensation) = 2.27
b (government grant per student credit hour) = $600
c (tuition revenue per student credit hour) = $850

Where a credit hour = 1 student in 1 class for 1 semester.

Here’s the most obvious way it works.  Let’s say the government decides to increase funding by 10%, from $600 to $660 (which would be huge – a far larger move than is conceivable, except say in Newfoundland at the height of the oil boom).  Assuming no other changes – that is, average compensation and overhead remain constant – the 10% increase would mean:

X= 2.27($150,000)/($600+$850) = 235

X= 2.27($150,000)/($660+$850) = 225

In other words, a ten percent increase in funding and a freeze on expenditures would reduce teaching loads by about 4%.  Assuming a professor is teaching 2/2, that’s a decrease of 2.5 students per class.  Why so small?  Because in this scenario (which is pretty close to the current situation in Ontario and Nova Scotia), government funding is only about 40% of operating income.  The size of the funding increase necessary to generate a significant effect on teaching loads and class sizes is enormous.

And of course that’s assuming no changes in other costs.  What happens if we assume a more realistic scenario, one in which average salaries rise 3%, and overhead rises at the same rate?

X= 2.27($154,500)/($660+$850) = 232

In other words, as far as class size is concerned, normal (for Canada anyway) salary increases will eat up about 70% of a 10% increase in government funding.  Or, to put it another way, one would normally expect a 10% increase in government funding to reduce class sizes by a shade over 1%.

Sobering, huh?
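For anyone who wants to reproduce those numbers, here is a minimal sketch of the two scenarios, using the starting parameters above.

```python
# X = a*Y/(b+c): credit hours each professor must teach per year.
def credit_hours(a, Y, b, c):
    return a * Y / (b + c)

a, Y, b, c = 2.27, 150_000, 600, 850

scenarios = {
    "baseline": credit_hours(a, Y, b, c),                      # ~235
    "+10% government grant": credit_hours(a, Y, b * 1.10, c),  # ~225
    "+10% grant, +3% salaries and overhead": credit_hours(a, Y * 1.03, b * 1.10, c),  # ~232
}
for label, x in scenarios.items():
    print(f"{label}: {x:.0f} credit hours per professor")
```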

OK, let’s now take it from the other direction – how big an income boost would it take to reduce class sizes by 10%?  Well, assuming that salary and other costs are rising by 3%, the entire right side of the equation (b+c) would need to rise by 14.5%.  That would require an increase in government funding of 35%, or an increase in revenues from students of 25% (which could either be achieved through tuition increases, or a really big shift from domestic to international enrolments), or some mix of the two; for instance, a 10% increase in government funds and a 17% increase in student funds.

That’s more than sobering.  That’s into “I really need a drink” territory.  And what makes it worse is that even if you could pull off that kind of revenue increase, ongoing 3% increases in salary and overhead would eat up the entire increase in just three years.

Now, don’t take these exact numbers as gospel.  This example works in a couple of  low-cost programs (Arts, Business, etc.) in Ontario and Nova Scotia (which, to be fair, represent half the country’s student body), but most programs in most provinces are working off a higher denominator than this, and for them it would be less grim than I’m making out here.  Go ahead and play with the formula with data from your own institution and see what happens – it’s revealing.
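Here is one way to do that, a minimal sketch that also runs the reverse question from above: how much must revenue rise to cut teaching loads (and hence class sizes) by 10% while salaries and overhead rise 3%?  Swap in your own institution’s parameters.

```python
# X = a*Y/(b+c): credit hours each professor must teach per year.
def credit_hours(a, Y, b, c):
    return a * Y / (b + c)

# Starting parameters used throughout this series; substitute your own.
a, Y, b, c = 2.27, 150_000, 600, 850

# Target: cut teaching loads by 10% while compensation rises 3%.
target_load = 0.90 * credit_hours(a, Y, b, c)
needed_revenue = a * (Y * 1.03) / target_load     # the (b + c) that delivers the target

print(f"(b + c) must rise by {needed_revenue / (b + c) - 1:.1%}")     # ~14.4%
print(f"grant alone: +{(needed_revenue - c) / b - 1:.1%}")            # ~35% if tuition is frozen
print(f"student revenue alone: +{(needed_revenue - b) / c - 1:.1%}")  # ~25% if the grant is frozen
```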

Nevertheless, the basic problem is the same everywhere.  As long as costs are increasing, you either have to get used to some pretty heroic revenue assumptions (likely involving significant tuition increases) or you have to get used to the idea of ever-higher teaching loads.

So what are the options on cost-cutting?  Tune in tomorrow.

July 07

How to Measure Teaching Quality

One of the main struggles with measuring performance in higher education – whether of departments, faculties, or institutions – is how to measure the quality of teaching.

Teaching does not go entirely unmeasured in higher education.  Individual courses are rated by students through course evaluation surveys, which occur at the end of each semester.  The results of these evaluations do have some bearing on hiring, pay, and promotion (though how much bearing varies significantly from place to place), but these data are never aggregated to allow comparisons of quality of instruction across departments or institutions.  That’s partly because faculty unions are wary about using individual professors’ performance data as an input for anything other than pay and promotion decisions, but it also suits the interests of the research-intensive universities who do not wish to see the creation of a metric that would put them at a disadvantage vis-a-vis their less-research-intensive brethren (which is also why course evaluations differ from one institution to the next).

Some people try to get around the comparability issue by asking students about teaching generally at their institution.  In European rankings (and Canada’s old Globe and Mail rankings), many of which have a survey component, students are simply asked questions about the quality of courses they are in.  This gets around the issue of using course evaluation data, but it doesn’t address a more fundamental problem, which is that a large proportion of academic staff essentially believes the whole process is inherently flawed because students are incapable of knowing quality teaching when they see it.  There is a bit of truth here: it has been established, for instance, that teachers who grade more leniently tend to get better course satisfaction scores.  But this is hardly a lethal argument.  Just control for average class grade before reporting the score.

It’s not as though there isn’t a broad consensus on what makes for good teaching.  Is the teacher clear about goals and expectations?  Does she/he communicate ideas effectively?  Is he or she available to students when needed?  Are students challenged to learn new material and apply this knowledge effectively?  Ask students those kinds of questions and you can get valid, comparable responses.  The results are more complicated to report than a simple satisfaction score, sure – but it’s not impossible to do so.  And because of that, it’s worth doing.

And even the simple questions like “was this a good course” might be more indicative than we think.  The typical push-back is “but you can’t really judge effectiveness until years later”.  Well, OK – let’s test a proposition.  Why not just ask students about a course they took a few years ago, and compare it with the answers they gave in a course evaluation at the time?  If they’re completely different, we can indeed start ignoring satisfaction-type questions.  But we might find that a good result today is in fact a pretty good proxy for results in a few years, and therefore we would be perfectly justified in using it as a measure of teaching quality.

Students may be inexperienced, but they’re not dumb.  We should keep that in mind when dismissing the results of teaching quality surveys.
