HESA

Higher Education Strategy Associates

Category Archives: Teaching & Learning

October 11

How Sessionals Undermine the Case for Universities

Last year, I wrote a blog post about what sessionals get paid, and how it works out to roughly what assistant profs get paid for the teaching component of their jobs.  In that sense, at least, one could argue that sessionals are in fact getting equal pay for work of equal value.

I got a fair bit of hate mail for that one, mostly because people have trouble distinguishing "is" arguments from "ought" arguments.  People seemed to think that because I was pointing out that pay rates for teaching are pretty close for junior profs and sessionals, everything was therefore hunky-dory.  Not at all.  The heart of the case is that sessionals don't want to be paid just to teach; they'd like to be paid to do research and all that other scholarly stuff as well.

(Well, some of them would, anyway.  Others have day jobs and are perfectly happy teaching one course a year on the side because it's fun.  We have no idea how many fall into each category.  Remarkably, Statistics Canada is planning to spend a million dollars counting sessionals in Canadian universities, but in such a way as to shed absolutely no light on this rather important question.  But I digress: for the moment, let us assume that when we talk about sessionals, we are talking about those who aspire to full-time academia.)

A lot of advocates on behalf of sessionals seem obsessed with arguing their case on “fairness” grounds.  “It’s not fair” that they only get paid to teach while others get paid to teach and research and do whatever the hell service consists of.  To which there is a fairly curt answer, even if most people are too polite to make it: “if you didn’t get hired as a full-time prof, it’s probably because the relevant hiring committee didn’t think you were up to our standards on the whole research thing.”  So this isn’t really a winning argument.

Where universities are much more vulnerable is on the issue of mission.  The whole point of universities – the one thing that gives them their cachet – is that they are supposed to be delivering education in a research-intensive atmosphere.  This is the line of defence that is continually used whenever the issue of offering more degrees in community colleges or polytechnics arises.  “Research-intensive!  Degrees Gotta Be Research-intensive!  Did we mention the Research Intensity thing?”

But given that, why is it that in most of the country’s major universities, over 50% of undergraduate student course-hours are taught by sessionals who are specifically employed so as to not be research-active?

Ok, where the purpose of education is more practice than theory (eg law, nursing, journalism), you probably want a lot of sessionals who are working professionals.  In those programs, sessionals complement the mission.  But in Arts?  Science?  Business?  In those fields, the mere existence of sessionals undermines the case for universities’ exclusivity in undergraduate degree-granting.  And however financially-advantageous sessionals may be to the university (not an unimportant consideration in an era where public support for universities is eroding), long-term this is a far more dangerous problem.

So the real issue re: universities and sessionals is not one of fairness but of hypocrisy.  If sessionals really wanted to put political pressure on institutions, they would make common cause with colleges and polytechnics.  They would ceaselessly file Freedom of Information requests to determine what percentage of student credit hours at each institution are taught by sessionals.  They would use that information to loudly back colleges' claims for more undergraduate degree-granting powers, because really, what's the difference?  And eventually, governments would relent, because the case for the status quo is really weak.

My guess is that those activists arguing on behalf of sessionals won’t choose this course because their goal is less to smash the system of privileged insiders than it is to join it.    But they’d have a case.  And universities would do well to remember it.

May 10

Why Education in IT Fields is Different

A couple of years ago, an American academic by the name of James Bessen wrote a fascinating book called Learning by Doing: The Real Connection Between Innovation, Wages and Wealth.  (It’s brilliant.  Read it).  It’s an examination of what happened to wages and productivity over the course of the industrial revolution, particularly in the crucial cotton mill industry.  And the answer, it turns out, is that despite all the investment in capital which permitted vast jumps in labour productivity, in fact wages didn’t rise that much at all.  Like, for about fifty years.

Sound familiar?

What Bessen does in this book is to try to get to grips with what happens to skills during a technological revolution.  And the basic problem is that while the revolution is going on, while new machines are being installed, it is really difficult to invest in skills.  It’s not simply that technology changes quickly and so one has to continually retrain (thus lowering returns to any specific bit of training); it’s also that technology is implemented in very non-standard ways, so that (for instance) the looms at one mill are set up completely differently from the looms at another and workers have to learn new sets of skills every time they switch employers.  Human capital was highly firm-specific.

The upshot of all this: in fields where technologies are volatile and skills are highly non-standardized, the only way to reliably increase skill levels is through "learning by doing".  There's simply no way to learn the skills in advance.  That meant workers had less bargaining power, because they couldn't necessarily use the skills acquired at one job at another.  It also meant, not to put too fine a point on it, that formal education became much less important compared to "learning by doing".

The equivalent industry today is Information Technology.  Changes in the industry happen so quickly that it’s difficult for institutions to provide relevant training; it’s still to a large extent a “learning by doing” field.  Yet, oddly, the preoccupation among governments and universities is: “how do we make more tech graduates”?

The thing is, it's not 100% clear the industry even wants more graduates.  It just wants more skills.  If you look at how community colleges and polytechnics interact with the IT industry, it's often through the creation of single courses designed in response to very specific skill needs.  And what's interesting is that – in the local labour market at least – employers treat these single courses as more or less equivalent to a certificate of competency in a particular field.  That means these college IT courses are true "microcredentials", in the sense that they are short, potentially stackable, and have recognized labour-market value.  Or at least they do if the individual has some demonstrable work experience in the field as well (so-called coding "bootcamps" attempt to replicate this with varying degrees of success, though since they usually start with people from outside the industry, it's not as clear that the credentials they offer are viewed the same way by industry).

Now, when ed-tech evangelists go around talking about how the world in future is going to be all about competency-based badges, you can kind of see where they are coming from because that’s kind of the way the world already works – if you’re in IT.  The problem is most people are not in IT.  Most employers do not recognize individual skills the same way, in part because work gets divided into tasks in a somewhat different way in IT than it does in most other industries.  You’re never going to get to a point in Nursing (to take a random example) where someone gets hired because they took a specific course on opioid dosages.  There is simply no labour-market value to disaggregating a nursing credential, so why bother?

And so the lesson here is this: IT work is a pretty specific type of work in which much store is put in learning-by-doing and formal credentials like degrees and diplomas are to some degree replaceable by micro-credentials.  But most of the world of work doesn’t work that way.  And as a result, it’s important not to over-generalize future trends in education based on what happens to work in IT.  It’s sui generis.

Let tech be tech.  And let everything else be everything else.  Applying tech “solutions” to non-tech “problems” isn’t likely to end well.

April 10

Evaluating Teaching

The Ontario Confederation of University Faculty Associations (OCUFA) put out an interesting little piece the week before last summarizing the problems with student evaluations of teaching.  It contains a reasonable summary of the literature, and I thought some of it would be worth looking at here.

We've known for a while now that the results of student evaluations are statistically biased in various ways.  Perhaps the most important bias is that professors who mark more leniently get higher ratings from their students.  There is also the issue of what appears to be discrimination: female professors and visible-minority professors tend to get lower ratings than white men.  And then there's the point OCUFA makes about the comments section of these evaluations being a hotbed of statements which amount to harassment.  These points are all well worth making.

One might well ask: given that we all know about the problems with teaching evaluations, why in God’s name do institutions still use them?  Fair question.  Three hypotheses:

  1. Despite flaws in the statistical measurement of teaching, the comments actually do provide helpful feedback, which professors use to improve their teaching.
  2. When it comes to pay and promotion, research is weighted far more highly than teaching, so unless someone completely tanks their teaching evals – and by tanking I mean doing so much below par that it can’t reasonably be attributed to one of the biases listed above – they don’t really matter all that much (note: while this probably holds for tenured and tenure-track profs, I suspect the stakes are higher for sessionals).
  3. No matter how bad a measurement instrument they are, the idea that one wouldn’t treat student opinions seriously is totally untenable, politically.

In other words, there are benefits despite the flaws, the consequences of flaws might not be as great as you think, and to put it bluntly, it’s not clear what the alternative is.  At least with student evaluations you can maintain the pretense that teaching matters to pay and promotion.  Kill those, and what have you got?  People already think professors don’t care enough about teaching.  Removing the one piece of measurement and accountability for teaching that exists in the system – no matter how flawed – is simply not on.

That’s not to say there aren’t alternatives to measuring teaching.  One could imagine a system of peer evaluation, where professors rate one another.  Or one could imagine a system where the act of teaching and the act of marking are separated – and teachers are rated on how well their students perform.  It’s not obvious to me that professors would prefer such a system.

Besides, it’s not as though the current system can’t be redeemed.  Solutions exist.  If we know that easy markers get systematically better ratings, then normalize ratings based on the class average mark.  Same thing for gender and race: if you know what the systematic bias looks like, you can correct for it.  And as for ugly stuff in the comments section, it’s hardly rocket science to have someone edit the material for demeaning comments prior to handing it to the prof in question.
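To make the "normalize by class average" idea concrete, here's a minimal sketch of what such a correction could look like.  The data, the column names and the simple one-variable fit are purely illustrative; a real version would pool many sections and add controls for discipline, class size and so on.

```python
import numpy as np
import pandas as pd

# Hypothetical course-section data: all names and numbers are invented.
ratings = pd.DataFrame({
    "course":    ["HIST101", "CHEM200", "ECON110", "PHIL240"],
    "avg_eval":  [4.1, 3.6, 3.9, 4.4],     # mean student rating, 1-5 scale
    "avg_grade": [78.0, 68.0, 72.0, 81.0],  # class average mark, %
})

# Fit a simple line of ratings on class average mark, then keep the residual:
# the part of the rating that lenient marking does not explain.
slope, intercept = np.polyfit(ratings["avg_grade"], ratings["avg_eval"], 1)
ratings["adjusted_eval"] = ratings["avg_eval"] - (slope * ratings["avg_grade"] + intercept)

print(ratings[["course", "avg_eval", "adjusted_eval"]])
```

The same residualizing trick works for any systematic bias you can measure, which is the point being made above about gender and race.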

There's one area where the OCUFA commentary goes beyond the evidence, however, and that's in trying to translate the findings about student teaching evaluations (i.e. how did Professor X do in Class Y) to surveys of institutional satisfaction.  The argument they make here is that because the one is known to have certain biases, the other should never be used to make funding decisions.  Now, without necessarily endorsing the idea of using student satisfaction as a funding metric, this is terrible logic.  The two types of questionnaires are entirely different, ask different questions, and simply are not subject to the same kinds of biases.  It is deeply misleading to imply otherwise.

Still, all that said, it’s good that this topic is being brought into the spotlight.   Teaching is the most important thing universities do.  We should have better ways of measuring its impact.  If OCUFA can get us moving along that path, more power to them.

February 21

Two Studies to Ponder

Sometimes, I read research reports which are fascinating but probably wouldn’t make for an entire blog post (or at least a good one) on their own.  Here are two from the last couple of weeks.

Research vs. Teaching

Much of the rhetoric around universities’ superiority over other educational providers is that their teachers are also at the forefront of research (which is true if you ignore sessionals, but you’d need a biblically-sized mote in your eye to miss them).  But on the other hand, research and teaching present (to some extent at least) rival claims on an academic’s time, so surely if more people “specialized” in either teaching or research, you’d get better productivity overall, right?

Anyone trying to answer this question will come up pretty quickly against the problem of how to measure excellence in teaching.   Research is easy enough: count papers or citations or whatever other kind of bibliometric outcome takes your fancy.  But measuring teaching is hard.  One line of research tries to measure the relationship between research productivity and things like student evaluations and peer ratings.  Meta-analyses show zero correlation between the two: high research output has no relationship with perceived teaching quality.  Another line of research looks at research output versus teaching output in terms of contact hours.  No surprise there: these are in conflict.  The problem with those studies is that the definitions of quality are trivial or open to challenge.  Also, very few studies do very much to control for things like discipline type, institutional type, class size, stage of academic career, etc.

So now along comes a new study by David Figlio and Morton Schapiro of Northwestern University, which has a much cleverer way of identifying good teaching.  They look specifically at professors teaching first-year courses and ask: how much do each professor's students over- or under-perform, relative to expectations, in follow-up courses in the same subject?  They also measure how many students actually go on from each professor's first-year class to major in the subject.  The first measure is meant to capture "deep learning"; the second, how well professors inspire their students.  Both measures are certainly open to challenge, but they are still probably better than the measures used in earlier studies.
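As I read the study, the two measures boil down to something like the toy calculation below.  To be clear, the data frame and the "expected grade" column are my own invention to show the mechanics; this is not the authors' actual method or code.

```python
import pandas as pd

# Toy data: one row per student per first-year section. Everything here is invented.
records = pd.DataFrame({
    "prof":           ["A", "A", "A", "B", "B", "B"],
    "followup_grade": [74, 81, 77, 68, 70, 65],  # grade in the next course in the subject
    "expected_grade": [72, 76, 75, 71, 69, 70],  # predicted from the student's other courses
    "majored":        [True, False, True, False, True, False],
})

# "Deep learning": how much better students do in follow-up courses than their
# other results would predict, averaged by first-year professor.
records["grade_deviation"] = records["followup_grade"] - records["expected_grade"]
deep_learning = records.groupby("prof")["grade_deviation"].mean()

# "Inspiration": share of each professor's first-year students who go on
# to major in the subject.
inspiration = records.groupby("prof")["majored"].mean()

print(deep_learning)
print(inspiration)
```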

Yet the result is basically the same as those earlier studies: having a better publishing record is uncorrelated with teaching quality measures: that is, some good researchers have good teaching outputs while others don’t.

Institutions should pay attention to this result.  It matters for staffing and tenure policies.  A lot.

Incubator Offsets

Christos Kolympiris of Bath University and Peter Klein of Baylor University have done the math on university incubators and what they’ve found is that there are some interesting opportunity costs associated with them.  The paper is gated, but a summary can be found here.  The main one is that on average, universities see a decrease in both patent quality (as measured by patent citations) and licensing revenues after establishing an incubator.  Intriguingly, the effect is larger at institutions with lower research income, suggesting that the more resources are constrained, the likelier it is that incubator funding is being drawn from other areas of the institutional research effort, which then suffer as a result.

(My guess, FWIW, is that it also has to do with limited management attention span.  At smaller institutions, there are fewer people to do oversight and hence a new initiative takes away managerial focus in addition to money).

This intriguing result is not an argument against university or polytechnic incubators; rather, it's an argument against viewing such initiatives as purely additive.  The extent to which they take resources away from other parts of the institution needs to be considered as well.  To be honest, that's probably true of most university initiatives, but as a sector we aren't hardwired to think that way.

Perhaps we should be.

January 23

A Puzzling Pattern in the Humanities

Big news in Alberta the other day: the University of Alberta has decided to cut fourteen (14!) programs in the humanities. That's on top of a program cull just two years ago in which seventeen programs – mostly in Arts – were also axed! Oh my God! War on the humanities, etc., etc.

Or at least that’s the way it sounds, until you read the fine print around the announcement and realise that these fourteen programs, collectively, have 30 students enrolled in them. The puzzle here, it seems, is not so much “why are these programs being cancelled” as “why on earth were they ever approved in the first place”?

For the record, here are the programs being axed: majors programs in Latin American studies and Scandinavian studies, plus honours programs in classical languages, creative writing, history/classics (combined), religious studies, women and gender studies, comparative literature, French, math (that is, a BA Honours in math – which is completely separate from the BSc in Math, which is going nowhere), and Scandinavian studies (again). And technically, they are not being axed; rather, the university is "suspending admissions" to them, which means that current students will be able to finish their degrees.

Two takeaways from this:

The first is that the term "programs" is a very odd and sometimes misunderstood one. Universities can get rid of programs without affecting a single job, without even reducing a single course offering. In the smorgasbord world of North American universities, all programs are essentially virtual. The infrastructure of a university is essentially the panoply of courses offered by departments. Academic entrepreneurs can then choose to bundle certain configurations of courses into "programs" (with the approval of a lot of committees and Senate, of course). Of course, programs need co-ordinators, and co-ordinators get stipends and, more importantly, a small bump in prestige. But overall, programs are very close to costless because departments are absorbing all the costs of delivering the actual courses. (The real costs are actually the ludicrous amount of programming time involved in getting registrarial software to recognize all these different degree pathway requirements.)

It doesn't actually have to be this way. Harvard's Faculty of Arts and Science only has about fifty degree programs; pretty much every mid-size Canadian university has twice that. And there's no obvious benefit to students in this degree of specialization. So why, apart from inertia and a desire not to rock the boat, do we put up with it?

A second point, though. Readers may well ask “why do these kinds of program cuts always affect the humanities more than any other faculty”. This is a good question. And the answer is: because no other faculty hacks itself into ever-tinier pieces the way humanities does. Seriously. This isn’t a question of specialization – every field has that – it’s a question of whether or not to create academic structures and bureaucracies to parallel every specialization.

Imagine, for instance, what biology would look like if it were run like humanities. You’d probably have separate degrees and program co-ordinators for epigenetics, ichnology, bioclimatology, cryobiology, limnology, morphology – the potential list goes on and on. But of course biology doesn’t do that, because biology is not ridiculous. Humanities, on the other hand…

There are lots of good histories of the humanities out there (I recommend Rens Bod's A New History of the Humanities and James Turner's Philology: the Origins of the Modern Humanities), but as far as I know no one has ever really examined, historically, why the humanities, alone among branches of the academy, chose to Balkanize themselves administratively in such an odd way.  For a set of disciplines which constantly worries about being under attack, you'd think that grouping together in larger units would be an obvious defensive posture.  Why not just have big programs in philosophy, languages and literature, and philology/history, and be done with it?

October 25

What could a new private university in Canada look like?

Yesterday I outlined why a major private university has never emerged in Canada.  But I also suggested that it wasn’t impossible one might pop up in the future if it were backed by someone with sufficiently deep pockets and an eye for strategy.  Here is what I mean by this:

For a private university to be a success, it needs to be getting thousands of students.  Say 4,000 or so.  It’s not impossible to operate below that level, but it’s precarious.  Ask Bishop’s.

And that’s tough.  Getting people to commit to a university before it has any visible sign of success (such as well-employed graduates) is extremely difficult when there are quite prestigious institutions available nearby, as is the case nearly everywhere in Canada.  Ask Quest.

Any new university is likely to take a few years to catch on, and yet it must be able to put out a quality product during that time.  Hence the need for deep pockets.  But there also needs to be a real value proposition for a new institution: a reason to go there rather than to a regular university.  England's Buckingham University and Australia's Bond University (both private universities which have managed to clear the 4,000-student mark) did this by offering accelerated degrees that allowed students to graduate more quickly.  That might also work here, but let me suggest a couple of other approaches.

The first possibility is to create a university which can compete with big public universities on price.  There are a couple of ways of doing this, but basically it means re-thinking the structure of an institution.  One popular route these days is to do away with departments (which are an utter cost sink and the source of pretty much any cost-inflating idea a university can come up with) and leave faculties as the only level of administration.  Combine this structure with a human-resource strategy which mixes a few well-rewarded big names with a mostly casual staff, and there's the possibility of creating an institution which is cost-effective while still carrying enough prestige to attract students.  In the United States, two new universities have been built along more or less this model in the past decade (Harrisburg University of Science and Technology and University of Minnesota Rochester), although neither has gone quite as far as Professor Vance Fried, whose well-known 2008 paper purported to show how to offer an "Ivy-like" education for below $7,500 per student.  In Ontario, a few years ago when for some reason the government thought it was going to build three new universities, a similar idea was proposed by Centennial College together with Maureen Mancuso and Alastair Summerlee of Guelph University.  The proposal was technically ineligible, and the competition for new campuses never happened anyway because (whoops) the number of 18-year-olds started declining in 2013 (and who could possibly have foreseen that at any time since 1995?).  Nevertheless, I think it shows there's at least some appetite for this kind of institution, and that the approach could work for at least Arts, Sciences and Business.

The second possibility would take the opposite route.  If Canada has an available niche, it's in luxurious, prestigious liberal arts colleges (yes, there is the U4 League, but none of them could be described as luxurious – indeed provincial funding models leave these kinds of universities pretty stretched).  So why not try to charge top dollar for a Liberal Arts school with big names?  This has been the approach of AC Grayling's New College of the Humanities (NCH) in London for the past four years and – regulatory niggles aside – it seems to be doing reasonably well.  Now I know what you're thinking: who wants to pay for Liberal Arts degrees, unemployment, baristas, etc.  But the fact is, as institutions like Middlebury, Bryn Mawr and indeed NCH show, provided the level of instruction is good and the student-teacher ratio small, there are lots of people prepared to pay for that kind of education.  Maybe not 4,000 people per year, but if the fees are high enough, a university can survive at somewhat smaller numbers.

So yes, the potential for a private university is there.  What’s missing so far is ambition and money.  One day, someone will fill that gap.  It’s just a question of when.

August 17

Measuring Teaching Quality

The Government of Ontario, in its ongoing quest to try to reform its funding formula, continues to insist that one element of the funding formula needs to relate to the issue of “teaching quality” or “quality of the undergraduate experience”.  Figuring out how to do this is of course a genuine puzzle.

There are some of course who believe that quality can only be measured in terms of inputs (i.e. funding) and not through outputs (hi, OCUFA!)  Some like the idea of sticking with existing instruments like the National Survey on Student Engagement (NSSE); others want to measure this through “hard numbers” on post-graduate outcomes like employment rates, average salaries and the like.  Still others are banging away at certain types of solutions involving testing of graduates; HEQCO’s Essential Adult Skills Initiative seems like an interesting experiment in this respect.

But there are obvious defects with each of these approaches.  The problem with the “let’s-measure-inputs-not-outputs” approach is that it’s bollocks.  The problem with the “hard numbers” approach is that unemployment and income among graduates are largely functions of location and program offerings (a pathetic medical school in Toronto would always do better than a kick-ass Arts school in Thunder Bay).  And while the testing approach is interesting, all that testing is a bit on the clunky side, and it’s not entirely clear how well the data from such exercises would actually help institutions improve themselves.

That leaves the old survey stalwarts like NSSE and CUSC.  These, to be honest, don’t tell us much about quality or paths to improvement.  They did when they were first introduced, 15-20 years ago, but each successive survey adds less and less.  To be honest, pretty much the only reason we still use them is because nobody wants to break up the time-series.  But that’s an argument against particular surveys rather than surveys in general.  Surveys are good because they are cheap and easily replicable.  We just need to find a better survey, one that measures quality more directly.

Here’s my suggestion.  What we really need to know is how many students are being exposed to good teaching practices and at what frequency.  We know from various types of research what good teaching practices are (e.g. Chickering & Gamson’s classic Seven Principles for Good Practice).  Why not ask students about whether they see those practices in the classroom?  Why not ask students how instructional time is used in practice (e.g. presenting content vs. discussion vs. group work), or what they are asked to do outside of class?  And not just in a general way across all classes, the way NSSE does it (which ends up resembling a kind of satisfaction measurement exercise and doesn’t give Deans or departmental chairs a whole lot to work with): why not do it for every single class a student takes, and link those responses to the students’ academic record?

Think about it: at an aggregate faculty or institutional level – which is all you would need to report publicly or to government – the results of such a survey would instantly become a credible source of data on teaching quality.  But more importantly, they would provide institutions with incredible data on what's going on inside their own classrooms.  Are certain teaching practices associated with elevated levels of dropping out, or with an upward shift in grades?  By tying the survey to individual student records on a class-by-class basis, you could answer that question.  A Dean could ask intelligent questions about why one department in her faculty seems less likely to use group work or interactive discussion than others, and see how that plays into student completion or choice of majors.  Or one could see how teaching patterns vary by age (are blended learning classes only the preserve of younger profs?).  Or, by matching descriptions of classes to other more satisfaction-based instruments like course evaluations, it would be possible to see whether certain modes of teaching or types of assignment result in higher or lower student satisfaction – and whether the relationship between practices and satisfaction holds true across different disciplines (my guess is it wouldn't in some cases, but there's only one way to find out!).
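If the mechanics sound daunting, they really aren't.  Here is a minimal, entirely hypothetical sketch of the kind of linkage and aggregation I have in mind (all names and numbers are invented, and a real analysis would obviously need proper controls and far more variables):

```python
import pandas as pd

# Invented per-class survey responses, keyed to a (hypothetical) student ID.
survey = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "dept":       ["ECON", "HIST", "ECON", "PHIL", "HIST", "PHIL"],
    "group_work": [0, 1, 0, 1, 1, 0],  # did this class regularly use group work?
    "discussion": [1, 1, 0, 1, 1, 0],  # was class time used for discussion?
})

# Invented student records: did the student return the following year?
outcomes = pd.DataFrame({
    "student_id": [1, 2, 3],
    "retained":   [True, True, False],
})

# Link survey responses to the student record, then aggregate by department:
# which practices are common, and how do they co-occur with retention?
linked = survey.merge(outcomes, on="student_id")
print(linked.groupby("dept")[["group_work", "discussion", "retained"]].mean())
```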

So there you go: a student-record-linked survey focused on classroom experiences, administered on a class-by-class basis, could conceivably get us a system which a) provides reliable data for accountability purposes on "learning experiences" and b) provides institutions with a vast amount of new, appropriately granular data which can help them improve their own performance.  And it could be done much more cheaply and less intrusively than wide-scale testing.

Worth a try, surely.

May 26

Taking Advantage of Course Duplication

I recently came across an interesting blog post from a professor in the UK named Thomas Leeper (see here), talking about the way professors the world over spend so much time duplicating each other's work when developing curricula.  Some key excerpts:

” …the creation of new syllabi is something that appears to have been repeated for decades, if not centuries. And yet, it all seems rather laborious in light of the relatively modest variation in the final courses that each of us creates on our own, working in parallel.”

“… In the digital age, it is incredibly transparent that the particular course offerings at every department are nearly the same. The variation comes in the quality of lectures and discussion sections, the set of assignments required of students, and the difficulty of the grading.”

“We expend our efforts designing strikingly similar reading lists and spend much less time on the factors that actually differentiate courses across institutions: lecture quality, learning activities, and feedback provision… we should save our efforts on syllabus construction and spend that time and energy elsewhere in our teaching.”

Well, quite.  But I think you can push this argument a step further.  I’ve heard (don’t quote me because I can’t remember exactly where) that if you group together similar courses across institutions (e.g. Accounting 100, American History survey courses, etc.), then something like 25% of all credits awarded in the United States are accumulated in just 100 “courses”.  I expect numbers would not be entirely dissimilar in other Anglophone countries.  And though this phenomenon probably functions on some kind of power law – the next 100 courses probably wouldn’t make up 10% of all credits – my guess is your top 1000 courses would account for 50-60% or more of all credits.

Now imagine all Canadian universities decided to get together and make a really top-notch set of e-learning complements to each of these 1,000 courses – the kinds of resources that go into a top-notch MOOC (like, for instance, the University of Alberta's Dino 101) – in order to improve the quality of each of these classes.  Not that the courses would be taught via MOOC – teaching would remain the purview of individual professors – but each one would have excellent dedicated online resources associated with it.  Let's say universities collectively put $500,000 into each of them, over the course of four years.  That would be $500M in total, or $125M per year.  Obviously, those aren't investments any single institution could contemplate, but if we consider the investment from the perspective of the entire sector (which spends roughly $35 billion per year), this is chump change.  About $120 per student.  A trifle.
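For anyone who wants to check the arithmetic, here it is in a few lines.  The enrolment figure is my own round-number assumption (roughly a million full-time-equivalent university students); everything else comes from the paragraph above.

```python
# Back-of-envelope check of the figures above. Sector spending comes from the
# text; the enrolment figure is my own assumption, not an official count.
courses = 1_000
per_course = 500_000            # dollars invested per course
years = 4
sector_spend = 35_000_000_000   # dollars per year, whole university sector
students = 1_050_000            # assumed enrolment, roughly FTE

total = courses * per_course    # $500M
annual = total / years          # $125M per year
print(f"Annual cost: ${annual / 1e6:.0f}M "
      f"({annual / sector_spend:.2%} of sector spending), "
      f"about ${annual / students:.0f} per student per year")
```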

So, a challenge to university Presidents and Provosts: why not do this?  We're talking here about a quantum jump in the learning resources available for half the credits undergraduates earn each semester.  Why not collectively invest money to improve the quality of the learning environment?  Why not free up professors' time so they can focus more on lecture quality and feedback provision?  And to provinces and CMEC: why not create incentives for institutions to do precisely this?

A debate should be had.  Let’s have it.

April 15

Are Teaching Costs Increasing at Canadian Universities?

On Wednesday, someone took me to task in the comments section of the blog for part of my analysis on the financial situation of higher education, saying:

“The HE sector has hiked tuition up far faster than inflation citing “Increased teaching costs”. They have been unable or unwilling to provide proper costings for this.”

Is this true? Well, it depends how long a time-frame you choose to use. Let’s look at the data.

To look at "teaching costs", we need to use data from the Statscan/Canadian Association of University Business Officers Financial Information of Universities and Colleges (FIUC) survey. FIUC divides salary costs into three categories – "academic ranks" (meaning permanent academic staff), "other instruction and research" (meaning mostly sessionals), and "other salaries and wages" (meaning non-academic staff). Unfortunately, it does not break out "benefits" costs in the same way – these are all lumped together in a single category. It also allows you to divide these up by "function" (admin, student services, libraries, etc.).

For this exercise, I will restrict the analysis to expenditures under "Instruction and non-sponsored research", and include salaries for both permanent and sessional academics. Within this category, these two groups account for about 80% of all salaries, so I will assign them 80% of all benefit dollars as well (this is probably an undercount, because academic staff tend to have better benefit packages). Together, I will call these "core teaching costs". I will then divide total expenditures on these three areas by the number of "full-time equivalent students", which, according to Statscan, equals full-time students plus part-time students divided by 3.5.
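To make the construction completely explicit, here is a minimal sketch of the calculation with entirely invented dollar figures; only the 80% benefits share and the FTE formula come from the description above.

```python
# A sketch of the "core teaching cost per FTE" calculation described above.
# The 80% benefits share and the FTE definition (FT + PT/3.5) follow the text;
# FIUC itself just reports the dollar totals by category.

def core_teaching_cost_per_fte(academic_rank_salaries, other_instruction_salaries,
                               total_benefits, ft_students, pt_students):
    salaries = academic_rank_salaries + other_instruction_salaries
    benefits = 0.80 * total_benefits        # assign 80% of benefits to academic staff
    fte = ft_students + pt_students / 3.5   # Statscan full-time-equivalent definition
    return (salaries + benefits) / fte

# Illustrative (invented) figures for a mid-sized institution, in dollars:
print(round(core_teaching_cost_per_fte(
    academic_rank_salaries=150_000_000,
    other_instruction_salaries=25_000_000,
    total_benefits=40_000_000,
    ft_students=25_000,
    pt_students=7_000,
)))
```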

Here’s what that looks like, in $2016, back to 1979-1980.

Figure 1: Core Teaching Costs per FTE Student, Canada, 1979-80 to 2013-14, in $2016


So: a major decline in per-student core instructional costs of about 20% from 1979 to about 2003, followed by a decade of increases – mainly on the benefits side – which saw costs rebound by 17%, bringing us to our highest point since 1980. In other words, the story is pretty mediocre if you take a really long view, but not bad if you look only at the last decade or so.

Now, to tuition, which is much simpler to track, using the standard Statscan measure: average undergraduate fees across all programs.

Figure 2: Average Undergraduate Tuition, Canada, 1979-80 to 2013-14, in $2016


That's a pretty simple story: flat in real dollars through the '80s, sharp increases in the 1990s, and more moderate ones since then (if one were to include subsidies like grants and tax credits, it would be close to flat since 2000, but let's not complicate the analysis).

Now let's compare what's going on here over a 10-year and a 35-year horizon. Figure 3 shows that if you confine the analysis to the last decade or so, tuition and core instructional costs are rising at similar rates.

Figure 3. Tuition vs. Core Instructional Costs, Canada, 2003-4 to 2013-14, 2003-04 = 100


However, if you extend the analysis back to say 1979, you get a completely different picture.

Figure 4. Tuition vs. Core Instructional Costs, Canada, 1979-80 to 2013-14, 1979-80 = 100


Why the difference? Well, mostly because the 1990s were a time of disinvestment, so in part higher tuition fees were replacing government spending, but also because between 1990 and 2005 or so there were some fairly major changes to the way universities spend their money. A lot more money went into IT, student services, scholarships (and, yes, administration), meaning that core instructional costs shrunk as a percentage of total expenditures. So my comments-section interlocutor is certainly right over the long term, less so over the short term.

That said, there is a real question about whether or not those “core teaching costs” are really meaningful over time given the appearance that an increasing portion of staff time is devoted to research rather than teaching. But that’s a debate for another day.

February 09

Can Universities Judge Themselves?

One of the more difficult problems to unravel in the world of higher education is the fact that universities are responsible both for delivering teaching and judging whether or not a student has learned enough to get a degree.  To most reasonable minds, this is a conflict of interest.  Indeed, this is the conflict that makes universities unreformable: as long as universities have a monopoly on judging their own quality, no one external to the system (students, governments) can make realistic comparisons between institutions, or can push for improvements.

Yet, it hasn’t always been this way.  Even in living memory, the University of London was, to a large extent, an examination body.  Higher education institutions all over Africa were simply “colleges” that taught at the higher education level; to get a degree, students would still have to sit exams set by the University of London.  One body teaches, one body examines.

Historically, Canadian universities did a lot of this kind of thing.  The University of British Columbia and the University of Victoria both started as “affiliates” of McGill, before they got degree-granting status of their own – students would learn at one institution, and then get a degree from another.  Ditto Brandon with McMaster.  Similarly, the University of Manitoba started out as an examining body for students taking degrees at a variety of denominational colleges across Winnipeg (including United College, which later went its own way and became the University of Winnipeg); even the University of Toronto got its start as an examining body, responsible for overseeing the work of denominational colleges like Trinity.  Eventually, of course, Toronto and Manitoba started providing teaching as well as judging, and eventually all of these institutions became the regular kind of universities we know today, only with really awkward college structures.

Would something like that still work today?  Well, in some places it still does.  A.C. Grayling’s much-maligned New College of the Humanities in London does not issue its own degrees, but rather prepares students to take the University of London exams.  In India, tens of thousands of colleges exist that do nothing but prepare students for examinations from one of the roughly 200 “real” universities (which also teach their own students at their own campuses).

Could we get this genie back out of its bottle by creating a new university, which could test what other universities are doing?  Well, this could only work if the new university had a higher level of prestige than the institutions that students were currently attending; otherwise, a student would quite reasonably not bother, and just stick with the degree from the institution s/he was already at.  The reason it used to work here is because the colleges were new and had no prestige, whereas the established university (e.g. McGill) or the provincially-mandated organization (e.g. Manitoba) were seen as bigger and better.

In truth, the only way this could work nowadays is if a genuinely stupendous university (say, Harvard) would offer to give degrees to anyone who could pass its exams.  But as we’ve seen with the MOOCs saga, the one thing that stupendous universities really don’t want to do is to dilute their perceived exclusiveness by giving out degrees to the hoi polloi.  You could set up government institutions to do it, as Korea has done with its Academic Credit Bank and self-study degrees; as innovative as those are, however, they are still seen as second-class degrees as far as prestige is concerned.

Where you could imagine this kind of system working is in developing countries, where a lot of new universities are opening at once (e.g. Kenya, Ghana).  Here, new universities might actually attract more students if they could claim that students would earn degrees from the system’s flagship institution.  But in our neck of the woods, it’s much harder to see a workable way to divorce teaching from degree-granting.
