
Higher Education Strategy Associates

Tag Archives: Teaching vs Research

March 01

Under-managed universities

I have been having some interesting conversations with folks recently about “overwork” in academia.  It is clear to me that a lot of professors are absolutely frazzled.  It is also clear to me that on average professors work hard – not necessarily because The Man is standing over them with a whip but because as a rule academics are professional and driven, and hey, status within academia is competitive and lots of people want to keep up with the Joneses.

But sometimes when I talk to profs – and for context here the ones I speak to most often are ones roughly my own age (mid-career) or younger – what I hear a lot about is work imbalance (i.e. some professors are doing more work than others) or, to put it more bluntly, how much “deadwood” there is in universities (the consensus answer is somewhere between 20% and 30%).  And therefore I think it is reasonable to ask the question: to what extent does some people’s “overwork” stem from the fact that other professors aren’t pulling their weight?

This is obviously something of a sticky question, and I had an interesting time discussing it with a number of interlocutors on Twitter last week.  My impression is that opinion roughly divides into three camps:

1) The Self-Righteous Camp.  “This is ridiculous; I’ve never heard professors talk like this about each other.  We all work hard, and anyway, if anyone is unproductive it’s because they’re dealing with kids, or depressed due to the uncaring, neoliberal administration smashing its boot into the face of academia forever…”

2) The Hard Science Camp.  “Well, you know there are huge differences in workload expectation across the institution – do you know how much work it is to run a lab?  Those humanities profs get away with murder…”

3) The “We’ve Earned It” Camp.  “Hey, look at all the professions where you put in the hours at the start and get to relax later on.  We’re just like that.  Would you want to work hours like a junior your whole life?  And by the way, older profs just demonstrate productivity on a broader basis than just teaching and research…”

There is probably something to each of these points of view.  People do have to juggle external priorities with academic ones at some points in their lives; that said, since most of the people who made the remarks about deadwood have young kids themselves, I doubt that explains the phenomenon.  There probably are different work expectations across faculties; that said, in the examples I was using, my interlocutors were talking about people in their own units, so that doesn’t affect my observation much.  Perhaps there are expectations of taking it easier as careers progress, but I never made the argument that deadwood is related to seniority, so the assumption that seniority was what I meant was… interesting.  So while acknowledging that all of these points may be worthwhile, I still tend to believe that at least part of the solution to overwork is dealing with the problem of work imbalances.

Now, at some universities – mainly ones which have significantly upped their research profile in the last couple of decades – this might genuinely be tough, because the expectations of staff who were hired in the 1970s or 1980s might be very, very different from the expectations of ones hired today.  Places like Ryerson or MacEwan are obvious examples, but the same can be true at places like Waterloo, which thought of itself as a mostly undergraduate institution even into the early 1990s.  Simply put, there is a huge generational gap at some universities in how people understand “the job”, because they were hired in totally different contexts.

What strikes me about all of this is that neither management nor – interestingly – labour seems to have much interest in measuring workload for the purpose of equalizing it.  Sure, there’s lots of bean counting, especially in the sciences, and especially when it comes to research contracts and publications and the like.  But what’s missing is any desire to use that information to adjust individuals’ workloads in order to reach common goals more efficiently.

My impression is that in many departments, “workload management” means, at most, equalizing undergraduate teaching requirements.  Grad supervisions?  Those are all over the place.  “Service”?  Let’s not even pretend that’s well-measured.  Research effort?  Once tenure has been granted, it’s largely up to individuals how much they want to do.  The fiercely competitive may put in 40 or 50 hours a week on top of their other duties; others do much less.  Department heads – usually elected by the professors in the department themselves – have limited incentive and means to get the overachievers to maybe cool it sometimes, and the underachievers to up their game.

In short, while it’s fashionable to say that professors are being “micro-managed” by universities, I would argue that on the rather basic task of regulating workload for the common good, academics are woefully under-managed.  I’d probably go even further and say most people know they are under-managed, and many wish it could change.  But at the end of the day, academics as a voting mass on Senates and in faculty unions consistently seem to prefer under-management and “freedom” to management and (perhaps) more fairness in workloads.

I wonder why this is. I also wonder if there is not a gender component to the issue.

What do you think?  Comments welcome.

March 09

Sessionals, Nursing Degrees, and the Meaning of University

Be forewarned: I am going to be very mean about universities today. 

One thing the labour disputes in Ontario highlight is the amount of undergraduate teaching done by non-tenure-track professors.  Numbers on this are hard to come by, and poorly defined when they are.  York sessionals claim to be teaching 42% of all undergraduate classes – but how do you define a class?  Still, from what I’ve gathered from talking to people across the province who are in a position to know, it is not uncommon at larger universities for sessionals to teach between 40% and 50% of all undergraduate credit hours (which is the correct unit of analysis).

Think about that for a minute: half of all credit hours in major Ontario universities are taught by staff who are off the tenure track.  People with no research profile to speak of.  Yet aren’t we always told that the combination of research and teaching is essential in universities?  Aren’t we told that without research, universities would be nothing more than – God forbid – community colleges?  So what does it mean when half of all undergraduate credit hours are taught by these sessionals?  Are students only getting the essential university experience half the time?  And the other half of the time they are at community colleges?  If so, why are students and taxpayers paying so much more per credit hour?

These are important questions at any time, but I think their importance is underlined by the stramash currently going on between Ontario universities and colleges over the possibility of colleges offering stand-alone nursing programs.  You see, Ontario has none of these.  Universities can have stand-alone nursing programs; colleges can have nursing programs, but require a university partner to oversee the curriculum.  This partnership has nothing to do with sharing of physical resources or anything – Humber College’s partner is the University of New Brunswick (which is how UNB became Ontario’s third/fourth-largest supplier of nurses a few years ago).  No, it’s just a purely protectionist measure, which Ontario universities justify on the grounds that “patient care [has] become so complex that nurses needed research, theory, critical thinking, and practice in order to be prepared [for work]”.  Subtext being: obviously you can’t get that just from a community college.

But why is this obvious?  Clearly, universities themselves don’t believe that theory and critical thinking are related to research, because they’re allowing non-research staff to provide half the instruction.  Indeed, maybe – horror upon horrors – nearly all undergraduate instruction in nursing can be delivered by halfway-competent practitioners who are reasonably familiar with developments in nursing research, and having one’s own research practice is neither here nor there.  In which case, the argument for stand-alone nursing schools – with appropriate quality oversight from professional bodies – is pretty much unanswerable.

Too much of universities’ power and authority rests on their near-monopoly on degree-granting.  And too much of that monopoly on degree-granting rests on hand-waving about “but research and teaching!”  Yet, as sessionals’ strikes always remind us, Ontario universities are nowhere close to living up to this in practice.  I wonder how long it will be before some government decides to impose some costs on them for this failure.

September 23

Revisiting BS

Seems I hit a nerve last week when I wrote about Teaching v. Research.  Between the emails and the Twitter chat afterwards, I can safely say I’ve never received as much feedback on a piece as I did on that one.  As a result, I thought I should respond to a few of the key lines of discussion.

Interestingly, few critics seemed to pick up on the fact that I was attacking the hypocrisy and sanctimony around the teaching/research discussion; instead, most tried to find ways to justify modern teaching loads.  Some missed the point entirely, protesting that profs were teaching less because of increased expectations around research.  This, of course, was precisely my point.  I wasn’t accusing people of slacking – I was suggesting that priorities and activities had (stealthily) changed.

Others suggested that the reduced teaching load was an illusion (e.g., “but our classes are so much bigger now!”).  But class size and teaching loads are linked; if profs taught more, class sizes would go down.  Teaching time may not be a strict function of classroom hours, but neither is it a simple function of students taught.  Two classes of sixty students take up less time than three classes of forty.  Maybe not 33% less time, but a substantial amount nonetheless.
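To see why, consider a toy model in which each course carries a fixed preparation-and-lecturing overhead plus a per-student cost for marking and office hours.  This is only a sketch, and both parameter values below are hypothetical, chosen for illustration rather than drawn from any real workload data:

```python
# Toy model of teaching time: each course costs a fixed overhead
# (prep + lecturing) plus a per-student cost (marking, office hours).
# Both parameters are hypothetical, purely for illustration.

PREP_HOURS_PER_COURSE = 80   # fixed cost of running one course
HOURS_PER_STUDENT = 2        # marking, office hours, email

def teaching_hours(courses: int, students_per_course: int) -> int:
    fixed = courses * PREP_HOURS_PER_COURSE
    variable = courses * students_per_course * HOURS_PER_STUDENT
    return fixed + variable

two_of_sixty = teaching_hours(2, 60)    # 2*80 + 240 = 400 hours
three_of_forty = teaching_hours(3, 40)  # 3*80 + 240 = 480 hours

# Same 120 students either way; dropping a course saves only its fixed
# overhead -- about 17% here, not the 33% a naive course count implies.
print(two_of_sixty, three_of_forty)
print(f"{1 - two_of_sixty / three_of_forty:.0%}")  # 17%
```

However you set the parameters, the saving converges on one course’s fixed overhead: real, but well short of one-third.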

The most substantive critiques were around graduate teaching, and how that should be counted.  I admit to glossing over this issue, so let’s talk about it here.  Part of the problem is that there are many kinds of graduate teaching.  In the sciences, it can be indistinguishable from research; in the humanities, it’s quite the opposite.  In some disciplines, Master’s-level seminars are about the easiest thing to prepare for, though as graduate class sizes grow to undergraduate levels, that workload advantage erodes, too.  And on top of that there is doctoral supervision, which can be extremely demanding (though standards vary).

We know virtually nothing about graduate-level teaching loads, though they have presumably increased along with graduate enrolments, and are probably distributed in a very uneven way.  This leads to another question: is it perhaps the case that in addition to a substitution effect on undergraduate teaching, overall average workloads are also increasing?  That seems at least plausible to me.

Bottom line, though: we don’t know enough about workloads.  Faculty and administrations have kept this data hidden, even from themselves, for decades.  It’s time for more transparency.  Not only will it reduce BS, but it will increase accountability for how universities use their most important asset: professors’ time.

September 18

Cutting the BS on Teaching and Research

Sometimes people ask me: “What would you change in higher education, if you could?”  My answer varies, but right now my fondest wish is for everyone to just cut the BS around the teaching/research balance.

Whenever a debate on teaching and research starts, there are always people who either intimate how “unfortunate” it is that we have to talk about trade-offs, or who claim that any deviation from the current trade-off means the death of the academic.  But this is nonsense.  There are only twenty-four hours in a day; trade-offs between teaching and research are always being made.  The issue is not teaching v. research, but where the balance lies.  Twenty-five years ago, it was perfectly normal for professors to teach five courses a year.  Now, even at mid-ranking comprehensives, the idea of a 3/2 load is enough to cause paroxysms.  Just because it’s a good idea for professors to combine research and teaching doesn’t mean that any specific combination of research and teaching is right.

Even if we take it as read that “engagement with research” makes you a better teacher (something which is much less empirically established than many assume), it’s clearly not equally the case in all disciplines.  Researchers in some disciplines – Physics and Math come to mind – are so far removed from the understanding of your average undergraduate that it’s probably a waste of everyone’s time to put them together in a classroom.  Conversely, language courses are almost never taught by people engaging in research on language acquisition, because it’s unnecessary.  Indeed, first- and second-year courses in many disciplines probably don’t even need researchers teaching them – a fact institutions acknowledge every day when they hand these courses to non-tenure-track faculty.

Over the past 25 years, teaching norms at large universities went from five courses per year to four, and even to three; that is, full-time teaching time went down by 20-40%.  The academy did this without ever engaging the public about whether that was the right way to spend public and student dollars.  It’s therefore worth debating, in light of current fiscal pressures, whether the current (historically unprecedented) trade-off between research and teaching is the right one.  We once had good research universities with many professors teaching five courses a year; there’s no reason we couldn’t do so again.  Shutting discussions down – as CAUT Executive Director Jim Turk recently did – by equating any change in the balance with an attempt to turn universities into high schools isn’t just unhelpful and obnoxious: it’s BS.

And so I say: no more BS.  Let’s all be grownups and talk reasonably about what balance makes sense, not just for professors, but for students and the public as well.

June 13

What if Higher Education Subsidies Were Transparent?

An interesting little exercise in budget analysis:

There are just under 5,600 humanities professors at Canadian universities, and 7,600 in the social sciences (excluding law, which is another 600 or so).  On average, these people make about $108,000/year (slightly higher in social sciences, slightly lower in humanities).  Add another 25% on that for payroll taxes, health, and pension, and the direct cost of employing each of them is about $135,000 per year.  That comes out to about $1.85 billion in total.

Now, what do we pay these people to do, exactly?  Well, according to the standard formula (which I recognize does not apply to everyone), 40% of their time is for teaching, 40% for research, and 20% for the ever-nebulous concept of “service”.  So, while the transparent subsidies to humanities and social science research – the ones paid through SSHRC – amount to about $700 million, the non-transparent subsidy embedded in academic salaries is, all told, another $750 million on top of that.
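For anyone who wants to check the arithmetic, here is a minimal back-of-envelope sketch.  One assumption is mine rather than the text’s: the ~600 law professors are included in the totals, which is what makes the rounded figures line up:

```python
# Back-of-envelope sketch of the salary-embedded research subsidy.
# Headcounts and average salary are the approximate figures quoted above;
# including law (~600 profs) in the totals is my assumption.

HUMANITIES = 5_600
SOCIAL_SCIENCES = 7_600
LAW = 600
AVG_SALARY = 108_000
OVERHEAD = 0.25          # payroll taxes, health, pension
RESEARCH_SHARE = 0.40    # the standard 40-40-20 formula

cost_per_prof = AVG_SALARY * (1 + OVERHEAD)            # $135,000
total_cost = (HUMANITIES + SOCIAL_SCIENCES + LAW) * cost_per_prof

print(f"Total direct cost: ${total_cost / 1e9:.2f}B")  # ~$1.86B
print(f"Embedded research subsidy: "
      f"${total_cost * RESEARCH_SHARE / 1e6:.0f}M")    # ~$745M
```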

When you start dividing out these salary-embedded research amounts by field of study, it’s kind of fascinating, particularly in the humanities.  $33 million each year for research in philosophy; $58 million for history; $57 million for English.  That adds up: nearly $300 million for humanities-based research. That’s almost as much as we spend on transfers to First Nations for post-secondary education each year.

I am not particularly concerned here about whether this amount of spending is desirable, or whether it offers value-for-money, or anything like that; I’m sure there would be good arguments both ways.  What does concern me is this: nobody in this country ever stood up and voted for $33 million of public money to be spent on research in Philosophy, nor would they, because nobody thinks that’s what Philosophy professors are actually paid to do.

When Canadian universities quietly – oh so quietly – began dropping faculty teaching loads about fifteen years ago, from 3/3 and 3/2 to 2/2 and 2/1, we were implicitly shifting compensation – paying more for research and less for teaching.  In some fields – mainly the sciences – that made eminently good sense.  In others – such as the humanities – it’s not clear it made any sense at all.  After all, the ultimate defence of the humanities is “we teach kids how to think”.  Fair enough: so why spend all that money paying humanities professors not to teach?

We never had a proper debate about any of this, mainly because we are not transparent about what services we are actually buying when we hire a professor.  The quality of debates on higher education would improve enormously if we did.

April 05

No to “World-Class” Research in the Humanities

You often hear talk about how Canadian institutions need to do more research.  Better research.  “World-class” research, even.  Research that will prove how smart our professors are, how efficient they are with public resources, and, hence, justify a claim to an even greater share of those resources.

In medicine, the biological sciences, and engineering, this call is easy to understand.  Developments in these areas can – with the right environment for commercialization – lead to new products, which, in turn, have direct economic benefits to Canadians.  In the social sciences, too, it makes sense.  Most social sciences have (or should have) some relevance to public policy; thus, having world-class research in the social sciences can (or should) mean an improvement in that country’s governance, and its ability to promote a strong, healthy, and equitable society.

But what about in the humanities?  Is there a national public interest in promoting world-class research in the humanities?

My answer is no.  For two reasons.

The first is kind of technical.  When it comes to research, “world-class” status tends to get defined by bibliometrics.  In the sciences, scholarly conversations are, by their nature, global, and so a single standard of measurement makes sense.  But in the humanities, an awful lot of the conversations are, quite properly, local.  And so while bibliometric comparisons in the humanities, within a single country (say, between institutions), might say something important about relative scholarly productivity, comparisons between countries are, to a large degree, only measuring the relative importance of different national polities.  A strategy favouring world-class bibliometric scores in History, for instance, would de-emphasize Canadian History and Aboriginal studies, and instead focus on the Roman and British Empires, and the United States.  And that, obviously, would be nuts.

But there’s a bigger issue here: namely, why do we assume that the worth of the humanities has to be judged via research, in the same manner we judge scientific disciplines?  Arguments in defence of the humanities – from people like Martha Nussbaum, Stanley Fish, etc. – stress that these disciplines’ value lies in encouraging students to think critically, to appreciate differences, and to create meaning.  And it’s not immediately obvious how research contributes to that.  Even if you completely buy the argument that “scholarly engagement is necessary to teaching”, can you really claim that an increased research load improves teaching?  Have students started thinking more critically since 3/3 teaching loads were cut to 2/2 in order to accommodate more research?

The real national public interest is in having a humanities faculty that can develop critical thinkers, promote understanding, and foster creativity.  Figuring out how to better support and celebrate those things is a lot more important than finding yet more ways for the humanities to ape the sciences.

January 14

Better, not Cheaper

If there is one clear meme concerning higher education coming out of America during this recession, it’s this: “higher education is too expensive and it’s delivering a sub-optimal product.”

Zeitgeist statements like this one have to be handled carefully.  Even if you don’t agree with this meme, failure to engage with it can expose one to charges of being “defensive,” or “part of the problem”.  So, for the moment, let’s accept this statement at face-value, and focus on how one might respond to it.

From a business perspective, there’s simply no question that in a quasi-monopolistic system like higher education, the choice between cheaper and better is obvious.  Only a chump gives up the revenue.  If consumers perceive that the quality – however that may be defined – isn’t there, that’s what needs to be fixed.

Given this, it’s absolutely astonishing to me how quickly the debate in America has focussed on cost.  Everywhere, the mantra is about “bending the cost curve” (tellingly, a phrase consciously borrowed from the health-care debate), and states like Florida, Texas, and California are all making serious moves to implement so-called $10,000 degrees (that’s not the price, it’s the cost).  Faced with the proposition that “higher education isn’t delivering the goods, and it costs too much”, the dominant reaction in America seems to be, “well, let’s make it cheaper, then”.  Now, obviously, this response is being driven by political actors rather than educational ones, but it’s stunning nonetheless.

Canada hasn’t quite seen the same level of disillusionment with higher education, mainly because youth unemployment hasn’t spiked in anything like the way it has in the US (the irritating but inevitable fact: higher education will take blame, and credit, for preparing young people for jobs in direct relation to the amplitude of the economic cycle, over which it has zero influence).  But the “cheaper-not-better” agenda could easily take root here, too; Lord knows, in Ontario, we’ve only recently escaped the clutches of a Minister who was in thrall to exactly that vision.

So, here’s a thought: let’s be proactive about this.  Instead of waiting for the next crisis to pop up, let’s get ahead of the curve by improving the value proposition of undergraduate education.  As I’ve said before, what people really want are graduates who are effective, engaged, and innovative, so let’s find a way to deliver on that.

Put aside for a while the pitches for more grad students and more research.  Winning the battle for public trust in the system is going to depend first and foremost on how our system delivers on undergraduate education.  Only by being better can the system avoid the call to be cheaper.

December 07

A Zinger from HEQCO

To One Yonge Street, and the offices of the redoubtable Higher Education Quality Council of Ontario (HEQCO), who yesterday released a small publication with the unassuming name The Productivity of the Ontario Public Postsecondary System.  The title may be a little on the soporific side, but the contents are anything but.

There are some real gems in here.  Did you know that 39% of all granting council funding went to Quebec?  OK, the grants are on average somewhat smaller than they are in Ontario, but that’s still an incredible number.  It’s not as though, on average, Quebec researchers’ publication records are better than anyone else’s (they’re not – see Table 7, which we here at HESA contributed to the report).  By what quirk of the funding council system does this happen?  What’s the secret of their success?  Inquiring minds want to know whether this is replicable elsewhere.

But for my money, the really explosive section is Table 9, which takes data from four collaborating institutions (Guelph, Queen’s, Laurier, and York) in order to look at staff workloads.  Specifically, workloads were analyzed according to whether a professor had any “research output” (defined – rather generously, I think – as having held any external grant, or had a publication in the 2010-2011 academic year).  Here’s what they found:

A couple of points here.  First, the comparison for science isn’t especially interesting, since there are almost no non-research-active faculty there (over 80% of them hold an NSERC grant at any given time).  In the SSHRC fields it’s a different story – it’s closer to 50-50.  But, apparently, the ones who are research-active are not bunking off teaching to do their research; in fact, they teach only a half-course less per year, on average, than those who are non-research-active.  Perhaps the better question here is: what exactly are all those non-research-active profs doing with their time?

Obviously, there are some possible ways this result could be innocuous.  It could be that the non-research-active profs spend more time devoted to service, or that the line between research-active and non-research-active isn’t especially clear-cut in humanities and social sciences (if publication is mostly in the form of books, it’s easy to go a year, or more, without a new one).   You’d have to do a lot more digging before jumping to any definitive conclusions.

But just the fact that HEQCO got four universities to go even this far with their data is a big deal.   It’s the biggest move anyone’s made so far to start engaging publicly on the issue of faculty productivity.  So kudos to HEQCO and the four institutions that participated.  The sector is going to need a lot more of this if it’s going to adjust to the new fiscal reality.

June 14

Bibliometrics: Measuring Zero-Impact

Bibliometrics aren’t just useful for analyzing who’s being cited; they’re also pretty good at telling you who’s not being cited.

Today, we’ll look at professors whose H-index (see here for a reminder of how it is calculated) is zero – that is, professors who have either never been published or (more likely) never been cited.
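For anyone who wants the mechanics without clicking through, here is a minimal sketch of the calculation (the citation counts in the examples are made up for illustration):

```python
# H-index: the largest h such that the scholar has h papers
# with at least h citations each.

def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An H-index of zero arises in exactly two ways:
print(h_index([]))             # 0 -- never published
print(h_index([0, 0, 0]))      # 0 -- published, but never cited
print(h_index([10, 5, 3, 1]))  # 3 -- three papers with >= 3 citations each
```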

There are three reasons why a scholar might have an H-index of zero.  The first is age: younger scholars are less likely to have published, and their publications have had less time in which to be cited.  The second is prevailing disciplinary norms: there are some disciplines – English/Literature would be a good example – where scholarly communication simply doesn’t involve a lot of citations of other scholars’ work.  The third is simply that a particular scholar might not be publishing anything of particular importance, or indeed publishing anything at all.

Let’s take each of these in turn.  We can examine the first two questions pretty easily just by looking at the proportion of scholars with zero H-indexes by rank and field of study (our database has rank data for a little over three-quarters of the academic staff in it – about 47,000 people in total, which is a pretty good sample).

Proportion of Academic Staff without a Cited Publication, by Rank and Field of Study

Surprised?  So were we.  Not because of the differences across ranks (H-index scores are necessarily positively correlated with length of career) or across fields of study (we covered that one already).  What really blew us away was the number of full professors who have never had a paper cited, especially in the sciences.  Who knew that was even possible?

So, what about that third reason? It is obviously difficult to generalize, but one should note that even within disciplines, there are some enormous gaps in publication/citation rates. In economics as a whole, 15.6% have an H-index of zero, but the proportion of economists in any individual economics department with an H-index of zero varies between 0% and 63%. In biology (disciplinary average: 7.7%), individual departments range between 0% and 60%; in history (disciplinary average: 13.4%), the range is between 5% and 50%. It is vanishingly unlikely that these differences are solely the result of different departmental age profiles; more likely, they reflect genuine differences of scholarly strength.

Now, there’s nothing saying all professors need to be publishing machines. But if that’s the case, maybe not all professors need to have 2/2 or 2/1 teaching loads to conduct all that impactful research, either. Running a university requires trade-offs between research and teaching: bibliometric analysis such as this is a way to make sure those trade-offs are well-informed.

March 23

Bibliometrics, Part the First

The shock and horror generated by proposals of teaching-only universities makes it pretty clear that most of Canadian academia thinks that research is important. So important, indeed, that we want every professor to devote 40% of his or her time (under the 40-40-20 rule) to it.

Now, that’s a pretty serious commitment. Even before you get to the costs of graduate students, research grants and research infrastructure, 40% of staff time equals $2 billion/year on research.

So why do we spend so little time measuring its impact?

Oh, we’re all big on measuring inputs, I know – it’s hard to go a week without some communications office telling you how successful their institution is in getting tri-council grants. But presumably the sign of a good university isn’t how much money you get, but what you do with it. In a word, outputs.

Some countries have some extraordinary ways of measuring these. Take New Zealand’s Performance-Based Research Fund Assessment, for instance. This involves having researchers in every department put together portfolios of their work which are then peer-assessed by committees made up of local and international experts. There’s a lot to recommend this approach in terms of its use of peer evaluators, but it’s really expensive, costing something like 15% of the annual research budget in a given year (the equivalent in Canada would be about $400 million, though presumably there would be some economies of scale that would bring the cost down a bit). As a result, they only do it every six years.

The alternative, of course, is bibliometrics; that is, using data on publications and citations to measure research output. It’s in use across much of Europe and Asia, but is comparatively underused here in Canada.

Of course, as with any system of measurement, there are good and bad ways of using bibliometric data.  Not all of the ways in which bibliometrics have been used make sense – in fact, some are downright goofy.  Over the next couple of weeks, I’ll be demystifying the use of bibliometrics, to help you distinguish the good from the bad and the useful from the not-so-useful, and to provide some food for thought about how they can be used in institutional management.

But there is one use of research metrics that I can tell you about right now. In next week’s Globe and Mail, HESA presents a list of Canada’s most distinguished academics, by field of study, based on a joint measure of publications and citations known as the “H-Index.” Take a look at Tuesday’s Report on Business and let us know what you think.
