HESA

Higher Education Strategy Associates

Category Archives: research

May 12

Non-Lieux Universities: Whose Fault?

About four months ago, UBC President Stephen Toope wrote a widely-praised piece called “Universities in an Era of Non-Lieux”.  Basically, the piece laments the growing trend toward the deracinated homogenization of universities around the globe.  He names global rankings and government micro-management of research and enrolment strategies – usually of a fairly faddish variety, as evidenced by the recent MOOC-mania – as the main culprits.

I’m not going to take issue with Toope’s central thesis: I agree with him 100% that we need more institutional diversity; but I think the piece fails on two counts.  First, it leaves out the question of where governments got these crazy ideas in the first place.  And second, when it comes right down to it, the fact is that big research universities are only against institutional diversity insofar as it serves their own interests.

Take global rankings, for instance.  Granted, these can be fairly reductionist affairs.  And yes, they privilege institutions that are big on research.  But where on earth could rankers have come up with the idea that research was what mattered to universities, and that big research = big prestige?  Who peddles that line CONSTANTLY?  Who makes hiring decisions based on research ability?  Who makes distinctions between institutions based on research intensity?  Could it possibly be the academic community itself?  Could it be that universities are not so much victims as culprits here?

(I mean, for God’s sake, UBC itself is a member of “Research Universities Council of BC” – an organization that changed its name just a few years ago so its members would be sure to distinguish themselves from the much more lumpen new [non-research-intensive] universities who caucus in the much less-grandly named BC Association of Institutes & Universities.  Trust me – no rankers made them do that.  They came up with this idea on their own.)

As for the argument that government imposes uniformity through a combination of meddling and one-size-fits-all funding models, it’s a point that’s hard to argue with.  Canadian governments are notorious for the way they incentivize only size and research, and then wonder why every university wants to be larger and more research-intensive.  But frankly, this has traditionally worked in research universities’ favour.  You didn’t hear a lot of U15 Presidents moaning about research monocultures as long as the money was still flowing entirely in their direction.  So while Toope is quite right that forcing everyone into an applied research direction is silly, the emergence of a focus on applied research actually has much greater potential to drive differentiation than your average government policy fad.

So, to echo Toope, yes to diversity, no to “non-lieux”.  But let’s not pretend that the drive to isomorphism comes from anywhere but inside the academy.  We have met the enemy and he is us.

November 07

International Alliances and Research Agreements

In business, companies strive to increase market share; in higher education, institutions compete for prestige.  This is why, despite whatever you’re told by people in universities, rankings are catnip to university administrations: by codifying prestige, they give institutions actual benchmarks against which they can measure themselves.

But prestige is actually much harder to amass than market share.  Markets can increase in size; prestige is a zero-sum affair (my prestige is related directly to your lack thereof).  And universities have fewer tools than businesses to extend their reach.  Mergers are not unheard of – indeed, the pressure of global rankings has been a factor behind a wave of institutional mergers in France, Russia, and Scandinavia – but these tend to be initiated by governments rather than institutions. Hostile take-overs are even less common (though UBC’s acquisition of a campus in the Okanagan shows it’s not impossible).

So, what’s a university to do?  Increasingly, the answer seems to be: “make strategic alliances”.

These tend to come in two forms: multi-institutional alliances (like Universitas 21, the Coimbra Group, and the like), and bilateral institutional deals.  Occasionally, the latter exercise can go as far as ambitious, near-institutional mergers (see the Monash-Warwick alliance, for instance), but it usually consists of much simpler initiatives – MOUs between two institutions, designed to promote co-operation in fairly general terms.  There’s a whole industry around this now – both QS and Thomson Reuters offer services to help institutions identify the most promising research partners.  And signing these MOUs seems to take up an increasing amount of time, effort, and air miles among senior managers.

So it’s fair to ask: do these MOUs make any difference at all to research output?  I have no hard evidence on this, but I suspect that returns are actually pretty meagre.  While inter-institutional co-operation is increasing all the time, for the most part these links are organic; that is, they arise spontaneously from individual researchers coming up with cool ideas for collaboration, rather than from top-down interactions.  While there’s a lot that governments and institutions can do to promote inter-institutional linkages in general, there’s very little that central administrations can do to promote specific linkages that doesn’t quickly become counterproductive.

Having significant international research links is indeed the sign of a good university – the problem is that for managers under pressure to demonstrate results, organic growth isn’t fast enough.  The appeal of all these MOUs is that they give the appearance of rapid progress on internationalization.  But given the time and money expended on these things, some rigour is called for. This is an area where Board members can, and should, hold their administrations to account, and ask for some reasonable cost-benefit analysis.

November 05

Owning the Podium

I’m sure many of you saw Western President Amit Chakma’s op-ed in the National Post last week, suggesting that Canadian universities need more government assistance to reach new heights of excellence, and “own the podium” in global academia.  I’ve been told that Chakma’s op-ed presages a new push by the U-15 for a dedicated set of “excellence funds” which, presumably, would end up mostly in the U-15’s own hands (for what is excellence if not research done by the U-15?).  All I can say is that the argument needs some work.

The piece starts out with scare metrics to show that Canada is “falling behind”.  Australia has just two-thirds our population, yet has seven institutions in the QS top 100, compared to Canada’s five!  Why anyone should care about this specific cut-off (use the top-200 in the QS rankings and Canada beats Australia 9 to 8), or this specific ranking (in the THE rankings, Canada and Australia each have 4 spots), Chakma never makes clear.
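
To make the cut-off point concrete, here is a minimal sketch comparing the counts cited above; the institution counts are the ones quoted in the piece, while the population figures are rough approximations added purely for illustration (they are not in the op-ed).

```python
# Sketch: how the choice of ranking and cut-off changes the Canada-Australia story.
# Institution counts are those cited above; populations (in millions, circa 2013)
# are rough figures added for illustration only.
counts = {
    "QS top 100": {"Canada": 5, "Australia": 7},
    "QS top 200": {"Canada": 9, "Australia": 8},
    "THE":        {"Canada": 4, "Australia": 4},
}
population_m = {"Canada": 35.0, "Australia": 23.0}  # approximate, not from the op-ed

for measure, by_country in counts.items():
    per_10m = {c: round(n / population_m[c] * 10, 2) for c, n in by_country.items()}
    leader = max(per_10m, key=per_10m.get)
    print(f"{measure}: raw {by_country}, per 10M people {per_10m}, leader: {leader}")
```

Depending on which cell of that little table you choose to quote, you can tell more or less whichever story you like – which is rather the point.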

The piece then moves on to make the case that, “other countries such as Germany, Israel, China and India are upping their game” in public funding of research (no mention of the fact that Canada spends more public dollars on higher education and research than any of these countries), which leads us to the astonishing non-sequitur that, “if universities in other jurisdictions are beating us on key academic and research measures, it’s not surprising that Canada is also being out-performed on key economic measures”.

This proposition – that public funding of education is a leading indicator of economic performance – is demonstrably false.  Germany has just about the weakest higher education spending in the OECD, and it’s doing just fine, economically.  The US has about the highest, and it’s still in its worst economic slowdown in over seventy-five years.  Claiming that there is some kind of demonstrable short-term link is the kind of thing that will get universities into trouble.  I mean, someone might just say, “well, Canada has the 4th-highest level of public funding of higher education as a percentage of GDP in the OECD – doesn’t that mean we should be doing better?  And if that’s indeed true, and our economy is so mediocre, doesn’t that give us reason to suspect that maybe our universities aren’t delivering the goods?”

According to Chakma, Canada has arrived at its allegedly-wretched state by virtue of having a funding formula which prioritizes bums-in-seats instead of excellence.  But that’s a tough sell.  Most countries (including oh-so-great Australia) have funding formulae at least as demand-oriented as our own – and most are working with considerably fewer dollars per student as well.  If Australia is in fact “beating” us (a debatable proposition), one might reasonably suspect that it has at least as much to do with management as it does money.

Presumably, though, that’s not a hypothesis the U-15 wants to test.

October 02

A New Study on Postdocs

There’s an interesting study on postdocs out today, from the Canadian Association of Postdoctoral Scholars (CAPS) and MITACS.  The report provides a wealth of data on postdocs’ demographics, financial status, likes, dislikes, etc.  It’s all thoroughly interesting and well worth a read, but I’m going to restrict my comments to just two of the most striking results.

The first has to do, specifically, with postdocs’ legal status.  In Quebec, they are considered students.  Outside Quebec, it depends: if their funding comes from internal university funds, they are usually considered employees; but, if their funding is external, they are most often just “fellowship holders” – an indistinct category which could mean a wide variety of things in terms of access to campus services (are they students?  Employees?  Both?  Neither?).  Just taxonomically, the whole situation’s a bit of a nightmare, and one can certainly see the need for greater clarity and consistency if we ever want to make policy on postdocs above the institutional level.

The second – somewhat jaw-dropping – point of interest is the table on page 27, which examines postdocs’ training.

Level of Training Received or Available, in % (The 2013 Canadian Postdoc Survey, Table 3, pg. 27)

As the authors note, being trainees is what makes postdocs a distinct group – it’s basically the only thing that distinguishes them from research associates.  So what should we infer from the fact that only 18% of postdocs report receiving any formal training for career development, 15% for research ethics, and 11% on either presentation skills or grant/proposal writing?  If there’s a smoking gun on the charge that Canadian universities view postdocs as cheap academic labour, rather than as true academics-in-waiting, this table is it.

All of this information is, of course, important; however, this study’s value goes beyond its presentation of new data.  One of its most important lessons comes from the fact that a couple of organizations just decided to get together and collect data on their own.  Too often in this country, we turn our noses up at anything other than the highest-quality data, but since no one wants to pay for quality (how Canadian is that?), we just wring our hands hoping StatsCan will eventually sort it out for us.

But to hell with that.  StatsCan’s broke, and even when it had money it couldn’t get its most important product (PSIS) to work properly.  It’s time the sector got serious about collecting, packaging, and – most importantly – publishing its own data, even if it’s not StatsCan quality.  This survey’s sample selection, for instance, is a bit on the dodgy side – but who cares?  Some data is better than none.  And too often, “none” is what we have.

CAPS/MITACS have done everyone a solid by spending their own time and money to improve our knowledge base about some key contributors to the country’s research effort.  They deserve to be both commended and widely imitated.

May 22

Bad Arguments for Basic Research

Last week’s announcement that the NRC was “open for business” has, if nothing else, revealed how shockingly weak most of the arguments are in favour of “basic” research.

Opponents of the NRC move have basically taken one of two rhetorical tacks.  The first is to present the switch in NRC mandate as the equivalent of the government abandoning basic science.  This is a bit off, frankly, considering that the government spends billions of dollars on SSHRC, NSERC, CIHR, etc.  Even if you’re passionate about basic research, there are still valid questions to be answered about why we should be paying billions of dollars a year to government departments doing basic research when the granting councils fund universities to ostensibly do the same thing.

The second argument is to say that government shouldn’t support applied science, because: a) it’s corporate welfare, and b) all breakthroughs ultimately rely on basic science, and so we should fund that exclusively.  It seems as though those who take this line have never heard of Germany’s Fraunhofer-Gesellschaft, a publicly funded network of institutes which does nothing but applied research of direct utility to private enterprises.  It’s generally seen as a successful and useful complement to the German government’s investments in basic science through the Max Planck Society, and to my knowledge, Germany has never been accused of being anti-science for creating and funding Fraunhofer.

Another point here: the benefits of “basic” research leak across national borders. Very little of the upstream basic research that drives our economy is Canadian in origin.  So while it’s vitally important that someone, somewhere, puts a lot of money down on risky, non-applied research, individual countries can – and probably should – make some different decisions on basic vs. applied research based on local conditions.

The relative benefit of a marginal dollar investment in applied research vs. basic research depends on the kind of economy a country has, the pattern of firm size, and receptor capacity for research.  It’s not an easy thing to measure accurately – and I’m not suggesting that the current government has based its decision on anything so empirical – but it’s simply not intellectually honest to claim that one is always a better investment than the other.

Opposition to the NRC change is clearly – and probably justifiably – coloured by a more general irritation at a host of this government’s other policies on science and knowledge (Experimental Lakes, long-form census, etc).  But that’s still no excuse for this farrago of flimsy argumentation.  Rational policy-making requires us to engage in something more than juvenile, binary discussions about what kind of research is “best”.

May 08

Fundamental Research

“Scientific discovery is not valuable unless it has commercial value” (John McDougall, NRC president, yesterday).

“Discovery comes from what scientists think is important, not what industry thinks is important.  Fundamental scientific advancement drives innovation, and that is driven by basic research.” (David Robinson, CAUT Associate Executive Director, yesterday).

Some days, the level of discourse in Canadian higher education policy seems to be improving.  Other days, like yesterday, it is full of childish, one-dimensional arguments about the nature of science and research – arguments that the rest of the world outgrew fifteen or twenty years ago – and I just want to weep.

The modern concept of “basic” research was essentially invented by Vannevar Bush in his 1945 report, Science: The Endless Frontier.  In order to press for greater funding of university research, Bush made a sharp distinction between “basic” (or “fundamental”) research, “performed without thought of practical ends” at universities, and “applied” research (something to be left to business and the military) that developed from the former.  To have more of the latter, he conveniently argued, you needed more of the former.

But this neat division was a rhetorical device rather than a meaningful scientific taxonomy.  As Donald Stokes pointed out in his book, Pasteur’s Quadrant, outside of theoretical physics, there really aren’t many fields of science where scientists knock about “without thought of practical ends”.  A great deal of fundamental research either solves very practical problems that industry faces (true of much research in Engineering, Computer Science, and Chemistry), or quite clearly has commercial applications (true of much medical research, for instance).  Discovery, as David Robinson says, does come from “what scientists think is important”, but that raises the question: how do they decide what’s important?  The answer, often, comes from interacting with industry and finding out what companies think is important.  If that weren’t true, frankly, the contribution of university science to economic growth would be a hell of a lot smaller than it is.

As for the notion that scientific discovery is not valuable without a commercial application: man, that’s some strong ganja they’re smoking on Montreal Road.  Is mathematics worthless because you can’t patent an equation?  Was Galileo just some flâneur because he never made a penny off heliocentrism?  How the hell can you tell, a priori, whether something has a commercial application?  I mean, Rutherford wasn’t thinking about multi-billion dollar industries in telecommunications, nuclear power, and quantum computing when he did his gold foil experiments.  Yet all those industries would be non-existent if we still thought that atoms were solid shells.

As a country, our scientific and academic leaders should do better than this.

April 25

The Leiden Rankings 2013

Though it was passed over in silence here in Canada, the new Leiden university research rankings made a bit of a splash elsewhere, last week.  I gave a brief overview of the Leiden rankings last year.  Based on five years’ worth of Web of Science publication and citation data (2008-2012), it is by some distance the best way to compare institutions’ current research output and performance.  The Leiden rankings have always allowed comparisons along a number of dimensions of impact and collaboration; what’s new – and fabulous – this year is that the results can be disaggregated into five broad areas of study (biomedical sciences, life & earth sciences, math & computer science, natural sciences & engineering, and social sciences & humanities).
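
For anyone unfamiliar with what an “impact” indicator of this kind actually computes, here is a toy sketch along the lines of a mean normalized citation score – the sort of field-normalized measure Leiden reports.  The universities, publication data, and field baselines below are invented purely for illustration.

```python
# Toy sketch of a field-normalized citation indicator (in the spirit of a
# mean normalized citation score). All figures below are invented.

# Each publication: (field, citations received)
pubs_by_university = {
    "University A": [("biomed", 40), ("biomed", 5), ("math_cs", 2)],
    "University B": [("math_cs", 8), ("math_cs", 1), ("soc_hum", 3)],
}

# World-average citations per paper in each field (invented baselines);
# normalizing by these stops high-citation fields like biomedicine from
# swamping everything else.
world_average = {"biomed": 20.0, "math_cs": 4.0, "soc_hum": 2.0}

def mean_normalized_citation_score(pubs):
    """Average of (citations / world average for the paper's field) over all papers."""
    ratios = [cites / world_average[field] for field, cites in pubs]
    return sum(ratios) / len(ratios)

for uni, pubs in pubs_by_university.items():
    print(uni, round(mean_normalized_citation_score(pubs), 2))
```

The point of the normalization is that each paper is judged against the citation habits of its own field – which is why a huge raw output in one field doesn’t automatically translate into a high impact score.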

So how did Canadian universities do?

The big news is that the University of Toronto is #2 in the world (Harvard = #1) in terms of publications, thanks mainly to its gargantuan output in biomedical sciences.  But when one starts looking at impact, the story is not quite as good.  American universities come way out in front on impact in all five areas of study – naturally, since they control the journals, and they read and cite each others’ work more often than they do that of foreigners.  The UK is second in all categories (except math & computer science); third place in most fields belongs to the Dutch (seriously – their numbers are stunning), followed by the Germans and Chinese, and then (at a distance) by Canada and Australia.  Overall, if you look at each country’s half-dozen or so best universities, sixth or seventh is probably where we rank as a country, in all sub-fields and overall.

Also of interest is the data on collaboration, and specifically the percentage of publications which have an international co-author.  That Canada ranks low here shouldn’t be a surprise: Europeans tend to dominate this measure because there are so many countries cheek by jowl.  But the more interesting finding is just how messy international collaboration is as a measure of anything.  Sure, there are some good schools with high levels of international collaboration (e.g. Caltech).  But any indicator where the top schools are St. Petersburg State and King Saud University probably isn’t a clear-cut measure of quality.

Among Canadian schools, there aren’t many big surprises.  Toronto, UBC, and McGill are the big three; Alberta does well in terms of volume of publications, but badly in terms of impact; and Victoria and Simon Fraser lead the way on international collaborations.

If you have even the slightest interest in bibliometrics, do go and play around with the customizable data on the Leiden site.  It’s fun, and you’ll probably learn something.

April 05

No to “World-Class” Research in the Humanities

You often hear talk about how Canadian institutions need to do more research.  Better research.  “World-class” research, even.  Research that will prove how smart our professors are, how efficient they are with public resources, and, hence, justify a claim to an even greater share of those resources.

In medicine, the biological sciences, and engineering, this call is easy to understand.  Developments in these areas can – with the right environment for commercialization – lead to new products, which, in turn, have direct economic benefits to Canadians.  In the social sciences, too, it makes sense.  Most social sciences have (or should have) some relevance to public policy; thus, having world-class research in the social sciences can (or should) mean an improvement in that country’s governance, and its ability to promote a strong, healthy, and equitable society.

But what about in the humanities?  Is there a national public interest in promoting world-class research in the humanities?

My answer is no.  For two reasons.

The first is kind of technical.  When it comes to research, “world-class” status tends to get defined by bibliometrics.  In the sciences, scholarly conversations are, by their nature, global, and so a single standard of measurement makes sense.  But in the humanities, an awful lot of the conversations are, quite properly, local.  And so while bibliometric comparisons in the humanities, within a single country (say, between institutions), might say something important about relative scholarly productivity, comparisons between countries are, to a large degree, only measuring the relative importance of different national polities.  A strategy favouring world-class bibliometric scores in History, for instance, would de-emphasize Canadian History and Aboriginal studies, and instead focus on the Roman and British Empires, and the United States.  And that, obviously, would be nuts.

But there’s a bigger issue here: namely, why do we assume that the worth of the humanities has to be judged via research, in the same manner we judge scientific disciplines?  Arguments in defence of the humanities – from people like Martha Nussbaum, Stanley Fish, etc. – stress that their value lies in encouraging students to think critically, to appreciate differences, and to create meaning.  And it’s not immediately obvious how research contributes to that.  Even if you completely buy the argument that “scholarly engagement is necessary to teaching”, can you really claim that an increased research load improves teaching?  Have students started thinking more critically since 3/3 teaching loads were cut to 2/2 in order to accommodate more research?

The real national public interest is in having a humanities faculty that can develop critical thinkers, promote understanding, and foster creativity.  Figuring out how to better support and celebrate those things is a lot more important than finding yet more ways for the humanities to ape the sciences.

January 17

Can’t Get No Satisfaction (Data)

Many of you will have heard by now that the Globe and Mail has decided not to continue its annual student survey, which we at HESA ran for the last three years.  The newspaper will continue publishing the annual Canadian University Report, but will now do so without any quantitative ratings.

Some institutions will probably greet this news with a yawn, but for a number of others, the development represents a real blow.  Several institutions based a large part of their marketing campaigns around the satisfaction data, and the loss of this data source makes it more difficult for them to differentiate themselves.

When the survey started a decade ago, many were skeptical about the relevance of satisfaction data.  But slowly, as schools more or less kept the same scores in each category year after year, people began to realize that satisfaction data was pretty reliable, and might even be indicative of something more interesting.  And as it became apparent that satisfaction scores actually had a reasonably good correlation with things like “student engagement” (basically: a disengaged student is an unhappy student), it also became apparent that “satisfaction” was an indicator which was both simple and meaningful.

Sure, it wasn’t a perfect measure.  In particular, institutional size clearly had a negative correlation with satisfaction.  And there were certainly some extra-educational factors which tended to affect scores, be it students’ own personalities, or even just geography – Toronto students, as we know, are just friggin’ miserable, no matter where they’re enrolled.  But, when read within its proper context (mainly, by restricting comparisons to similarly-sized institutions), it was helpful.
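
As a rough illustration of what reading the data “within its proper context” can look like in practice, here is a sketch that compares satisfaction scores only within size bands rather than across the whole system.  The institutions, enrolments, scores, and band cut-offs below are all invented.

```python
# Sketch: ranking satisfaction within size bands rather than across the whole
# system, since institutional size correlates negatively with satisfaction.
# All institutions, enrolments, and scores below are invented.
institutions = [
    # (name, enrolment, satisfaction score out of 5)
    ("Small U",    4_000, 4.3),
    ("Smallish U", 7_000, 4.1),
    ("Mid U",     18_000, 3.9),
    ("Midtown U", 22_000, 3.7),
    ("Big U",     45_000, 3.5),
    ("Huge U",    60_000, 3.6),
]

def size_band(enrolment):
    """Crude banding; real comparisons would pick cut-offs to suit the system."""
    if enrolment < 10_000:
        return "small"
    if enrolment < 30_000:
        return "medium"
    return "large"

# Group institutions by band, then rank within each band.
bands = {}
for name, enrolment, score in institutions:
    bands.setdefault(size_band(enrolment), []).append((name, score))

for band, members in sorted(bands.items()):
    ranked = sorted(members, key=lambda item: item[1], reverse=True)
    print(band, ranked)
```

Comparing within bands takes out the biggest known confound (size) without pretending the scores are more precise than they really are.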

Still, what made the data valid and useful to institutions was precisely what eventually killed it as a publishable product.  The year-to-year reliability assured institutions that something real was being measured, but it also meant that new results rarely generated any surprises.  Good headlines are hard to come by when the data doesn’t change much, and that poses a problem for a daily newspaper.  The Globe stuck with the experiment for a decade, and good on them for doing so; but in the end, the lack of novelty made continuation a tough sell.

So is this the end of satisfaction ratings?  A number of institutions who use the data have contacted us to say that they’d like the survey to continue.  Over the next week or so, we’ll be in intensive talks with institutions to see if this is possible.  Stay tuned – or, if you’d like to drop us a note with your views, you can do so at info@higheredstrategy.com.

October 26

Research Rankings Reloaded

You’ll recall that a couple of months ago we released a set of research rankings; you may also remember that complaints were raised about a couple of issues in our methodology.  Specifically, critics argued that by including all permanent faculty we had drawn the net too wide, and that we should have excluded part-timers.

Well, we’ve now re-done the analysis, and are releasing the results today as an annex to our original publication, for all to see.  Two key things to highlight about the changes: (i) the effect of excluding part-time professors is more significant in SSHRC disciplines than in NSERC ones, and (ii) the use and function of part-time professors appears to differ systematically along linguistic lines.  Part-time professors at francophone universities are both more numerous than their anglophone counterparts and closer in profile to adjuncts at anglophone institutions (whereas at anglophone institutions, part-timers look reasonably similar to the full-time population).
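
To see mechanically why the exclusion matters most at institutions with large, adjunct-like part-time contingents, here is a minimal sketch with invented figures; the actual rankings rest on several field-normalized metrics, not a single publications-per-professor ratio.

```python
# Sketch: how excluding part-time faculty changes a publications-per-professor
# ratio. All figures are invented; the real rankings use multiple
# field-normalized metrics rather than this single ratio.
institutions = {
    # name: (full-time profs, part-time profs, total publications)
    "Anglo U":  (800, 100, 2700),   # part-timers resemble the full-time population
    "Franco U": (700, 900, 2100),   # large, adjunct-like part-time contingent
}

for name, (full_time, part_time, pubs) in institutions.items():
    per_prof_all = pubs / (full_time + part_time)
    per_prof_ft  = pubs / full_time
    print(f"{name}: pubs per prof (all faculty) = {per_prof_all:.2f}, "
          f"(full-time only) = {per_prof_ft:.2f}")
```

Anglo U barely moves, while Franco U’s per-capita score jumps once the big part-time contingent comes out of the denominator – which is essentially the pattern described below.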

At the top of the table, not a great deal changes – it’s still the same top six in both SSHRC and NSERC totals (though ordinal position does change slightly – McGill edges ahead of UBC into top spot in the SSHRC rankings because of much better performance on research income).  Neither is there much change in the bottom quartile or so.  In the middle ranks, though, where institutions are more tightly bunched together, small changes in scores can cause some pretty big changes in rankings: the University of Manitoba goes from 27th to 17th in the NSERC rankings; UQAM (which has by far the country’s largest contingent of part-time faculty) jumps from 43rd to 17th in the SSHRC ones.  In fact, francophone institutions generally did a lot better with the revisions.  What we had initially assumed was a “language effect” – poor results driven by the fact that publishing in French limits readership and hence citations – may in fact have been driven by employment patterns.

But for most institutions, contrary to the expectations of some of our critics, not much changed.  Even where it did, gains in one field were sometimes offset by losses in the other (Ottawa, for instance, rose five places in Arts but fell two in Science).  Which makes sense: the only way excluding part-timers would change outcomes is if your institution used part-timers in a significantly different way than others do.

Now, you may still dislike our rankings because you don’t like field-normalization, or the specific metrics used, or whatever.  That’s cool: debating metrics is important and we’d love to engage constructively on that.  But let’s bury the canard that somehow an improper frame significantly skewed our results.  It didn’t.  The results are the results.  Deal with ’em.
