HESA

Higher Education Strategy Associates


March 01

Under-managed universities

I have been having some interesting conversations with folks recently about “overwork” in academia.  It is clear to me that a lot of professors are absolutely frazzled.  It is also clear to me that on average professors work hard – not necessarily because The Man is standing over them with a whip but because as a rule academics are professional and driven, and hey, status within academia is competitive and lots of people want to keep up with the Joneses.

But sometimes when I talk to profs – and for context here the ones I speak to most often are ones roughly my own age (mid-career) or younger – what I hear a lot about is work imbalance (i.e. some professors are doing more work than others) or, to put it more bluntly, how much “deadwood” there is in universities (the consensus answer is somewhere between 20-30%).  And therefore, I think it is reasonable to ask the question: to what extent does some people’s “overwork” stem from the fact that some professors aren’t pulling their weight?

This is obviously something of a sticky question, and I had an interesting time discussing it with a number of interlocutors on Twitter last week.  My impression is that opinion roughly divides up into three camps:

1) The Self-Righteous Camp.  “This is ridiculous.  I’ve never heard professors talking like this about each other; we all work hard, and anyway, if anyone is unproductive it’s because they’re dealing with kids or depressed due to the uncaring, neoliberal administration smashing its boot into the face of academia forever…”

2) The Hard Science Camp.  “Well, you know, there are huge differences in workload expectations across the institution – do you know how much work it is to run a lab?  Those humanities profs get away with murder…”

3) The “We’ve Earned It” Camp.  “Hey, look at all the professions where you put in the hours at the start and get to relax later on.  We’re just like that.  Would you want to work the hours of a junior your whole life?  And by the way, older profs just demonstrate productivity on a broader basis than teaching and research alone…”

There is probably something to each of these points of view.  People do have to juggle external priorities with academic ones at some points in their lives; that said, since most of the people who made the remarks about deadwood have young kids themselves, I doubt that explains the phenomenon.  There probably are different work expectations across faculties; that said, in the examples I was using, my interlocutors were talking about people in their own units, so that doesn’t affect my observation much.  Perhaps there are expectations of taking it easier as careers progress, but I never made the argument that deadwood is related to seniority, so the assumption that this was what caused deadwood was… interesting.  So while acknowledging that all of these points may be worthwhile, I still tend to believe that at least part of the solution to overwork is dealing with the problem of work imbalances.

Now, at some universities – mainly ones which have significantly upped their research profile in the last couple of decades – this might genuinely be tough, because the expectations of staff who were hired in the 1970s or 1980s might be very, very different from the expectations of those hired today.  Places like Ryerson or MacEwan are obvious examples, but the same can be true at places like Waterloo, which thought of itself as a mostly undergraduate institution even into the early 1990s.  Simply put, there is a huge generational gap at some universities in how people understand “the job” because they were hired in totally different contexts.

What strikes me about all of this is that neither management nor – interestingly – labour seem to have much interest in measuring workload for the purpose of equalizing it.  Sure, there’s lots of bean counting, especially in the sciences, especially when it comes to research contracts and publications and stuff like that.  But what’s missing is the desire to use that information to adjust individuals’ workloads in order to reach common goals more efficiently.

My impression is that in many departments, “workload management” means, at most, equalizing undergraduate teaching requirements.  Grad supervisions?  Those are all over the place.  “Service”?  Let’s not even pretend that’s well-measured.  Research effort?  Once tenure has been given, it’s largely up to individuals how much they want to do.  The fiercely competitive may take on 40 or 50 hours a week on top of their other duties, others much less.  Department heads – usually elected by professors in the department themselves – have limited incentive and means to get the overachievers to maybe cool it sometimes and the underachievers to up their game.

In short, while it’s fashionable to say that professors are being “micro-managed” by universities, I would argue that on the rather basic task of regulating workload for common good, academics are woefully under-managed.  I’d probably go even further and say most people know they are undermanaged and many wish it could change.  But at the end of the day, academics as a voting mass on Senates and faculty unions consistently seem to prefer undermanagement and “freedom” to management and (perhaps) more work fairness.

I wonder why this is. I also wonder if there is not a gender component to the issue.

What do you think?  Comments welcome.

February 28

The “Not Enough Engineers” Canard

Yesterday I suggested that Ottawa might be as much of the problem in innovation policy as it is the solution.  Today I want to make a much stronger policy claim: that Canada has a uniquely stupid policy discourse on innovation.   And as Exhibit A in this argument I want to present a piece posted over at Policy Options last week.

The article was written by Kaz Nejatian, a former staffer to Jason Kenney and now CEO of a payment technology company (OVERCONFIDENT TECH DUDE KLAXON ALERT).  Basically, the piece suggests that the whole innovation problem is a function of inputs: not enough venture capital and not enough engineers.  Let me take those two pieces separately.

First comes a claim that Canada’s venture capital funding is falling further and further behind the United States.  He quotes a blog post from Wellington Financial saying: “American venture-capital-backed companies raised US$93.37 per capita in 2006, while in Canada we raised US$45.76 per capita. Nearly a decade later, in 2015, US companies had doubled their performance, raising an average of US$186.23 per capita, while Canadian companies had only inched up to US$49.42.”

There are two problems here.  First, these figures are in USD at current exchange rates.  You may remember that 2006 was an extraordinarily good year for the Canadian dollar, and 2015 less so, so this isn’t the best comparison in the world.  Second, they in no way match up with other published data on venture capital as a percentage of GDP.  The reference years are different, but the Conference Board noted that VC funding grew in Canada from 0.06% to 0.1% of GDP between 2009 and 2013, and that Canada now stands second in the world only to the US (the US grew from 0.13% to 0.18%, while all of Europe fell back sharply).  And Richard Florida noted in The Atlantic that in terms of VC funding per capita, Toronto is the only non-American city which cracks the world’s top 20.  I am not sure what to make of these differences; I expect some of it has to do with definitions of venture capital (early-stage vs. late-stage, for example).  But looking at more than one data point throws Nejatian’s hypothesis into doubt.
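Just to make the exchange-rate point concrete, here is a quick back-of-envelope sketch.  The per-capita figures are the Wellington ones quoted above; the CAD/USD annual averages are rough assumptions of mine, so treat the output as illustrative only:

```python
# Sketch: how much of the apparent Canada-US VC gap is an exchange-rate artifact.
# Per-capita figures are the Wellington Financial numbers quoted above (in USD);
# the CAD/USD annual averages are rough assumptions, not official data.
wellington_usd = {"CAN_2006": 45.76, "CAN_2015": 49.42,
                  "USA_2006": 93.37, "USA_2015": 186.23}
usd_per_cad = {"2006": 0.88, "2015": 0.78}  # assumed approximate annual averages

# Restate the Canadian figures in Canadian dollars.
can_2006_cad = wellington_usd["CAN_2006"] / usd_per_cad["2006"]
can_2015_cad = wellington_usd["CAN_2015"] / usd_per_cad["2015"]

print(f"Canada, per-capita VC in CAD: 2006 ~{can_2006_cad:.0f}, 2015 ~{can_2015_cad:.0f}")
print(f"Canadian growth in USD terms: {wellington_usd['CAN_2015'] / wellington_usd['CAN_2006'] - 1:.0%}")
print(f"Canadian growth in CAD terms: {can_2015_cad / can_2006_cad - 1:.0%}")
```

On those assumed rates, Canadian per-capita VC measured in Canadian dollars grew by roughly a fifth over the period rather than the near-stagnation the USD figures imply – not a refutation of the gap, but a reminder that currency and reference-year choices are doing a lot of work in it.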

But the bigger whopper in this article has to do with the claim that Canada does not educate enough engineers.  Now, forget the fact that the number of engineering graduates has very little to do with success in innovation, even if you define innovation as narrowly as Nejatian does (i.e. as tech and nothing else).  His numbers are simply and outrageously wrong.  He claims Canada produced only 12,000 new engineering grads; in fact, the number of undergraduate degrees awarded in Architecture & Engineering in 2014 was 18,000, and that’s excluding math and computer science (another 5,400), not to mention new graduate degrees in both those areas (another 11,700).  He claims the UK produces 3.5 times the number of engineers per capita that Canada does.  It doesn’t; there is a gap, but it’s not very big – 9% of their degrees go to engineers compared to 8% of ours (see figure below).  He repeats the scare claim – demolished long ago by Vivek Wadhwa, among others – that India is going to eat our lunch because it graduates 1.5 million engineers per year.  This argument needs to go back to 2006 where it belongs: only a tiny percentage of those engineers are of the calibre of North American E-school graduates, and one recent Times of India piece suggested that 93% of them were not actually employable (which sounds like an exaggeration but still points to a significant underlying problem).

Figure 1: Science & Engineering Degrees as % of Total Degrees Awarded, Selected OECD Countries


(See what I mean?  The US has the smallest percentage of undergraduate degrees in engineering and yet it leads everyone else in tech.  Apparently that doesn’t matter to Nejatian – all that matters is MOAR ENGINEERS.  I mean, if we increased our proportion of degrees in engineering by about 60%, we could be as innovative as… Italy?)

I could go on, but you get the picture.  This is a terrible argument using catastrophically inaccurate data and yet it gets a place in what is supposed to be our country’s premier publication on public policy.  It’s appalling.  But it fits with the way we talk about innovation in this country.  We focus on inputs rather than processes and relationships.  We see a lack of inputs and immediately try to work out how to increase them rather than asking i) do these inputs actually matter or ii) why are they low in the first place (actually, the only redeeming feature about this article is that it doesn’t make any recommendations, which given the quality of the analysis is really a blessing for all concerned).

Could Canada do with a few more engineers?  Probably.  It’s the one field of study where incomes of new graduates are still rising in real terms, which suggests the demand could support a greater supply.  But the causal link between Engineers and innovation is a vast oversimplification.  If we want better policy in this country, we need to start by improving the quality of the discourse and analysis.  Policy Options has done us all a disservice by letting this piece go out under their name.

February 27

Can Ottawa Do Innovation?

The National Post’s David Akin had a useful article last week entitled Canada Has Failed at Innovation for 100 years: Can The Trudeau Government Change That?  Read it, it’s good.  It’s based around a new-ish Peter Nicholson article in Canadian Public Policy which is unfortunately not available without a subscription.  But Nicholson’s argument appears to be: we’ve done pretty well our entire history as a country copying or importing technology from Americans: what exactly is it that Ottawa is going to do to “shock” us into becoming a massive innovator?

Good question.  But I have a better question: does it make any sense that the federal government is leading on these kinds of policies?  Wouldn’t provinces be better suited to the job?  Knee-jerk centralists (my guess: probably half my subscribers) will find that suggestion pretty horrific.  But hear me out.  There are a number of really good reasons why Ottawa probably isn’t best placed to lead on this file.

First: innovation policy is to a large extent about people and skills.  And skills policy has been fully in the hands of provincial governments for over twenty years now.  We accept that provincial governments are closer to local labour markets and local business for skills purposes.  Surely the same is also true for innovation?

Second: Canada is huge.  We’re not like Singapore or Israel or Taiwan, where industries are essentially homogeneous across the entire country.  We are more like China or the US, where a single industry might look completely different in one part of the country than in another.  If you haven’t already read Run of the Red Queen: Government, Innovation, Globalization and Economic Growth in China by Dan Breznitz and Michael Murphree, I recommend it.  Besides showing how innovation can be profitable even when it is not of the “new product”/”blue sky” variety (a truth to which our current government seems utterly oblivious), it shows how the structure of a single industry (in this case, IT) can be utterly different in different parts of a single country.  That’s also true in Canada.  And it’s why it’s tough to draw up decent national policies on a sectoral level.

(A corollary to that second point, which I made back here: because the country is so big, any government attempt to play the “cluster” game in the name of improved innovation is bound to get wrapped up in regional politics pretty quickly.  Anyone who’s watched Montreal and Toronto’s unseemly jockeying for a single big federal investment in Artificial Intelligence will know what I mean.)

Over the course of the past twenty years, of course, many provinces have set up their own innovation ministries or agencies.  But apart from the partial exceptions of Ontario and Quebec, they tend to be poor cousins of the federal ministry: understaffed and not especially well-resourced.  As a result, they’re not at present any more effective than Ottawa in driving innovation.  But that could change with more effective investment.  And of course, Ottawa would always have a role to play: if nothing else, its authority over competition policy means it will always have levers which it can and should use to promote innovation (even if at present it seems extremely reluctant to use this particular lever).

In short, it’s worth considering the hypothesis that it’s not “Canada” which has failed at innovation, but Ottawa.

February 23

Garbage Data on Sexual Assaults

I am going to do something today which I expect will not put me in good stead with one of my biggest clients.  But the Government of Ontario is considering something unwise and I feel it best to speak up.

As many of you know, the current Liberal government is very concerned about sexual harassment and sexual assault on campus, and has devoted no small amount of time and political capital to getting institutions to adopt new rules and regulations around said issues.  One can doubt the likely effectiveness of such policies, but not the sincerity of the motive behind them.

One of the tools the Government of Ontario wishes to use in this fight is more public disclosure about sexual assault.  I imagine they have been influenced by how the US federal government collects and publishes statistics on campus crime, including statistics on sexual assaults.  If you want to hold institutions accountable for making campuses safer, you want to be able to measure incidents and show change over time, right?

Well, sort of.  This is tricky stuff.

Let’s assume you had perfect data on sexual assaults by campus.  What would that show?  It would depend in part on the definitions used.  Are we counting sexual assaults/harassment which occur on campus?  Or are we talking about sexual assaults/harassment experienced by students?  Those are two completely different figures.  If the purpose of these figures is accountability and giving prospective students the “right to know” (personal safety is, after all, a significant concern for prospective students), how useful is that first number?  To what extent does it make sense for institutions to be held accountable for things which do not occur on their property?

And that’s assuming perfect data, which really doesn’t exist.  The problems multiply exponentially when you decide to rely on sub-standard data.  And according to a recent Request for Proposals placed on the government tenders website MERX, the Government of Ontario is planning to rely on some truly awful data for its future work on this file.

Here’s the scoop: the Ministry of Advanced Education and Skills Development is planning to do two surveys: one in 2018 and one in 2024.  They plan on getting contact lists of emails of every single student in the system – at all 20 public universities, 24 colleges and 417 private institutions – and handing them over to a contractor so it can run a survey.  (This is insane from a privacy perspective – the much safer way to do this is to get institutions to send out an email to students with a link to a survey, so the contractor never sees any names without students’ consent.)  Then they are going to send out an email to all those students – close to 700,000 in total – offering $5 a head to answer a survey.

It’s not clear what Ontario plans to do with this data.  But the fact that they are insistent that *every* student at *every* institution be sent the survey suggests to me that they want the option to be able to analyze and perhaps publish the data from this anonymous voluntary survey on a campus-by-campus basis.

Yes, really.

Now, one might argue: so what?  Pretty much every student survey works this way.  You send out a message to as many students as you can, offer an inducement, and hope for the best in terms of response rate.  Absent institutional follow-up emails, this approach probably gets you a response rate between 10 and 15% (a $5 incentive won’t move that many students).  Serious methodologists grind their teeth over those kinds of low numbers, but increasingly this is the way of the world.  Phone polls don’t get much better than this.  The surveys we used to do for the Globe and Mail’s Canadian University Report were in that range.  The Canadian University Survey Consortium does a bit better than that because of multiple follow-ups and strong institutional engagement.  But hell, even StatsCan is down to a 50% response rate on the National Graduates Survey.

Is there non-response bias?  Sure.  And we have no idea what it is.  No one’s ever checked.  But these surveys are super-reliable even if they’re not completely valid.  Year after year we see stable patterns of responses, and there’s no reason to suspect that the non-response bias differs across institutions.  So if we see differences in satisfaction of ten or fifteen percent from one institution to another, most of us in the field are content to accept that finding.

So why is the Ministry’s approach so crazy when it’s just using the same one as everyone else?  First of all, the stakes are completely different.  It’s one thing to be named an institution with low levels of student satisfaction.  It’s something completely different to be called the sexual assault capital of Ontario.  So accuracy matters a lot more.

Second, the differences between institutions are likely to be tiny.  We have no reason to believe a priori that rates differ much by institutions.  Therefore small biases in response patterns might alter the league table (and let’s be honest, even if Ontario doesn’t publish this as a league table, it will take the Star and the Globe about 30 seconds to turn it into one).  But we have no idea what the response biases might be and the government’s methodology makes no attempt to work that out.

Might people who have been assaulted be more likely to answer than those who have not?  If so, you’re going to get inflated numbers.  Might people have reasons to distort the results?  Might a Men’s Rights group encourage all its members to indicate they’d been assaulted, to show that assault isn’t really a women’s issue?  With low response rates, it wouldn’t take many respondents for that tactic to work.
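To see how little it would take to distort a campus estimate at these response rates, here is a minimal simulation sketch.  The enrolment, true prevalence, response rate and the size of the hypothetical coordinated group are all invented for illustration:

```python
# Sketch: with a ~12% voluntary response rate, a small self-selected or coordinated
# group can move a campus's estimated rate well away from the true one.
# All numbers below are illustrative assumptions, not real data.
import random

random.seed(1)

students = 20_000        # enrolment at a hypothetical campus
true_rate = 0.10         # assumed true prevalence
response_rate = 0.12     # plausible voluntary-survey response rate
organized_yes = 150      # hypothetical coordinated respondents who all answer "yes"

# Ordinary respondents: a random 12% slice of the student body.
ordinary_n = int(students * response_rate)
ordinary_yes = sum(random.random() < true_rate for _ in range(ordinary_n))

estimate_clean = ordinary_yes / ordinary_n
estimate_distorted = (ordinary_yes + organized_yes) / (ordinary_n + organized_yes)

print(f"True rate:                      {true_rate:.1%}")
print(f"Estimate, ordinary respondents: {estimate_clean:.1%} (n={ordinary_n})")
print(f"Estimate with {organized_yes} coordinated 'yes' answers: {estimate_distorted:.1%}")
```

On these made-up numbers, 150 coordinated respondents on a campus of 20,000 push the estimate from roughly the true 10% to around 15%, and nothing in a voluntary, anonymous design would let you detect or correct that.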

The Government is never going to get accurate overall numbers from this approach.  They might, after repeated tries, start to see patterns in the data: sexual assault is more prevalent at institutions in large communities than in small ones, maybe; or it might happen more often to students in certain fields of study than others.  That might be valuable.  But if, the first time the data is published, all that makes the papers is a rank order of places where students are assaulted, we will have absolutely no way to contextualize the data, no way to assess its reliability or validity.

At best, if the data is reported system-wide, it will be weak.  A better alternative would be to go with a smaller random sample and better incentives, so as to obtain higher response rates.  But if it remains a voluntary survey *and* there is some intention to publish on a campus-by-campus basis, then it will be garbage.  And garbage data is a terrible way to support good policy objectives.

Someone – preferably with a better understanding of survey methodology – needs to put a stop to this idea.  Now.

February 22

Notes for the NDP Leadership Race

As contestants start to jump into the federal NDP leadership race, it’s only a matter of time before someone starts promising free tuition to all across the land.  Now, I’m not going to rehash why free tuition is both regressive and undesirable (though if you really want to take a gander through the archives on free tuition, have a look here).  But I do think I can do some public service by talking about federalism and higher education, or rather: what the feds can and cannot do in this sphere.

The entire Canadian constitution is based around a compromise on education dating from 1864.  Upper Canada came to the Quebec conference with one overriding aim: representation by population in Parliament, so that their superior population would give them the most seats in Parliament.  Lower Canada agreed if and only if a second, local, and equal tier of government was created which would have jurisdiction over education and health, because over-their-dead-bodies were a bunch of (mostly) Orangemen going to get their hands on a hallowed set of (mostly) French catholic institutions.

There’s nothing in there that limits Ottawa’s ability to give money to individuals for the purpose of education.  This is why, despite all the sturm und drang, Quebec never put up a legal fight to the Canada Millennium Scholarship Foundation: Ottawa can give cash to whomever it wants, whenever it wants.  But when it comes to dealing with institutions, Ottawa’s ability to direct money to areas of provincial jurisdiction is subject to provincial veto.  The provinces accept (with limits, in Quebec’s case) that the feds can flow money to institutions for the purposes of academic research.  Hence the Canada Foundation for Innovation.  They do not accept that Ottawa can send money to institutions for operating purposes.

(Historical footnote: there was a period where nine out of ten of them were prepared to accept this.  Back in the mid-1950s, there was a ruse in which the federal government handed tens of millions of dollars every year (a lot back then) to Universities Canada – then known as the National Conference of Canadian Universities and Colleges – which would then distribute the money to institutions.  In theory this was a canny work-around to the constitution.  In practice, it stalled because Duplessis blew a gasket and told Quebec universities that if they touched a dime of that money, he’d take it out of their provincial funding.  Pierre Elliott Trudeau then wrote a wonderful article in Cité Libre called “Federal Grants to Universities” explaining why Duplessis was 100% right and St. Laurent was in kookooland, constitutionally speaking.  It’s a great article, read it if you can.  Anyway, this arrangement lasted into the 1960s, when the feds got out of it and moved to per-capita grants instead.  And that door is now shut: there is no going back through it.)

Politically, there is a fantasy shared by some on the political left that the federal government can simply re-acquire policy leadership in the post-secondary field by passing an act of Parliament and adding great wodges of cash to existing transfers… with strings attached.  I’ve previously (here) torn a strip off the idea of a federal Post-Secondary Education Act, but let me focus here specifically on the idea that a generalized fiscal transfer could actually affect tuition fees.  Let’s just imagine how that discussion would go.

Ottawa: we want to give each of you money so that you bring your tuition fees to zero.  Quebec and Newfoundland, your fees are about $3000, so we’ll give you that per student…

Ontario: Our fees are $7500 a student or so.  Fork it over.

Quebec and Newfoundland: Hold it.

I could go on here about the nuances of fiscal federalism, but basically that’s the problem in a nutshell (for my American readers: in some less disastrous timeline, Hillary Clinton is facing exactly this problem as she attempts to implement her free tuition promise for public universities). There are ways the federal government could bribe provinces into lowering tuition.  In fact, something like that actually happened in Nova Scotia as a result of the NDP-Liberal budget deal in the minority Parliament of 2005.  But you wouldn’t necessarily get them to lower by an equal amount, and you definitely wouldn’t get them to go to zero because they have vastly different starting points.

So, here’s the quick heads-up to all prospective New Democrat leadership candidates: even if it wanted to, the Government of Canada has no sensible way to eliminate tuition nationally.  If you do manage to form a government, this will be broken promise #1.  So don’t promise it.  Instead, think about ways to support students which don’t involve tuition.  There is a whole whack of things you could do with student assistance instead.  And the best part is: if you use student aid as a tool instead of tuition, you can channel aid to those who actually need it most.

February 21

Two Studies to Ponder

Sometimes, I read research reports which are fascinating but probably wouldn’t make for an entire blog post (or at least a good one) on their own.  Here are two from the last couple of weeks.

Research vs. Teaching

Much of the rhetoric around universities’ superiority over other educational providers is that their teachers are also at the forefront of research (which is true if you ignore sessionals, but you’d need a biblically-sized mote in your eye to miss them).  But on the other hand, research and teaching present (to some extent at least) rival claims on an academic’s time, so surely if more people “specialized” in either teaching or research, you’d get better productivity overall, right?

Anyone trying to answer this question will come up pretty quickly against the problem of how to measure excellence in teaching.   Research is easy enough: count papers or citations or whatever other kind of bibliometric outcome takes your fancy.  But measuring teaching is hard.  One line of research tries to measure the relationship between research productivity and things like student evaluations and peer ratings.  Meta-analyses show zero correlation between the two: high research output has no relationship with perceived teaching quality.  Another line of research looks at research output versus teaching output in terms of contact hours.  No surprise there: these are in conflict.  The problem with those studies is that the definitions of quality are trivial or open to challenge.  Also, very few studies do very much to control for things like discipline type, institutional type, class size, stage of academic career, etc.

So now along comes a new study by David Figlio and Morton Schapiro of Northwestern University, which has a much cleverer way of identifying good teaching.  They look specifically at professors teaching first-year courses and ask: what is the deviation in the grades each of their students receives in follow-up courses in the same subject?  Additionally, they measure how many students actually go on from each professor’s first-year class to major in the subject.  The first is meant to measure “deep learning” and the second to measure how well professors inspire their students.  Both measures are certainly open to challenge, but they are still probably better than the measures used in earlier studies.
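For readers who like to see the mechanics, here is a minimal sketch of how the two measures could be computed from student records.  The toy data, column names and the simple “deviation from the overall mean” formula are my own reading of the approach, not the authors’ actual code or model:

```python
# Sketch of the two teaching measures described above, on a toy dataset.
# Columns: first-year instructor, the student's grade in the follow-up course
# in the same subject, and whether the student went on to major in the subject.
import pandas as pd

records = pd.DataFrame({
    "instructor":     ["A", "A", "A", "B", "B", "B"],
    "followup_grade": [3.7, 3.3, 3.5, 2.9, 3.1, 2.7],
    "majored":        [1,   0,   1,   0,   0,   1],
})

# Measure 1 ("deep learning"): how each instructor's former students deviate,
# in follow-up courses, from the overall follow-up-course average.
overall_mean = records["followup_grade"].mean()
deep_learning = records.groupby("instructor")["followup_grade"].mean() - overall_mean

# Measure 2 ("inspiration"): share of each instructor's first-year students
# who go on to major in the subject.
inspiration = records.groupby("instructor")["majored"].mean()

print(pd.DataFrame({"deep_learning": deep_learning, "inspiration": inspiration}))
```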

Yet the result is basically the same as those earlier studies: having a better publishing record is uncorrelated with teaching quality measures: that is, some good researchers have good teaching outputs while others don’t.

Institutions should pay attention to this result.  It matters for staffing and tenure policies.  A lot.

Incubator Offsets

Christos Kolympiris of Bath University and Peter Klein of Baylor University have done the math on university incubators and what they’ve found is that there are some interesting opportunity costs associated with them.  The paper is gated, but a summary can be found here.  The main one is that on average, universities see a decrease in both patent quality (as measured by patent citations) and licensing revenues after establishing an incubator.  Intriguingly, the effect is larger at institutions with lower research income, suggesting that the more resources are constrained, the likelier it is that incubator funding is being drawn from other areas of the institutional research effort, which then suffer as a result.

(My guess, FWIW, is that it also has to do with limited management attention span.  At smaller institutions, there are fewer people to do oversight and hence a new initiative takes away managerial focus in addition to money).

This intriguing result is not an argument against university or polytechnic incubators; rather, it’s an argument against viewing such initiatives as purely additive.  The extent to which they take resources away from other parts of the institution needs to be considered as well.  To be honest, that’s probably true of most university initiatives, but as a sector we aren’t hardwired to think that way.

Perhaps we should be.

February 20

Canada’s Rankings Run-up

Canada did quite well out of a couple of university rankings which have come out in the last month or so: the Times Higher Education’s “Most International Universities” ranking, and the QS “Best Student Cities” ranking.  But there’s actually less to this success than meets the eye.  Let me explain.

Let’s start with the THE’s “Most International” ranking.  I have written about this before, saying it does not pass the “fall-down-laughing” test, which is really the only method of testing a ranking’s external validity.  In previous years, the ranking was entirely about which institutions had the most international students, faculty and research collaborations.  Because these kinds of criteria inevitably favour institutions in small countries with big neighbours and disfavour big countries with few neighbours, it was no surprise that places like the University of Luxembourg and Qatar University topped the list, and that the United States struggled to put an institution in the top 100.  In other words, the chosen indicators generated a really superficial standard of “internationalism” that lacked credibility (Times readers were pretty scathing about the “Qatar #1” result).

Now, as a result of this, the Times changed its methodology.  Drastically.  They didn’t make a big deal of doing so (presumably not wishing to draw more attention to the ranking’s earlier superficiality), but basically: i) they added a fourth set of indicators (worth 25% of the total) for international reputation, based on THE’s annual survey of academics; and ii) they excluded any institution which didn’t receive at least 100 votes in said survey (check out Angel Calderon’s critique of the new rules here for more details, if that sort of thing interests you).  That last one is a big one: in practice it means the universe for this ranking is only about 200 institutions.

On the whole, I think the result is a better ranking, one that conforms more closely to what your average academic on the street thinks of as an “international” university.  Not surprisingly, places like Qatar and Luxembourg suddenly vanished from the rankings.  Indeed, as a result of those changes, fully three-quarters of the institutions that were ranked in 2016 disappeared from the rankings in 2017.  And Canadian universities suddenly shot up as a result.  UBC jumped from 40th to 12th, McGill went from 76th to 23rd, Alberta from 110th to 31st, Toronto from 128th to 32nd, and so on.

Cue much horn-tooting on social media from those respective universities for these huge jumps in “internationality”.  But guys, chill.  It’s a methodology change.  You didn’t do that: the THE’s methodologists did.

Now, over to the second set of rankings, the QS “Best Student Cities”, the methodology for which is here.  The ranking is comprised of 22 indicators spread over six areas: university quality (i.e. how highly ranked, according to QS, are the institutions in that city); “student mix”, which is a composite of total student numbers, international student numbers and some kind of national tolerance index; “desirability”, which is a mix of data about pollution, safety, livability (some index made up by the Economist), corruption (again, a piece of national-level data) and students’ own ratings of the city (QS surveys students on various things); “employer activity”, which is mostly based on an international survey of employers about institutional quality; “affordability”; and “student view” (again, from QS’ own proprietary survey data).

Again, Montreal coming #1 is partly the result of a methodology change.  This is the first year QS added student views to the mix, and Montreal does quite well on that front; eliminate those scores and Montreal comes third.  And while the inclusion of student views in any ranking is to be applauded, you have to wonder about the sample size.  QS says it gets 18,000 responses globally… Canada represents about 1% of the world’s students and Montreal institutions represent 10-15% of Canadian students, so if the responses are evenly distributed, that means there might be 20 responses from Montreal in the sample (there are probably more than that because responses won’t be evenly distributed, but my point is we’re talking small numbers).  So I have my doubts about the stability of that score.  Ditto on the employer ratings, where Montreal somehow comes top among Canadian cities, which I am sure is news to most Canadians.  After all, where Montreal really wins big is on things like “livability” and “affordability”, which is another way of saying the city’s not in especially great shape economically.
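For what it’s worth, the back-of-envelope behind that “maybe 20 responses from Montreal” figure looks something like this (the shares are the rough ones stated above):

```python
# Back-of-envelope for the likely Montreal sample, using the rough shares cited above.
global_responses = 18_000   # QS's stated global response count
canada_share = 0.01         # Canada ~1% of the world's students (rough)
montreal_share = 0.125      # Montreal ~10-15% of Canadian students (midpoint)

montreal_responses = global_responses * canada_share * montreal_share
print(f"Expected Montreal responses if evenly distributed: ~{montreal_responses:.0f}")  # about 22
```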

So, yeah, some good rankings headlines for Canada: but let’s understand that nearly all of it stems from methodology changes.  And what methodologists give, they can take away.

February 17

Four Mega-trends in International Higher Education – Economics

If there’s one word everyone can agree upon when talking about international education, it’s “expensive”.  Moving across borders to go to school isn’t cheap, so it’s no surprise that international education really got big after large developing countries (mainly but not exclusively China and India) started getting rich in the early 2000s.

How rich did these countries get?  Well, for a while, they got very rich indeed.  Figure 1 shows per capita income for twelve significant student-exporting countries, in current US dollars, from 1999 to 2011, with 1999 as the base year.  Why current dollars instead of PPP?  Normally, PPP is the right measure, but this case is different because the goods we’re looking at are themselves priced in foreign currencies.  Not necessarily USD, true – but we could run the same experiment with euros and we’d see something largely similar, at least from about 2004 onwards.  As a result, figure 1 captures both changes in underlying GDP and changes in exchange rates.
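For clarity about what the index in the two figures below is doing, here is a minimal sketch of the calculation.  The data points are invented placeholders; the point is only the mechanics of converting local-currency GDP per capita at each year’s exchange rate and then indexing the base year to 100:

```python
# Sketch: index of per-capita GDP in current US dollars, base year = 100.
# Local-currency values and exchange rates below are invented placeholders.
def usd_index(gdp_per_capita_lcu: dict, usd_per_lcu: dict, base_year: int) -> dict:
    """Convert local-currency GDP per capita to current USD, then index to base_year = 100."""
    in_usd = {yr: gdp_per_capita_lcu[yr] * usd_per_lcu[yr] for yr in gdp_per_capita_lcu}
    base = in_usd[base_year]
    return {yr: 100 * val / base for yr, val in in_usd.items()}

# Hypothetical country: GDP per capita in local currency units, and USD exchange rates.
gdp_lcu = {1999: 40_000, 2011: 90_000}
fx = {1999: 0.020, 2011: 0.033}             # USD per unit of local currency

print(usd_index(gdp_lcu, fx, base_year=1999))   # {1999: 100.0, 2011: 371.25}
```

On those placeholder numbers, domestic growth (2.25x) and currency appreciation (1.65x) multiply into the 3.7x jump the index reports, which is exactly why current-dollar series flatter countries whose currencies strengthened and punish those whose currencies fell.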

Figure 1: Per Capita GDP, Selected Student Exporting Countries, 1999-2011 (1999=100), in current USD


And what we see in figure 1 is that every country saw per capita GDP rise in USD terms, at least to some degree.  The growth was smallest in Mexico (70% over 12 years) and Egypt (108%).  But in the so-called “BRIC” countries, the growth was substantially bigger – 251% in Brazil, 450% in India, 626% in China, and a whopping 1030% in Russia (and yes, that’s from an artificially low base for Russia in 1999, ravaged as it was by the painful transition to a market economy and the 1998 wave of bank failures, but if you want to know why Putin is popular in Russia, look no further).  Without this massive increase in purchasing power, the recent flood of international students would not have been possible.

But… but but but.  That graph ends in 2011, which was the last good year as far as most developing countries are concerned.  After that, the gradual end of the commodity super-cycle turned the terms of trade substantially against most of these countries, and in some of them local disasters (e.g. shake-outs of financial excess after the good years, sanctions, etc.) caused GDP growth to stall and exchange rates to fall.  The result?  Check out figure 2.  Of the 10 countries in our sample, only three are unambiguously better off in USD terms now than they were in 2011: Egypt, Vietnam, and (praise Jesus) China.  Everybody else is worse off, or (in Nigeria’s case) will be once the 2016 data come in.

Figure 2: Per Capita GDP, Selected Student Exporting Countries, 2011-2015 (2011=100), in current USD


Now, it’s important not to over-interpret this chart.  We know that many of these countries have been able to maintain outbound student flows all the same.  Yes, reduced affordability makes it harder for students to study abroad – but we also know that global mobility has continued to increase even as many countries have hit the rough economically (caveat: a lot of that is because of continued economic resilience in China, which has yet to hit the rough).  Part of the reason is that if a student wants to study abroad and can’t make it to the US, he or she won’t necessarily give up on the idea of going to a foreign university or college: they might just try to find a cheaper alternative.  That benefits places whose currencies have been pummelled by the USD in the last few years – places like Canada, Australia and even Russia.

In short: economics matters in international higher education, and economic headwinds in much of the world are making studying abroad a more challenging prospect than it was five years ago.  But big swings in exchange rates can also open up opportunities for new providers.

February 16

How to Fund (3)

You all may remember that in early 2015, the province of Ontario announced it was going to review its university funding formula.  There was no particular urgency to do so, and many were puzzled as to “why now?”  The answer, we were told, was that the Liberal government thought it could make improvements to the system by changing the funding structure.  Specifically, their consultation document said they thought they could use a new formula to i) improve quality and the student experience, ii) support differentiation, iii) enhance sustainability, and iv) increase transparency and accountability.

Within the group of maybe 100 people who genuinely understand this stuff, I think the scoffing over points iii) and iv) was audible as far away as the Maritimes.  Transparency and accountability are nice, but you don’t need a new funding formula to get them.  The Government of Ontario can compel institutions to provide data any time it wants (and often does).  If institutions are “insufficiently transparent,” it means government isn’t asking for the right data.

As for enhancing sustainability?  HA!  At a system level, sustainability means keeping costs and income in some kind of balance.  Once it became clear that there was no extra government money on the table for this exercise, that tuition fees were off the table, and that the formula would not be used in any way to rein in staff salaries or pensions (as I suggested back here), everybody said “ok, guess nothing’s happening on that front” (we were wrong, as it turned out, as we’ll see in a second).  But the bit about quality, student experience and differentiation got people’s attention.  That sounded like incentivizing certain things.  Output-like things, which would need to be measured and quantified.  So the government was clearly entertaining the idea of some output-based measures, even as late as December 2015, when the report on the consultation went out (see that report here).  Indeed, the number one recommendation was, essentially, that “the ministry should apply an outcomes-based lens to all of its investments.”

One year later, the Deputy Minister for Advanced Education sent out a note to all institutions which included the following passage:

 The funding formulas are meant to support stability in funding at a time when the sector is facing demographic challenges while strengthening government’s stewardship role in the sector. The formulas also look to create accountable outcomes, beyond enrollment, that reflect the Strategic Mandate Agreements (SMAs) of each institution.

 As you know, our goal is to focus our sector on high-quality student outcomes and away from a focus on growth. As such, the funding formula models are corridors which give protection on the downside and do not automatically commit funds for growth on the upside.

Some of that may require translation, but the key point does not: all of a sudden, funding formulas were not about applying an outcomes-based lens to investment; they were about “stability”.  Outcomes, yes, but only as they apply to each institution’s SMA, and no one I know in the sector thinks that the funding envelope devoted to promoting SMAs is going to be over five percent.  Which, given that tuition is over 50% of income, means that at best we’re looking at about 2% of total funding being outcome-based.  As I’ve said before, this is not even vaguely enough to affect institutional behaviour.
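The arithmetic behind that “about 2%” is simple enough to sketch; the shares below are the rough ones cited above, not exact budget figures:

```python
# Back-of-envelope: share of total institutional income that ends up outcome-based
# if at most ~5% of the government grant is tied to SMAs and tuition is over half of income.
# Both shares are rough assumptions taken from the text above.
grant_share_of_income = 0.45   # grants are under half of income once tuition (>50%) is counted
sma_share_of_grant = 0.05      # outside estimate of the SMA-tied envelope

outcome_based_share = grant_share_of_income * sma_share_of_grant
print(f"Outcome-based share of total funding: roughly {outcome_based_share:.1%}")
```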

What happened?  My guess is it’s a mix of four things.  First, there was a change of both Minister and Deputy Minister, and that’s always a crap shoot.  Priorities change, sometimes radically.  Second, the university sector made its displeasure known.  They didn’t do it very publicly, and I have no insider knowledge of what kind of lobbying occurred, but clearly a number of people argued very strenuously that this was a Bad Idea.  One that gored oxen.  Very Bad.  Third, it finally dawned on some people at the top of the food chain that a funding formula change, in the absence of any new revenue tools, meant some institutions would win and others would lose.  And as the provincial government’s recent 180 on Toronto toll roads has shown, this low-in-the-polls government is prepared to go a long way to avoid making any new “losers”.

Finally, that “sustainability” thing came back in a new form.  But now it was no longer about making the system sustainable, but about finding ways to make sure that a few specific small institutions with precarious finances (mostly but not exclusively in northern Ontario) didn’t lose out as adverse demographics and falling student numbers began to eat into their incomes.  Hence the language about corridors “giving protection on the downside”.  It’s ridiculous for three reasons.  One, it’s a half-solution because institutions vulnerable to demographic decline lose at least as much from lost tuition revenue as they do in lost government grant.  Two, it’s a departing horse/open barn door issue: the bulk of the demographic shift has already happened and so to some extent previous losses are just going to be locked in.  Three – and this is most important – the vulnerable institutions make up maybe 8% of total enrolments.  Building an entire new funding system just to solve a problem that affects 8% of students is…I don’t know.  I’m kind of lost for words.  But I bet if you looked it up in the dictionary it would be under “ass backwards”.

And that, my friends, is how Ontario blew a perfectly good chance to introduce a sensible, modern performance-based funding system.  A real shame.  Perhaps others can learn from it.

February 15

How to Fund (2)

As I noted yesterday, in Canada we have some kind of phobia about output-based funding.  In the 1990s, Ontario and Alberta introduced, and then later killed, key performance indicators with funding attached.  Quebec used to pay some money out to institutions based on the number of degrees awarded, not just students enrolled, but they killed that a few years ago too (I’m sure the rumour that it did so because McGill did particularly well on that metric is totally unfounded).

Now, there is no doubt that the history of performance indicators in Canada hasn’t been great.  Those Ontario performance indicators from the 1990s?  They were cockamamie and deserved to die (student loan defaults as a performance measure?  Really?  When defaults are more obviously correlated with program of study, geographic location, and the business cycle?).  But even sensible measures like student completion rates get criticized by the usual suspects (hi OCUFA!), and so governments who even think about basing funding on outputs rather than inputs have to steel themselves against being accused of making institutions “compete” for funding, of creating “winners and losers,” of “neoliberalism,” yadda yadda.  You know the story.

Yet output based funding is not some kind of extremist idea.  Leave aside the nasty United States, where two-thirds of states have some kind of performance-based funding, all of which one way or another are based on student progress and completion.  Let’s look to wonderful, humane Europe, home to all ideas that are progressive and inclusive in higher education.  How do they deal with output-based funding formulae?

Let’s start with Denmark and England, both of which essentially offer 100% of their teaching-related funding on an output basis (these are both countries where institutions are funded separately for research and teaching), because although their formulas are essentially enrolment-weighted ones like Ontario’s and Quebec’s, they only fund courses which students successfully finish.  (Denmark also has another slice of teaching funding which is based on “on-time” student completion).  Students don’t finish, the institution doesn’t get paid.  Period.
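To make those mechanics concrete, here is a minimal sketch of the difference between paying on enrolments and paying only on completed courses.  The courses, counts and per-course rates are invented for illustration, not actual Danish or English funding parameters:

```python
# Sketch: enrolment-based vs completion-based teaching grants for one institution.
# The structure mirrors the "only fund courses students successfully finish" rule above;
# all numbers are invented.
courses = [
    # (course, enrolled, completed, funding rate per course in $)
    ("Intro Economics",    500, 410,  900),
    ("Organic Chemistry",  300, 240, 1400),
    ("Engineering Design", 200, 185, 1800),
]

enrolment_based  = sum(enrolled  * rate for _, enrolled, _, rate in courses)
completion_based = sum(completed * rate for _, _, completed, rate in courses)

print(f"Paid on enrolments:  ${enrolment_based:,}")
print(f"Paid on completions: ${completion_based:,}")
print(f"Revenue at risk from non-completion: ${enrolment_based - completion_based:,}")
```

Under the completion rule, non-completers are worth nothing to the institution, which is precisely the incentive the Danish and English formulas are built around.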

Roughly two-thirds of higher education funding in Finland – yes, vicious neo-liberal Finland – is output-based.  A little more than half of that comes from the student side, based on credit progression, degree completions and the number of employed graduates.  On the research side, output-based funding is based on number of doctorates awarded, publications, and the outcome of research competitions.  It’s a similar situation in the Netherlands where over half the teaching component of funding comes from the number of undergraduate and master’s degrees awarded, while well over half the research funding comes from doctorates awarded plus various metrics of research performance.

All throughout Europe we see similar stories, though few have quite as much funding at risk on performance measures as the four above.  Norway and Italy both have performance-based components (mostly based on degree completions) of their systems which involve 15-25% of total funding.  France provides five percent of its institutional funding based on the number of master’s and bachelor’s degree completions (the latter adjusted in a very sensible way for the quality of the institutions’ students’ baccalauréat results).  Think about that for a moment.  This is France, for God’s sake, a country whose public service laughs at the concept of value for money and in which a major-party Presidential candidate can advocate for a 32-hour week and not be treated as an absolute loon.  Yet they think some output-oriented funding is just fine.

I could go on: all German Länder have at least some performance-based funding, both for student completions and research output, though the structure of these incentives varies significantly.  The Czech Republic, Slovenia, and Flemish Belgium also all have performance-based systems (mainly for student completions).  New Zealand provides 5% of total institutional funding based on a variety of success/completion measures (the exact measures vary a bit, properly, depending on the type of institution).  Finally, Austria and Estonia have mission-based funding systems, but in both cases measures of research performance and student completion form part of their reporting systems.

You get the picture.  Output-based funding is common.  It’s not revolutionary.  It’s been used in many countries without much fuss.  Have there been transition teething troubles?  Yes there have (particularly in Estonia); but with a little foresight and planning those can be mitigated.

And why have they all adopted this kind of funding?  Because funding is an essential tool in steering the system.  Governments can use output-based funding to purchase institutions’ attention and get them to focus on key outcomes.  If, on the other hand, they simply hand over money based on the number of students institutions enroll, then what gets incentivized are larger institutions, not better institutions.

Ontario, with its recent formula review, had a golden opportunity to introduce some of these principles to Canada.  It failed to do so.  I’ll explain why tomorrow.
