HESA

Higher Education Strategy Associates


June 06

Making “Applied” Research Great Again

One of the rallying cries of part of the scientific community over the past few years has been that under the Harper government there was too much focus on “applied” research and not enough on “pure”/”basic”/”fundamental” research.  This call reached a fever pitch following the publication of the Naylor Report (which, to its credit, did not get into a basic/applied debate and focussed instead on whether or not the research was “investigator-driven”, which is a different and better distinction).  The problem is that the line between “pure/basic/fundamental” research and applied research isn’t nearly as clear-cut as people believe, and the rush away from applied research risks throwing out some rather important babies along with the bathwater.

As long-time readers will know, I’m not a huge fan of a binary divide between basic and applied research.  The idea of “Basic Science” is a convenient distinction created by natural scientists in the aftermath of WWII as a way to convince the government to give them money the way it did during the war, but without soldiers looking over their shoulders.  In some fields (medicine, engineering), nearly all research is “applied” in the sense that there are always considerations of the end-use for the research.

This is probably a good time for a refresher on Pasteur’s Quadrant.  This concept was developed by Donald Stokes, a political scientist at Princeton, just before his death in 1997.  He too thought the basic/applied dichotomy was pretty dumb, so like all good social scientists he came up with a 2×2 instead.  One consideration in classifying science is whether or not it involves a quest for fundamental understanding; the other is whether or not the researcher has any consideration for end-use.  And so what you get is the following:

Pasteur’s Quadrant (after Stokes):

                                      Consideration of end-use?
                                      No                           Yes
  Quest for fundamental      Yes      Pure basic research          Use-inspired basic
  understanding?                      (Bohr)                       research (Pasteur)
                             No                                    Pure applied research
                                                                   (Edison)

(I’d argue that to some extent you could replace “Bohr” with “Physics” and “Pasteur” with “Medicine” because it’s the nature of the fields of research and not individual researchers’ proclivities, per se, but let’s not quibble).

Now, what was mostly annoying about the Harper years – and to some extent the Martin and late Chrétien years – was not so much that the federal government was moving money from the “fundamental understanding” row into the “no fundamental understanding” row (although the way some people go on, you’d be forgiven for thinking that), but rather that it was trying to make research fit into more than one quadrant at once.  Sure, they’d say, we’d love to fund all your (top-left quadrant) Drosophila research, but can you make sure to include something about its eventual (bottom-right quadrant) commercial applications?  This attempt to make research more “applied” is and was nonsense, and Naylor was right to (mostly) call for an end to it.

But that is not the same thing as saying we shouldn’t fund anything in the bottom-right corner – that is, “applied research”.

And this is where the taxonomy of “applied research” gets tricky.  Some people – including apparently the entire Innovation Ministry, if the last budget is any indication – think that the way to bolster that quadrant is to leave everything to the private sector, preferably in sexy areas like ICT, Clean Tech and whatnot.  And there’s a case to be made for that: business is close to the customer, let them do the pure applied research.

But there’s also a case to be made that in a country where the commercial sector has few big champions and a lot of SMEs, the private sector is always likely to have some structural difficulties doing the pure applied research on its own.  It’s not simply a question of subsidies: it’s a question of scale and talent.  And that’s where applied research as conducted in Canada’s colleges and polytechnics comes in.  They help keep smaller Canadian companies – the kinds that aren’t going to get included in any “supercluster” initiative – competitive.  You’d think this kind of research should be of interest to a self-proclaimed innovation government.  Yet whether by design or indifference we’ve heard nary a word about this kind of research in the last 20 months (apart perhaps from a renewal of the Community and College Social Innovation Fund).

There’s no reason for this.  There is – if rumours of a cabinet submission to respond to the Naylor report are true – no shortage of money for “fundamental”, or “investigator-driven” research.  Why not pure applied research too?  Other than the fact that “applied research” – a completely different type of “applied research”, mind you – has become a dirty word?

This is a policy failure unfolding in slow motion.  There’s still time to stop it, if we can all distinguish between different types of “applied research”.

May 08

Naylor Report, Part II

Morning all.  Sorry about the service interruption.  Nice to be back.

So, I promised you some more thoughts about the Fundamental Science Review.  Now that I’ve had a lot of time to think about it, I’m actually surprised by what it says, what it doesn’t say, and how many questions remain open.

What’s best about the report?  The history and most of the analysis are pretty good.  I think a few specific recommendations (if adopted) might actually be a pretty big deal – in particular the one saying that the granting councils should stop any programs forcing researchers to come up with matching funding, mainly because it’s a waste of everyone’s time.

What’s so-so about it?  The money stuff, for a start.  As I noted in my last blog post, I don’t really think you can justify a claim to more money based on the proportion of higher education research investment coming from the federal government.  I’m more sympathetic to the argument that there need to be more funds, especially for early-career researchers, but as noted back here, it’s hard to argue simultaneously that institutions should have unfettered rights to hire researchers but that the federal government should pick up responsibility for their career progression.

The report doesn’t even bother, really, to make the case that more money on basic research means more innovation and economic growth.  Rather, it simply states it, as if it were a fact (it’s not).  This is the research community trying to annex the term “innovation” rather than co-exist with it.  Maybe that works in today’s political environment; I’m not sure it improves overall policy-making.  In some ways, I think it would have been preferable to just say: we need so many millions because that’s what it takes to do the kind of first-class science we’re capable of.  It might not have been politic, but it would have had the advantage of clarity.

…and the Governance stuff?  The report backs two big changes in governance.  One is a Four Agency Co-ordinating Board for the three councils plus the Canada Foundation for Innovation (which we might as well now call the fourth council, provided it gets an annual budget as recommended here), to ensure greater cross-council coherence in policy and programs.  The second is the creation of a National Advisory Committee on Research and Innovation (NACRI) to replace the current Science, Technology and Innovation Council and do a great deal else besides.

The Co-ordinating committee idea makes sense: there are some areas where there would be clear benefits to greater policy coherence.  But setting up a forum to reconcile interests is not the same thing as actually bridging differences.  There are reasons – not very good ones, perhaps, but reasons nonetheless – why councils don’t spontaneously co-ordinate their actions; setting up a committee is a step towards getting them to do so, but success in this endeavour requires sustained good will which will not necessarily be forthcoming.

NACRI is a different story.  Two points here.  The first is that it is pretty clear that NACRI is designed to try to insulate the councils and the investigator-driven research they fund from politicians’ bright ideas about how to run scientific research.  Inshallah, but if politicians want to meddle – and the last two decades seem to show they want to do it a lot – then they’re going to meddle, NACRI or no.  Second, NACRI as designed here is somewhat heavier on the “R” than on the “I”.  My impression is that, as with some of the funding arguments, this is an attempt to hijack the Innovation agenda in Research’s favour.  I think a lot of people are OK with this because they’d prefer the emphasis to be on science and research rather than innovation, but I’m not sure we’re doing long-term policy-making in the area any favours by not being explicit about this rationale.

What’s missing?  The report somewhat surprisingly punted on what I expected to be a major issue: namely, the government’s increasing tendency over time to fund science outside the framework of the councils, in programs such as the Canada Excellence Research Chairs (CERC) and the Canada First Research Excellence Fund (CFREF).  While the text of the report makes clear the authors have some reservations about these programs, the recommendations are limited to a “you should review that, sometime soon”.  This is too bad, because phasing out these kinds of programs would be an obvious way to pay for increased investigator-driven funding (though as Nassif Ghoussoub points out here, it’s not necessarily a quick solution, because funds are already committed several years in advance).  The report therefore seems to suggest that though it deplores past trends away from investigator-driven funding, it doesn’t want to see these recent initiatives defunded, which might be seen in government as “having your cake and eating it too”.

What will the long-term impact of the report be?  Hard to say: much depends on how much of this the government actually takes up, and it will be some months before we know that.  But I think the way the report was commissioned may have some unintended adverse consequences.  Specifically, I think the fact that this review was set up in such a way as to exclude consideration of applied research – while perfectly understandable – is going to contribute to the latter being something of a political orphan for the foreseeable future.  Similarly, while the fact that the report was done in isolation from the broader development of Innovation policy might seem like a blessing given the general ham-fistedness surrounding the Innovation file, in the end I wonder whether the result won’t be an effective division of policy, with research being something the feds pay universities to do and innovation something they pay firms to do.  That’s basically the right division, of course, but what goes missing are vital questions about how to make the two mutually reinforcing.

Bottom line: it’s a good report.  But even if the government fully embraces the recommendations, there are still years of messy but important work ahead.

April 18

Naylor Report, Take 1

People are asking why I haven’t talked about the Naylor Report (aka the Review of Fundamental Science) yet.  The answer, briefly, is: i) I’m swamped; ii) there’s a lot to talk about in there; and iii) I wanted some time to think it over.  But I did have some thoughts about chapter 3, where I think there is either an inadvertent error or the authors are trying to pull a fast one (and if it’s the latter, I apologize for narking on them).  So I thought I would start there.

The main message of chapter 3 is that the government of Canada is not spending enough on inquiry-driven research in universities (this was not, incidentally, a question the Government of Canada asked of the review panel, but the panel answered it anyway).  One of the ways the panel argues this point is that while Canada has among the world’s highest levels of Research and Development spending in the higher education sector – known as HERD if you’re in the R&D policy nerdocracy – most of the money for this comes from higher education institutions themselves, not the federal government.  This, they say, is internationally anomalous, and a reason why the federal government should spend more money.

Here’s the graph they use to make this point:

[Graph from the report, not reproduced: HERD by source of funds across countries.]

Hmm.  Hmmmmm.

So, there are really two problems here.  The first is that HERD can be calculated differently in different countries for completely rational reasons.  Let me give you the example of Canada vs. the US.  In Canada, the higher education contribution to HERD is composed of two things: i) aggregate faculty salaries times the proportion of time profs spend on research (Statscan occasionally does surveys on this – I’ll come back to it in a moment), plus ii) some imputation for unrecovered research overhead.  In the US, it’s just the latter.  Why?  Because of the way the US collects data on HERD, the only faculty costs it captures are the chunks taken out of federal research grants.  Remember, in the US, profs are only paid nine months per year and, at least in the R&D accounts, that’s *all* teaching.  Only the pieces of research grants they take out as summer salary get recorded as R&D expenditure (and hence as a government-sponsored cost rather than a higher education-sponsored one).

But there’s a bigger issue here.  If one wants to argue that what matters is the ratio of the federal portion of HERD to the higher-education portion, then it’s worth remembering what’s going on in the denominator.  Aggregate salaries are the first component.  The second component is research intensity, as measured through surveys, and this appears to be going up over time.  In 2000, Statscan did a survey which seemed to show the average prof spending somewhere between 30-35% of their time on research; a more recent survey shows that this has risen to 42%.  I am not sure whether this latest coefficient has been factored into the most recent HERD data, but when it is, it will show a major jump in higher education “spending” (or “investment”, if you prefer) on research, despite nothing really having changed at all (possibly it already has been, and that is what explains the bump in expenditures in 2012-13).
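To see why that coefficient matters so much, here is a minimal sketch of the two-component method described above.  The structure comes from the text; the dollar figures and the size of the overhead imputation are invented for illustration.

```python
# Sketch of the higher-ed-sponsored HERD component described above:
# aggregate faculty salaries x research-time share + imputed overhead.
# All dollar figures are invented; only the structure is from the post.

def he_sponsored_herd(aggregate_salaries, research_share, imputed_overhead):
    """Higher-education-sponsored HERD under the two-component method."""
    return aggregate_salaries * research_share + imputed_overhead

salaries = 10e9  # hypothetical aggregate faculty salaries ($)
overhead = 2e9   # hypothetical unrecovered-overhead imputation ($)

old = he_sponsored_herd(salaries, 0.33, overhead)  # ~2000 survey coefficient
new = he_sponsored_herd(salaries, 0.42, overhead)  # more recent survey: 42%

print(f"old: ${old / 1e9:.1f}B, new: ${new / 1e9:.1f}B, jump: {new / old - 1:.0%}")
# old: $5.3B, new: $6.2B, jump: 17% -- with nothing real having changed
```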

What the panel ends up arguing for is federal funding that runs more closely in tune with higher education’s own “spending”.  But in practice what this means is: every time profs get a raise, federal funding would have to rise to keep pace.  Every time profs decide – for whatever reason – to spend more time on research, federal funds would have to rise to keep pace.  And no doubt that would be awesome for all concerned, but come on.  Treasury Board would have conniptions if someone tried to sell that as a funding mechanism.

None of which is to say federal funding of inquiry-driven research shouldn’t rise.  Just that using data on university-funded HERD might not be a super-solid base from which to argue that point.

July 25

The low-wage graduate problem

The week before last, the Canadian Centre for the Study of Living Standards (CSLS) put out a report (available here) on trends in low-paid employment in Canada from 1997 to 2014 (meaning full-time jobs occupied by 20-64 year olds where the hourly earnings are less than two-thirds of the national median).  It’s an interesting and not particularly sensationalist report based on Labour Force Survey public-use microdata; however, one little factoid has sent many people into a tizzy.  Apparently, the percentage of people with Master’s degrees or PhDs who are in low-wage jobs jumped from 7.7% to 12.4%.  This has led to a lot of commentary about over-education, yadda yadda, from the Globe and Mail, the CBC, and so on.

This freak-out is a bit overdone. I won’t argue that the study is good news, but I think there are some things going on underneath the numbers which aren’t given enough of an airing in the media.

First of all, as CSLS explains in great detail, the two important findings are, first, that the incidence of low-wage work in the economy has stayed more or less stable, and second, that Canadians on the whole are a lot more educated than they used to be.  This leads to a compositional paradox: even though all seven levels of education saw increases in the incidence of low wages (see Figure below), the overall fraction of Canadians with low-wage jobs dropped ever-so-slightly, from 27.9% in 1997 to 27.6% in 2014.

[Figure, not reproduced: incidence of low-wage work by education level, 1997 vs. 2014.]
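For anyone puzzled by how both things can be true at once, here is a toy version of that compositional arithmetic.  The workforce shares and incidence rates below are invented; only the overall 27.9%/27.6% figures above come from the report.

```python
# Toy illustration of the compositional paradox: incidence rises within
# every education group, yet the overall rate falls because the workforce
# shifts toward better-educated, lower-incidence groups. Numbers invented.

y1997 = [(0.60, 0.35), (0.40, 0.17)]  # (workforce share, low-wage incidence)
y2014 = [(0.40, 0.38), (0.60, 0.20)]  # incidence is up in BOTH groups

def overall_rate(groups):
    return sum(share * rate for share, rate in groups)

print(f"1997: {overall_rate(y1997):.1%}")  # 27.8%
print(f"2014: {overall_rate(y2014):.1%}")  # 27.2% -- lower, despite the rises
```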

Now you have to be careful about interpretation here, particularly with respect to charges of “over-education”.  Yes, the proportion of grads in low-wage jobs is going up.  But the average wage income of university graduates is actually increasing: between 1995 and 2010, it rose by 6% after inflation.  And that’s while the number of people in the labour force with a university degree increased by 94%, and the proportion of the labour force with a university degree jumped from 19.3% to 28.7% (I would break out data on Masters/PhD specifically if I could, but public Statscan data does not separate Bachelors from higher degrees). 

What that tells us is that the economy is creating a lot more high-paying jobs which are being filled by an ever-expanding number of graduates.  But at the same time, more graduates are in low-wage jobs, which suggests that while averages are increasing, so is variance around the mean.

Another factor at work here is immigration.  Since the mid-1990s, the number of immigrants over 25 with university degrees has increased from 815,000 (23.2% of all degree holders) to 1.87 million (33% of all degree holders).  It’s not clear how many of those have graduate degrees (thanks Statscan!) but I think it’s reasonable to assume, given the way our immigration points system works, that the proportion of immigrants with advanced degrees is even higher.

The problem is that immigrants with degrees – particularly more recent immigrants – have a really hard time in the Canadian labour market, particularly at the start (see a great Statscan paper on this here).  To some extent this is rational because the degrees and the skills they confer are genuinely not compatible (see my earlier post on this), and to some extent it reflects various forms of discrimination, but that’s not the point here.  There are over one million new immigrants with degrees over the past fifteen or so years, many of them from overseas institutions.  The CSLS-inspired freak-out is about the fact that over the past 17 years the number of degree-holders has increased by 450,000 (of which 130,000 are at the Master’s/PhD level).  Simple logic suggests that most of the problem people are seeing in the CSLS data is more about our inability to integrate educated immigrants than it is about declining returns to education among domestic students.  I know the data CSLS uses doesn’t allow them to look at the results by where a degree was earned, but I’d bet serious money this is the crux of the problem.

So, you know, chill everybody.  Canadian graduates still do OK in the end.  And remember that comparisons of educational outcomes over time that don’t control for immigration need to be taken with a grain of salt.

February 18

Consumerism Dragging Down Student Achievement? Not so Fast

So, there was an interesting article from Studies in Higher Education making the rounds on social media yesterday. Written by a trio of UK researchers, the article is entitled “The Student-as-Consumer Approach in Higher Education and its Effects on Academic Performance”, and is – miraculously – available ungated, here. The short version is that students who have a consumerist attitude towards education tend to have lower academic performance. For those who bewail the encroachment of consumerist attitudes in higher education, this obviously was like a delicious, refreshing glass of I-told-you-so.

Upon closer inspection, though, this study isn’t quite all it’s cracked up to be. The basic set-up here was that the authors asked a group of students across a number of UK institutions to fill in a questionnaire. The questionnaire asked a number of demographic questions, a set of questions about “learner identity” (basically attitudes towards learning, which in North America would be called “engagement”), and a set of questions labelled “consumer orientation”. It also asked about field of study, whether a student is paying for tuition him/herself (under the UK’s income-contingent system, only 20% said their parents were paying their fees), and what their target grades were. The conclusion was that all of these things had some kind of significant effect on grades, but that a consumerist attitude had a mediating effect on all of them, all in a negative direction.

The most obvious problem here is that the dependent variable is a self-report of academic achievement. Anyone who’s ever worked with self-reports of grades will tell you: that’s a pretty iffy data source.

The next problem is that “consumerism”, as a notion, was constructed in a questionable way. The researchers asked students 15 questions, and based on their answers created an index of consumerist orientation. Some of those questions seem alright, for instance: “I think of my university degree as a product I am purchasing”, or “I am entitled to leave university with a good degree because I am paying for it” (a “good degree” in the UK means a degree with a high classification, which is sort of the same thing as a high GPA over here). However, some of the questions that make up the consumerist index are, to my mind, indicators of a utilitarian rather than a consumer outlook (e.g., “I only want to learn things which will help me in my career”, “my lecturers should round up my final grade a point or two if I am close to the next grade boundary”), and some of them are actually indicators of financial uncertainty (e.g. “I regularly think of the financial cost of my degree”). Throw all that together in a single index and it’s not clear to me that this is actually all that useful.

And this leads to the final problem: there are no family background demographics in the survey. Which means we don’t really know how much of the engagement and consumerism variables are really just reflections of class background and cultural capital. It’s fairly safe to assume that students from lower class backgrounds in the UK – especially first-generation students – would be more likely to be financially insecure, and more likely to have a utilitarian attitude towards school (that is, they attend for specific career goals). By lumping all of those factors into the “consumerist” category, what the authors may simply be picking up is that grades and class origins are positively correlated. Which we already kinda knew.

In other words: interesting hypothesis, but this study isn’t all that persuasive. Case unproven – so far, at least.

November 25

The 2015 OECD Education at a Glance

So the OECD’s Education at a Glance was published yesterday.  It’s taken a couple of months longer than usual because of the need to convert everything into the new International Standard Classification of Education (ISCED) system.  No, don’t ask; it’s better not to know.

I won’t say there’s a whole lot new in this issue that will be of interest to PSE types.  One point of note is that Statscan has – for no obvious or stated reason – substantially restated Canadian expenditure on tertiary educational institutions, downwards.  In last year’s edition, you may recall, it claimed 2011 spending was 2.8% of GDP, which I thought was a tad high (I couldn’t get it to go over 2.43%).  It is now saying that figure was in fact 2.6% of GDP, and that the latest year is 2.5%.  That still puts Canada well ahead of most countries, and more than 50% ahead of the OECD average.

Figure 1: Selected OECD Countries’ Spending on Tertiary Education as a Percentage of Gross Domestic Product


Next, the shift to the ISCED system has produced a slight change in the way attainment data is presented.  Basically, it makes it easier to tease out different levels of attainment above the bachelor’s level; but this makes no difference for Canada, because we can’t actually measure these things.  The problem is our Labour Force Survey, which has a very vague and sketchy set of responses on educational attainment (basically, you can only answer “college” or “university”, so our college numbers include all sorts of weird private short-course programs, and our university numbers make no distinction between types of degrees).  Still, for what it’s worth, here’s how attainment rates for young Canadians (age 25-34) stack up against other countries.

Figure 2: Selected OECD Countries’ Tertiary Attainment Rates, 2012


Those of you familiar with the “Canada’s number 1” rhetoric that accompanied previous EAG releases may do a double-take at this graph.  Yes, certainly, Canada is still close to the top if you include all of post-secondary education.  But it used to be that we were also at – or close to – the top on university education; now, we’re actually below the OECD average.  What the heck is going on?

Well, it helps to look back a decade or so to see what the picture looked like then.

Figure 3: Selected OECD Countries’ Tertiary Attainment Rates, 2003


Much of what has changed is the way the data is presented.  First, the old 5A/5B classification excluded attainment at the doctoral level; the new system does not.  Since European countries tend to have slightly higher doctoral degree award rates than we do, this cuts the difference a bit.  A bigger issue is the fact that, post-Bologna, a lot of European countries simply did away with short-cycle degrees from polytechnics and Fachhochschulen, and re-classified them as university degrees.  Finland thus went from a system with 23% attainment at the 5A (university) level and 17% at the 5B (college or polytechnic) level to one now reported simply as 40% at degree level or above.  In other words, tertiary attainment rates are exactly the same in Finland as they were a decade ago; the credentials have simply been re-labelled.  Something similar also happened in Germany.

While reclassification explains part of the change, it doesn’t explain it all.  Some countries are genuinely seeing much bigger increases in university attainment than we are.  There is South Korea, where attainment rates ballooned from 47% of all 25-34 year olds in 2003, to 68% in just a decade (30% to 45% at the university level alone), as well as Australia, where university attainment has gone from 25% to 38%.

Those are some quite amazing numbers.  Makes you wonder why we can’t do that, as well.

November 19

Stories Arts Faculties Tell Themselves

Here at HESA towers, we’ve been doing some work on how students make decisions about choosing a university (if you’re interested: the Student Decisions Project was a multi-wave, qualitative, year-long longitudinal study that tracked several hundred Grade 12 students as they went through the PSE research, application, and enrolment process.  We also took a more targeted qualitative look, specifically at Arts, with the national Prospective Arts Students Survey).  We’ve been trying to do the same for colleges, but it’s a much trickier demographic to survey.

In both studies, one of the questions we asked is what students really want from their education.

Now at one level, this question is kind of trite.  We know from 15 years of surveys from the Canadian Undergraduate Survey Consortium that students go to university: i) to get better jobs; ii) because they like learning about a particular field; and also, iii) to make friends, and enjoy the “university experience”.

Where it gets a little trickier, however, is when you break this down by particular fields of study.  With most faculties, there tends to be a positive reason to attend.  However, when it comes to Arts, enrolment is often seen as a fall-back option – it’s something you do if you don’t have concrete goals, or if you can’t do anything else.  Now, Arts faculties tend to take the positive here, and spin this as students wanting to “find themselves”. But in deploying this bit of spin, Arts faculties often end up heading in the wrong direction.

One of the problems here is that the notion of students “finding themselves” (not a term students themselves use) is not as straightforward as many think. Broadly, there are three possible definitions.  The first situates “finding yourself” in academic terms: by exploring a lot of different academic options, a student finds something that interests her/him, and becomes academically engaged.  This is one of the reasons that Arts faculties are built around a smorgasbord model, which lets students “taste” as many things as possible, and hence “discover” themselves.

But that’s not the only possible definition of “finding oneself”.  There is another option, in which students essentially view PSE as a cooling out period where they can “find” what they want to do, in a vocational sense.  Yes, they are taking courses, but since they recognize that Arts courses don’t lead directly to employment, they are more or less marking time while they discover how to make their way in the employment world, and think about how and where they want to live.  Then there is a third, slightly different take, in which students view “finding themselves” as the process by which they acquire transversal skills, and the skills of personal effectiveness needed to be successful adults.  School is something they do while they are learning these skills, often for little reason other than that going to school is something they have always done, and in many cases are expected to do.

Though all of these interpretations of “finding yourself” have some currency among students, it probably shouldn’t come as a surprise to learn that the one about “finding yourself” being a voyage of academic discovery is, in fact, the least frequently mentioned by incoming students.  Now, maybe they come around to this view later on, but it is not high on the list of reasons they attend in the first place.  To the extent that they have specific academic interests as a reason for enrolling in Arts, they tend to be just that: specific – they want to study Drama, or History, or whatever.

Which raises two questions: first, if this is true, what’s the benefit of Arts faculties maintaining such a wide breadth of requirements?  And second, why aren’t Arts faculties explicitly building more transversal-skills elements into their programs?  Presumably, there would be a significant recruitment advantage in doing so.  Someone should give it a whirl.

April 01

Some Inter-Provincial Finance Comparisons

Last week, I blogged about how OECD figures showed Canada had the highest level of PSE spending in the world, at 2.8% of GDP.  Many of you wrote to me asking: i) if the picture was the same when we looked at other measures, like per-capita spending or spending per-student; and, ii) could I break things down by province, instead of nationally.  I am ever your servant, so I tried working on this.

I quickly came up against a problem, which was simply that I could in no way replicate the OECD numbers.  Using numbers from FIUC (for universities) and FINCOL (for colleges), the biggest expenditure number I could come up with for the 2011-12 year was $41.75 billion in institutional income.  Dividing this by the 2011 GDP figure of $1.72 trillion used in Education at a Glance (itself inexplicably about 3% smaller than the $1.77 trillion figure Statscan reports for 2011) gives me 2.43%, rather than the 2.8% Statscan reported to OECD.  There is presumably an explanation for this (my best guess is that it has something to do with student assistance), and I have emailed some folks over there to see what’s going on.  But in the meantime, we can still have some fun with inter-provincial comparisons.
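For anyone who wants to check my arithmetic, the replication attempt boils down to this (all figures as cited above):

```python
# Back-of-envelope replication of the OECD figure, using the numbers above.
institutional_income = 41.75e9  # FIUC + FINCOL institutional income, 2011-12
gdp_eag = 1.72e12               # 2011 GDP as used in Education at a Glance
gdp_statscan = 1.77e12          # Statscan's own 2011 GDP figure

print(f"vs. EAG GDP:      {institutional_income / gdp_eag:.2%}")       # 2.43%
print(f"vs. Statscan GDP: {institutional_income / gdp_statscan:.2%}")  # 2.36%
# Either way, well short of the 2.8% reported to the OECD.
```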

Let’s start with what provinces spend on universities:

Figure 1: University Income by Province and Source as a Percentage of GDP


In most provinces, total university income is right around two percent of GDP.  Only in two provinces (Saskatchewan, Alberta) is it significantly below this, and only in two (Nova Scotia, Prince Edward Island) is it significantly above.  In terms of public expenditure, the average across the country is about one percent of GDP.  Nova Scotia’s total, at 3.2%, likely makes it by some distance the highest-spending jurisdiction in the entire world.

Now, some of you are no doubt wondering: how the heck can Nova Scotia universities spend two and a half times what Alberta universities spend (in GDP terms) when the latter are so bright and shiny and the former are increasingly looking a little battered?  Well, I’ll get more into this tomorrow, but the quick answer is: Alberta’s GDP is eight times higher than Nova Scotia’s, but it only has about three times as many students.
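A quick sanity check on that explanation: if per-student spending were broadly comparable in the two provinces (an assumption of mine, not a figure from the data), the spending-to-GDP ratio would scale with students per dollar of GDP:

```python
# Rough sanity check using the round ratios quoted above.
gdp_ratio = 8      # Alberta's GDP is about 8x Nova Scotia's
student_ratio = 3  # Alberta has about 3x as many students

# Assuming broadly similar per-student spending, Nova Scotia's
# spending-to-GDP ratio should exceed Alberta's by roughly:
print(gdp_ratio / student_ratio)  # ~2.7 -- close to the observed ~2.5x gap
```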

Of course, universities aren’t the whole story.  Let’s look at colleges:

Figure 2: College Income by Province and Source as a Percentage of GDP


This is a wee bit more interesting.  Most provinces are bunched closely around the 0.5% of GDP mark, except for Quebec and Prince Edward Island.  If we were using international standards here, where college is usually interpreted as being ISCED level 5 (or level 5B before the 2011 revision), Quebec’s figures would be much lower because CEGEP programs leading to university are considered level 4 (that is, post-secondary, but not actually tertiary), and hence would be excluded.

But PEI is the real stunner here: apparently Holland College accounts for nearly 1.2% of GDP.  This sounds ludicrous to me and I have no explanation for it, but having looked up Holland College’s financials it seems to check out.

Here’s the combined picture:

Figure 3: Total PSE Income by Province and Source as a Percentage of GDP


So, what we see here is that most provinces again cluster around spending 2.5% of GDP, which would put their spending roughly on par with the world’s second-biggest spender, Korea (but slightly behind the United States).  Saskatchewan, at 2% of GDP, would still rank very highly, while Alberta, at 1.73%, would be only a bit above the OECD average.

The crazy stuff is at the other end: PEI and Nova Scotia, where higher education spending exceeds 3.75% of GDP.  And yes, their GDPs are lower than in most of the rest of the country (GDP per capita in those two provinces, at $39,800 and $41,500 respectively, is less than half what it is in Alberta), but there are lots of OECD countries with roughly that level of income (e.g. Spain) which spend about a third as much on higher education.

Tomorrow, we’ll look a bit more at per-student spending.

March 24

Banning the Term “Underfunding”

Somehow I missed this when the OECD’s Education at a Glance 2014 came out, but apparently Canada’s post-secondary system is now officially the best funded in the entire world.

I know, I know.  It’s a hard idea to accept when Presidents of every student union, faculty association, university, and college have been blaming “underfunding” for virtually every ill in post-secondary education since before Air Farce jokes started taking the bus to get to the punchline.  But the fact is, we’re tops.  Numero uno.  Take a look:

Figure 1: Percentage of GDP Spent on Higher Education Institutions, Select OECD Countries, 2011


For what I believe is the first time ever, Canada is outstripping both the US (2.7%) and Korea (2.6%).  At 2.8% of GDP, spending on higher education is nearly twice what it is in the European Union.

Ah, you say, that’s probably because so much of our funding comes from private sources.  After all, don’t we always hear that tuition is at, or approaching, 50% of total funding in universities?  Well, no.  That stat only applies to operating expenditures (not total expenditures), and is only valid in Nova Scotia and Ontario.  Here’s what happens if we look only at public spending in all those countries:

Figure 2: Percentage of GDP Spent on Higher Education Institutions from Public Sources, Select OECD Countries, 2011


While it’s true that Canada does have a high proportion of funds coming from private sources, public sector support to higher education still amounts to 1.6% of GDP, which is substantially above the OECD average.  In fact, our public expenditure on higher education is the same as in Norway and Sweden; among all OECD countries, only Finland and Denmark (not included in graph) are higher.

And this doesn’t even consider the fact that Statscan and CMEC don’t include expenditures like Canada Education Savings Grants and tax credits, which together are worth another 0.2% of GDP, because OECD doesn’t really have a reporting category for oddball expenditures like that.  The omission doesn’t change our total expenditure, but it does affect the public/private balance.  Instead of being 1.6% of GDP public, and 1.2% of GDP private, it’s probably more like 1.8% or 1.9% public, which again would put us at the absolute top of the world ranking.
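The reclassification arithmetic is simple enough to show directly.  A small sketch (percentages of GDP from above; treating the 0.2% as currently sitting on the private side is my reading of the paragraph):

```python
# Moving CESGs and tax credits (~0.2% of GDP) from the private column to
# the public one: the total is unchanged, only the split moves.
public, private = 1.6, 1.2  # % of GDP, as reported to the OECD
shift = 0.2                 # CESGs + tax credits (assumed counted as private)

print(public + private)                 # 2.8 -- total expenditure unchanged
print(public + shift, private - shift)  # 1.8 public, 1.0 private
```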

So it’s worth asking: when people say we are “underfunded”, what do they mean?  Underfunded compared to who?  Underfunded for what?  If we have more money than anyone else, and we still feel there isn’t enough to go around, maybe we should be looking a lot more closely at *how* we spend the money rather than at *how much* we spend.

Meantime, I think there should be a public shaming campaign against use of the term “underfunding” in Canada.  It’s embarrassing, once you know the facts.

November 25

Graduate Income Data Miracle on the Rideau

My friend and colleague Ross Finnie has just published a remarkable series of papers on long-term outcomes from higher education, which everyone needs to go read, stat.

What he’s done is taken 13 years of student data from the University of Ottawa and linked it to income tax data held by Statistics Canada.  That means he can track income patterns by field of study, not over the puny 6-24 month period commonly used by provincial surveys, or the new 36-month standard the National Graduate Survey now uses, but for up to 13 years out.  And guess what?  Those results are pretty good.  After only five years out, all fields of study are averaging at least $60K per year in annual income.  Income does flatten out pretty quickly after that, but by then, of course, people are earning a pretty solid middle-class existence – even the much-maligned Arts grads.

Figure 1: Average Post-Graduation Income of Class of 1998 University of Ottawa Graduates, by Field of Study and Number of Years After Graduation, in Thousands of 2011 Constant Dollars


One of the brilliant things about this data set is that you can not only compare across fields of study within a single cohort, but also across cohorts for a single field of study.  Finnie’s data shows that in Math/Science, Humanities, Social Science, and Health, income pathways did not vary much between one cohort and another: a 2008 History grad had basically the same early income pathway as one from 1998.  In two other fields, though, it was a different story.  The first is Business, where the 1998 cohort clearly had it a lot better than its later counterparts; two years out, that cohort was making $10K per year more than later ones, a lead that was then maintained for the rest of the period observed.  In ICT, the fates of the various cohorts were even more diverse.

Figure 2: Average Post-Graduation Income, Selected Cohorts of University of Ottawa Engineering/Computer Science Graduates, by Number of Years After Graduation, in Thousands of 2011 Constant Dollars


This is pretty stunning stuff: thanks to the dot-com bust, the first-year incomes of engineering and computer science graduates in 2004 were exactly half what they were in 2000 ($40,000 vs. $80,000).  If anyone wants to know why kids don’t flock to ICT as a career, consider uncertain returns as a fairly major reason.

Also examined is the question of income by gender:

Figure 3: Average Post-Graduation Income of Class of 1998 University of Ottawa Graduates, by Gender and Number of Years After Graduation, in Thousands of 2011 Constant Dollars


Two interesting things are at work with respect to gender.  The initial income gap of $10,000 in the first year after graduation is almost entirely a field-of-study effect: take out Engineering/Computer Science, and earnings are almost the same.  But after that, the gap widens at a pretty continuous pace for all fields of study.  It’s most pronounced in Business, where top-quartile male incomes really blow the averages out, but the pattern is the same everywhere.  Because of the way the data is collected, it’s impossible to say how much of this reflects differences in labour-market participation and hours worked, and how much reflects differences in hourly pay, but the final result – a gender gap of $20,000 to $25,000 in average earnings, regardless of field of study – is pretty striking.

Are there caveats to this data?  Sure.  It’s just one university, located in a town heavy on government and ICT work.  My guess is that elsewhere, things might not look so good in Humanities and Social Science, and ICT outcomes may be less boom-and-bust-y.  But fortunately, Ross is on this one: he is currently building a consortium of institutions across the country to replicate this process, and build a more comprehensive national picture.

Let me press this point a bit on Ross’ behalf: there is no good reason why every institution in the country should not be part of this consortium.  If your institution is not part of it, ask yourself why.  This is the most important new source of data on education Canada has had in over a decade.  Everyone should contribute to it.

N.B.  One tiny quibble about the papers is that they present everything in monochrome graphic form – no tabular data.  To make the figures above, I’ve had to eyeball the data and re-enter it myself.  Apologies for any deviations from the original.
