
Higher Education Strategy Associates

Category Archives: Data

April 04

How to Think about “Better Higher Education Data”

Like many people, I am in favour of better data on the higher education sector.  But while this call unites a lot of people, there is remarkably little thinking that goes into the question of how to achieve it.  This is a problem, because unless we arrive at a better common understanding of both the cost and the utility of different kinds of data, we are going to remain stuck in our current position.

First, we need to ask ourselves what data we need to know versus what kinds of data it would be nice to know.  This is, of course, not a value-free debate: people can have legitimate differences about what data is needed and what is not.  But I think a simple way to at least address this problem is to ask of any proposed data collection: i) what questions does this data answer?  And ii) what would we do differently if we knew the answer to that question?  If the answer to either question is vague, maybe we should put less emphasis on the data.

In fact, I'd argue that most of the data that institutions and government are keen to push out is pretty low on the "what would we do differently" scale.  Enrolments (broken down in various ways), funding, etc. – those are all inputs.  We have all that data, and it's important, but it doesn't tell you much about what's going right or wrong in the system.  It tells you what kind of car you're driving, but not your speed or direction.

What's the data we need?  Outputs.  Completion rates.  Transition rates.  At the program, institutional and system level.  Also outcomes: what happens to graduates?  How quickly do they transition to permanent careers?  And do they feel their educational career was a help or a hindrance to getting the career they wanted?  And yeah, by institution and (within reason) by program.  We have some of this data in some parts of the country (BC is by far the best at this), but even there we rely far too heavily on some fairly clumsy quantitative indicators and not enough on qualitative information like: "what do graduates three years out think the most/least beneficial part of their program was?"

Same thing on research.  We need better data on PhD outcomes.  We need a better sense of the pros and cons of more/smaller grants versus fewer/larger ones.  We need a better sense of how knowledge is actually transferred from institutions to firms, and what firms do with that knowledge in terms of turning it into product or process innovations.  Or, arguably, on community impact (though there I'm not completely convinced we even know yet what the right questions are).

Very few of these questions can be answered through big national statistical datasets about higher education.  Even when it comes to questions like access to education, it's probably far more important that we have more data on people who do not go to PSE than to have better or speedier access to enrolment data.  And yet we have nothing on this, and haven't had since the Youth in Transition Survey ended.  But that's expensive, and could cost upwards of ten million dollars.  Smaller-scale, local studies could be done for a fraction of the cost – if someone were willing to fund them.

There are actually enormous data resources available at the provincial and institutional level to work from.  Want to find individuals who finished high school but didn’t attend PSE?  Most provinces now have individual student numbers which could be used to identify these individuals and bring them into a study.  Want to look at program completion rates?  All institutions have the necessary data: they just choose not to release it.

All of which is to say: we could have a data revolution in this country.  But it’s not going to come primarily from better national data sets run by Statistics Canada.  It’s going to come from lots of little studies creating a more data-rich environment for decision-making.  It’s going to come from governments and institutions changing their data mindsets from one where hoarding data is the default to one where publishing is the default.  It’s going to come from switching focus from inputs to outputs.

Even more succinctly: we are all part of the problem.  We are all part of the solution.  Stop waiting for someone else to come along and fix it.

April 03

Data on Race/Ethnicity

A couple of weeks ago, CBC decided to make a big deal about how terrible Canadian universities were for not collecting data on race (see Why so many Canadian universities Know so little about their own racial diversity). As you all know, I'm a big proponent of better data in higher education. But the effort involved in getting new data has to be in some way proportional to the benefit derived from that data. And I'm pretty sure this doesn't meet that test.

In higher education, there are only two points where it is easy to collect data from students: at the point of application, and at the point of enrolment. But here’s what the Ontario Human Rights Code has to say about collecting data on race/ethnicity in application forms:

Section 23(2) of the Code prohibits the use of any application form or written or oral inquiry that directly or indirectly classifies an applicant as being a member of a group that is protected from discrimination. Application forms should not have questions that ask directly or indirectly about race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, record of offences, age, marital status, family status or disability.

In other words, it's 100% verboten. Somehow, CBC seems to have missed this bit. Similar provisions apply to data collected at the time of enrolment – a school still needs to prove that there is a bona fide reason related to one's schooling in order to require a student to answer the question. So generally speaking, no one asks a question at that point either.

Now, if institutions can't collect relevant data via administrative means, what they have to do to get data on race/ethnicity is move to a voluntary survey. Which in fact they do, regularly. Some do a voluntary follow-up survey of applicants through Academica, others attach race/ethnicity questions to the Canadian Undergraduate Survey Consortium (CUSC) surveys, others attach them to NSSE. Response rates on these surveys are not great: NSSE sometimes gets 50%, but that's the highest rate available. And, broadly speaking, they get high-level data about their student body. The data isn't great quality because the response rate isn't fabulous, and the small numbers mean that you can't really subdivide ethnicity very much (don't expect good numbers on Sikhs v. Tamils), but one can know at a rough order of magnitude what percentage of the student body is visible minority, what percentage self-identifies as aboriginal, etc. I showed this data at a national level back here.
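To put a rough number on the "small numbers" problem, here is a minimal back-of-the-envelope sketch (the respondent count and the shares are purely hypothetical, not drawn from any of the surveys above): a broad category like "visible minority" can be pinned down to within a couple of percentage points, but a fine-grained subgroup is swamped by its own margin of error.

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% confidence interval half-width for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1500  # hypothetical number of respondents at a single institution

# A broad category (say ~35% visible minority) is estimated fairly tightly...
print(f"35% group: +/- {ci_half_width(0.35, n):.1%}")  # roughly +/- 2.4 points

# ...but a small subgroup (~2% of respondents) carries an interval that is
# large relative to the estimate itself, before even worrying about
# non-response bias.
print(f" 2% group: +/- {ci_half_width(0.02, n):.1%}")  # roughly +/- 0.7 points on a 2% base
```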

Is it possible to get better data? It's hard to imagine, frankly. On the whole, students aren't crazy about being surveyed all the time. NSSE has the highest response rate of any survey out there, and CUSC isn't terrible either (though it tends to work with a smaller sample size). Maybe we could ask slightly better questions about ethnicity, maybe we could harmonize the questions across the two surveys. That could get you data covering 90% of institutions in English Canada (at least).

Why would we want more than that? We already put so much effort into these surveys: why go to all kinds of trouble to do a separate data collection activity which in all likelihood would have worse response rates than what we already have?

It would be one thing, I think, if we thought Canadian universities had a real problem in not admitting minority students. But the evidence at the moment suggests the opposite: visible minority students in fact attend at a rate substantially higher than their share of the population. It's possible, of course, that some sub-sections of the population are not doing as well (the last time I looked at this data closely was a decade ago, but youth from the Caribbean were not doing well at the time). But spending untold dollars and effort to get at that problem in institutions across the country, when really the Caribbean community in Canada is clustered in just two cities (three, if you count the African Nova Scotians in Halifax)? I can't see it.

Basically, this is one of those cases where people are playing data “gotcha”. We actually do know (more or less) where we are doing well or poorly at a national level. On the whole, where visible minorities are concerned, we are doing well. Indigenous students? Caribbean students? That’s a different story. But we probably don’t need detailed institutional data collection to tell us that. If that’s really what the issue is, let’s just deal with it. Whinging about data collection is just a distraction.

March 27

Losing Count

Stop me if you've heard this story before: Canada is not sufficiently innovative, and part of the reason is that we don't spend enough on research.  It's not that we don't spend enough on *public* research; adjusted for GDP, we're actually above average on that.  What pulls us down in international comparisons is corporate R&D.  Our narrow-minded, short-sighted, resource-obsessed business class spends far less on R&D than its equivalents in most other countries, and that is what gives us such a low overall R&D spend.

Familiar?  It should be; it's been standard cant in Canada for a couple of decades at least.  And it gets used to argue for two very specific things.  There's the argument which basically says "look, if private R&D is terrible, we'll just have to make it up on the public side, won't we?", and where else to spend but on university research?  (Universities Canada used to make this argument quite a bit, but not so much lately, AFAIK.)  Then there's the argument that says: well, since under the linear model of innovation public "R" leads to private "D", the problem must be that public "R" is too theoretical or insufficiently focussed on areas of national industrial strength – and what we really need to do is make research more applied/translational/whatever.

But what if that story is wrong?

Last year, the Impact Centre at the University of Toronto put out a little-noticed paper called Losing Count. It noted a major problem related to the collection and reporting of R&D data.  Starting in 1997, Statistics Canada adopted a definition of research and development which aligned with Canada's tax laws.  This makes perfect sense from a reporting point of view, because it reduces the reporting burden on big corporations (they can use the same data twice).  But from the perspective of measuring Canada against other countries, it's not so good, because it means the Canadian statistics are different from those in the rest of the world.

Specifically, Canada has under-reported business R&D since 1997 in two ways.  First, it does not report any R&D in the social sciences and humanities.  All those other OECD countries are reporting research in business, financial management, psychology, information science, etc., but we are not.  Second, work that develops or improves materials, products and processes, but that draws on existing knowledge rather than new scientific or technological advances, is not counted as research and development in Canada but is counted elsewhere.

How big a problem is this?  Well, one problem is that literally every time the Canada Revenue Agency tightens eligibility for tax credits, reported business R&D falls.  As this has happened a number of times over the past two decades, it may well be that our declining business R&D figures are more a function of stricter tax laws than of changing business activity.  As for the difference in the absolute amount being measured, it's impossible to say.  The authors of the study took a sample of ten companies (which they recognize as not being scientific in any way) and determined that if the broader, more OECD-consistent definition were used, spending on R&D salaries would rise by a factor of three.  If that were true across the board (it probably isn't), it would shift Canada from being one of the world's weakest business R&D performers to one of the best.

Still, even if this particular result is not generalizable, the study remains valuable for two reasons.  First, it underlines how tough it is for statistical agencies to capture data on something as fluid and amorphous as research and development in a sensible, simple way.  And second, precisely because the data is so hard to collect, international comparisons are extremely hard to make.  National data can be off by a very wide factor simply because statistical agencies make slightly different decisions about how to collect data efficiently.

The takeaway is this: the next time someone tells a story about how innovation is being throttled by lack of business spending on research (compared to, say, the US or Sweden), ask them if they've read Losing Count.  Because while this study isn't the last word on the subject, it poses questions that no one even vaguely serious about playing in the innovation space should be able to ignore.

February 23

Garbage Data on Sexual Assaults

I am going to do something today which I expect will not put me in good stead with one of my biggest clients.  But the Government of Ontario is considering something unwise and I feel it best to speak up.

As many of you know, the current Liberal government is very concerned about sexual harassment and sexual assault on campus, and has devoted no small amount of time and political capital to getting institutions to adopt new rules and regulations around said issues.  One can doubt the likely effectiveness of such policies, but not the sincerity of the motive behind them.

One of the tools the Government of Ontario wishes to use in this fight is more public disclosure about sexual assault.  I imagine they have been influenced by how the US federal government collects and publishes statistics on campus crime, including statistics on sexual assaults.  If you want to hold institutions accountable for making campuses safer, you want to be able to measure incidents and show change over time, right?

Well, sort of.  This is tricky stuff.

Let's assume you had perfect data on sexual assaults by campus.  What would that show?  It would depend in part on the definitions used.  Are we counting sexual assaults/harassment which occur on campus?  Or are we talking about sexual assaults/harassment experienced by students?  Those are two completely different figures.  If the purpose of these figures is accountability and giving prospective students the "right to know" (personal safety is, after all, a significant concern for prospective students), how useful is that first number?  To what extent does it make sense for institutions to be held accountable for things which do not occur on their property?

And that's assuming perfect data, which really doesn't exist.  The problems multiply exponentially when you decide to rely on sub-standard data.  And according to a recent Request for Proposals placed on the government tenders website MERX, the Government of Ontario is planning to rely on some truly awful data for its future work on this file.

Here's the scoop: the Ministry of Advanced Education and Skills Development is planning to do two surveys: one in 2018 and one in 2024.  They plan on getting contact lists of emails for every single student in the system – at all 20 public universities, 24 colleges and 417 private institutions – and handing them over to a contractor so they can do a survey.  (This is insane from a privacy perspective – the much safer way to do this is to get institutions to send out an email to students with a link to a survey, so the contractor never sees any names without students' consent.)  Then they are going to send out an email to all those students – close to 700,000 in total – offering $5 per head to answer a survey.

It's not clear what Ontario plans to do with this data.  But the fact that they are insistent that *every* student at *every* institution be sent the survey suggests to me that they want the option to be able to analyze, and perhaps publish, the data from this anonymous voluntary survey on a campus-by-campus basis.

Yes, really.

Now, one might argue: so what?  Pretty much every student survey works this way.  You send out a message to as many students as you can, offer an inducement and hope for the best in terms of response rate.  Absent institutional follow-up emails, this approach probably gets you a response rate of between 10 and 15% (a $5 incentive won't move that many students).  Serious methodologists grind their teeth over those kinds of low numbers, but increasingly this is the way of the world.  Phone polls don't get much better than this.  The surveys we used to do for the Globe and Mail's Canadian University Report were in that range.  The Canadian University Survey Consortium does a bit better than that because of multiple follow-ups and strong institutional engagement.  But hell, even Statscan is down to a 50% response rate on the National Graduates Survey.

Is there non-response bias?  Sure.  And we have no idea what it is.  No one's ever checked.  But these surveys are super-reliable even if they're not completely valid.  Year after year we see stable patterns of responses, and there's no reason to suspect that the non-response bias differs across institutions.  So if we see differences in satisfaction of ten or fifteen percent from one institution to another, most of us in the field are content to accept that finding.

So why is the Ministry's approach so crazy when it's just using the same one as everyone else?  First of all, the stakes are completely different.  It's one thing to be named an institution with low levels of student satisfaction.  It's something completely different to be called the sexual assault capital of Ontario.  So accuracy matters a lot more.

Second, the differences between institutions are likely to be tiny.  We have no reason to believe a priori that rates differ much across institutions.  Therefore small biases in response patterns might alter the league table (and let's be honest, even if Ontario doesn't publish this as a league table, it will take the Star and the Globe about 30 seconds to turn it into one).  But we have no idea what the response biases might be, and the government's methodology makes no attempt to work that out.

Might people who have been assaulted be more likely to answer than those who have not?  If so, you're going to get inflated numbers.  Might people have reasons to distort the results?  Might a Men's Rights group encourage all its members to indicate they'd been assaulted, to show that assault isn't really a women's issue?  With low response rates, it wouldn't take many respondents to make that tactic work.
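To make that concrete, here is a minimal simulation sketch (every number in it is invented for illustration): two campuses with an identical underlying rate, where survivors at one campus are only modestly more likely to respond to a voluntary, low-response survey, come out looking very different in the published figures.

```python
import random

random.seed(1)

def observed_rate(n_students, true_rate, base_rr, survivor_rr):
    """Simulate a voluntary survey in which response propensity differs by
    whether the student has experienced an assault (hypothetical numbers)."""
    responses = []
    for _ in range(n_students):
        assaulted = random.random() < true_rate
        respond_prob = survivor_rr if assaulted else base_rr
        if random.random() < respond_prob:
            responses.append(assaulted)
    return sum(responses) / len(responses)

# Two campuses with the SAME true rate (10%) and ~12% overall response rates;
# campus B's survivors are just slightly more motivated to answer.
campus_a = observed_rate(20_000, 0.10, base_rr=0.12, survivor_rr=0.12)
campus_b = observed_rate(20_000, 0.10, base_rr=0.11, survivor_rr=0.20)

print(f"Campus A observed rate: {campus_a:.1%}")  # ~10%, close to reality
print(f"Campus B observed rate: {campus_b:.1%}")  # ~17%, despite an identical reality
```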

The government is never going to get accurate overall rates from this approach.  They might, after repeated tries, start to see patterns in the data: sexual assault is more prevalent at institutions in large communities than in small ones, maybe; or it might happen more often to students in certain fields of study than others.  That might be valuable.  But if, the first time the data is published, all that makes the papers is a rank order of places where students are assaulted, we will have absolutely no way to contextualize the data, no way to assess its reliability or validity.

At best, if the data is reported system-wide, the data will be weak.  A better alternative would be to go with a smaller random sample and better incentives, so as to obtain higher response rates.  But if it remains a voluntary survey *and* there is some intention to publish on a campus-by-campus basis, then it will be garbage.  And garbage data is a terrible way to support good policy objectives.

Someone – preferably with a better understanding of survey methodology – needs to put a stop to this idea.  Now.

January 27

A Slice of Canadian Higher Education History

There are a few gems scattered through Statistics Canada’s archives. Digging around their site the other day, I came across a fantastic trove of documents published by the Dominion Bureau of Statistics (as StatsCan used to be called) called Higher Education in Canada. The earliest number in this series dates from 1938, and is available here. I urge you to read the whole thing, because it’s a hoot. But let me just focus in on a couple of points in this document worth pondering.

The first point of interest is the enrolment statistics (see page 65 of the PDF, 63 of the document). It won't, of course, surprise anyone to know that enrolment at universities was a lot smaller in 1937-38 than it is today (33,600 undergraduates then, 970,000 or so now), or that colleges were non-existent back then. What is a bit striking is the large number of students being taught in universities who were "pre-matriculation" (i.e. high school students). Nearly one-third of all students in universities in 1937-38 had this "pre-matric" status. Now, two-thirds of these were in Quebec, where the collèges classiques tended to blur the line between secondary and post-secondary (and, in their new guise as CEGEPs, still kind of do). But outside of British Columbia, all universities had at least some pre-matric students, which would have made these institutions quite different from modern ones.

The second point of interest is the section on entrance requirements at various universities (pages 12-13 of the PDF, pp. 10-11 of the document). With the exception of UNB, every Canadian university east of the Ottawa River required Latin or Greek in order to enter university, as did Queen's, Western and McMaster. Elsewhere, Latin was an alternative to mathematics (U of T), or an alternative to a modern language (usually French or German). What's interesting here is not so much the decline in salience of classical languages, but the decline in salience of any foreign language. In 1938, it was impossible to gain admission to a Canadian university without first matriculating in a second language, and at a majority of them a third language was required as well. I hear a lot of blah blah about internationalization on Canadian campuses, but 80 years on there are no Canadian universities which require graduates to learn a second language, let alone set this as a condition of entry. An area, clearly, where we have gone backwards.

The third and final bit to enjoy is the section on tuition fees (page 13), which I reproduce here:

[Table of tuition fees by institution, 1928-29 and 1937-38 – not reproduced here]

*$1 in 1937-38 = $13.95 in 2016
**$1 in 1928-29 = $16.26 in 2016

Be a bit careful in comparing across years here: because of deflation, what cost $100 in 1928-29 cost only about $85 in 1937-38, so institutions which kept nominal fees stable in fact saw a rise in income in real terms. There are a bunch of interesting stories here, including the fact that institutions had very different pricing strategies during the Depression. Some (e.g. McGill, Saskatchewan, Acadia) increased tuition, while others (mostly Catholic institutions like the Quebec seminaries and St. Dunstan's) either held the line or reduced fees. Also mildly amusing is the fact that McGill's tuition for in-province students is almost unchanged since 1937-38 (one can imagine the slogan: "McGill – we've been this cheap since the Rape of Nanking!").

The more interesting point here is that if you go back to the 1920s, not all Canadian universities were receiving stable and recurrent operating grants from provincial governments (of note: nowhere in this digest of university statistics is government funding even mentioned). Nationally, in 1935, all universities combined received $5.4 million from provincial governments – and U of T accounted for about a quarter of that. For every dollar in fees universities received from students, they received $1.22 from government. So when you see that universities were for the most part charging around $125 per student in 1937-38, what that means is that total operating funding per student was maybe $275, or a shade under $4,500 per student in today's dollars. That's about one-fifth of today's operating income per student.

While most of that extra per-student income has gone towards making institutions more capital-intensive (scientific facilities in general were pretty scarce in the 1930s), there's no question that the financial position of academics has improved. If you take a quick gander at page 15, which shows the distribution of professorial salaries, you'll see that average annual salaries for associate profs were just below $3,500, while those for full professors were probably in the $4,200 range. Even after adjusting for inflation, that means academic salaries were less than half what they are today. Indeed, one of the reasons tenure was so valued back then was that job security made up for the not-stellar pay. Times change.

In any case, explore this document on your own: many hours (well, minutes anyway) of fun to be had here.

January 18

More Bleak Data, But This Time on Colleges

Everyone seems to be enjoying data on graduate outcomes, so I thought I’d keep the party going by looking at similar data from Ontario colleges. But first, some of you have written to me suggesting I should throw some caveats on what’s been covered so far. So let me get a few things out of the way.

First, I goofed when saying that there was no data on response rates from these surveys. Apparently there is and I just missed it. The rate this year was 40.1%, a figure which will make all the economists roll their eyes and start muttering about response bias, but which anyone with field experience in surveys will tell you is a pretty good response for a mail survey these days (and since the NGS response rate is now down around the 50% mark, it’s not that far off the national “gold standard”).

Second: all this data on incomes I've been giving you is a little less precise than it sounds. Technically, the Ontario surveys do not ask for income; they ask for income ranges (e.g. $0-20K, $20-40K, etc.). When the data is published, either by universities or the colleges, this is turned into more precise-looking figures by assigning each response the mid-point value of its range and then averaging those points. Yes, yes, kinda dreadful. Why can't we just link this stuff to tax data like EPRI does? Anyways, that means you should probably take the point values with a pinch of salt: but the trend lines are likely still meaningful.
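For anyone curious what that midpoint trick actually does, here is a minimal sketch (the brackets and counts are made up, not taken from the Ontario surveys):

```python
# Hypothetical income-range responses (counts per bracket); not real survey data.
brackets = {
    (0, 20_000): 5,
    (20_000, 40_000): 25,
    (40_000, 60_000): 40,
    (60_000, 80_000): 20,
    (80_000, 100_000): 10,
}

# Midpoint imputation: treat every respondent in a bracket as earning the
# bracket's midpoint, then take the weighted average across brackets.
total = sum(brackets.values())
estimated_mean = sum(
    ((low + high) / 2) * count for (low, high), count in brackets.items()
) / total

# A precise-looking figure built entirely from coarse $20K-wide bins.
print(f"Estimated average income: ${estimated_mean:,.0f}")
```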

OK, with all that out of the way, let's turn to the issue of colleges. Unfortunately, Ontario does not collect or display data on college graduates' outcomes the way it does for universities. There is no data on income, for instance. And no data on employment two years after graduation, either. The only real point of comparison is employment six months after graduation, and even this is kind of painful: for universities the data is available only by field of study; for colleges, it is only available by institution. (I know, right?) And even then it is not calculated on quite the same basis: the university figure includes graduates with job offers while the college one does not. So you can't quite do an apples-to-apples comparison, even at the level of the sector as a whole. But if you ignore that last small difference in calculation and focus not on the point estimates but on the trends, you can still see something interesting. Here we go:

Figure 1: Employment Rates 6 months after Graduation, Ontario Universities vs. Ontario Colleges, by Graduating Cohort, 1999-2015


So, like I said, ignore the actual values in Figure 1 because they're calculated in two slightly different ways; instead, focus on the trends. And if you do that, what you see is that (a blip in 2015 apart) the relationship between employment rates in the college and university sectors looks pretty much the same throughout the period. Both had a wobble in the early 2000s, and then both took a big hit in the 2008 recession. Indeed, on the basis of this data, it's hard to make a case that one sector has done better than the other through the latest recession: both got creamed, and neither has yet recovered.
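One simple way to make that "compare the trends, not the levels" reading explicit is to index each series to a common base year; here is a minimal sketch using invented numbers standing in for the two employment-rate series:

```python
# Invented illustrative values only, not the actual Ontario survey figures.
university = {1999: 96.0, 2003: 94.5, 2008: 95.0, 2013: 91.0}
college = {1999: 90.0, 2003: 88.0, 2008: 89.0, 2013: 84.5}

def index_to_base(series, base_year):
    """Rescale a series so the base year equals 100; trends become comparable
    even when the underlying levels are calculated on different bases."""
    base = series[base_year]
    return {year: round(100 * value / base, 1) for year, value in series.items()}

print("university:", index_to_base(university, 1999))
print("college:   ", index_to_base(college, 1999))
```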

(Side point: why does the university line stop at 2013 while the college one goes out to 2015? Because Ontario doesn't interview university grads until two years after graduation and then asks them retrospectively what they were doing 18 months earlier. So the 2014 cohort was just interviewed last fall, and it'll be a few months until their data is released. College grads *only* get interviewed at six months, so their data comes out much more quickly.)

What this actually does is put a big dent in the argument that the problem for youth employment is out-of-touch educators, changing skill profiles, sociologists v. welders and all that other tosh people were talking about a few years ago. We're just having more trouble than we used to integrating graduates into the labour market. And I'll be taking a broader look at that using Labour Force Survey data tomorrow.

January 13

Restore the NGS!

One of the best things that Statistics Canada ever did in the higher education field was the National Graduates’ Survey (NGS). OK, it wasn’t entirely Statscan – NGS has never been a core product funded from the Statscan budget but rather funded periodically by Employment and Social Development Canada (ESDC) or HRDC or HRSDC or whatever earlier version of the department you care to name – but they were the ones doing the execution. After a trial run in the late 1970s (the results of which I outlined back here), Statscan tracked the results of the graduating cohorts of 1982, 1986, 1990, 1995, 2000 and 2005 two and five years after graduation (technically, only the 2-year was called NGS – the 5-year survey was called the Follow-up of Graduates or FOG but no one used the name because it was too goofy). It became the prime way Canada tracked transitions from post-secondary education to the labour market, and also issues related to student debt.

Now, NGS was never a perfect instrument. Most of the income data could have been obtained much more simply through administrative records, the way Ross Finnie is currently doing at EPRI. We could get better data on student debt if provinces ever got their act together and actually released student data on a consistent and regular basis (I'm told there is some chance of this happening in the near future). It didn't ask enough questions about activities in school, and so couldn't examine the effects of differences in provision (except for, say, field of study) on later outcomes. But for all that, it was still a decent survey, and more to the point one with a long history which allowed one to make solid comparisons over time.

Then along come the budget-cutting exercises of the Harper government. ESDC decides it only has enough money for one survey, not two. Had Statscan or ESDC bothered to consult anyone about what to do in this situation, the answer would almost certainly have been: keep the 2-year survey and ditch the 5-year one. The 5-year survey was always beset with the twin problems of iffy response rates and being instantly out of date by the time it came out ("that was seven graduating classes ago!" people would say – "what about today's graduates?"). But the 2-year? That was gold, with a decent time series going back (in some topic areas) almost 30 years. Don't touch that, we all would have said, FOR GOD'S SAKE DON'T TOUCH IT, LEAVE IT AS IT IS.

But of course, Statscan and ESDC didn't consult, and they didn't leave it alone. Instead of sticking with a survey two years out, they decided to survey students three years out, thereby making the results for labour market transitions totally incompatible with the previous six iterations of the survey. They spent millions to get a whole bunch of data which was hugely sub-optimal, because they murdered a perfectly good time series to get it.

I have never heard a satisfactory explanation as to why this happened. I think it’s either a) someone said: “hey, if we’re ditching a 2-year and a 5-year survey, why not compromise and make a single 3-year survey?” or b) Statscan drew a sample frame from institutions for the 2010 graduating class, ESDC held up the funding until it was too late to do a two-year sample and then when it eventually came through Statscan said, “well we already have a frame for 2010, so why not sample them three years out instead of doing the sensible thing and going back and getting a new frame for the 2011 cohort which would allow us to sample two years out”. To be clear, both of these possible reasons are ludicrous and utterly indefensible as a way to proceed with a valuable dataset, albeit in different ways. But this is Ottawa so anything is possible.

I have yet to hear anything about what, if anything, Statscan and ESDC plan to do about surveying the graduating cohort of 2015. If they were going to return to a two-year cycle, that would mean surveying would have to happen this spring; if they're planning on sticking with three, the survey would happen in spring 2018. But here's my modest proposal: there is nothing more important about NGS than bringing back the 2-year survey frame. Nothing at all. Whatever it takes, do it two years out. If that means surveying the class of 2016 instead of 2015, do it. We'll forget the class of 2010 survey ever happened. Do not, under any circumstances, try to build a new standard based on a 3-year frame. We spent 30 years building a good time series at 24 months out from graduation. Better to have a one-cycle gap in that time series than to spend another 30 years building up an equally good time series at 36 months from graduation.

Please, Statscan. Don’t mess this up.

November 29

Faculty Salary Data

We haven't looked at faculty salary data in a while.  Time for a gander.

Let's compare data from 2009-10 and 2014-15: a nice round five years.  The data for 2009-10 is from the old Statistics Canada UCASS survey, discontinued but recently revived; the 2014-15 data is from the National Faculty Data Pool, an organization set up by Canadian universities to keep the UCASS exercise going after it was defunded.  I have restricted the sample to the 38 institutions which appear in both datasets.  A few institutions chose not to participate in the NFDP exercise, most significantly Montreal, Laval, Sherbrooke, UNBC, Winnipeg, Brandon, St. FX, Cape Breton and Mount Saint Vincent; Victoria is excluded because its data is not available for 2009-10.  On the whole, these missing institutions tend to have lower salaries than other universities in Canada, and as a result the national averages that arise from this exercise will be somewhat higher than a true national average.  So focus on the change over time (which is very accurate for the included institutions, which account for over 80% of professors across the country) and not on the averages.

Got that?  OK, good.  On to figure 1, which shows average change in professorial salaries by rank.  For purposes of comparability, the 2009-10 data is shown in 2014-equivalent dollars.

Figure 1: Average Canadian Professorial Salaries by Rank, 2009-10 and 2014-15, in constant 2014 dollars


So, what we see here is that across all ranks, faculty salaries for tenured and tenure-track professors have increased faster than inflation since 2009-10.  The increase was largest for full and associate profs, at just over 5%, while for assistant professors the figure is just 1.1%.  However, the average rise in real salaries across all ranks is a whopping 12.4% over five years – or roughly 2.3% per year on top of inflation (for comparison: economy-wide, average wage rates over the same period rose by just 1.5%, or 0.3% per year).  How is this possible?  Simple: the professoriate is aging, and a greater fraction of professors are now in the upper (and better-paid) ranks than was the case five years ago.  Progression through the ranks makes a huge difference.
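A stylized example of that composition effect (all figures invented for illustration, not taken from UCASS or the NFDP): even when salaries within each rank rise by only a few percent, shifting a chunk of the professoriate into the better-paid ranks pushes the all-ranks average up much faster.

```python
# Hypothetical salaries and rank shares, for illustration only.
salaries_2009 = {"assistant": 90_000, "associate": 115_000, "full": 145_000}
salaries_2014 = {"assistant": 91_000, "associate": 121_000, "full": 152_000}  # ~1-5% within rank

shares_2009 = {"assistant": 0.35, "associate": 0.35, "full": 0.30}
shares_2014 = {"assistant": 0.25, "associate": 0.35, "full": 0.40}  # an older professoriate

def overall_average(salaries, shares):
    """Weighted average salary across ranks."""
    return sum(salaries[rank] * shares[rank] for rank in salaries)

avg_2009 = overall_average(salaries_2009, shares_2009)
avg_2014 = overall_average(salaries_2014, shares_2014)

print(f"All-ranks average, 2009: ${avg_2009:,.0f}")
print(f"All-ranks average, 2014: ${avg_2014:,.0f}")
# The all-ranks increase far exceeds any within-rank increase.
print(f"All-ranks increase: {avg_2014 / avg_2009 - 1:.1%}")
```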

Now, let's compare Canadian salaries to American ones, using the American Association of University Professors' Annual Report on the Economic Status of the Profession for 2014-15.  This is tricky for three reasons.  The first is the problem of differing exchange rates; I deal with this by using the 2014 purchasing power parity value (C$1 = US$0.85).  The second is that the US has a much wider variety of institutions included in its national statistics: at the top end there are a lot of very rich private universities, and at the bottom there are a lot of institutions which are what we would call community colleges, neither of which are included in the Canadian data.  To deal with this, I chose to compare professors at public doctoral institutions in the US only with professors at the 13 research-intensive universities in Canada for which the National Faculty Data Pool has data (i.e. the U-15 minus Montreal and Laval).

The third and trickiest issue is how to account for the fact that American salaries cover nine months of work while Canadian ones cover twelve, with Americans free to top up their salary by up to two months' worth of their regular salary (2/9 = 22%) with money from research grants (this is sometimes called "summer salary").  To show a range of possible comparators, I show 9-month US base salaries, 12-month salaries for those with summer salary, and a weighted average of the two, based on data from the National Research Council's Assessment of Research-Doctorate Programs in the United States suggesting that 69% of academic staff at research institutions hold research grants.  Note that no data exists as to how often grant money actually gets used for summer salary; for lack of data I assume here that everyone who receives a grant takes the maximum two months, which almost certainly results in an overestimate for US salaries, so caveat emptor, etc.
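Putting those adjustments together, here is a minimal sketch of how the three US comparator figures could be assembled from a 9-month base salary. The $100,000 base is just a placeholder; the PPP rate, the 2/9 top-up and the 69% grant-holding share are the figures described above.

```python
PPP_RATE = 1 / 0.85   # C$ per US$ at the 2014 purchasing power parity used above
TOP_UP = 2 / 9        # maximum summer salary: two extra months on a 9-month base
GRANT_SHARE = 0.69    # share of research-university faculty assumed to hold grants

def us_comparators(base_9mo_usd):
    """Return the three US comparator salaries, converted to C$ at PPP."""
    base = base_9mo_usd * PPP_RATE
    with_summer = base * (1 + TOP_UP)  # grant holders taking the maximum top-up
    weighted = GRANT_SHARE * with_summer + (1 - GRANT_SHARE) * base
    return base, with_summer, weighted

# Placeholder base salary, purely for illustration.
base, with_summer, weighted = us_comparators(100_000)
print(f"9-month base:       C${base:,.0f}")
print(f"With summer salary: C${with_summer:,.0f}")
print(f"Weighted average:   C${weighted:,.0f}")
```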

With that in mind, Figure 2 provides the comparison of salaries across professors at public research universities in Canada and the US.

Figure 2: Canadian vs. US Professorial Salaries at Public Doctoral/Research Universities by Rank, 2014-15, in Canadian $ at PPP.


The quick conclusion from Figure 2 is that base salaries in Canada are higher than those in the US, but that much of the gap goes away once research dollars are included, especially for full professors.  Still, across all ranks, Canadian professors at research universities have higher average salaries ($144,153) than American ones ($127,298), and this result remains true even if we look only at American professors with research grants ($134,879).

Now on to figure 3 where we look at changes in salaries over the past five years.  I’ve again restricted the comparison to research/doctoral universities, but for fun I’ve included US privates.

Figure 3: Real Change in Salaries at Public Doctoral/Research Universities by Rank, 2009-10 to 2014-15, Canada vs. US


Across all ranks at doctoral/research universities, Canadian research university professors’ salaries rose 13.3% after inflation.  For US privates, the equivalent was 2.9% and at US publics it was negative 0.8%.  At each individual rank, the differences are smaller (and in fact at the assistant professor level, rises in Canadian salaries are smaller than in the US).  Why the difference?  Well, mainly, it’s that in the two countries we are seeing two completely different demographic shifts.  In the US, a decreasing percentage of professors are of “full” status, whereas in Canada it is increasing.  Their lower ranks are growing, ours are shrinking.

I would just remind everyone that these stonking increases in compensation are occurring in a period which the Canadian Association of University Teachers (CAUT) continues to refer to as one of "austerity".  I therefore propose that CAUT get on the phone to their counterparts in Greece and explain this fascinating model of austerity, in which the average professor receives annual raises of 1.5 to 2.5% above inflation, year after year.  I bet they'd really get a kick out of it.

November 25

The Australian Experiment in Cutting Red Tape

One thing everybody hates is red tape – especially pointless reporting requirements which take up time and money and deliver little to no value.  Of late, Canadian universities have been talking more and more about various types of reporting burden and how they'd really like to be freed from some of it.  For those interested in this subject, it's instructive to see how the issue has been handled in Australia.

The peak university body in Australia (called – appropriately – Universities Australia) began the drumbeat on this issue about six years ago.  It commissioned an independent third party (self-interested note to university associations: third-party investigations give your policy positions credibility!) to provide an authoritative report on universities' reporting requirements.  The report went into exhaustive detail about how much staff time and IT resources institutions devote to each of 18 separate data reports required by the commonwealth government.  What it found was that the median Australian institution was spending 2,000 days of staff time and $800,000-900,000 per year on these reports, roughly a third of which went on collecting data on research publications.

Now, that may not sound like much.  But that's only the data going to the federal ministry responsible for higher education.  It did not include reporting costs related to quality assurance bodies and submissions to the national higher education regulator(s).  It did not include the costs of research assessment exercises (and certainly didn't count the cost of applying to various funding agencies for money, which is a whole other nightmare story in and of itself).  It did not count regulation related to state governments (Australia is a federation, but in contrast to Canada, higher education is mostly though not exclusively a federal responsibility), or anything relating to required reporting to the charities commission, reporting on government compacts (similar to Ontario's MYAs), health agencies, the Australian Bureau of Statistics, professional registration bodies… the list goes on.

The point here is not that institutions should be free of reporting requirements.  If we want transparency and good system stewardship, we need institutions to be providing a lot of data – in many cases much more data than they currently provide.  The point is that nobody is co-ordinating those data requests and making any effort to reduce overlap.  If you're getting data/reporting requests from a dozen or more different bodies, it would be useful if those bodies spoke to each other once in a while.  Also, as a general principle, regulatory requests should be kept to what is absolutely needed rather than what regulators might just like to know (appallingly, the Government of Ontario last year attached a rider to a childcare bill giving itself the right to take any piece of data held by an Ontario university or college, including physical and mental health records, something which in my line of work is known as "as far away from good practice as humanly possible").

There were, I think, two key suggestions which came out of this exercise.  One was that the government should be required to post a comprehensive annual list and timetable of institutional reporting.  This was less for the universities' benefit than the government's: it helps to be actively reminded of other people's reporting burdens.  The second was a very sensible suggestion about how a streamlining of requirements could be handled through the creation of a national central data repository.  The design of this system is shown in the figure below.

[Figure: proposed design for a national central data repository – not reproduced here]

This is similar in spirit to the way North American universities have created “common data sets” in reaction to requests for information from rankers and guide-book makers; where it differs is that it brings data customers into the heart of the data collection process, and it explicitly requires them to put data out into statistical reports for public consumption.  In other words, part of the quid pro quo for more streamlined reporting is more transparent reporting.  Which is a lesson I think Canadian institutions should take to heart.

The results of this were mixed.  The government held its own hearings on regulation, which led to significant cuts to the main higher education regulator, TEQSA, which left the universities somewhat relieved (they got a much lighter-touch regulator as a result) and somewhat horrified (while they liked a light touch for themselves, they were panicked at the prospect of light-touch regulation for the country's many private providers).  As for the report commissioned by Universities Australia, while the government responded to the review in a very positive manner, very little in the way of concrete change seems to have happened.

Still, these reports changed the tone of the discussion around higher education, at least for a time.  It would be useful to try something similar here – especially in Ontario.

November 24

Who’s More International?

We sometimes think about international higher education as being “a market”. This is not quite true: it’s actually several markets.

Back in the day, international education was mostly about graduate students, specifically at the doctoral level. Students did their "basic" education at home and then went abroad to get research experience or simply to emigrate and become part of the host country's scientific structure. Nobody sought these students for their money; to the contrary, these students were usually getting paid in some way by their host institution. They were not cash cows, but they did (and still do) contribute significantly to their institutions in other ways, primarily as laboratory workhorses.

In this market, the United States was long the champion, since its institutions were the world's best and could attract top students from all over the world. In absolute terms, it is still the largest importer of doctoral students. But in percentage terms, many other countries have surpassed it. Most of them, like Switzerland, are pretty small, so even modest absolute numbers of international students make up a huge proportion of the student body (in this case, 55%). The UK and France, however, are both relatively large markets, and despite their size they now lead the US in terms of the percentage of doctoral students who are international (42% and 40% vs. 35%). Canada, at 27%, is right around the OECD average.

Figure 1: International Students at Doctoral Level as Percentage of Total

Let's turn now to Master's students, who most definitely *are* cash cows. Master's programs are short degrees, mainly acquired for professional purposes, and thus people are prepared to pay a premium for good ones. The biggest markets here are in fields like business, engineering and some of the social sciences. Education could be a very big market for international Master's students but tends not to be, because few countries (or institutions, for that matter) seem to have worked out the secret of international programs in what is, after all, a highly regulated profession. In any case, this market segment is where Australia and the UK absolutely dominate, with 40% and 37% of their Master's students being international. Again, Canada is a little bit better than the OECD average (14% vs. 12%).

Figure 2: International Students at Master’s Level as Percentage of Total

Figure 3 turns to the market which is largest in absolute terms: undergraduate students. Percentages here tend to be smaller because domestic undergraduate numbers are so large, but we're still talking about international student numbers in the millions. The leader here is – no, that's not a misprint – Austria at 19% (roughly half of them come from Germany – for a brief explainer see here). Other countries at the top will look familiar (Great Britain, New Zealand, Australia), and Canada doesn't look too bad at 8% (which strikes me as a little low) compared to an OECD average of 5%. What's most interesting to me is the US number: just 3%. That's a country which – in better days, anyway – has an enormous amount of room to grow its international enrolment, and which, had it not just committed an act of immense self-harm, would have been a formidable competitor to Canada for years to come.

Figure 3: International Students at Bachelor’s Level as Percentage of Total


Finally, let's look at sub-baccalaureate credentials or, as the OECD calls them, "short-cycle" programs. These are always a little bit complicated to compare because countries' non-university higher education institutions and credentials are so different. Many countries (e.g. Germany) do not even have short-cycle higher education (they have non-university institutions, but those still give out Bachelor's degrees). In Canada, obviously, the term refers to diplomas and certificates given out by community colleges. And Canada does reasonably well here: 9% of students are international, compared to 5% across the OECD as a whole. But look at New Zealand: 24% of their college-equivalent enrolments are made up of international students. Some of those will be going to their Institutes of Technology (which in general are really quite excellent), but some will also be students from various Polynesian nations coming to attend one of the Maori Wānanga.

Figure 4: International Students in Short-Cycle Programs as Percentage of Total


Now, if you look across all these categories, two countries stand out as doing really well without being one of the "usual suspects" like Australia or the UK. One is Switzerland, which is quite understandable: it's a small nation with a few really highly ranked universities (especially ETH Zurich), it is bordered by three of the biggest countries in the EU (Germany, France and Italy), and it provides higher education in each of their national languages. The more surprising one is New Zealand, which is small, has good higher education but no world-leading institutions, and is located in the middle of nowhere (or, at least, 5,000 miles from the nearest country which is a net exporter of students). Yet they seem to be able to attract very significant (for them, anyway) numbers of international students in all the main higher education niches. That's impressive. Canadians have traditionally focused on what countries like Australia and the UK are doing in international higher education because of their past track record. But on present evidence, it's the Kiwis we should all be watching, and in particular their very savvy export promotion agency, Education New Zealand.

Wellington, anyone?
