HESA

Higher Education Strategy Associates

Author Archives: Alex Usher

May 31

Moral Panics About Kids in Basements

Every once in a while you see a news story saying something along the lines of “oh my, so many people in their 20s living with their parents, failure to launch, my God, won’t somebody do something”, usually followed by some freaking out about housing prices and – if we’re really lucky – something about humanities degrees and working at Starbucks as well.  Like this one from the CBC last week.

The excuse for that CBC piece was an American study which noted that 2014 was the first time that the most common living arrangement for American 18-34 year olds was NOT under their own roof with a spouse/common-law partner.  That of course has as much or more to do with changing patterns of university attendance and family formation as it does with patterns of leaving the parental home.  Yes, the percentage living with their parents has risen from 20% to 32% over the last fifty-five (!) years (not exactly a fast-moving phenomenon), but the percentage living with a spouse/CLP has fallen from 62% to 32%.  In fact, if you read a little further down, you realize that the percentage of 18-34 year olds living at home in the US today is… slightly lower than it was in 1940.

So, you know, maybe this isn’t such a big deal after all.

But still, people want to make it a story.  For instance, in that CBC story, they note that “according to the most recent Statistics Canada census, 42.3 per cent of people in their twenties lived at home in 2011. That’s well up from previous generations, including 32.1 per cent in 1991 and 26.9 per cent in 1981.”

That’s technically true.  But as with so many of the horror-stories about youth out there, most of the big changes happened before 1996, not recently.  Here are the actual census numbers:

Percentage of Canadians aged 20-29 living with parents, 1981-2011.


In other words, this clearly isn’t a recent phenomenon.  It’s not something caused by the great recession or housing prices.  It’s something that’s happened quite gradually over time.  What could it be?

First, note that the issue has a lot more to do with 20-24 year olds than it does with 25-29 year olds.  And what’s changed over the last 30 years among 20-24 year olds?  Higher education, mainly.  University participation rates have almost tripled in that time; college participation has grown as well.  Call me crazy, but I think that increased rates of study – and hence decreased labour market participation – might have an effect on living arrangements.

Second, the youth demographic of this decade looks a lot different than the one of the 1980s.  It’s a lot less white, for one thing.  In 1981, visible minorities made up 4.7% of the Canadian population.  If you look at the 15-24 age group in 2011, it’s about 25%, three quarters of which are of Asian descent.  Asian cultures don’t really share the perspectives of those of British, French and North European descent about the need for kids to get out of the house early.  It would be useful to look at the “living at home” data by ethnicity over time; it may well be that part of the change we are seeing is due to the changing ethnic make-up of the country rather than actual changes in behaviour.

Now, it’s hard to quantify the effects of increased study and changing demographics, so there might be some underlying economic changes to blame for the changing rates of leaving home.  But I think those two factors offer enough common-sense explanation for this phenomenon that we should avoid making too big a deal over it.

May 30

The 2016 U21 Rankings

Universitas 21 is one of the higher-prestige university alliances out there (McGill, Melbourne and the National University of Singapore are among its members).  Now, like a lot of university alliances, it doesn’t actually do much.  The Presidents or their alternates meet every year or so, they have some moderately useful inter-institution mobility schemes, that kind of thing.  But the one thing it does which gets a lot of press is issue a ranking every year.  Not of universities, of course (membership organizations which try to rank their own members tend not to last long), but rather of higher education systems.  The latest one is available here.

I have written about the U21 rankings before, but I think it’s worth another look this year because there have been some methodological changes and also because Canada has fallen quite a ways in the rankings.  So let’s delve into this a bit.

The U21 rankings are built around four broad concepts: Resources (which makes up 20% of the final score), Environment (20%), Connectivity (20%) and Output (40%), each of which is measured through a handful of variables (25 in all).  The simplest category is Resources, because all the data is available through OECD documentation.  Denmark comes top of this list – this is before any of the cuts I talked about back here kick in, so we can expect it to fall in coming years.  Then, in a tight bunch, come Singapore, the US, Canada and Sweden.
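Mechanically, the overall U21 score is just a weighted sum of the four category scores.  A minimal sketch of that calculation, using the weights above but with made-up illustrative category scores (not any country’s actual numbers):

```python
# Weighted-sum sketch of the U21 overall score. Weights are from the
# ranking's methodology as described above; the category scores below
# are illustrative placeholders, not real U21 data.
weights = {"Resources": 0.20, "Environment": 0.20,
           "Connectivity": 0.20, "Output": 0.40}

scores = {"Resources": 90.0, "Environment": 70.0,
          "Connectivity": 80.0, "Output": 85.0}  # made-up numbers

overall = sum(weights[k] * scores[k] for k in weights)
print(f"overall: {overall:.1f}")  # (90 + 70 + 80) * 0.2 + 85 * 0.4 = 82.0
```

Note how the 40% weight on Output means a country’s research-and-access score moves the overall result twice as much as any other category.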

Next comes “Environment”, which is a weird hodge-podge of indicators around regulatory issues, institutional financial autonomy, percentages of students and academic staff who are female, a survey of businesses’ views of higher education quality and – my favourite – how good each country’s education data is.  Now, I’m all for giving Canada negative points for Statscan’s uselessness, but there’s something deeply wrong with any indicator of university quality which ranks Canada (34th) and Denmark (31st) behind Indonesia (29th) and Thailand (21st).  Since most of these scores come from survey responses, I think it would be instructive to publish those responses, because they flat-out do not meet the fall-down-laughing test.

The Connectivity element is pretty heavily weighted towards things like the percentage of foreign students and staff, and the percentage of articles co-authored with foreign scholars.  For structural and geographical reasons, European countries (especially the titchy ones) tend to do very well on this measure, and so they take all of the top nine spots.  New Zealand comes tenth, Canada eleventh.  The Output measure combines research outputs and measures of access, plus an interesting new one on employability.  However, because not all of these measures are normalized for system size, the US always runs away with this category (though, due to some methodological tweaks, less so than it used to).  Canada comes seventh on this measure.

Over the last three years, Canada has dropped from third to ninth overall.  The table below shows why this is the case.

Canada’s U21 Ranking Scores by Category, 2012-2016


In 2015, when Canada dropped from 3rd to 6th, it was because we lost points on “environment” and “connectivity”.  It’s not entirely clear to me why we lost points on the latter, but it is notable that on the former there was a methodological change to include the dodgy survey data I mentioned earlier, so this drop may simply reflect a methodological change.  This year, we lost points on resources, which frankly isn’t surprising given controls on tuition and real declines in government funding in Canada.  But it’s important to note that the way this is scored, what matters is not whether resources (or resources per student) are going up or down, it’s whether they are going up or down relative to the category leader – i.e. Denmark.  So once Denmark’s announced cuts kick in, we could expect our scores to rise over the next few years even with no change in our own funding levels.

May 27

Three Unconnected Thoughts on PSE and Aboriginal Peoples

1)      Changing Disciplines

In the last five years or so, I’ve seen a real change in the way Aboriginal students are moving through the country’s PSE system.  For a whole number of reasons, aboriginal students were traditionally concentrated either in humanities disciplines like history and sociology, or they were in disciplines which led to careers in social services or direct band employment (child care, police foundations, education, nursing).  STEM and Business fields simply weren’t in the picture.  That’s changed substantially over the past few years.  Aboriginally-focussed business programs are popping up all over the place.  Increasingly, we are seeing enrolments in STEM (though there is still a long way to go).  So what’s changed?

A couple of things, I think.  First, the demographics of First Nations students are changing.   Time was, a very high proportion of aboriginal students came in after quite a period out of school, typically in their mid-20s.  Nowadays, we are seeing a lot more students transition at an earlier age, direct from high school (and more often than not from urban, mainstream high schools).  On average, this background prepares them better for PSE than graduation from on-reserve schools. Hence, they tend to be applying for and getting access to more selective courses.

But this raises the question: what’s behind this shift at the secondary level?  A lot of it is demographics.  A greater proportion of First Nations youth are living in urban areas, and so on average they have better access to better schooling.  Drop-out rates are still high and there is much to be done to improve inner-city schools, but conditional on completing high school, First Nations graduates seem about as prepared as mainstream students to deal with the rigours of PSE.

Another important factor here is the aging of the last generation to have experienced residential schools.  Parents pass on their views of education to their children; unsurprisingly, those who had been through residential schools weren’t always inclined to encourage their children to invest a lot of their identity in schooling. On top of poverty, racism, etc., this probably had a lot to do with low aboriginal participation rates until fairly recently.  But most residential schools closed in the 1970s; so most of the kids now coming through the system are the grandchildren of the last residential schools generation.  Soon it will be the great-grandchildren.  The bad memories of residential schools are by no means gone, but they are of less relevance in terms of pre-disposition to invest in schooling, and that matters.

Finally, there’s the money issue.  Institutions are finding it a whole lot easier now to raise funds for aboriginal scholarships or other focussed initiatives than they used to.  And that certainly improves the quality of the aboriginal student experience, which probably contributes to improved completion rates.

2)  Money

People are rightly getting peeved at the federal government for not having come through on its promise to add $50 million in funding for First Nations education through the Post-Secondary Student Support Program (PSSSP).  I expect that’s a promise the Liberals will try to fulfill next year or the year after (there may be a delay as the Feds ponder the implications of the Daniels decision, which puts Métis Canadians on the same legal footing as First Nations vis-à-vis the federal government).

But what people haven’t remarked on is the huge boost in funding that First Nations students could receive should they sign up for federal and provincial student aid.  In Ontario, virtually all on-reserve students will be eligible for $9,000 in grants through the new Ontario Student Grant; elsewhere, they will be eligible for at least $3,000 through the improved Canada Student Grant.  If First Nations make their students apply for this aid before applying to their bands for PSSSP funding, then all their students will have at least some base amount of funding.  That would mean bands wouldn’t need to give as much to each individual student, and could use the freed-up funds to provide aid to more students, thus alleviating the well-known waiting-list problem.  But it would take a bit of organization to make sure band educational counsellors know how to help their students navigate the federal/provincial aid system.  Something our friends at Aboriginal Affairs might want to think about.

3) Truth & Reconciliation

Since the release of Justice (now Senator) Murray Sinclair’s report last year, some Canadian post-secondary institutions have made some extremely useful gestures towards reconciliation, like requiring all students to take at least one class in aboriginal culture or history.  Which is great, except it’s not actually what Sinclair asked for.  Rather, he asked that students in specific professional programs (i.e. health and law) be required to take courses in Aboriginal health and law, respectively.  As I said at the time, I thought this was a stretch, and that prestigious law programs would resist it (quietly and passive-aggressively, of course).

It’s been a year now – and to my knowledge (everybody please correct me if I am wrong) – no university law or medical school has adopted this proposal.  I wonder how long before this becomes an issue?

May 26

Taking Advantage of Course Duplication

I recently came across an interesting blog post from a professor in the UK named Thomas Leeper (see here), talking about the way in which professors the world over spend so much time duplicating each other’s work in terms of developing curricula.  Some key excerpts:

“…the creation of new syllabi is something that appears to have been repeated for decades, if not centuries. And yet, it all seems rather laborious in light of the relatively modest variation in the final courses that each of us creates on our own, working in parallel.”

“… In the digital age, it is incredibly transparent that the particular course offerings at every department are nearly the same. The variation comes in the quality of lectures and discussion sections, the set of assignments required of students, and the difficulty of the grading.”

“We expend our efforts designing strikingly similar reading lists and spend much less time on the factors that actually differentiate courses across institutions: lecture quality, learning activities, and feedback provision… we should save our efforts on syllabus construction and spend that time and energy elsewhere in our teaching.”

Well, quite.  But I think you can push this argument a step further.  I’ve heard (don’t quote me because I can’t remember exactly where) that if you group together similar courses across institutions (e.g. Accounting 100, American History survey courses, etc.), then something like 25% of all credits awarded in the United States are accumulated in just 100 “courses”.  I expect numbers would not be entirely dissimilar in other Anglophone countries.  And though this phenomenon probably functions on some kind of power law – the next 100 courses probably wouldn’t make up 10% of all credits – my guess is your top 1000 courses would account for 50-60% or more of all credits.

Now imagine all Canadian universities decided to get together and make a really top-notch set of e-learning complements to each of these 1,000 courses – the kinds of resources that go into a top-notch MOOC (like, for instance, the University of Alberta’s Dino 101) – in order to improve the quality of each of these classes.  Not that the courses would be taught via MOOC – teaching would remain the purview of individual professors – but each would have excellent dedicated online resources associated with it.  Let’s say they collectively put $500,000 into each of them over the course of four years.  That would be $500 million in total, or $125 million per year.  Obviously, those aren’t investments any single institution could contemplate, but if we consider the investment from the perspective of the entire sector (which spends roughly $35 billion per year) this is chump change.  $120 per student.  A trifle.
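The arithmetic in the paragraph above is easy to check.  A quick sketch, where the enrolment figure is my assumption (roughly a million full-time-equivalent university students) rather than a number from the post:

```python
# Back-of-envelope check of the shared course-investment figures.
n_courses = 1_000
cost_per_course = 500_000        # dollars per course, spread over four years
years = 4
sector_spend = 35_000_000_000    # rough annual spending of the whole sector
students = 1_050_000             # assumed FTE enrolment (my estimate)

total = n_courses * cost_per_course        # $500M all-in
per_year = total / years                   # $125M per year
share = per_year / sector_spend            # well under 1% of sector spend
per_student = per_year / students          # on the order of $120/student

print(f"${total/1e6:.0f}M total, ${per_year/1e6:.0f}M/yr, "
      f"{share:.2%} of sector spend, ${per_student:.0f}/student")
```

However you tweak the enrolment assumption, the annual cost stays well under half a percent of what the sector already spends.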

So, a challenge to university Presidents and Provosts: why not do this?  We’re talking here about a quantum jump in the learning resources available for half the credits undergraduates earn each semester.  Why not collectively invest money to improve the quality of the learning environment?  Why not free up professors’ time so they can focus more on lecture quality and feedback provision?  And to provinces and CMEC: why not create incentives for institutions to do precisely this?

A debate should be had.  Let’s have it.

May 25

Unbundling

Over the past few years, we’ve heard recurrent calls and/or predictions that higher education will soon be “unbundled”.  What does this term mean, exactly?

It’s a metaphor that’s been used in more than one way.  The unbundling allusion is mostly to the music industry which has seen technology allow consumers to unbundle its main product (albums) into smaller discrete chunks (songs), but there are also allusions here to the cable TV industry and to journalism.  Universities, it is argued, provide a whole bunch of disparate services, not all of which are of equal quality or are equally desired by the customer.  So why not have them unbundle their products (depending on the writer, this is either something devoutly to be encouraged or simply a long-term inevitability to be prepared for) and allow students to pick and choose between institutions and mix/match the offerings of various competitors?

Now, a fair question at this point is “what services, exactly, are to be unbundled”?  This varies somewhat depending on the author.  At the most basic level, authors who write about this agree that the teaching function and the certification function need to be unbundled (something I’ve written about recently).  This would – in theory – allow students to take courses from multiple institutions and mix and match them into their own self-directed program. 

But some people go much further.  American teacher-turned-venture capitalist Michael Staton, for instance, suggests that universities actually provide 12 different sets of services including content delivery (classes), pathways and sequencing (the arrangement of classes into a degree), a signal of educational achievement (credentials), provision of an affiliate network (friends made at school, alumni), meta-content (learning to think about how to think), and a rite of passage and a culture of personal exploration. 

Now, Staton’s model is useful if all we are trying to conceptualize is the various “products” which a university is actually selling (or bundling, if you will).  Most universities don’t actually think about this in a very clear manner; certainly not to the extent that they could explain to you how they distribute their resources across each of these “products”.  But Staton is at least half-serious in saying that other organizations could do these jobs just as well as universities, and that one could imagine students bundling together their own experience from different institutions’ offerings (if you want a sense of how he imagines this playing out, read this piece he authored a few years ago).

It’s an interesting idea.  The problem is that there is no evidence students actually want an unbundled experience.  Unbundling requires each student to knit together her own education experience from across various providers and platforms which is, to put it mildly, an enormous hassle.  Contrary to what the unbundling enthusiasts believe, the fact that the university integrates so many services in a single offering is one of the most important things it has going for it.

That said, simply being an integrator of services isn’t an argument for being the provider of said services.  Universities could outsource some of these functions, and in some places, they already do.  Already we see institutions starting to outsource certain student service functions which used to be considered core (like course advising).  In Europe, what we call “student life” functions are often performed by agencies outside the university (living quarters are sometimes managed by para-statal agencies which serve all the institutions in a city rather than just a single school).  Who’s to say it might not be better for all concerned if universities hired specialist companies to better manage content & sequencing of degrees, to measure skills acquisition in a degree outside of simple content mastery, or to build and provide access to cultivated global social networks for students? 

In other words, while there may not be much demand for students to purchase unbundled services, there may be some gains to be had in universities unbundling the provision of services.  Worth a ponder, anyway. 

May 24

Innovation Policy: Beyond Digital and Cleantech

So, earlier this month, federal Innovation Minister Navdeep Bains wrote an op-ed in the Toronto Star which lays out, as clearly as possible, where the current government’s thinking is with respect to Innovation policy.  Some of it is good, but some of it is dreck.

Let’s start with the good stuff :

“Innovation is fundamental to our continued growth and job creation, and it’s impossible to predict where and how disruption will happen. It can be in a start-up garage in Vancouver, a mine in Saskatoon, or a fishery in Saint John.”

OK, so the mining-in-Saskatoon reference is a little odd (there’s exactly one mine in the city, a salt mine which straddles the city limits), but that aside the Minister seems to understand that innovation (that is, the act of creating new business processes so as to add value) is something that needs to happen economy-wide.  And it’s something that needs to be bred into companies’ DNA.  So far, so good.

What’s a bit disturbing, though, is how quickly Bains goes from “this is something everybody needs to do” to “we’re going to be great in digital! We’re going to invest in cleantech!” and other forms of highly sector-specific boosterism.  Now, I get that governments want to be seen as leading us all towards the industries of the future and reaping the political rewards of said leadership, but man there is some serious cognitive dissonance at work here.  You can’t simultaneously believe that innovation policy is an economy-wide thing and then start babbling about how you plan to plough money into specific sectors.

Bains seems to have difficulty distinguishing between “innovation policy” and “innovation sectors”.  There is a difference.  Innovation policy, as Dan Breznitz underlines here, should focus on helping new technologies and business models flourish.  By definition, this policy has to be economy-wide, because these new technologies and models don’t exist yet.   “Innovation sectors” is a jargon-y term (used much more in Canada than elsewhere if Google is any guide) meaning roughly “sectors which attract a lot of venture capital”, or in practical terms: ICT, cleantech and biotech.

To be fair, Bains isn’t alone: most governments in Canada have this problem.  This is why so many of their innovation policies are scarcely more than “Digital! Cleantech! Woo!”  And there’s nothing wrong (in principle) with trying to promote the digital or cleantech sectors.  But we have to come clean that doing so is industrial policy, not innovation policy.  Similarly, science policy is not innovation policy.  Neither is growth policy, and neither are policies promoting entrepreneurship.  They all feed on one another.  They all (in theory) can complement one another, but they are different.  And if you confuse them, the result will be bad policy.

Like I said, pretty much everyone starts their innovation policy with “Digital! Cleantech! Woo!”  And universities and colleges are complicit in this because that’s the easiest way for them to get their hooks into whatever flood of money is going to come out of government when innovation talk gets going.  But the mark of good innovation policy is the extent to which it transcends this kind of simplistic formula.  Here’s hoping the Liberals figure out how to do this in the next few months.

May 20

The Times Higher Education “Industry Income” Rankings are Bunk

A few weeks ago, the Times Higher Education published a ranking of “top attractors of industry funds”.  It’s actually just a re-packaging of data from its major fall rankings exercise: “industry dollars per professors” is one of its thirteen indicators and this is just that indicator published as a standalone ranking.  What’s fascinating is how at odds the results are with published data available from institutions themselves.

Take Ludwig-Maximilians University in Munich, the top university for industry income according to the THE.  According to the ranking, the university collects a stonking $392,800 in industry income per academic.  But a quick look at the university’s own facts-and-figures page reveals a different story.  The institution says it receives €148.4 million in “outside funding”.  But over 80% of that is from the EU, the German government, or a German government agency.  Only €26.7 million comes from “other sources”.  This is at a university which has 1,492 professors.  I make that out to be €17,895 per prof.  Unless the THE gets a much different $/€ rate than I do, that’s a long way from $392,800 per professor.  In fact, the only way the THE number makes sense is if you count the entire university budget as “external funding” (1,492 profs times $392,800 equals roughly $600 million, which is pretty close to the €579 million figure which the university claims as its entire budget).

Or take Duke, second on the THE list.  According to the rankings, the university collects $287,100 in industry income per faculty member.  Duke’s Facts and Figures page says Duke has 3,428 academic staff.  Multiply that out and you get a shade over $984 million.  But Duke’s financial statements indicate that the total amount of “grants, contracting and similar agreements” from non-government sources is just under $540 million, which would come to $157,000 per prof, or only 54% of what the Times says it is.

The 3rd-place school, the Korea Advanced Institute of Science and Technology (KAIST), is difficult to examine because it seems not to publish financial statements or have a “facts & figures” page in English.  However, assuming Wikipedia’s estimate of 1,140 academic staff is correct, and if we generously interpret the graph on the university’s research statistics page as telling us that 50 of the 279 billion won in total research expenditures comes from industry, then at current exchange rates that comes to a shade over $42 million, or $37,000 per academic.  Or, one-seventh of what the THE says it is.

I can’t examine the fourth-placed institution, because Johns Hopkins’ financial statements don’t break out its grant funding by public and private sources.  But tied for fifth place is my absolute favourite, Anadolu University in Turkey, which allegedly has $242,500 in income per professor.  This is difficult to check because Turkish universities appear not to publish their financial documents.  But I can tell you right now that this is simply not true.  On its facts-and-figures page, the university claims to have 2,537 academic staff (if you think that’s a lot, keep in mind Anadolu’s claim to fame is as a distance-ed university: it has 2.7 million registered students, roughly half of whom are “active”, in addition to the 30,000 or so it has on its physical campus).  For both numbers to be true, Anadolu would have to be pulling in $615 million per year in private funding, and that simply strains credulity.  Certainly, Anadolu does do quite a bit of business – a University World News article from 2008 suggests that it was pulling in $176 million per year in private income (impressive, but less than a third of what is implied by the THE numbers) – but much of that seems to come from what we would typically call “ancillary enterprises” – that is, businesses owned by the university – rather than external investment from the private sector.

I could go through the rest of the top ten, but you get the picture.  If only a couple of hours of googling on my part can throw up questions like this, then you have to wonder how bad the rest of the data is.  In fact, the only university in the top ten where the THE number might be close to legit is Wageningen University in the Netherlands.  This university lists €101.7 million in “contract research”, and has 587 professors.  That comes out to a shade over €173,000 (or about $195,000) per professor, which is at least within spitting distance of the $242,500 claimed by the THE.  The problem is, it’s not clear from any Wageningen documentation I’ve been able to find how much of that contract research is actually private sector.  So it may be close to accurate, or it may be completely off.
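The per-professor arithmetic running through this post can be reproduced in a few lines.  The income and staff figures are the ones quoted above from each institution’s own publications; the exchange rate is my rough assumption for the period, not anything the THE publishes:

```python
# Sanity-checking THE's "industry income per academic" figures against
# institutions' own published numbers (as quoted in the post above).
EUR_USD = 1.12          # assumed euro-to-dollar rate, approximate

# name: (industry income in USD, academic staff, THE's claimed $/prof)
checks = {
    "LMU Munich": (26_700_000 * EUR_USD,   1_492, 392_800),
    "Duke":       (540_000_000,            3_428, 287_100),
    "Wageningen": (101_700_000 * EUR_USD,    587, 242_500),
}

per_prof = {}
for name, (income_usd, staff, the_claim) in checks.items():
    per_prof[name] = income_usd / staff
    print(f"{name}: ${per_prof[name]:,.0f}/prof vs THE's ${the_claim:,} "
          f"({per_prof[name] / the_claim:.0%} of the claimed figure)")
```

Run it and Wageningen lands within hailing distance of the THE figure, Duke at roughly half, and LMU at a small fraction – which is the whole point.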

The problem here is a problem common to many rankings systems.  It’s not that the Times Higher is making up data, and it’s not that institutions are (necessarily) telling fibs.  It’s that if you hand out a questionnaire to a couple of thousand institutions who, for reasons of local administrative practice, define and measure data in many different ways, and ask for data on indicators which do not have a single obvious response (think “number of professors”: do you include clinicians?  Part-time profs?  Emeritus professors?), you’re likely to get data which isn’t really comparable.  And if you don’t take the time to verify and check these things (which the THE doesn’t, it just gets the university to sign a piece of paper “verifying that all data submitted are true”), you’re going to end up printing nonsense. 

Because THE publishes this data as a ratio of two indicators (industry income and academic staff) but does not publish the indicators themselves, it’s impossible for anyone to work out where the mistakes might be. Are universities overstating certain types of income, or understating the number of professors?  We don’t know.  There might be innocent explanations for these things – differences of interpretation that could be corrected over time.  Maybe LMU misunderstood what was meant by “outside revenue”.  Maybe Duke excluded medical faculty when calculating its number of academics.  Maybe Anadolu excluded its distance ed teachers and included ancillary income.  Who knows? 

The problem is that the Times Higher knows that these are potential problems but does nothing to rectify them.  It could be more transparent and publish the source data so that errors could be caught and corrected more easily, but it won’t do that because it wants to sell the data back to institutions.  It could spend more time verifying this data, but it has chosen to hide instead behind sworn statements from universities. To do more would be to reduce profitability. 

The only way this is ever going to be solved is if institutions themselves start making their THE submissions public, and create a fully open database of institutional characteristics.  That’s unlikely to happen because institutions appear to be at least as fearful of full transparency as the THE.  As a result, we’re likely to be stuck with fantasy numbers in rankings for quite some time yet.

May 19

PhDs in the Humanities

I had the good fortune earlier this week of speaking to the Future of the Humanities PhD conference at Carleton University.  It was an interesting event, full of both faculty and students who are thinking about ways to reform a system that takes students far too long to navigate.  They asked me for my thoughts, so I gave them.  Here’s a précis.

One of the most intractable problems with the PhD (and not just in the humanities) is that it serves a dual purpose. First, it’s a piece of paper that says “you’re a pretty good researcher”; second, it’s a piece of paper that says “you too can become a tenured university professor! Maybe.”

The problem is that “maybe”: lots of people can meet the standard of being a good researcher, but that doesn’t mean there will be university professor jobs for them. Simply, more people want to be professors than there are available spots; eventually, the system says yes to some and no to others. Right now we let the job market play that role. But what if those in charge of doctoral programs themselves played a more active role? What if there was more selectivity at the end of – say – the first or second year of a doctorate? Those deemed likeliest to get into academia would then end up on one track, and the others would be told (gently) that academia was unlikely for them, and offered a place in a (possibly shorter-duration) “professional PhD” track designed to train highly skilled workers destined for other industries. Indeed, some might want to be put on a professional PhD track right from the start.

If you’re going to be selective early on in doctoral programs, then you probably want to re-design their front-ends so they’re not all coursework and – especially – comps. Apparently in some programs, it is not unusual to take three years to complete coursework and comps – and this is after someone has done a master’s degree. This is simply academic sadism. It is certainly important for students heading into the teaching profession to have a grasp of the overall literature. But is it necessary for this work to be placed entirely at the beginning of a program, acting as a barrier to students doing what they really want to do, which is research?

Instead, why not have students and their supervisors jointly work out at the start of a program what literature needs to be covered, and agree to a structured program of covering it over the length of the program? Ideally, students would have their tests at the end, close to the time when they would be going on the job market. You’d need to front-end load the more methodological stuff (so those who end up in the professional stream get it too), but apart from that this seems perfectly feasible.

Of course, that implies that departments – and more importantly individual doctoral supervisors – are prepared to do the work to create individual degree plans with students and actually stick to them. There is really no reason why a five- or even a four-year doctorate – that is, one whose length roughly coincides with the funding package – should be impossible. But expectations have to be clear and met on both sides. Students should have a clear roadmap telling them roughly what they'll be doing each term until they finish; professors need to hold them to it, but equally, professors also need to be held to their responsibilities in keeping students on track.

Many people think of PhDs as “apprenticeships”. But that doesn’t imply just hanging around and watching the “masters”. Go read any apprenticeship standards documents: employers have a very detailed list of skills that they need to impart to apprentices over a multi-year apprenticeship, and both the apprentice and the journeyman have to sign off every once in a while that such skills have been taught.

Or, take another form of knowledge-worker apprenticeship: medicine. When medical students pick a specialty like internal medicine or oncology, they are embarking on a form of multi-year apprenticeship – but one which is planned in detail, containing dozens of assessment points every year. Their assessments cover not just subject matter expertise, but also the aspiring medic’s communication skills, teamwork skills, managerial skills, etc. All things that you’d think we’d want our doctoral students to acquire.

So how about it, everyone? Medical residencies as a model for shorter, more intensive PhDs? Can’t be worse than what we’re doing now.

May 18

Canadian B-Schools and Economic Growth

If there is one thing university Presidents desire, it is to be useful to society – and preferably to the government of the day, too.  After all, post-Humboldt, universities exist to strengthen the state.  The better a university does that, the more it will be appreciated and, hopefully, the better funded it will be.  So it has always struck me as a bit odd how little universities (and business schools in particular) have really done to help address the causes of Canada's perennially sluggish economy.

Canada’s fundamental economic problem is that outside the resource sector, companies struggle to reach scale.  Outside the oligopolistic telecoms and banking sectors, we are a nation of small and medium businesses.  Judging by the party manifestos in last year’s elections, many people like things that way.  Small businesses are good and deserving of lower tax rates, big businesses are bad and deserve to be taxed more heavily. 

The problem with this little story is that it is simply wrong.  Big businesses are crucial to innovation and hence to economic growth.  Big businesses are the ones that have the money to invest in R & D.  They are the ones that can make long-term commitments to training employees (if you don't think firm size plays a role in Germany's ability to sustain its apprenticeship system, you aren't paying attention). People may be rightly cautious about the power of capital and its influence on the political process; but that doesn't mean we shouldn't encourage the formation of large companies in the first place.  Ask the Swedes: their social democracy would never have existed without very large companies like Volvo, Saab and Ikea.

And so the key question is: why don't we have bigger domestic companies in Canada?  Oh sure, we have the occasional behemoth (e.g., Nortel, RIM), but they tend to be ephemeral, and they are the exception rather than the rule.  And when our companies do start getting big, they often sell out to foreign companies.

We can point fingers in a whole bunch of directions – one favorite is a lack of appropriate venture capital.  But to a considerable degree, it's a question of management.  Universities like to talk about how they are teaching entrepreneurship, but getting people to start businesses and getting those businesses to grow are two very different propositions.  We seem not to have a lot of managers who can take companies from their first million in sales to their first ten million, or take our businesses out of the Canadian context and into a global one (if you haven't yet read Andrea Mandel-Campbell's Why Mexicans Don't Drink Molson, on this subject, do – it's revealing).   And for that matter, how is it that our venture capital industry still seems more comfortable with mining projects than life science or biotech?

Can it be – say it softly – a question of education?

We pretend that success in innovation is a function of prowess in tech.  But to a large degree, it's a function of management prowess: how staff can be better motivated, how processes can be changed to add value, how well business or investment opportunities can be spotted.  Might it be that the education of our business elite doesn't include the right training to do these things? 

To be clear here, I don't really have any evidence about this one way or the other.  No one does.  But if I were a university president, or a business dean, it's a question I'd be asking myself.  Because if there's an economic conundrum that needs solving, it's this one, and if there's any way in which universities can contribute, they should.

May 17

How Rich are China’s Universities?

Last week, Mike Gow at the Daxue blog linked to some interesting data recently published by the Chinese government with respect to the budgets of the country's top universities.  It only covers those institutions which report to the Ministry of Education (and therefore misses some important institutions like the University of Science and Technology of China, which reports to the Chinese Academy of Sciences, and the Harbin Institute of Technology, which reports to the Ministry of Industry and Information Technology).  It suggests that, at the very top of the Chinese system, there are some jaw-dropping amounts of money being spent.

Let’s focus just on the C9 schools (the Chinese equivalent of the U-15/Russell Group/AAU/G-8, or at least on the seven for which data was provided).  Here is the data for 2015-16:

Table 1: Income & Enrollments of Top Chinese Universities


*From Wikipedia.  I know, I know, but it’s all I had.

**Using the Big Mac Index to convert from RMB to USD at a rate of 3.57 to 1

Now, the jaw-droppingness of these figures depends a lot on whether you think it makes more sense to compare institutional buying power based on market exchange rates or based on purchasing power parity (PPP).  For universities, which pay salaries in local currency but compete for staff and pay for journals and scientific equipment in an international market, there are good arguments either way.  It should also be noted that it's not 100% clear what is and is not in these figures.  Does Tsinghua's figure include the holding companies that own shares in all of Tsinghua's spin-off businesses?  Unclear.  My guess would be that it includes income from those businesses but not the businesses themselves – but it's hard to know for sure.
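To make the difference between the two conversion bases concrete, here's a minimal sketch in Python. The Big Mac PPP rate of 3.57 RMB/USD comes from the table footnote above; the market rate of roughly 6.5 RMB/USD for 2016 and the 10-billion-RMB budget are illustrative assumptions, not figures from the table:

```python
# Convert an RMB budget to USD two ways: at the market exchange rate
# and at the Big Mac PPP rate. Which basis you pick depends on whether
# an institution's costs are mostly local (salaries) or international
# (staff recruitment, journals, equipment).

MARKET_RATE = 6.5   # RMB per USD, approximate 2016 market rate (assumption)
BIG_MAC_PPP = 3.57  # RMB per USD, the Big Mac Index rate used in the table

def to_usd(budget_rmb: float, rate: float) -> float:
    """Convert an RMB amount to USD at the given RMB-per-USD rate."""
    return budget_rmb / rate

budget = 10_000_000_000  # 10 billion RMB, purely illustrative

print(f"At market rate: ${to_usd(budget, MARKET_RATE):,.0f}")
print(f"At Big Mac PPP: ${to_usd(budget, BIG_MAC_PPP):,.0f}")
# The PPP figure comes out roughly 1.8x the market-rate figure, which is
# why the choice of basis changes the international rankings below so much.
```

The same budget looks almost twice as large on a PPP basis, which is worth keeping in mind when reading the country-by-country comparisons that follow.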

Comparing these numbers to those of top American universities is somewhat fraught, because of the way American universities account for income from their teaching hospitals.  Thus Duke reports about twice as much income per student as Harvard because one includes medical billings and the other does not; if you correct for this, the two institutions are about the same.  Correcting as best I can for teaching-hospital income, and excluding Rockefeller University because it doesn't really have any students and excluding Caltech (which has about $1 million/student in revenue) because it's such an outlier and would break my nice graph, the top five in the US and the top seven in China look like this:

Figure 1: Total Income, Chinese C9 Universities vs. Top 5 US universities, in USD at PPP


The basic point here is that Peking and Tsinghua are – on a PPP basis at least, and excluding medical income on the US side without being sure that it is excluded on the Chinese side – at least roughly in the same league as Harvard, though not quite in the same league as MIT, Stanford and Johns Hopkins.  The rest of the Chinese universities trail a bit: the poorest of these, Xi’an Jiao Tong, would be at about the level of Berkeley if you use a PPP comparison, and Florida State if you use the exchange rate.

Now let’s move to the UK, where the top five universities in terms of dollars per student are Cambridge, Imperial College, University College London, Oxford and Edinburgh.    The comparison changes quite a bit depending on whether or not one uses PPP or exchange rates.  On a PPP basis, Tsinghua and Peking would lead all UK universities; on an exchange-rate basis, they would be 5th and 6th – that is, behind Cambridge, UCL, ICL and Oxford but still ahead of Edinburgh.  Either way it suggests that, financially at least, the top Chinese universities are on a similar playing field as the top UK ones.

Figure 2: Total Income, Chinese C9 Universities vs. Top 5 UK universities, in USD at Exchange and PPP


Next, let’s go to Canada.  Here are the top five Canadian schools compared with the top seven Chinese ones.  On a PPP basis, UBC is the only Canadian university which would crack the top seven in China.  But on an exchange-rate basis, all of our top five would come ahead of Nanjing and close to Fudan.

Figure 3: Total Income, Chinese C9 Universities vs. Top 5 Canadian universities, in USD at Exchange and PPP


Finally, let’s take a look at Australia, where universities are frankly much less well-funded than elsewhere.  On a PPP basis, even the weakest of the C9 – Xi’an Jiao Tong – would come ahead of the best-funded Australian institution (the Australian National University).  On an exchange-rate basis, ANU would move ahead of Xi’an Jiao Tong and Nanjing, but would still lag behind the other Chinese institutions, by a factor of 2:1 in the case of Peking and Tsinghua.

Figure 4: Total Income, Chinese C9 Universities vs. Top 5 Australian universities, in USD at Exchange & PPP


The bottom line is that while most Chinese universities are still a ways away from the top international standards in terms of income, expenditure, research base, etc., at the very top it seems that the C9 institutions are now very much in the global elite as far as funding is concerned.  They are not yet there as far as research output is concerned – only Peking and Tsinghua make the Times Higher Top 100, and none make the Shanghai Academic Ranking of World Universities – but that’s only a matter of time.  Rankings (and prestige) are a result of cumulative effort and financing.  Another decade with these kinds of numbers will make a very big difference indeed.
