HESA

Higher Education Strategy Associates

June 14

Two Approaches to Student Success

I’ve been doing a little bit of work recently on student success, and I am struck by the fact that there are two very different approaches to it, depending on which side of the Atlantic you are sitting on.  I’m not sure one is actually better than the other, but they speak to some very different conceptions of where student success happens within an institution.

(To be clear, when I say “student success” I mostly mean “degree/program completion”.  I recognize that there are evolving meanings of this which mean something more/different.  Some are extending the notion to not just completion but career success – or at least success in launching one’s career; others suggest completion is overrated as a metric since some students only attend to obtain specific skills and never intend to complete, and if these students drop out in order to take a job, that’s hardly a failure.  I don’t mean to challenge either of these points, but I’m making a point about the more traditional definition of the term).

What I would call the dominant North American way of thinking about student success is that it is an institutional and, to a lesser extent, a faculty matter, rather than something dealt with at the level of the department or the program.  We throw resources from central administration (usually from Institutional Research) at identifying “at-risk” students.  We use central resources to bolster students’ mental health, and hire counsellors, tutors and academic support centrally as well.  Academic advisors tend to be employed by faculties rather than the central admin, but the general point still stands – these are all things that are done on top of, and more or less without reference to, the actual academic curriculum.

The poster child for this kind of approach is Georgia State University (see articles here and here).  It’s an urban university with very significant minority enrolments, one that at the turn of the century had a completion rate of under 30%.  It turned this around by investing heavily in data analytics and – more importantly – in academic tutors and advisors (I’ve heard but can’t verify that their ratio of students to advisors is 300:1 or less, which is pretty much unimaginable at a Canadian university).  Basically, they throw bodies at the problem.  Horrible, dreaded, non-academic staff bloat bodies.  And it works: their retention rates are now up over 50 percent – the improvement among minority students has been a whopping 32 percentage points.

But what they don’t seem to do is alter the curriculum much.  It’s a very North American thing, this.  The institution is fine, it’s students that have to make adjustments, and we have an army of counsellors to help them do so.

Now, take a gander at a fascinating little report from the UK called What Works: Student retention and success change programme phase 2.  In this project, a few dozen individual retention projects were put together across 13 participating institutions, piloted and evaluated.  The projects differed from place to place, but they were built on a common set of principles, the first and most important one being as follows: “interventions and approaches to improve student retention and success should, as far as possible, be embedded into mainstream academic provision”.

So what got piloted were mostly projects that involved some adjustment to curriculum, either in terms of the on-boarding process (e.g. “Building engagement and belonging through pre-entry webinars, student profiling and interactive induction”) or the manner in which assessments are done (e.g., “Inclusive assessment approaches: giving students control in assignment unpacking”) or simply re-doing the curriculum as a whole (e.g. “Active learning elements in a common first-year engineering curriculum”).

That is to say, in this UK program, student success was not treated as an institutional priority dealt with by non-academic staff.  It was treated as a departmental-level priority, dealt with by academic staff.

I would say at most North American universities this approach is literally unimaginable.  Academic staff are not “front-line workers” who deal with issues like academic preparedness; in fact, professors who do try to work with a student and refer them to central academic or counselling services will often discover they cannot follow up on an individual case because the latter see it as a matter of “client confidentiality”.  And outside of professional faculties, our profs teach individual courses of their own choosing rather than jointly manage and deliver a set curriculum which can be tweaked.  Making a curriculum more student-friendly assumes there is a curriculum to alter, rather than simply a basket of courses.

Part of this is a function of how university is conceptualized.  In North America, we tend to think that students choose an institution first and a program of study later (based on HESA’s research on student decisions, I think this is decreasingly the case, but that’s another story). So, when we read all the Vince Tinto-related research (Tinto being the guru of student retention studies, most of which is warmed-over Durkheim) about “belonging”, “fit” and so on, we assume that what students are dropping out of is the institution not the program, and assign responsibilities accordingly.  But in Europe, where 3-year degrees are the norm and they don’t mess around with things like breadth requirements, the assumption is you’re primarily joining a program of study, not an institution.  And so when Europeans read Tinto, they assume the relevant unit is the department or program, not the institution.

But also I think the Europeans – those interested in widening access and participation, anyway – are much less likely to think of the problem as being one of how to get students to adapt to university and its structures.  Quite often, they reverse the problem and say “how can the institution adapt itself to its students”?

It’s worth pondering, maybe, whether we shouldn’t ask that question more often, ourselves.  I think particularly when it comes to Indigenous students, we might be better served with a more European approach.

 

June 13

Who Should Benefit from Skills Training Money?

We seem to be in a period in Canada where money for “skills” is in vogue, mainly because it is seen as a panacea for lots of quite separate problems. At a really high-order level, you’ve got the Innovation ministry in Ottawa pounding the drum on skills because the tech industry says skills are a bottleneck to whatever kind of tech-powered Nirvana the Minister imagines Canada to be headed towards. And then you’ve got the Employment and Social Development Ministry and to an extent the PMO who see investing in skills as a social cohesion play – more skills means more jobs means we won’t look like the US rust belt and we can avoid a nasty bout of populism up here.

I’ll focus on the social cohesion play for the rest of this blog because no one wants to hear my views on the Innovation Minister’s views on coding again (though on the off-chance you do, see here). The mostly-unacknowledged problem we have right now is that there are three entirely separate possible foci for skills training, and very little (as far as I can tell) strategic direction about where we should be spending our dollars.

The first direction is pretty simple: providing money to train people who have lost their jobs. This is what Employment Insurance has done for over forty years. It’s not particularly good at it; but then, it’s not like any other countries are particularly good at it either. Re-training people mid-career is hard, especially if they don’t have an especially high skill level to begin with.

Then there’s the second direction, the one the Harper government tried to take us down, a bit.  And that’s providing money to employers to provide more skills to their own workforce.  The argument here – in part – is that we can to some extent prevent unemployment by subsidizing companies to invest more in their employees’ skills and thus make them more competitive.  This is essentially what the Canada Job Grant was supposed to do.

Finally, there’s a third direction, which is to help people develop their own skills, shall we say, prophylactically.  That is, help workers get whatever skills they think they need in order to make themselves more flexible and more employable.  Now obviously you don’t need policy to do this – in a market economy people can invest in their own skills as they like – but repeated evidence from around the globe shows that if you leave it entirely to individuals, you tend to get a Matthew effect: those that already have tend to get more.

It’s never entirely clear why those with lower skills don’t invest on their own.  Is it money?  Time?  General disinclination to spend time in a classroom?  But they don’t, so people are always looking for ways to entice them.  Not many schemes have worked well, however.  One notable attempt to do this in the UK via something called “individual learning accounts” ended in dramatic failure and mass fraud.

One obvious possibility to encourage this kind of training would be to create some kind of guarantee for workers to be able to take time off for training, an idea which was contained in the 1985 Macdonald Commission report (which dubbed it a “Time Bank”).  Ontario could have gone this route last month when it announced its labour reform package, but it decided instead to give everyone a third week of paid holiday.  Missed opportunity.

The point I want to make here is not just that these are three different target “markets” for training; it’s that they are to some extent at cross-purposes with one another. For instance, employers really like the second type, because they get to direct where the training goes. They are somewhat less keen on the third kind, because if individuals are investing in their own skills, they are probably to some extent doing so to give themselves insurance and make themselves more mobile.

Because funds are not inexhaustible, there are trade-offs between subsidizing different types of training. At some point, if we’re going to get skills training policy right, the questions of what skills, for whom, and when have to be answered openly, and the trade-offs have to be analyzed and debated. We’re not there yet – right now it’s mostly an inchoate “Need moar Skillz!” But we need to get there soon. Otherwise this is all a waste of time.

June 12

The Nordstrom Philologist

People are always nattering on about skills for the new economy, but apart from some truly unhelpful ideas like “everyone should learn to code”, they are usually pretty vague on specifics about what that means.  But I think I have solved that.

What the economy needs – or more accurately, what enterprises (private and public) need – is more Nordstrom Philologists.

Let me explain.

One of the main consequences of the management revolutions of the last couple of decades has been the decline of middle-management.  But, as we are now learning, one of the key – if unacknowledged – functions of middle-management was to act as a buffer between clients and upper management on the one side, and raw new employees on the other.  By doing so, they could bring said new employees along slowly into the culture of the company, show them the ropes and hold their hands a bit as they gained in confidence and ability in dealing with new and unfamiliar situations.

But that’s gone at many companies now.  New employees are now much more likely to be thrown headfirst into challenging situations.  They are more likely to be dealing with clients directly, which of course means they have greater responsibility for the firm’s reputation and its bottom line.  They are also more likely to have to report directly to upper management, which requires a level of communication skills and overall maturity which many don’t have.

When employers say young hires “lack skills”, this is what they are talking about.  Very few complain that the “hard skills” – technical skills related to the specific job – are missing. Rather, what they are saying is they lack the skills to deal with clients and upper management.  And broadly, what that means is, they can’t communicate well and they can’t figure out how to operate independently without being at the (senior) boss’ door every few minutes asking “what should I do now”?

When it comes to customer service, everyone knows Nordstrom is king.  And a large part of that has to do with its staff and its commitment to customer care.  Communications are at the centre of what Nordstrom does, but it’s not communicating to clients; rather, it’s listening to them.  Really listening, I mean: understanding what clients actually want, rather than just what they ask for.  And then finding ways to make sure they get what they need.  That’s what makes clients and/or citizens feel valued.  And it’s what the best employees know how to provide.

And then there’s philology* – the study of written texts.  We don’t talk much about this discipline anymore in North America, since it has been partitioned into history, linguistics, religious studies and a tiny little bit into art history (in continental Europe it retains a certain independence and credibility as an independent discipline).  The discipline consists essentially in constructing plausible hypotheses from extremely fragmentary information: who wrote the Dead Sea scrolls?  Are those Hitler diaries real?  And so on.   It’s about understanding cultural contexts, piecing together clues.

Which is an awful lot like day-to-day business.  There’s no possible way to learn how to behave in every situation, particularly when the environment is changing rapidly.  Being effective in the workplace is to a large degree about developing modes of understanding and action based on some simple heuristics and a constant re-evaluation of options as new data becomes available.  And philology, the ultimate “figure it out for yourself” discipline, is excellent training for it (history is a reasonably close second).

That’s pretty much it.  Nordstrom for the really-listening-to-client skills, philology for the figuring-it-out-on-your-own-and-getting-stuff-done skills.  Doesn’t matter what line of business you’re in, these are the competencies employers need.  And similarly, it doesn’t matter what field of study is being taught, these are the elements that need to be slipped into the curriculum.

*(On the off-chance you want to know more about philology, you could do a lot worse than James Turner’s Philology: The Forgotten Origins of the Modern Humanities.  Quite a useful piece on the history of thought). 

June 09

Why we should – and shouldn’t – pay attention to World Rankings

The father of modern university rankings is James McKeen Cattell, a well-known early 20th-century psychologist, scientific editor (he ran the journals Science and Psychological Review) and eugenicist.  In 1903, he began publishing American Men of Science, a semi-regular rating of the country’s top scientists, as rated by university department chairs.  He then hit on the idea of counting how many of these scientists were graduates of the nation’s various universities.  Being a baseball enthusiast, he found it completely natural to arrange these results top to bottom, as in a league table.  Rankings have never looked back.

Because of the league table format, reporting on rankings tends to mirror what we see in sports.  Who’s up?  Who’s down?  Can we diagnose the problem from the statistics?  Is it a problem attracting international faculty?  Lower citation rates?  A lack of depth in left-handed relief pitching?  And so on.

The 2018 QS World University Rankings, released last night, are another occasion for this kind of analysis.  The master narrative for Canada – if you want to call it that – is that “Canada is slipping”.  The evidence for this is that the University of British Columbia fell out of the top 50 institutions in the world (down six places to 51st) and that we also now have two fewer institutions in the top 200 (Calgary fell from 196th to 217th and Western from 198th to 210th) than we used to.

People pushing various agendas will find solace in this.  At UBC, blame will no doubt be placed on the institution’s omnishambular year of 2015-16.  Nationally, people will try to link the results to problems of federal funding and argue how implementing the recommendations of the Naylor report would be a game-changer for rankings.

This is wrong for a couple of reasons.  The first is that it is by no means clear that Canadian institutions are in fact slipping.  Sure, we have two fewer in the 200, but the number in the top 500 grew by one.  Of those who made the top 500, nine rose in the rankings, nine slipped and one stayed constant.  Even the one high-profile “failure” – UBC –  only saw its overall score fall by one-tenth of a point; the fall in the rankings was more due to an improvement in a clutch of Asian and Australian universities.
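To see how that works, remember that a rank is relative while a score is absolute: an institution can stand still and slip anyway.  Here’s a minimal sketch with invented scores (not actual QS data):

```python
# Invented scores out of 100, for illustration only -- not actual QS data.
scores_2017 = {"UBC": 80.1, "Uni A": 79.5, "Uni B": 79.0, "Uni C": 78.8}
scores_2018 = {"UBC": 80.0, "Uni A": 80.4, "Uni B": 80.2, "Uni C": 78.8}

def rank(scores, name):
    # 1-based rank: one plus the number of institutions with a strictly higher score
    return 1 + sum(s > scores[name] for s in scores.values())

print(rank(scores_2017, "UBC"))  # 1
print(rank(scores_2018, "UBC"))  # 3 -- a two-place fall on a 0.1-point score dip
```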

The second is that in the short-term, rankings are remarkably impervious to policy changes.  For instance, according to the QS reputational survey, UBC’s reputation has taken exactly zero damage from l’affaire Gupta and its aftermath.  Which is as it should be: a few months of communications hell doesn’t offset 100 years of scientific excellence.  And new money for research may help less than people think. In Canada, institutional citations tend to track the number of grants received more than the dollar value of the grants.  How granting councils distribute money is at least as important as the amount they spend.

And that’s exactly right.  Universities are among the oldest institutions in society and they don’t suddenly become noticeably better or worse over the course of twelve months.  Observations over the span of a decade or so are more useful, but changes in ranking methodology make this difficult (McGill and Toronto are both down quite a few places since 2011, but a lot of that has to do with changes which reduced the impact of medical research relative to other fields of study).

So it matters that Canada has three universities which are genuinely top class, and another clutch (between four and ten, depending on your definition), which could be called “world-class”.  It’s useful to know that, and to note if any institutions have sustained, year-after-year changes either up or down.  But this has yet to happen to any Canadian university.

What’s not as useful is to cover rankings like sports, and invest too much meaning in year-to-year movements.  Most of the yearly changes are margin-of-error kind of stuff, changes that result from a couple of dozen papers being published in one year rather than another, or the difference between admitting 120 extra international students instead of 140.   There is not much Moneyball-style analysis to be done when so many institutional outputs are – in the final analysis – pretty much the same.

June 08

That Andrew Scheer Free Speech Promise

You may recall that a few weeks ago I profiled the higher education/science/youth proposals of the various Conservative Party leadership hopefuls.  You may also recall that the candidate who eventually won the contest, Andrew Scheer, had one proposal that distinguished him from the rest of the pack, to wit:

In addition, Scheer pledges that “public universities or colleges that do not foster a culture of free speech and inquiry on campus” will “not have support from the federal government”.  He then lists the tri-councils and CRCs as specific funding mechanisms for which institutions would not be eligible: it is unclear if the ban would include CFI and – more importantly – CSLP.  Note that the ban would only cover public institutions; private (i.e. religious) institutions would be able to limit free inquiry – as indeed faith-based institutions do for obvious reasons – and still be eligible for council funding.

Scheer elaborated on this just once in the media, so far as I can tell, telling the National Post on April 19th of troubling trends on campus: “a pro-life group having its event cancelled at Wilfrid Laurier University; a student newspaper at McGill refusing to print pro-Israel articles; and protest surrounding University of Toronto professor Jordan Peterson for his views on gender pronouns.”

A lot of people are scratching their heads about this.  What the heck is that all about? they wondered.  Does he actually mean it?  And how would that work, exactly, anyway?

Let’s take those questions separately.

What the heck is this about?  It is hard to see what pressing cases of people being simply not allowed to speak led to this statement.  The stuff at Wilfrid Laurier was perhaps childish, but no one was prevented from speaking.  The McGill Daily is perhaps wrong in its editorial policy, but freedom of the press also means the freedom not to publish things.  And students at U of T are presumably as free to criticize Jordan Peterson as Peterson himself is free to be an obnoxious jerk.

There certainly have been cases where speakers have been shouted down or prevented from speaking on campus because of protests, but not recently (I can think of two off the top of my head: Benjamin Netanyahu at Concordia in 2002 and Ann Coulter at the University of Ottawa in 2010).  If we broaden the complaint to other things, like taking down “free speech walls” because of perceived derogatory comments, or preventing abortion rights groups from displaying pictures of aborted fetuses at tables in busy hallways because it’s gross and upsetting, you can probably come up with stuff that is slightly more recent.   But I suspect he’s really reacting to south-of-the-border stuff that happens to make the news up here, in particular the events at Middlebury College, a tony liberal arts school in Vermont where controversial social scientist Charles Murray was fairly violently and crudely prevented from speaking back in March.

(If you want a really exhaustive compendium of perceived slights to free speech in Canadian universities, the Justice Centre for Constitutional Freedoms provides one annually in its Campus Freedom Index, the 2016 edition of which is available here).

So if there’s no urgent policy problem here, what’s it about?  My guess is that it’s roughly akin to Stephen Harper’s anti-census position.   It’s a way of throwing meat to the party’s populist faction without actually adopting a fully populist platform.

Does he actually mean it?  Hard to tell, but my guess is that he does, at least to the extent that he wants to be able to have a platform to talk about unaccountable lefty cultural institutions.  It’s not a loosely-worded pledge: specific exemptions have been carved out for faith-based institutions (part of Scheer’s base), which I think suggests this isn’t something he came up with on the back of a cocktail napkin. Plus, in his victory speech, he went out of his way to repeat the pledge, which he was in no way obliged to do.  And he got a huge cheer from the crowd for doing so.  Antithetical to higher education sensibilities it may be, but it plays with the base.

(And yes, it has been pointed out a few times that among the people who might be most put out about this would be the pro-Israel types who try to stop the “BDS/End Israeli Apartheid” rhetoric on campuses, and who were a particular target of Tory woo-ing during the Harper years.  I think the safe assumption is that Scheer knows and doesn’t care).

How would that work, exactly, anyway?  This of course is the big question, and the one that has most people scratching their heads.  Presumably there would be some kind of complaint process: but to whom would the complaint be addressed?  What kind of body in Ottawa would have a) the ability to judge whether or not an institution was promoting a culture of free speech, and b) the power to order a remedy in the form of full or partial removal of funding from federal granting councils?  Does Scheer really think an administrative body could do this?  How would institutions not litigate both of these decisions into outer space?

Also, how does Scheer imagine that provincial governments – you know, the ones which actually have the constitutional responsibility to run and regulate postsecondary education – will react to any intrusion into their jurisdiction?  Can you imagine, for instance, how the Andrew Potter fiasco at McGill would have escalated if Andrew Scheer had taken Potter’s side?  The province, which genuinely lost its mind on the subject for about a couple of days (one major newspaper ran a piece comparing Potter to Rwandan genocidaires), would have gone berserk if Ottawa had tried to meddle.

These complications, in the end, are probably going to be used to create a graceful way to get out of the specifics of the pledge.  No government is going to put the University of Toronto’s hundreds of millions of tri-council funding at risk over a spat about personal pronouns, or yank nine figures worth of funding from McGill because it doesn’t like the Daily’s editorial policy on Israeli settlers in occupied Palestine.  But it might really want to keep those issues in the public eye for partisan purposes.

So, in the event of a Conservative victory, expect an office to be created that would report on “violations” of campus free speech, no doubt staffed by former authors of the Campus Freedom Index.  Money won’t be placed at risk, but institutions would be put on notice.  And Conservative partisans would be delighted.  Which is the real point of the policy.

One can deplore this attitude, of course.  But the point is that Scheer believes that beating this drum will increase his chances of winning power.  He’s probably not completely wrong to think that.  Wiser heads will spend time in the coming months pondering why bashing universities is popular to begin with.

June 07

National Patterns of Student Housing

The other day I published a graph on student housing in Canada and the United States that seemed to grab a lot of people’s attention.  It was this one:

Figure 1: Student Living Arrangements, Canada vs. US


People seemed kind of shocked by this and wondered what causes the differences.  So I thought I’d take a shot at answering this.

(caveat: remember, this is data from a multi-campus survey and I genuinely have no idea how representative this is of the student body as a whole.  The ACHA-NCHA survey seems to skew more towards 4-year institutions than the Canadian sample, and it’s also skewed geographically towards western states.  Do with that info what you will)

Anyways, my take on this is basically that you need to take into consideration several centuries’ worth of history to really get the Canada-US difference.  Well into the nineteenth century, the principal model for US higher education was Cambridge and Oxford, which were residential colleges.  Canada, on the other hand, looked at least as much to Scottish universities for inspiration, and the Scots more or less abandoned the college model during the eighteenth century.  This meant that students were free to live at home, or in cheaper accommodations in the city, which some scholars think contributed to Scotland having a much more accessible system of higher education at the time (though let’s not get carried away, this was the eighteenth century and everything is relative).

Then there’s the way major public universities got established in the two countries.  In the US, it happened because of the Morrill Acts, which created the “Land-Grant” Universities which continue to dominate the higher education systems of the Midwest and the South.  The point of land-grant institutions was to bring education to the people, and at the time, the American population was still mostly rural.  Also, these new universities often had missions to spread “practical knowledge” to farmers (a key goal of A&M – that is, agricultural and mechanical – universities), which tended to support the establishment of schools outside the big cities.  Finally, Americans at the time – like Europeans – believed in separating students from the hurly-burly of city life because of its corrupting influence.  The difference was that Europeans usually achieved this by walling off their campuses (e.g. La Sapienza in Rome), while Americans did it by sticking their flagship public campuses out in the boonies (e.g. Illinois Urbana-Champaign).   And as a result of sticking so many universities in small towns, a residential system of higher education emerged more or less naturally.

In Canada, none of this happened because the development of our system lagged the Americans’ by a few decades.  Our big nineteenth-century universities – Queen’s excepted – were located in big cities.  Out west, provincial universities, which were the equivalent of the American land-grants, didn’t get built until the population urbanized, which is why the Universities of Manitoba, Saskatchewan and Alberta are in Winnipeg, Saskatoon and Edmonton instead of Steinbach, Estevan and Okotoks.  The corollary of having universities in big cities was that it was easier to follow the Scottish non-residential model.

The Americans could have ditched the residential model during the transition to mass higher education in the 1950s, but by that time it had become ingrained as the norm because it was how all the prestigious institutions did things.  And of course, the Americans have some pretty distinctive forms of student housing, too.  Fraternities and sororities, often considered a form of off-campus housing in Canada, are very much part of the campus housing scene in at least some parts of the US (witness the University of Alabama’s issuing over $100 million in bonds to spruce up its fraternities).

In short, the answer to the question of why Americans are so much more likely to live on campus than Canadian students is “historical quirks and path dependency”.  Given the impact these tendencies have on affordability, that’s a deeply unsatisfying answer, but it’s a worthwhile reminder that in a battle between sound policy and historical path dependency, the latter often wins.

 

June 06

Making “Applied” Research Great Again

One of the rallying cries of part of the scientific community over the past few years has been that under the Harper government there was too much of a focus on “applied” research and not enough of a focus on “pure”/“basic”/“fundamental” research.  This call is reaching a fever pitch following the publication of the Naylor Report (which, to its credit, did not get into a basic/applied debate and focussed instead on whether or not the research was “investigator-driven”, which is a different and better distinction).  The problem is that the line between “pure/basic/fundamental” research and applied research isn’t nearly as clear cut as people believe, and the rush away from applied research risks throwing out some rather important babies along with the bathwater.

As long-time readers will know, I’m not a huge fan of a binary divide between basic and applied research.  The idea of “Basic Science” is a convenient distinction created by natural scientists in the aftermath of WWII as a way to convince the government to give them money the way it did during the war, but without having soldiers looking over their shoulders.  In some fields (medicine, engineering), nearly all research is “applied” in the sense that there are always considerations of the end-use for the research.

This is probably a good time for a refresher on Pasteur’s Quadrant.  The concept was developed by Donald Stokes, a political scientist at Princeton, just before his death in 1997.  He too thought the basic/applied dichotomy was pretty dumb, so like all good social scientists he came up with a 2×2 instead.  One consideration in classifying science was whether or not the work involved a quest for fundamental understanding; the other was whether or not the researcher had any consideration of end-use.   And so what you get is the following:

Table 1: Pasteur’s Quadrant.  Top-left (quest for fundamental understanding, no consideration of use): pure basic research – the Bohr quadrant.  Top-right (quest for fundamental understanding, consideration of use): use-inspired basic research – the Pasteur quadrant.  Bottom-right (no quest for fundamental understanding, consideration of use): pure applied research – the Edison quadrant.

(I’d argue that to some extent you could replace “Bohr” with “Physics” and “Pasteur” with “Medicine”, because what matters is the nature of the field of research and not individual researchers’ proclivities, per se, but let’s not quibble).

Now what was mostly annoying about the Harper years – and to some extent the Martin and late Chretien years – was not so much that the federal government was moving money from the “fundamental understanding” row to the “no fundamental understanding” row (although the way some people go on, you’d be forgiven for thinking that), but rather that it was trying to make research fit into more than one quadrant at once.  Sure, they’d say, we’d love to fund all your (top-left quadrant) drosophila research, but can you make sure to include something about its eventual (bottom-right quadrant) commercial applications?  This attempt to make research more “applied” is and was nonsense, and Naylor was right to (mostly) call for an end to it.

But that is not the same thing as saying we shouldn’t fund anything in the bottom-right corner – that is, “applied research”.

And this is where the taxonomy of “applied research” gets tricky.  Some people – including apparently the entire Innovation Ministry, if the last budget is any indication – think that the way to bolster that quadrant is to leave everything to the private sector, preferably in sexy areas like ICT, Clean Tech and whatnot.  And there’s a case to be made for that: business is close to the customer, let them do the pure applied research.

But there’s also a case to be made that in a country where the commercial sector has few big champions and a lot of SMEs, the private sector is always likely to have some structural difficulties doing the pure applied research on its own.  It’s not simply a question of subsidies: it’s a question of scale and talent.  And that’s where applied research as conducted in Canada’s colleges and polytechnics comes in.  They help keep smaller Canadian companies – the kinds that aren’t going to get included in any “supercluster” initiative – competitive.  You’d think this kind of research should be of interest to a self-proclaimed innovation government.  Yet whether by design or indifference we’ve heard nary a word about this kind of research in the last 20 months (apart perhaps from a renewal of the Community and College Social Innovation Fund).

There’s no reason for this.  There is – if rumours of a cabinet submission to respond to the Naylor report are true – no shortage of money for “fundamental”, or “investigator-driven” research.  Why not pure applied research too?  Other than the fact that “applied research” – a completely different type of “applied research”, mind you – has become a dirty word?

This is a policy failure unfolding in slow motion.  There’s still time to stop it, if we can all distinguish between different types of “applied research”.

June 05

Student Health (Part 3)

You know how it is when someone tries to make a point about Canadian higher education using data from American universities? It’s annoying.  Makes you want to (verbally) smack them upside the head. Canada and the US are different, you want to yell. Don’t assume the data are the same! But of course the problem is there usually isn’t any Canadian data, which is part of why these generalizations get started in the first place.

Well, one of the neat things about the ACHA-NCHA campus health survey I was talking about last week is that it is one of the few data collection instruments in use on both sides of the border. Same questions, administered at the same time to tens of thousands of students in both countries. And, as I started to look at the data for 2016, I realized my “Canada is different” rant is – with respect to students and health at least – almost entirely wrong. Turns out Canadian and American students are about as alike as two peas in a pod. It’s kind of amazing, actually.

Let’s start with some basic demographic indicators, like height and weight. I think I would have assumed automatically that American students would be both taller and heavier than Canadian ones, but figure 1 shows you what I know.

Figure 1: Median Height (Inches) and Weight (Pounds), Canadian vs. US students.


Now, let’s move over to issues of mental health, one of the key topics of the survey. Again, we see essentially no difference between results on either side of the 49th parallel.

Figure 2: Within the last 12 months have you been diagnosed with/treated for…


What about that major student complaint, stress? The ACHA-NCHA survey asks students to rate the stress they’ve been under over the past 12 months. Again, the patterns in the two countries are more or less the same.

Figure 3: Within the last 12 months, rate the stress you have been under.


One interesting side-note here: students in both countries were asked about issues causing trauma or being “difficult to handle”. Financial matters were apparently more of an issue in Canada (40.4% saying yes) than in the US (33.7%). I will leave it to the reader to ponder how that result lines up with various claims about the effects of tuition fees.

At the extreme end of mental health issues, we have students who self-harm or attempt suicide. There was a bit of a difference on this one, but not much, with Canadian students slightly more likely to indicate that they had self-harmed or attempted suicide.

Figure 4: Attempts at Self-harm/suicide.


What about use of tobacco, alcohol and illicit substances? Canadian students are marginally more likely to drink and smoke, but apart from that the numbers look pretty much the same. The survey, amazingly, does not ask about use of opioids/painkillers, which if books like Sam Quinones’ Dreamland are to be believed have made major inroads among America’s young – I’d have been interested to see the data on that. It does ask about a bunch of other minor drugs – heroin, MDMA, etc. – and none of them really register in either country.

Figure 5: Use of Cigarettes, Alcohol, Marijuana, Cocaine.


This post is getting a little graph-heavy, so let me just run through a bunch of topics where there’s essentially no difference between Canadians and Americans: frequency of sexual intercourse, number of sexual partners, use of most illegal drugs, use of seat belts, likelihood of being physically or sexually assaulted, rates of volunteering…. In fact, one of the few places you do see significant differences between Canadian and American students is in the kinds of physical ailments they report: Canadian students are significantly more likely to report having back pain, Americans more likely to report allergies and sinus problems.

Actually, the really big differences between the two countries were around housing and social life. In Canada, less than 2% of students reported being in a fraternity/sorority, compared to almost 10% in the United States. And as for housing, as you can see, Americans are vastly more likely to live on-campus and vastly less likely to live at home. On balance, that means they are incurring significantly higher costs to attend post-secondary education. Also, it probably means campus services are under a lot more pressure in the US than up here.

Figure 6: Student Living Arrangements.


A final point here is with respect to perceptions of campus safety. We all know the differences in rates of violent crime in the two countries, so you’d expect a difference in perceptions of safety, right? Well, only a little bit, only at night, and mostly off-campus. Figure 7 shows perceptions of safety during the day and at night, on campus and in the community surrounding campus.

Figure 7: Perceptions of safety on campus and in surrounding community.


In conclusion: when it comes to student health and lifestyle, apart from housing there do not appear to be many cross-border differences. We seem to be living in a genuinely continental student culture.

June 02

Student Health (part 2)

Now you may have seen a headline recently talking about skyrocketing mental health problems among students.  Specifically, this one from the Toronto Star, which says, among other things:

A major survey of 25,164 Ontario university students by the American College Health Association showed that between 2013 and 2016, there was a 50-per-cent increase in anxiety, a 47-per-cent increase in depression and an 86-per-cent increase in substance abuse. Suicide attempts also rose 47 per cent during that period.

That’s a pretty stunning set of numbers.  What to make of them?

Part of what’s going on here is looking at the size of the increase instead of the size of the base.  If the incidence of something goes from 1% to 2% in the population, that can be accurately expressed either as “a one percentage point increase” or “IT DOUBLED!”.   The figure for “attempted suicide in the last 12 months”, for instance, rose from 1.3% to 2.1%.  With such a tiny base, double-digit increases aren’t difficult to manufacture.

(In case you’re wondering whether these figures are a function of possible measurement error, the answer is no.  With a 40,000-student sample, the margin of error for an event that happens 1% of the time is about 0.1 percentage points, so a jump of 0.8 percentage points is well beyond the margin of error).
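For the curious, here is the arithmetic behind both points, as a quick sketch that assumes simple random sampling (which a multi-campus consortium survey only approximates):

```python
import math

# Percentage-point change vs. relative change: 1.3% -> 2.1%
old, new = 1.3, 2.1
print(new - old)                       # 0.8 percentage points
print(round((new - old) / old * 100))  # ~62% relative increase -- big number, tiny base

# Approximate 95% margin of error for a 1% incidence in a 40,000-student sample
p, n = 0.01, 40_000
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(round(moe * 100, 2))             # ~0.1 percentage points
```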

Now, the Star is correct that there is a troubling pattern here – across all mental health issues, the results for 2016 are significantly worse than for 2013.  But it’s still a mistake to rely on these figures as hard evidence of something major having changed.  As I dug into the change in figures between 2013 and 2016, I was amazed to see that figures were in fact worse not just for mental health issues, but for health and safety issues across the board.  Some examples:

  • In 2013, 53.4% of students said their health was very good or excellent, compared to just 45.3% three years later
  • The percentage of students whose Body Mass Index put them in the category of Class II Obese or higher rose from 3.15% to 4.3%, a roughly 35% increase.
  • The percentage of students with diabetes rose by nearly 40%, migraine headaches by 20%, ADHD by nearly 25%
  • Even things like the incidence of using helmets when on a bicycle or motorcycle are down by a couple of percentage points each, while the percent saying they had faced trauma from the death or illness of a family member rose from 21% to 24%.

Now, when I see numbers like this, I start wondering if maybe part of the issue is an inconsistent sample base.   And, as it turns out, this is true.  Between 2013 and 2016, the institutional sample grew from 30 to 41, and the influx of new institutions changed the sample considerably.  The students surveyed in 2016 were far more likely to be urban, and less likely to have been white or straight.  They were also less likely to have been on varsity teams or fraternity/sorority members (and I suspect that last one tells you something about socio-economic background as well, but that’s perhaps an argument for another day).

We can’t tell for certain how much of the change in reported health outcomes has to do with the change in sample.  It would be interesting and helpful if someone could recalculate the 2016 data using only data from institutions which were present in the 2013 sample.  That would provide a much better baseline for looking at change over time.  But what we can say is that this isn’t a fully apples-to-apples comparison and we need to treat with caution claims that certain conditions are spreading in the student population.
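If the institution-level microdata were available, that recalculation would be easy to express.  A sketch, assuming a hypothetical long-format table df with institution, year and a 0/1 outcome column such as diagnosed_anxiety (all of these names are invented for illustration):

```python
import pandas as pd

def common_institution_rates(df: pd.DataFrame, outcome: str) -> pd.Series:
    """Yearly rate for an outcome, restricted to institutions present in both waves."""
    years_present = df.groupby("institution")["year"].agg(set)
    common = years_present[years_present.apply({2013, 2016}.issubset)].index
    subset = df[df["institution"].isin(common)]
    return subset.groupby("year")[outcome].mean()

# e.g. common_institution_rates(df, "diagnosed_anxiety")
```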

To conclude, I don’t want to make this seem like a slam against the ACHA survey.  It’s great.  But it’s a snapshot of a consortium at a particular moment in time, and you have to be careful about using that data to create a time series.  It can be done – here’s an example of how I’ve done it with Canadian Undergraduate Survey Consortium data, which suffers from the same drawback.  Nor do I want to suggest that mental health isn’t an issue to worry about.  It clearly creates a lot of demand for services, and that demand needs to be met somehow (though whether this reflects a change in underlying conditions or a change in willingness to self-identify and seek help is unresolved and to some degree unresolvable).

Just, you know, be careful with the data.  It’s not always as straightforward as it looks.

 

June 01

Student Health (part 1)

I have been perusing a quite astonishingly detailed survey that was recently released regarding student health.  Run by the American College Health Association (as the National College Health Assessment), this multi-campus exercise has been conducted twice now in Canada – once in 2013 and once in 2016.  Today, I’m going to look at what the 2016 results say, which are interesting in and of themselves.  Tomorrow, I’m going to look at how the data has changed since 2013 and why I think some claims about worsening student health outcomes (particularly mental health) need to be viewed with some caution.  If I get really nerdy over the weekend, I might do some Canada-US comparisons, too.

Anyways.

The 2016 study was conducted at 41 public institutions across Canada.  Because it’s an American-based survey, it keeps referring to all institutions as “colleges”, which is annoying.  Twenty-seven of the institutions are described as “4-year” institutions (which I think we can safely say are universities), 4 are described as “2-year” institutions (community colleges) and 10 as “other” (not sure what to make of this, but my guess would be community colleges/polytechnics that offer mostly three-year programs).  In total, 43,780 surveys were filled out (19% response rate), with a roughly 70-30 female/male split.  That’s pretty common for campus surveys, but there’s no indication that responses have been re-weighted to match actual gender splits, which is a little odd but whatever.

 

There’s a lot of data here, so I’m mostly going to let the graphs do the talking.  First, the prevalence of various disabilities.  I was a little bit surprised that psychiatric conditions and chronic illnesses were as high as they were.

Figure 1: Prevalence of Disabilities


Next, issues of physical safety.  Just over 87% of respondents reported feeling safe on campus during the daytime; however, only 37% (27% of women, 61% of men, and right away you can see how the gender re-weighting issue matters) say that they feel safe on campus at night.  To be fair, this is not a specific worry about campuses: when asked about their feelings of personal safety in the surrounding community, the corresponding figures were 62% and 22%.  Students were also asked about their experiences with specific forms of violence over the past 12 months.  As one might imagine, most of the results were fairly highly gendered.
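As a minimal sketch of why that re-weighting issue matters, take the night-time safety figures just quoted and re-weight them to a hypothetical 50/50 gender split (the subgroup rates are from the survey; the population split is my assumption):

```python
# Subgroup rates for "feel safe on campus at night", from the survey
rate_women, rate_men = 0.27, 0.61

# The achieved sample skews roughly 70/30 female/male
unweighted = 0.70 * rate_women + 0.30 * rate_men
# Re-weighted to an assumed 50/50 population split
reweighted = 0.50 * rate_women + 0.50 * rate_men

print(round(unweighted * 100, 1))  # 37.2 -- the headline figure reported
print(round(reweighted * 100, 1))  # 44.0 -- same data, different gender mix
```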

 

Figure 2: Experience of Specific Forms of Violence Over Past 12 Months, by Gender


Next, alcohol, tobacco, and marijuana.  This was an interesting section, as the survey asked students not only about their own use of these substances but also about their perceptions of other students’ use of them.  It turns out students vastly over-estimate the number of other students who engage with these substances.  For instance, only 11% of students smoked cigarettes in the past 30 days (plus another 4% using e-cigarettes and 3% using hookahs), but students believed that nearly 80% of students had smoked in the past month.

 

Figure 3: Real and Perceived Incidence of Smoking, Drinking and Marijuana Use over Past 30 Days


Figure 4 shows the most common conditions students had been diagnosed with and/or received treatment for in the last twelve months.  Three of the top ten, and two of the top three, were mental health conditions.

Figure 4: Most Common Conditions Diagnosed/Treated in last 12 Months


Students were also asked separately about the kinds of things that had negatively affected their academics over the previous year (defined as something which had resulted in a lower mark than they would have otherwise received).  Mental health complaints are very high on this list; much higher in fact than actual diagnoses of such conditions.  Also of note here: internet gaming was sixth among factors causing poorer marks; finances only barely snuck into the top 10 reasons, with 10.3% citing it (though elsewhere in the study over 40% said they had experienced stress or anxiety as a result of finances).

Figure 5: Most Common Conditions Cited as Having a Negative Impact on Academics


A final, disturbing point here: 8.7% of respondents said they had intentionally self-harmed over the past twelve months, 13% had seriously contemplated suicide and 2.1% said they had actually attempted suicide.  Sobering stuff.
