Higher Education Strategy Associates

Category Archives: Students

June 14

Two Approaches to Student Success

I’ve been doing a little bit of work recently on student success, and I am struck by the fact that there are two very different approaches to it, depending on which side of the Atlantic you are sitting on.  I’m not sure one is actually better than the other, but they speak to some very different conceptions of where student success happens within an institution.

(To be clear, when I say “student success” I mostly mean “degree/program completion”.  I recognize that there are evolving meanings of this which mean something more/different.  Some are extending the notion to not just completion but career success – or at least success in launching one’s career; others suggest completion is overrated as a metric since some students only attend to obtain specific skills and never intend to complete, and if these students drop out in order to take a job, that’s hardly a failure.  I don’t mean to challenge either of these points, but I’m making a point about the more traditional definition of the term).

What I would call the dominant North American way of thinking about student success is that it is an institutional and, to a lesser extent, a faculty matter rather than something dealt with at the level of the department or the program.  We throw resources from central administration (usually from Institutional Research) at identifying “at-risk” students.  We use central resources to bolster students’ mental health, and hire counsellors, tutors and academic support centrally as well.  Academic advisors tend to be employed by faculties rather than the central admin, but the general point still stands – these are all things that are done on top of, and more or less without reference to, the actual academic curriculum.

The poster child for this kind of approach is Georgia State University (see articles here and here).  It’s an urban university with very significant minority enrolments, one that at the turn of the century had a completion rate of under 30%.  It has invested heavily in data analytics and – more importantly – in academic tutors and advisors (I’ve heard, but can’t verify, that its ratio of students to advisors is 300:1 or less, which is pretty much unimaginable at a Canadian university).  Basically, they throw bodies at the problem.  Horrible, dreaded, non-academic staff bloat bodies.  And it works: their retention rates are now up over 50 percent, and the improvement among minority students has been a whopping 32 percentage points.

But what they don’t seem to do is alter the curriculum much.  It’s a very North American thing, this.  The institution is fine, it’s students that have to make adjustments, and we have an army of counsellors to help them do so.

Now, take a gander at a fascinating little report from the UK called What Works: Student retention and success change programme phase 2.  In this project, a few dozen individual retention projects were put together across 13 participating institutions, piloted and evaluated.  The projects differed from place to place, but they were built on a common set of principles, the first and most important being as follows: “interventions and approaches to improve student retention and success should, as far as possible, be embedded into mainstream academic provision”.

So what got piloted were mostly projects that involved some adjustment to curriculum, either in terms of the on-boarding process (e.g. “Building engagement and belonging through pre-entry webinars, student profiling and interactive induction”) or the manner in which assessments are done (e.g., “Inclusive assessment approaches: giving students control in assignment unpacking”) or simply re-doing the curriculum as a whole (e.g. “Active learning elements in a common first-year engineering curriculum”).

That is to say, in this UK program, student success was not treated as an institutional priority dealt with by non-academic staff.  It was treated as a departmental-level priority, dealt with by academic staff.

I would say at most North American universities this approach is literally unimaginable.  Academic staff are not “front-line workers” who deal with issues like academic preparedness; in fact, often professors who do try to work with a student and refer them to central academic or counselling services will discover they cannot follow up an individual case with central services because the latter see it as a matter of “client confidentiality”.  And outside of professional faculties, our profs teach individual courses of their own choosing rather than jointly manage and deliver a set curriculum which can be tweaked.  Making a curriculum more student-friendly assumes there is a curriculum to alter, rather than simply a basket of courses.

Part of this is a function of how university is conceptualized.  In North America, we tend to think that students choose an institution first and a program of study later (based on HESA’s research on student decisions, I think this is decreasingly the case, but that’s another story). So, when we read all the Vince Tinto-related research (Tinto being the guru of student retention studies, most of which is warmed-over Durkheim) about “belonging”, “fit” and so on, we assume that what students are dropping out of is the institution not the program, and assign responsibilities accordingly.  But in Europe, where 3-year degrees are the norm and they don’t mess around with things like breadth requirements, the assumption is you’re primarily joining a program of study, not an institution.  And so when Europeans read Tinto, they assume the relevant unit is the department or program, not the institution.

But also I think the Europeans – those interested in widening access and participation, anyway – are much less likely to think of the problem as being one of how to get students to adapt to university and its structures.  Quite often, they reverse the problem and say “how can the institution adapt itself to its students”?

It’s worth pondering, maybe, whether we shouldn’t ask that question more often, ourselves.  I think particularly when it comes to Indigenous students, we might be better served with a more European approach.

 

June 07

National Patterns of Student Housing

The other day I published a graph on student housing in Canada and the United States that seemed to grab a lot of people’s attention.  It was this one:

Figure 1: Student Living Arrangements, Canada vs. US


People seemed kind of shocked by this and wondered what causes the differences.  So I thought I’d take a shot at answering this.

(caveat: remember, this is data from a multi-campus survey and I genuinely have no idea how representative this is of the student body as a whole.  The ACHA-NCHA survey seems to skew more towards 4-year institutions than the Canadian sample, and it’s also skewed geographically towards western states.  Do with that info what you will)

Anyways, my take on this is basically that you need to take into consideration several centuries’ worth of history to really get the Canada-US difference.  Well into the nineteenth century, the principal models for US higher education were Cambridge and Oxford, which were residential colleges.  Canada, on the other hand, looked at least as much to Scottish universities for inspiration, and the Scots more or less abandoned the college model during the eighteenth century.  This meant that students were free to live at home, or in cheaper accommodations in the city, which some scholars think contributed to Scotland having a much more accessible system of higher education at the time (though let’s not get carried away – this was the eighteenth century and everything is relative).

Then there’s the way major public universities got established in the two countries.  In the US, it happened because of the Morrill Acts, which created the “Land-Grant” Universities which continue to dominate the higher education systems of the Midwest and the South.  The point of land-grant institutions was to bring education to the people, and at the time, the American population was still mostly rural.  Also, these new universities often had missions to spread “practical knowledge” to farmers (a key goal of A&M – that is, agricultural and mechanical – universities), which tended to support the establishment of schools outside the big cities.  Finally, Americans at the time – like Europeans – believed in separating students from the hurly-burly of city life because of its corrupting influence.  The difference was that Europeans usually achieved this by walling off their campuses (e.g. La Sapienza in Rome), while Americans did it by sticking their flagship public campuses out in the boonies (e.g. Illinois Urbana-Champaign).  And as a result of sticking so many universities in small towns, a residential system of higher education emerged more or less naturally.

In Canada, none of this happened because the development of our system lagged the Americans’ by a few decades.  Our big nineteenth-century universities – Queen’s excepted – were located in big cities.  Out west, provincial universities, which were the equivalent of the American land-grants, didn’t get built until the population urbanized, which is why the Universities of Manitoba, Saskatchewan and Alberta are in Winnipeg, Saskatoon and Edmonton instead of Steinbach, Estevan and Okotoks.  The corollary of having universities in big cities was that it was easier to follow the Scottish non-residential model.

The Americans could have ditched the residential model during the transition to mass higher education in the 1950s, but by that time it had become ingrained as the norm because it was how all the prestigious institutions did things.  And of course, the Americans have some pretty distinctive forms of student housing too.  Fraternities and sororities, often considered a form of off-campus housing in Canada, are very much part of the campus housing scene in at least some parts of the US (witness the University of Alabama issuing over $100 million in bonds to spruce up its fraternities).

In short, the answer to the question of why Americans are so much more likely to live on campus than Canadian students is “historical quirks and path dependency”.  Given the impact these tendencies have on affordability, that’s a deeply unsatisfying answer, but it’s a worthwhile reminder that in a battle between sound policy and historical path dependency, the latter often wins.

 

June 05

Student Health (Part 3)

You know how it is when someone tries to make a point about Canadian higher education using data from American universities? It’s annoying.  Makes you want to (verbally) smack them upside the head. Canada and the US are different, you want to yell. Don’t assume the data are the same! But of course the problem is there usually isn’t any Canadian data, which is part of why these generalizations get started in the first place.

Well, one of the neat things about the ACHA-NCHA campus health survey I was talking about last week is that it is one of the few data collection instruments in use on both sides of the border. Same questions, administered at the same time, to tens of thousands of students in both countries. And, as I started to look at the data for 2016, I realized my “Canada is different” rant is – with respect to students and health at least – almost entirely wrong. Turns out Canadian and American students are about as alike as two peas in a pod. It’s kind of amazing, actually.

Let’s start with some basic demographic indicators, like height and weight. I think I would have assumed automatically that American students would be both taller and heavier than Canadian ones, but figure 1 shows you what I know.

Figure 1: Median Height (Inches) and Weight (Pounds), Canadian vs. US students.


Now, let’s move over to issues of mental health, one of the key topics of the survey. Again, we see essentially no difference between results on either side of the 49th parallel.

Figure 2: Within the last 12 months have you been diagnosed with/treated for…


What about that major student complaint, stress? The ACHA-NCHA survey asks students to rate the stress they’ve been under over the past 12 months. Again, the patterns in the two countries are more or less the same.

Figure 3: Within the last 12 months, rate the stress you have been under.


One interesting side-note here: students in both countries were asked about issues causing trauma or being “difficult to handle”. Financial matters were apparently more of an issue in Canada (40.4% saying yes) than in the US (33.7%). I will leave it to the reader to ponder how that result lines up with various claims about the effects of tuition fees.

At the extreme end of mental health issues, we have students who self-harm or attempt suicide. There was a bit of a difference on this one, but not much, with Canadian students slightly more likely to indicate that they had self-harmed or attempted suicide.

Figure 4: Attempts at Self-harm/suicide.


What about use of tobacco, alcohol and illicit substances? Canadian students are marginally more likely to drink and smoke, but apart from that the numbers look pretty much the same. The survey, amazingly, does not ask about use of opioids/painkillers, which, if books like Sam Quinones’ Dreamland are to be believed, have made major inroads among America’s young – I’d have been interested to see the data on that. It does ask about a bunch of other drugs – heroin, MDMA, etc. – and none of them really registers in either country.

Figure 5: Use of Cigarettes, Alcohol, Marijuana, Cocaine.


This post is getting a little graph-heavy, so let me just run through a bunch of topics where there’s essentially no difference between Canadians and Americans: frequency of sexual intercourse, number of sexual partners, use of most illegal drugs, use of seat belts, likelihood of being physically or sexually assaulted, rates of volunteering.  In fact, one of the few places where you see significant differences between Canadian and American students is with respect to the kinds of physical ailments they report: Canadian students are significantly more likely to report having back pain, Americans more likely to report allergies and sinus problems.

Actually, the really big differences between the two countries were around housing and social life. In Canada, less than 2% of students reported being in a fraternity/sorority, compared to almost 10% in the United States. And as for housing, as you can see, Americans are vastly more likely to live on-campus and vastly less likely to live at home. On balance, that means they are incurring significantly higher costs to attend post-secondary education. Also, it probably means campus services are under a lot more pressure in the US than up here.

Figure 6: Student Living Arrangements.


A final point here is with respect to perceptions of campus safety. We all know the differences in rates of violent crime in the two countries, so you’d expect a difference in perceptions of safety, right? Well, only a little bit, only at night, and mostly off-campus. Figure 7 shows perceptions of safety during the day and at night, on campus and in the community surrounding campus.

Figure 7: Perceptions of safety on campus and in surrounding community.


In conclusion: when it comes to student health and lifestyle, apart from housing there do not appear to be many cross-border differences. We seem to be living in a genuinely continental student culture.

June 02

Student Health (part 2)

Now you may have seen a headline recently talking about skyrocketing mental health problems among students.  Specifically, this one from the Toronto Star, which says, among other things:

A major survey of 25,164 Ontario university students by the American College Health Association showed that between 2013 and 2016, there was a 50-per-cent increase in anxiety, a 47-per-cent increase in depression and an 86-per-cent increase in substance abuse. Suicide attempts also rose 47 per cent during that period.

That’s a pretty stunning set of numbers.  What to make of them?

Part of what’s going on here is looking at the size of the increase instead of the size of the base.  If the incidence of something goes from 1% to 2% in the population, that can be accurately expressed either as “a one percentage point increase” or “IT DOUBLED!”.  The figure for “attempted suicide in the last 12 months”, for instance, rose from 1.3% to 2.1%.  With such a tiny base, double-digit increases aren’t difficult to manufacture.

(In case you’re wondering whether these figures are a function of possible measurement error, the answer is no.  With a 40,000-student sample, the margin of error for an event that happens 1% of the time is about 0.1 percentage points, so a jump of 0.8 percentage points is well beyond the margin of error.)
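
For the really nerdy, here is a minimal back-of-the-envelope sketch of that margin-of-error claim, assuming simple random sampling and using the rounded figures quoted above (the exact sample sizes will differ a bit):

```python
import math

def moe(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 40_000        # approximate sample size cited above
p_2013 = 0.013    # "attempted suicide in last 12 months", 2013
p_2016 = 0.021    # same item, 2016

print(f"Margin of error at ~1%: +/-{moe(0.01, n) * 100:.2f} percentage points")
print(f"Absolute change: {(p_2016 - p_2013) * 100:.1f} percentage points")
print(f"Relative change: {(p_2016 / p_2013 - 1) * 100:.0f}%")
```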

Now, the Star is correct, there is a very troubling pattern here – across all mental health issues, the results for 2016 are significantly worse than for 2013 and troublingly so.  But it’s still a mistake to rely on these figures as hard evidence for something major having changed.  As I dug into the change in figures between 2013 and 2016, I was amazed to see that in fact figures were not just worse for mental health issues, but for health and safety issues across the board.  Some examples:

  • In 2013, 53.4% of students said their health was very good or excellent, compared to just 45.3% three years later
  • The percentage of students whose Body Mass Index put them in the category of Class II Obese or higher rose from 3.15% to 4.3%, a roughly 35% increase.
  • The percentage of students with diabetes rose by nearly 40%, migraine headaches by 20%, ADHD by nearly 25%
  • Even things like incidence of using helmets when on a bicycle or motorcycle are down by a couple of percentage points each, while the percent saying they had faced trauma from the death or illness of a family member rose from 21% to 24%.

Now, when I see numbers like this, I start wondering if maybe part of the issue is an inconsistent sample base.   And, as it turns out, this is true.  Between 2013 and 2016, the institutional sample grew from 30 to 41, and the influx of new institutions changed the sample considerably.  The students surveyed in 2016 were far more likely to be urban, and less likely to have been white or straight.  They were also less likely to have been on varsity teams or fraternity/sorority members (and I suspect that last one tells you something about socio-economic background as well, but that’s perhaps an argument for another day).

We can’t tell for certain how much of the change in reported health outcomes has to do with the change in sample.  It would be interesting and helpful if someone could recalculate the 2016 data using only data from institutions which were present in the 2013 sample.  That would provide a much better baseline for looking at change over time.  But what we can say is that this isn’t a fully apples-to-apples comparison, and we need to treat with caution claims that certain conditions are spreading in the student population.

To conclude, I don’t want to make this seem like a slam against the ACHA-NCHA survey.  It’s great.  But it’s a snapshot of a consortium at a particular moment in time, and you have to be careful about using that data to create a time series.  It can be done – here’s an example of how I’ve done it with Canadian Undergraduate Survey Consortium data, which suffers from the same drawback.  Nor do I want to suggest that mental health isn’t an issue to worry about.  It’s clearly something which creates a lot of demand for services, and that need has to be met somehow (though whether this reflects a change in underlying conditions or a change in willingness to self-identify and seek help is unresolved and to some degree unresolvable).

Just, you know, be careful with the data.  It’s not always as straightforward as it looks.

 

June 01

Student Health (part 1)

I have been perusing a quite astonishingly detailed survey that was recently released regarding student health.  Administered by the American College Health Association, the National College Health Assessment (ACHA-NCHA) is a multi-campus exercise that has now been run twice in Canada – once in 2013 and once in 2016.  Today, I’m going to look at what the 2016 results say, which are interesting in and of themselves.  Tomorrow, I’m going to look at how the data has changed since 2013 and why I think some claims about worsening student health outcomes (particularly mental health) need to be viewed with some caution.  If I get really nerdy over the weekend, I might do some Canada-US comparisons, too.

Anyways.

The 2016 study was conducted at 41 public institutions across Canada.  Because it’s an American-based survey, it keeps referring to all institutions as “colleges”, which is annoying.  27 of the institutions are described as “4-year” institutions (which I think we can safely say are universities), 4 are described as “2-year” institutions (community colleges) and 10 as “other” (not sure what to make of this, but my guess would be community colleges/polytechnics that offer mostly three-year programs).  In total, 43,780 surveys were filled out (a 19% response rate), with a roughly 70-30 female/male split.  That’s pretty common for campus surveys, but there’s no indication that responses have been re-weighted to match actual gender splits, which is a little odd, but whatever.

 

There’s a lot of data here, so I’m mostly going to let the graphs do the talking.  First, the frequency of students with various disabilities.  I was a little bit surprised that psychiatric conditions and chronic illnesses were as high as they were.

Figure 1: Prevalence of Disabilities


Next, issues of physical safety.  Just over 87% of respondents reported feeling safe on campus during the daytime; however, only 37% (27% of women, 61% of men – and right away you can see how the gender re-weighting issue matters) say that they feel safe on campus at night.  To be fair, this is not a specific worry about campuses: when asked about their feelings of personal safety in the surrounding community, the corresponding figures were 62% and 22%.  Students were also asked about their experiences with specific forms of violence over the past 12 months.  As one might imagine, most of the results were fairly highly gendered.
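
To make the re-weighting point concrete, here is a small illustrative sketch: the safety-at-night figures and the approximate 70/30 respondent split are the ones reported above, while the 50/50 target split is just an assumption for illustration.

```python
# Illustrative only: shows how an unweighted 70/30 female/male respondent
# pool pulls the headline "feel safe on campus at night" figure around.
share_women, share_men = 0.70, 0.30   # approximate respondent split
safe_women, safe_men = 0.27, 0.61     # share feeling safe on campus at night

as_surveyed = share_women * safe_women + share_men * safe_men
reweighted = 0.50 * safe_women + 0.50 * safe_men   # hypothetical 50/50 split

print(f"Unweighted sample estimate: {as_surveyed:.0%}")  # ~37%
print(f"Re-weighted to 50/50:       {reweighted:.0%}")   # ~44%
```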

 

Figure 2: Experience of Specific Forms of Violence Over Past 12 Months, by Gender


Next, alcohol, tobacco, and marijuana.  This was an interesting question as the survey not only asked students about their own use of these substances, but also about their perception of other students’ use of them.  It turns out students vastly over-estimate the number of other students who engage with these substances.  For instance, only 11% of students smoked cigarettes in the past 30 days (plus another 4% using e-cigarettes and 3% using hookahs), but students believed that nearly 80% of students had smoked in the past month.

 

Figure 3: Real and Perceived Incidence of Smoking, Drinking and Marijuana Use over Past 30 Days


Figure 4 shows the most common conditions students had been diagnosed with and/or received treatment for in the last twelve months.  Three of the top ten and two of the top three were mental health conditions.

Figure 4: Most Common Conditions Diagnosed/Treated in last 12 Months


Students were also asked separately about the kinds of things that had negatively affected their academics over the previous year (defined as something which had resulted in a lower mark than they would have otherwise received).  Mental health complaints are very high on this list; much higher in fact than actual diagnoses of such conditions.  Also of note here: internet gaming was sixth among factors causing poorer marks; finances only barely snuck into the top 10 reasons, with 10.3% citing it (though elsewhere in the study over 40% said they had experienced stress or anxiety as a result of finances).

Figure 5: Most Common Conditions Cited as Having a Negative Impact on Academics


A final, disturbing point here: 8.7% of respondents said they had intentionally self-harmed over the past twelve months, 13% had seriously contemplated suicide and 2.1% said they had actually attempted suicide.  Sobering stuff.

May 23

Information vs Guidance

I’ve been working a lot lately on two big projects that touch on the issue of secondary school guidance.  The first is a large project for the European Commission on admission systems across Europe, and the second is one of HESA’s own projects looking at how students in their junior year of high school process information about post-secondary education (the latter is a product for sale – drop us a line at info@higheredstrategy.com if you’re an institution interested in insights into how to get on students’ radar before they hit grade 12).  And one of the things I’ve realised is how deeply difficult it is to present information to students in a way that is meaningful to them.

Oh, we hand out information all right.  Masses of it.  We give students so much information it’s like drinking from a fire-hose.  It’s usually accurate, mostly consistent (though nothing – nothing – drives students crazier than discovering that information in an institution’s catalogue is different from the information on its website, which happens all too frequently).  But that’s really not enough.

Here’s what we don’t do: we don’t provide data to students in a way that makes it easy for them to search for what they want.  Information is provided solely by institutions themselves, and students have to go search out data on an institution-by-institution basis.  We have nothing like France’s Admission Post-bac system which – while not without its faults as an admissions portal – actually does simplify an otherwise horrifically complicated admissions system by putting institutional information in a single spot.  We have nothing like the state-level guides in Australia, where students in their graduating year can get info on all institutions in a single book.

We don’t make it simple for students to learn about their choices.  Institutions have every reason not to do this – their whole set-up in terms of providing information to students is based on a philosophy of LOOK AT ME PAY NO ATTENTION TO THOSE OTHERS (though I kind of wonder what would happen to an institution that tried the “compare us now” approach used by Progressive Insurance).  Government has chosen not to play a role, preferring to leave it to institutions.  And third parties have given them things like rankings and other statistical information which adults think students should know and care about, but which by and large they don’t.

What students want – what they really, really want – is not more information.  They want guidance.  They want someone who is knowledgeable about that information AND who knows and appreciates their own tastes, abilities and interests, and who can render it meaningful to them.  Yes, Queen’s is my local university and it’s pretty prestigious, but will I fit in?  Sure, nursing pays well, but will I get bored (and which schools are best for nursing)?  I want to do Engineering, but it seems like a lot of work – can I actually handle it?  And if so, would it be better to go to a big school with lots of supports or to my local institution?

But this is precisely what guidance functions in secondary schools don’t deliver on a consistent basis.  Too often, their role is that of a really slow version of the internet – a centralized place to get all the individual view-books and brochures.  They don’t know individual students well enough to provide real, contextualized guidance, so that task falls upon favoured teachers and – more often – students’ own families.

Well, so what, you say.  What’s wrong with making students do a little leg-work on their own, and asking family and friends for guidance?  Well, the problem with this is that cultural capital starts to play a really big role.  While guidance is helpful for everyone, the students who have the least idea of what to expect in post-secondary education – the ones most in need of guidance – are precisely the ones whose families have the least experience with post-secondary.  So if guidance fails, you get a Matthew Effect, with the already-advantaged receiving another leg-up.

(Secondary complaint: it is astonishing, if you believe students, how little guidance counselors want to talk to students about government student financial assistance.  On the other hand, they seem quite prepared to peddle stale chestnuts about how easy it is to get institutional aid because “millions of dollars go unclaimed every year because people don’t apply”.  I cringed every time I heard this.)

The way forward here is probably not to increase the number of guidance counselors to make it easier for them to know individual students.  The fact is, they will never get as close to students as will senior-year teachers.  Better, probably, to let those teachers do the advising (after some training, of course) and then build in time and rewards appropriately.

But it requires investment.  We have to stop preferring the provision of information over guidance because it’s cheap.  Good decisions require good guidance.  Skimp on it in schools serving richer areas if we must, but when it comes to serving low-income students it’s a false economy.

 

February 06

“Xenophobia”

Here’s a new one: the Canadian Federation of Students has decided, apparently, that charging international students higher tuition fees is “xenophobic”.  No, really, they have.  This is possibly the dumbest idea in Canadian higher education since the one about OSAP “profiting” from students.   But as we’ve seen all too often in the past year or two, stupidity is no barrier to popularity where political ideas are concerned.  So: let’s get down to debunking this.

The point that CFS – and maybe others, you never know who’s prepared to follow them down these policy ratholes – is presumably trying to highlight is that Canadian universities charge differential fees – one set for domestic students and another, higher, one for students from abroad.  Their argument is that this differential is unfair to international students and that fees should be lowered so as to equal those of domestic students.

It’s not indefensible to suggest that domestic and international tuition fees should be identical.  Lots of countries do it – Norway, Germany and Portugal, to name but three – and if I’m not mistaken, both Newfoundland and Manitoba have had such policies within living memory as well.  But the idea that citizens and non-citizens pay different amounts for a publicly-funded service is not a radical, let alone a racist, one.  A non-resident of Toronto wishing to borrow from the Toronto Libraries is required to pay a fee for a library card, while a resident is not.  This is not xenophobic: it is a way of ensuring that services go in priority to people who pay taxes in that jurisdiction.  If an American comes to Canada and gets sick, they are expected to pay for their treatment if they visit a doctor or are admitted to hospital.  This is not xenophobic either: the price is the same for all, it’s just that we have all pre-paid into a domestic health insurance fund and foreigners have not.

It’s the same in higher education.  American public universities all charge one rate to students from in-state and another to those out-of-state.  Not xenophobic: just prioritizing local taxpayers.  In Ontario, universities are not allowed to use their tuition set-aside dollars – collected from all domestic tuition fees – to provide funding to out-of-province students.  Irritating?  Yes.  Xenophobic?  No.

International students are in the same position.  Their parents have not paid into the system.  Only a minority of them will stay here in Canada to pay into it themselves.  So why on earth should they pay a similar amount to domestic students?  And it’s not as if there’s massive profiteering going on: as I showed back here, in most of the country international fees are set below the average cost of attendance.  So international students are in fact being subsidized; just not very much.

In any event, even if we were charging international students over the going rate, that wouldn’t be evidence of xenophobia.  Perhaps it has escaped CFS’ notice, but there is not a single university in the country which is turning away undergraduate students.  According to every dictionary I’ve been able to lay my hands on, xenophobia means irrational fear and hatred of foreigners; yet now CFS has discovered some odd variant in which the xenophobes are falling over each other to attract as many foreigners as possible.

My guess is that most people at CFS can distinguish between “xenophobia” and “differential fees”.  What’s happened, though, is that part of the brain trust at head office simply decided to use an emotive word to try to stigmatize a policy with which their organization disagrees.  That kind of approach sometimes works in politics: just think of the success Sarah Palin had when she invented the term “death panels” to describe end-of-life counselling under American federal health care legislation.

But effectiveness is not the be-all and end-all of politics.  Sarah Palin is a cancerous wart on democracy.  You’d kind of hope our own student groups would try to avoid imitating her.

February 03

Four Megatrends in International Higher Education: Massification

A few months ago I was asked to give a presentation about my thoughts on the “big trends” affecting international education. I thought it might be worth setting some of these thoughts to paper (so to speak), and so every Friday for the next few weeks I’ll be looking at one major trend in internationalization and exploring its impact on Canadian PSE.

The first and most important mega-trend is the fact that all over the world, participation in higher education is going through the roof. Mostly, that’s due to growth in Asia which now hosts 56% of the world’s students, but substantial growth has been the norm around the world since 2000.  In Asia, student numbers have nearly tripled in that period (up 184%), but they also more than doubled (albeit from lower bases) in Latin America (123%) and Africa (114%), and even in North America numbers increased by 50%. Only in Europe, where several major countries have begun seeing real drops in enrolment thanks to changing demographics (most notably the Russian Federation), has the enrolment gain been small – a mere 20%.

Tertiary Enrolments by Continent, 1999-2014:


Source: Unesco Institute of Statistics

Now, what does this have to do with the future of international higher education?  Well, back in the day, international students were seen as “overflow” – that is, students forced abroad because there were not enough educational opportunities in their own countries. Therefore, many people thought that the massification of higher education in Asia (and particularly China) would, over the long run, mean a decrease in internationalization, because students would have more options to choose from at home.

Clearly the last decade and a half has put that idea to bed. Global enrolments have shot up, but international enrolments have risen even faster. But as all these national systems of higher education are undergoing massification, they are also undergoing stratification. That is to say: as higher education systems get larger, the positional advantage obtained simply from attending higher education declines, and the positional advantage to attending a specific, prestigious institution rises. And while higher education places are rising quickly around the world, the number of spaces in prestigious institutions is staying relatively steady in most countries (India, which is expanding its IIT system, is a partial exception). Take China for example; over the last 20 years, the number of new undergraduate students being admitted to Chinese universities has increased from about one and a half million to six million per year. In that same time, the intake of the country’s nine most prestigious universities  (the so-called “C-9”) has increased barely at all (it currently stands at something like 50,000 per year).
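
To put rough numbers on that bottleneck, here is a quick sketch using the rounded intake figures cited above (all approximations):

```python
# Rough illustration of the shrinking "prestige ratio" in China, using the
# rounded figures cited above.
c9_intake = 50_000        # approximate annual C-9 intake (roughly flat)
intake_then = 1_500_000   # annual undergraduate intake ~20 years ago
intake_now = 6_000_000    # annual undergraduate intake today

print(f"Prestige share then: {c9_intake / intake_then:.1%}")  # ~3.3%
print(f"Prestige share now:  {c9_intake / intake_now:.1%}")   # ~0.8%
```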

Now if you’re a student in a country where there’s a very tight bottleneck at the top of the prestige ladder, what do you do if you don’t quite make it to the top? Do you settle for a second-best university in your own country?  Or do you look for a second-best university in another country, preferably one where people speak English, and preferably one which has a little bit of cachet of its own? Assuming money is not a barrier (though it often is) the answer is a no-brainer: go abroad.

So when we look ahead to the future, as we think about what might affect student flows around the world, what we need to watch is not the rise of university or college places in places like China and India, but rather the ratio of prestige spaces to total spaces. As long as that ratio keeps falling – and there’s no evidence at the moment that this process will reverse itself anytime soon – expect the demand for international education to remain high.

November 23

Persuading High School Students

Over the years, a lot of people have surveyed incoming university students to find out why they chose a particular institution.  Most of these surveys contain a battery of questions about influencers: i.e. what were the sources of information that a student used to make their decision.  What researchers are looking for, usually, is some indication that school websites or career fairs or Maclean’s rankings or whatever are actually having some impact.  But year after year, students essentially give the same two answers for “top influencers”: namely, “family and friends”.  This doesn’t really help institutions because they have no idea what family and friends are telling the students, where they get their information, etc.  Institutions simply want to understand how to get information about their offerings into the information pipeline.

Here at HESA Towers, we’ve been working on a program of research on this for a couple of years now.  Two years ago, we followed a couple of hundred grade 12 students for a year to look at how the timing and type of information students received changed their views about institutions over time.  This gave us some interesting insights on which sources of data, at which points in time, seemed to make a difference to students.  This year we are doing something similar both with students in grade 11 (one of our big findings in looking at grade 12 students was how many of them had their minds made up about an institution before their final year of studies) and with parents of students in grade 12.  I won’t bore you with the details here (though by all means get in touch if you want details about how to obtain our research – see the grey box below); what I want to do today instead is talk specifically about grade 12 students’ epistemology when it comes to choosing an institution.

Briefly, students know that institutions are selling them something.  From what we’ve seen, they are actually quite sophisticated media consumers – very willing to question institutions and not take for granted what they see on websites.  Actually, not to put too fine a point on it: in general, high school students hate institutional websites.  Like, with the fire of a thousand suns.  There are few if any exceptions.

Now precisely because students know they are being told something, their fondest wish is to be able to “look under the hood”, so to speak.  They want to be able to hear from other students what it’s like to be at a particular school. When they do this, they are not thinking like “investors”, they are thinking like “consumers”.  If you’re going to dedicate four years of your life to something, you want to know you won’t be lonely and/or bored.  Choosing an institution is, in many ways, effectively choosing a lifestyle or a “brand” for four years of their lives: what they really want to know is whether they will meet people from whom they can learn and with whom they can have a good time.

Many institutions understand this, and their response is to make “real students” available to prospective students to explain from a credible first-person perspective what it is they can expect.  But while high school students appreciate this effort, they know there is still an information asymmetry: high school students have no way of knowing whether these chosen students are reliable guides or not.

Now, the most credible sources of information for grade 12 students are people they already know and who can give them the first-hand straight dope.  They might trust adults to tell them about programs, but when it comes to student life and explaining it in a way a high school student can understand, they’re only going to listen to kids more or less their own age who come from a similar background.  Siblings, first and foremost; but apart from that, the students who most closely meet these criteria are students from one’s own school in the graduating class one year ahead.

Now here’s the bit that I think eludes a lot of people.  A high school student does not need to speak with older classmates in order to obtain needed information about a particular post-secondary institution.  All they need to do is register where various graduates choose to go to college/university and they can make their own inferences about institutional brand.  Basically if you’re a high school student and all the older kids you admired went to institution Y, then that school starts out with a huge advantage in recruiting you even if you never spoke to any of those students about life at institution Y.  It’s wordless viral marketing, but no less effective for that.

(My son did this in a negative way: he put his efforts into avoiding the institutions which attracted the greatest number of what he considered “douchebags” from the graduating class prior to his own.  I won’t offend the institutions which got eliminated via this process, but let’s just say that via this method he concluded that Wilfrid Laurier must be a decent place to study.)

This is more or less how universities become branded without ever actually spending money on branding.  Students with particular characteristics (the jocks, the tree-huggers – whatever) choose institution X and that then affects how younger students with the same characteristics view each institution.  Breaking this cycle is very hard and goes well beyond a couple of ad campaigns.  Institutions seeking a new kind of student have to pro-actively identify and persuade a different type of student to come to their institution and in some cases actually discourage some of their more traditional students from attending (very difficult to do in an era when money is tight).  Success in this form of persuasion is very time-consuming, and takes a lot of patient work because results will take years to become apparent.  It means paying attention to many, many high schools and actually getting to know and assessing the individual students that come your way from each.

That doesn’t mean marketing to students is hopeless.  There are other parts of an institution’s value proposition that can be emphasized (employability, opportunity, etc.) in ways that will make prospective students sit up and take notice.  It’s just to say that students have already decided a lot about a school long before they first see a website or a viewbook, and marketing campaigns need to be conducted with that in mind.

September 13

Measuring the Effects of Study Abroad

In the higher education advocacy business, an unhappily large proportion of the research used is of the correlation = causation type.  For instance, many claim that higher education has lots of social benefits like lower crime rates and higher rates of community volunteering on the grounds that outcomes of graduates are better than outcomes of non-graduates in these areas.  But this is shaky.  There are very few studies which look at this carefully enough to eliminate selection bias – that is, that the people who go to higher education were less disposed to crime/more disposed to volunteering to begin with.  The independent “treatment” effect of higher education is much more difficult to discern.

This applies in spades to the question of the effects of study abroad.  For instance, one widely quoted study of the Erasmus program showed that five years after graduation, unemployment rates for graduates who had been in a study-abroad program were 23% lower than for those who had not.  But this is suspect.  First of all, “23% lower” actually isn’t all that much for a population where unemployment is about 5% (it means one group has unemployment of 4% and the other 5%, more or less).  Second of all, there is a selection bias here.  The study-abroad and non-study-abroad populations are not identical populations who differ only in that they have been given different “treatments”: they are different populations, one of which has enough drive and courage to pick up sticks, move to another country and (often) study in another language.  It’s quite possible they would have had better employment outcomes anyway.  You can try to limit the bias by selecting a control group that mimics the study-abroad population in terms of field of study, GPA, etc., but it’s not perfect, and very few studies do so anyway (a very honourable mention here to the GLOSSARI project from Georgia headed by Don Rubin).

(Before we go any further: no, I don’t think employability skills are the only reason to encourage study abroad.  I do, however, think that if universities and colleges are going to frame their claim for more study abroad in economic terms – either by suggesting students will be more employable or by making more general claims about increasing economic competitiveness – then it is incumbent on them to actually demonstrate some impact.  Claiming money on an economic imperative and then turning around and saying “oh, that doesn’t matter because well-rounded citizens” doesn’t really wash.)

There are other ways of trying to prove this point about employability, of course.  One is to ask employers if they think study abroad matters.  They’ll usually say yes, but it’s a leap of faith to go from that to saying that study abroad is actually much help in landing a job.  Some studies have asked students themselves if they think their study abroad experience was helpful in getting a job.  The answer is usually yes, but it’s hard to interpret what that means, exactly.

Since it’s difficult to work out directly how well internationalization is helping students get jobs, some people try to look at whether or not students get the skills that employers want (self-discipline, creativity, working in teams, etc.).  The problem with this approach, of course, is that the only real way to do this is through self-assessment, which not everybody accepts as a valid method (but in the absence of actual testing of specific skills, there aren’t a whole lot of other options).  Alternatively, if you use a pre-post evaluation mechanism, you can at least check on the difference in self-assessment of skills over time, which might then be attributed to time spent in study abroad.  If that’s still not enough to convince you (if, for instance, you suspect that all students’ self-assessments would go up over the space of a few months, because all students are to some degree improving skills all the time), try a pre-post method with a control group, too: if both groups’ self-assessments go up, you can still measure the difference in the rate at which the self-reported skills increase across the two groups.  If they go up more for study-abroad students than for stay-at-homes, then the difference in the rates of growth can, cautiously, be attributed to the study abroad period.
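
As a minimal sketch of that pre-post-with-control-group logic (all numbers hypothetical; the point is simply that the comparison of interest is the difference in growth between the groups, not the raw post-program scores):

```python
# Hypothetical pre/post self-assessed scores (1-5 scale) for a skill such
# as "working in teams". The comparison of interest is growth, not levels.
groups = {
    "study_abroad": {"pre": 3.2, "post": 3.9},
    "stay_at_home": {"pre": 3.2, "post": 3.5},
}

growth = {name: s["post"] - s["pre"] for name, s in groups.items()}
difference_in_growth = growth["study_abroad"] - growth["stay_at_home"]

for name, g in growth.items():
    print(f"{name}: +{g:.1f}")
print(f"Growth difference (cautiously) attributable to study abroad: {difference_in_growth:.1f}")
```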

Basically: measuring impacts takes time, and is complicated.  And despite lots of people in Canada avowing how important outbound mobility is, we never seem to take the time, care and expense to do the measurement.  Easier, I suppose, to rely on correlations and hope no one notices.

It’s a shame really because I think there are some interesting and specifically Canadian stories to tell about study abroad.  More on that tomorrow.
