
Higher Education Strategy Associates

Category Archives: Students

September 20

How Families Make PSE Choices

Over the last few months at HESA Towers we’ve been conducting a lot of interviews with parents of grade 12 students, to help understand what shapes and shifts their perceptions of higher education institutions.  I can’t give away much of the content here (that’s for paying customers), but one issue I do think is worth a mention is what we’re finding about how families make decisions about post-secondary education.

Researchers tend to conceive of decision-making in post-secondary as a pretty linear process, at least where traditional-aged students are concerned.  Parents and students, separately or in tandem, research possible career avenues and try to match them with educational pathways and students’ own interests.  They research programs and institutions, and try to judge quality.  They examine their finances – preferably jointly – and discuss what is affordable.  And on the basis of these various pieces of information, they winnow the field of potential programs/institutions from a large number down to a fairly small set to which one might apply, and finally down to a single institution.  Conceptually, it’s like a funnel, wide at the top and narrowing gradually as students and parents seek and process information.

As a conceptual model, this suffers from just one problem: it’s mostly wrong.

Here’s what we’ve found instead.  First, the notion of parents and students “discussing” post-secondary options is valid only if you think of “discussions” as asynchronous snatches of conversation stretched over months or even years.  Parents do not really see their role as one of getting students to decide on choices.  In fact, most assume that the more direct they are about discussing or suggesting options, the more their kids will disengage or push back.  Instead, parents see their role as almost horticultural.  They “plant seeds” with their kids by suggesting ideas here and there, but more or less allow them to come to their own conclusions.

Another key set of assumptions about family decision-making is that money is a central part of the discussion and plays an important role in the eventual choice of institution.  Here the answer is basically “yes and no”.  Parents do talk to their kids about money in general terms.  A few don’t – some refuse to talk about it so as “not to distract them”, others save money but don’t tell their kids about it in order to “keep them motivated” – but for the most part parents let their kids know, at least in general terms, how much money is available.

But when it comes to choosing an institution, money plays an ambiguous role.  It’s pretty clear that most parents would prefer that their kid stay home, for financial reasons.  For the most part, kids are more than happy to study close to home, too, so for them the issue of money simply doesn’t impinge on choice.  Money (or the lack of it) really only comes into play once a student starts coming close to a decision that involves going away to school.

Not surprisingly, parents are reluctant to spend money on kids who they think are unlikely to benefit much by going away to school.  This isn’t just a preference for spending less money rather than more: many parents of grade 12 students simply don’t think their kids are mature enough or organized enough to go away.  But – and here’s where it gets interesting – parents don’t necessarily express this opinion by talking to their kids about money.  Another way they do it is to talk up local schools – or at least avoid talking up more distant ones – during their “planting seeds” discussions and hope the kid comes to the preferred conclusion on his or her own (though to some extent this also reflects greater parental familiarity with local as opposed to more distant institutions).

On the other hand, if the kid is perceived as actually having their act together – partially a matter of grades, but also of having goals and a sense of purpose – money becomes less of an issue for parents.  It’s not that the issue disappears, or that they’ll let their kids do whatever they want, but if parents think their kid has their act together, they are more open to allowing the student to drive the decision about where to go to school.

So, in other words, much of the “discussion” occurs by way of kids spending hundreds of hours cracking the books (or not cracking them, as the case may be) and thus sending signals to parents about their talents and capacities.  Based on the presence or absence of said capacities, parents gradually, over a number of years, drop hints about institutional and program preferences, sometimes to kids who have a hopelessly short attention span for such things.  Sometime between mid-grade 11 and early grade 12, the students themselves get serious about searching for institutions.  When they start this process, they do so after years of subtle (or maybe not-so-subtle) hints from their parents about what kinds of programs and institutions are acceptable.  From that, they make a choice.  Then, and pretty much only then, do discussions about money explicitly come into the open.  But in many cases they do not need to, because the student has already made the “correct” (from the parents’ point of view) choice, which will unlock a contribution sufficient to get the student into school.

As noted above, this is quite different from how most college-choice theories describe the decision-making process.  And I think it has some serious consequences for the way we communicate issues of price and affordability to families and students.  There need to be some very simple, general messages about how aid makes education affordable, wherever one chooses to undertake it, that can be hammered home over and over.  Communicating the specific details of aid programs is almost a total waste of time until very late in the final year of high school, after the institutional choice has already been made.  It’s almost as if two different information products need to be created for two different audiences: a simple, general one for use during the choice process and a detailed one for afterwards.

There are also quite a lot of implications here for how institutions should sell themselves.  But those we keep for our institutional clients.  Drop us a line (info@higheredstrategy.com) if you’re interested in becoming one.

June 14

Two Approaches to Student Success

I’ve been doing a little bit of work recently on student success, and I am struck by the fact that there are two very different approaches to it, depending on which side of the Atlantic you are sitting on.  I’m not sure one is actually better than the other, but they speak to some very different conceptions of where student success happens within an institution.

(To be clear, when I say “student success” I mostly mean “degree/program completion”.  I recognize that the term’s meaning is evolving.  Some extend the notion beyond completion to career success – or at least success in launching one’s career; others suggest completion is overrated as a metric, since some students attend only to obtain specific skills and never intend to complete, and if those students drop out in order to take a job, that’s hardly a failure.  I don’t mean to challenge either of these points, but I’m making a point about the more traditional definition of the term.)

What I would call the dominant North American way of thinking about student success is that it is an institutional and, to a lesser extent, a faculty matter, rather than something dealt with at the level of the department or the program.  We throw resources from central administration (usually Institutional Research) at identifying “at-risk” students.  We use central resources to bolster students’ mental health, and hire counsellors, tutors and academic support staff centrally as well.  Academic advisors tend to be employed by faculties rather than by central admin, but the general point still stands – these are all things that are done on top of, and more or less without reference to, the actual academic curriculum.

The poster child for this kind of approach is Georgia State University (see articles here and here).  It’s an urban university with very significant minority enrolments, one that at the turn of the century had a completion rate of under 30%.  It turned that around by investing heavily in data analytics and – more importantly – in academic tutors and advisors (I’ve heard, but can’t verify, that its ratio of students to advisors is 300:1 or less, which is pretty much unimaginable at a Canadian university).  Basically, they throw bodies at the problem.  Horrible, dreaded, non-academic staff-bloat bodies.  And it works: their retention rates are now over 50 percent, and the improvement among minority students has been a whopping 32 percentage points.

But what they don’t seem to do is alter the curriculum much.  It’s a very North American thing, this: the institution is fine, it’s the students who have to make adjustments, and we have an army of counsellors to help them do so.

Now, take a gander at a fascinating little report from the UK called What Works: Student retention and success change programme phase 2.  In this project, a few dozen individual retention projects were put together across 13 participating institutions, piloted and evaluated.  The projects differed from place to place, but they were built on a common set of principles, the first and most important being as follows: “interventions and approaches to improve student retention and success should, as far as possible, be embedded into mainstream academic provision”.

So what got piloted were mostly projects that involved some adjustment to curriculum, either in terms of the on-boarding process (e.g. “Building engagement and belonging through pre-entry webinars, student profiling and interactive induction”) or the manner in which assessments are done (e.g., “Inclusive assessment approaches: giving students control in assignment unpacking”) or simply re-doing the curriculum as a whole (e.g. “Active learning elements in a common first-year engineering curriculum”).

That is to say, in this UK program, student success was not treated as an institutional priority dealt with by non-academic staff.  It was treated as a departmental-level priority, dealt with by academic staff.

I would say that at most North American universities this approach is literally unimaginable.  Academic staff are not “front-line workers” who deal with issues like academic preparedness; in fact, professors who do try to work with a student and refer them to central academic or counselling services will often discover they cannot follow up on an individual case, because those services see it as a matter of “client confidentiality”.  And outside of professional faculties, our profs teach individual courses of their own choosing rather than jointly managing and delivering a set curriculum which can be tweaked.  Making a curriculum more student-friendly assumes there is a curriculum to alter, rather than simply a basket of courses.

Part of this is a function of how university is conceptualized.  In North America, we tend to think that students choose an institution first and a program of study later (based on HESA’s research on student decisions, I think this is decreasingly the case, but that’s another story). So, when we read all the Vince Tinto-related research (Tinto being the guru of student retention studies, most of which is warmed-over Durkheim) about “belonging”, “fit” and so on, we assume that what students are dropping out of is the institution not the program, and assign responsibilities accordingly.  But in Europe, where 3-year degrees are the norm and they don’t mess around with things like breadth requirements, the assumption is you’re primarily joining a program of study, not an institution.  And so when Europeans read Tinto, they assume the relevant unit is the department or program, not the institution.

But also I think the Europeans – those interested in widening access and participation, anyway – are much less likely to think of the problem as being one of how to get students to adapt to university and its structures.  Quite often, they reverse the problem and say “how can the institution adapt itself to its students”?

It’s worth pondering, maybe, whether we shouldn’t ask that question more often, ourselves.  I think particularly when it comes to Indigenous students, we might be better served with a more European approach.

 

June 07

National Patterns of Student Housing

The other day I published a graph on student housing in Canada and the United States that seemed to grab a lot of people’s attention.  It was this one:

Figure 1: Student Living Arrangements, Canada vs. US


People seemed kind of shocked by this and wondered what causes the differences.  So I thought I’d take a shot at answering this.

(Caveat: remember, this is data from a multi-campus survey and I genuinely have no idea how representative it is of the student body as a whole.  The American sample seems to skew more towards 4-year institutions than the Canadian one, and it’s also skewed geographically towards western states.  Do with that info what you will.)

Anyways, my take on this is basically that you need to take into consideration several centuries’ worth of history to really get the Canada-US difference.  Well into the nineteenth century, the principal models for US higher education were Cambridge and Oxford, which were residential colleges.  Canada, on the other hand, looked at least as much to the Scottish universities for inspiration, and the Scots more or less abandoned the college model during the eighteenth century.  This meant that students were free to live at home, or in cheaper accommodations in the city, which some scholars think contributed to Scotland having a much more accessible system of higher education at the time (though let’s not get carried away: this was the eighteenth century and everything is relative).

Then there’s the way major public universities got established in the two countries.  In the US, it happened because of the Morrill Acts, which created the “Land-Grant” universities that continue to dominate the higher education systems of the Midwest and the South.  The point of land-grant institutions was to bring education to the people, and at the time, the American population was still mostly rural.  Also, these new universities often had missions to spread “practical knowledge” to farmers (a key goal of A&M – that is, agricultural and mechanical – universities), which tended to support the establishment of schools outside the big cities.  Finally, Americans at the time – like Europeans – believed in separating students from the hurly-burly of city life because of its corrupting influence.  The difference was that Europeans usually achieved this by walling off their campuses (e.g. La Sapienza in Rome), while Americans did it by sticking their flagship public campuses out in the boonies (e.g. Illinois Urbana-Champaign).  And as a result of sticking so many universities in small towns, a residential system of higher education emerged more or less naturally.

In Canada, none of this happened because the development of our system lagged the Americans’ by a few decades.  Our big nineteenth-century universities – Queen’s excepted – were located in big cities.  Out west, provincial universities, which were the equivalent of the American land-grants, didn’t get built until the population urbanized, which is why the Universities of Manitoba, Saskatchewan and Alberta are in Winnipeg, Saskatoon and Edmonton instead of Steinbach, Estevan and Okotoks.  The corollary of having universities in big cities was that it was easier to follow the Scottish non-residential model.

The Americans could have ditched the residential model during the transition to mass higher education in the 1950s, but by that time it had become ingrained as the norm because it was how all the prestigious institutions did things.  And of course, the Americans have some pretty distinctive forms of student housing, too.  Fraternities and sororities, often considered a form of off-campus housing in Canada, are very much part of the campus housing scene in at least some parts of the US (witness the University of Alabama issuing over $100 million in bonds to spruce up its fraternities).

In short, the answer to the question of why Americans are so much more likely to live on campus than Canadian students is “historical quirks and path dependency”.  Given the impact these tendencies have on affordability, that’s a deeply unsatisfying answer, but it’s a worthwhile reminder that in a battle between sound policy and historical path dependency, the latter often wins.

 

June 05

Student Health (Part 3)

You know how it is when someone tries to make a point about Canadian higher education using data from American universities? It’s annoying.  Makes you want to (verbally) smack them upside the head. Canada and the US are different, you want to yell. Don’t assume the data are the same! But of course the problem is there usually isn’t any Canadian data, which is part of why these generalizations get started in the first place.

Well, one of the neat things about the ACHA-NCHA campus health survey I was talking about last week is that it is one of the few data collection instruments in use on both sides of the border: same questions, administered at the same time, to tens of thousands of students in each country. And, as I started to look at the data for 2016, I realized my “Canada is different” rant is – with respect to students and health at least – almost entirely wrong. Turns out Canadian and American students are about as alike as two peas in a pod. It’s kind of amazing, actually.

Let’s start with some basic demographic indicators, like height and weight. I would have assumed automatically that American students would be both taller and heavier than Canadian ones, but Figure 1 shows you what I know.

Figure 1: Median Height (Inches) and Weight (Pounds), Canadian vs. US students.


Now, let’s move over to issues of mental health, one of the key topics of the survey. Again, we see essentially no difference between results on either side of the 49th parallel.

Figure 2: Within the last 12 months have you been diagnosed with/treated for…


What about that major student complaint, stress? The ACHA-NCHA survey asks students to rate the stress they’ve been under over the past 12 months. Again, the patterns in the two countries are more or less the same.

Figure 3: Within the last 12 months, rate the stress you have been under.


One interesting side-note here: students in both countries were asked about issues causing trauma or being “difficult to handle”. Financial matters were apparently more of an issue in Canada (40.4% saying yes) than in the US (33.7%). I will leave it to the reader to ponder how that result lines up with various claims about the effects of tuition fees.

At the extreme end of mental health issues, we have students who self-harm or attempt suicide. There was a bit of a difference on this one, but not much, with Canadian students slightly more likely to indicate that they had self-harmed or attempted suicide.

Figure 4: Attempts at Self-harm/suicide.


What about use of tobacco, alcohol and illicit substances? Canadian students are marginally more likely to drink and smoke, but apart from that the numbers look pretty much the same. The survey, amazingly, does not ask about use of opioids/painkillers, which, if books like Sam Quinones’ Dreamland are to be believed, have made major inroads among America’s young – I’d have been interested to see the data on that. It does ask about a bunch of other minor drugs – heroin, MDMA, etc. – and none of them really registers in either country.

Figure 5: Use of Cigarettes, Alcohol, Marijuana, Cocaine.


This post is getting a little graph-heavy, so let me just run through a bunch of topics where there’s essentially no difference between Canadians and Americans: frequency of sexual intercourse, number of sexual partners, use of most illegal drugs, use of seat belts, likelihood of being physically or sexually assaulted, rates of volunteering… In fact, one of the few places where you do see significant differences between Canadian and American students is in the kinds of physical ailments they report. Canadian students are significantly more likely to report having back pain, Americans more likely to report allergies and sinus problems.

Actually, the really big differences between the two countries are around housing and social life. In Canada, less than 2% of students reported being in a fraternity/sorority, compared to almost 10% in the United States. And as for housing, as you can see, Americans are vastly more likely to live on campus and vastly less likely to live at home. On balance, that means they are incurring significantly higher costs to attend post-secondary education. It also probably means campus services are under a lot more pressure in the US than up here.

Figure 6: Student Living Arrangements.


A final point here is with respect to perceptions of campus safety. We all know the differences in rates of violent crime in the two countries, so you’d expect a difference in perceptions of safety, right? Well, only a little bit, only at night, and mostly off-campus. Figure 7 shows perceptions of safety during the day and at night, on campus and in the community surrounding campus.

Figure 7: Perceptions of safety on campus and in surrounding community.


In conclusion: when it comes to student health and lifestyle, apart from housing there do not appear to be many cross-border differences. We seem to be living in a genuinely continental student culture.

June 02

Student Health (part 2)

Now you may have seen a headline recently talking about skyrocketing mental health problems among students.  Specifically, this one from the Toronto Star, which says, among other things:

A major survey of 25,164 Ontario university students by the American College Health Association showed that between 2013 and 2016, there was a 50-per-cent increase in anxiety, a 47-per-cent increase in depression and an 86-per-cent increase in substance abuse. Suicide attempts also rose 47 per cent during that period.

That’s a pretty stunning set of numbers.  What to make of them?

Part of what’s going on here is looking at the size of the increase instead of the size of the base.  If the incidence of something goes from 1% to 2% in the population, that can be accurately expressed either as “a one percentage point increase” or as “IT DOUBLED!”.   The figure for “attempted suicide in the last 12 months”, for instance, rose from 1.3% to 2.1%.  With such a tiny base, double-digit percentage increases aren’t difficult to manufacture.

(In case you’re wondering whether these figures are a function of possible measurement error, the answer is no.  With a 40,000-student sample, the margin of error for an event that happens 1% of the time is about 0.1 percentage points, so a jump of 0.8 percentage points is well beyond the margin of error.)
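If you want to check that claim yourself, the arithmetic is straightforward. Here is a minimal back-of-envelope sketch in Python, assuming simple random sampling and using only the 40,000-respondent and 1% figures cited above:

```python
import math

# Back-of-envelope 95% margin of error for a proportion, assuming simple
# random sampling. The n = 40,000 and p = 1% figures come from the paragraph above.
n = 40_000
p = 0.01

standard_error = math.sqrt(p * (1 - p) / n)   # about 0.0005
margin_95 = 1.96 * standard_error             # about 0.001, i.e. roughly 0.1 percentage points

print(f"95% margin of error: {margin_95 * 100:.2f} percentage points")
# The observed jump of 0.8 percentage points (1.3% to 2.1%) is roughly eight
# times that margin, so it cannot plausibly be sampling noise.
```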

Now, the Star is correct that there is a very troubling pattern here: across all mental health issues, the results for 2016 are significantly worse than those for 2013.  But it’s still a mistake to rely on these figures as hard evidence that something major has changed.  As I dug into the change in figures between 2013 and 2016, I was amazed to see that the figures were not just worse for mental health issues, but for health and safety issues across the board.  Some examples:

  • In 2013, 53.4% of students said their health was very good or excellent, compared to just 45.3% three years later
  • The percentage of students whose Body Mass Index put them in the category of Class II Obese or higher rose from 3.15% to 4.3%, a roughly 35% increase.
  • The percentage of students with diabetes rose by nearly 40%, migraine headaches by 20%, ADHD by nearly 25%
  • Even things like incidence of using helmets when on a bicycle or motorcycle are down by a couple of percentage points each, while the percent saying they had faced trauma from the death or illness of a family member rose from 21% to 24%.

Now, when I see numbers like this, I start wondering whether part of the issue is an inconsistent sample base.   And, as it turns out, it is.  Between 2013 and 2016, the institutional sample grew from 30 to 41, and the influx of new institutions changed the sample considerably.  The students surveyed in 2016 were far more likely to be urban, and less likely to be white or straight.  They were also less likely to be on varsity teams or to be fraternity/sorority members (and I suspect that last one tells you something about socio-economic background as well, but that’s perhaps an argument for another day).

We can’t tell for certain how much of the change in reported health outcomes has to do with the change in sample.  It would be interesting and helpful if someone could recalculate the 2016 figures using only data from institutions that were present in the 2013 sample; that would provide a much better baseline for looking at change over time.  But what we can say is that this isn’t a fully apples-to-apples comparison, and we need to treat with caution claims that certain conditions are spreading in the student population.
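For anyone with access to the microdata, that recalculation is not hard to set up. A minimal sketch, with the caveat that the file name and column names (“institution”, “year”, “diagnosed_anxiety”) are purely illustrative and not the survey’s actual variables:

```python
import pandas as pd

# Hypothetical sketch: restrict the comparison to institutions that appear in
# both the 2013 and 2016 samples, for a more apples-to-apples look over time.
# File and column names are placeholders, not the actual NCHA variable names.
df = pd.read_csv("ncha_canada.csv")

in_2013 = set(df.loc[df["year"] == 2013, "institution"])
in_2016 = set(df.loc[df["year"] == 2016, "institution"])
common = in_2013 & in_2016

matched = df[df["institution"].isin(common)]

# Percentage diagnosed/treated for anxiety, by year, same institutions only
rates = matched.groupby("year")["diagnosed_anxiety"].mean() * 100
print(rates.round(1))
```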

To conclude, I don’t want this to seem like a slam against the ACHA survey.  It’s great.  But it’s a snapshot of a consortium at a particular moment in time, and you have to be careful about using that data to create a time series.  It can be done – here’s an example of how I’ve done it with Canadian Undergraduate Survey Consortium data, which suffers from the same drawback.  Nor do I want to suggest that mental health isn’t an issue to worry about.  It clearly creates a lot of demand for services, and that demand needs to be met somehow (though whether it reflects a change in underlying conditions or a change in willingness to self-identify and seek help is unresolved, and to some degree unresolvable).

Just, you know, be careful with the data.  It’s not always as straightforward as it looks.

 

June 01

Student Health (part 1)

I have been perusing a quite astonishingly detailed survey that was recently released regarding student health.  Run by the American College Health Association-National College Health Assessment, this multi-campus exercise has been run twice now in Canada – once in 2013 and once in 2016.  Today, I’m going to look at what the 2016 results say, which are interesting in and of themselves.  Tomorrow, I’m going to look at how the data has changed since 2013 and why I think some claims about worsening student health outcomes (particularly mental health) need to be viewed with some caution.  If I get really nerdy over the weekend, I might do some Canada-US comparisons, too.

Anyways.

The 2016 study was conducted at 41 public institutions across Canada.  Because it’s an American-based survey, it keeps referring to all institutions as “colleges”, which is annoying.  27 of the institutions are described as “4-year” institutions (which I think we can safely say are universities), 4 are described as “2-year” institutions (community colleges) and 10 are described as “other” (not sure what to make of this, but my guess would be community colleges/polytechnics that offer mostly three-year programs).  In total, 43,780 surveys were filled out (a 19% response rate), with a roughly 70-30 female/male split.  That’s pretty common for campus surveys, but there’s no indication that responses have been re-weighted to match actual gender splits, which is a little odd, but whatever.
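For what it’s worth, re-weighting a 70-30 sample to match the actual enrolment split is a fairly simple post-stratification exercise. Here is a rough sketch; the population shares, file name and variable names below are placeholders for illustration, not figures from the survey:

```python
import pandas as pd

# Illustrative post-stratification on gender: weight each respondent so the
# weighted sample matches the population gender split. Population shares and
# column names are placeholders, not figures from the survey.
population_share = {"female": 0.56, "male": 0.44}

df = pd.read_csv("ncha_canada.csv")
sample_share = df["gender"].value_counts(normalize=True)
df["weight"] = df["gender"].map(lambda g: population_share[g] / sample_share[g])

# Any yes/no item can then be reported weighted vs. unweighted, e.g.:
unweighted = df["feels_safe_at_night"].mean()
weighted = (df["feels_safe_at_night"] * df["weight"]).sum() / df["weight"].sum()
print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```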

 

There’s a lot of data here, so I’m mostly going to let the graphs do the talking.  First, the frequency of students with various disabilities.  I was a little bit surprised that psychiatric conditions and chronic illnesses were as high as they were.

Figure 1: Prevalence of Disabilities


Next, issues of physical safety.  Just over 87% of respondents reported feeling safe on campus during the daytime; however, only 37% (27% of women, 61% of men – and right away you can see how the gender re-weighting issue matters) say that they feel safe on campus at night.  To be fair, this is not a specific worry about campuses: when asked about their feelings of personal safety in the surrounding community, the corresponding figures were 62% and 22%.  Students were also asked about their experiences with specific forms of violence over the past 12 months.  As one might imagine, most of the results were fairly highly gendered.

 

Figure 2: Experience of Specific Forms of Violence Over Past 12 Months, by Gender


Next, alcohol, tobacco, and marijuana.  This was an interesting question as the survey not only asked students about their own use of these substances, but also about their perception of other students’ use of them.  It turns out students vastly over-estimate the number of other students who engage with these substances.  For instance, only 11% of students smoked cigarettes in the past 30 days (plus another 4% using e-cigarettes and 3% using hookahs), but students believed that nearly 80% of students had smoked in the past month.

 

Figure 3: Real and Perceived Incidence of Smoking, Drinking and Marijuana Use over Past 30 Days


Figure 4 shows the most common conditions students had been diagnosed with and/or received treatment for in the last twelve months.  Three of the top ten, and two of the top three, were mental health conditions.

Figure 4: Most Common Conditions Diagnosed/Treated in last 12 Months


Students were also asked separately about the kinds of things that had negatively affected their academics over the previous year (defined as something which had resulted in a lower mark than they would have otherwise received).  Mental health complaints are very high on this list; much higher in fact than actual diagnoses of such conditions.  Also of note here: internet gaming was sixth among factors causing poorer marks; finances only barely snuck into the top 10 reasons, with 10.3% citing it (though elsewhere in the study over 40% said they had experienced stress or anxiety as a result of finances).

Figure 5: Most Common Conditions Cited as Having a Negative Impact on Academics


A final, disturbing point here: 8.7% of respondents said they had intentionally self-harmed over the past twelve months, 13% had seriously contemplated suicide and 2.1% said they had actually attempted suicide.  Sobering stuff.

May 23

Information vs Guidance

I’ve been working a lot lately on two big projects that touch on the issue of secondary school guidance.  The first is a large project for the European Commission on admission systems across Europe, and the second is one of HESA’s own projects looking at how students in their junior year of high school process information about post-secondary education (the latter is a product for sale – drop us a line at info@higheredstrategy.com if you’re an institution interested in insights into how to get on students’ radar before they hit grade 12).  And one of the things I’ve realised is how deeply difficult it is to present information to students in a way that is meaningful to them.

Oh, we hand out information all right.  Masses of it.  We give students so much information it’s like drinking from a fire-hose.  It’s usually accurate, and mostly consistent (though nothing – nothing – drives students crazier than discovering that the information in an institution’s catalogue differs from the information on its website, which happens all too frequently).   But that’s really not enough.

Here’s what we don’t do: we don’t provide data to students in a way that makes it easy for them to search for what they want.  Information is provided solely by institutions themselves, and students have to go search out data on an institution-by-institution basis.  We have nothing like France’s Admission Post-bac system which – while not without its faults as an admissions portal – actually does simplify an otherwise horrifically complicated admissions system by putting institutional information in a single spot.  We have nothing like the state-level guides in Australia, where students in their graduating year can get info on all institutions in a single book.

We don’t make it simple for students to learn about their choices.  Institutions have every reason not to do this – their whole set-up in terms of providing information to students is based on a philosophy of LOOK AT ME, PAY NO ATTENTION TO THOSE OTHERS (though I kind of wonder what would happen to an institution that tried the “compare us now” approach used by Progressive Insurance).  Government has chosen not to play a role, preferring to leave it to institutions.  And third parties have given them things like rankings and other statistical information which adults think students should know and care about, but which by and large they don’t.

What students want – what they really, really want – is not more information.  They want guidance.  They want someone who is knowledgeable about that information AND who knows and appreciates their own tastes, abilities and interests, and can render it meaningful to them.  Yes, Queen’s is my local university and it’s pretty prestigious, but will I fit in?  Sure, nursing pays well, but will I get bored (and which schools are best for nursing)?  I want to do Engineering, but it seems like a lot of work – can I actually handle it?  And if so, would it be better to go to a big school with lots of supports or to my local institution?

But this is precisely what guidance functions in secondary schools don’t deliver on a consistent basis.  Too often, their role is that of a really slow version of the internet – a centralized place to get all the individual view-books and brochures.  They don’t know individual students well enough to provide real, contextualized guidance, so that task falls upon favoured teachers and – more often – students’ own families.

Well, so what, you say.  What’s wrong with making students do a little leg-work on their own, and asking family and friends for guidance?  The problem is that cultural capital starts to play a really big role.  While guidance is helpful for everyone, the students who have the least idea of what to expect in post-secondary education – the ones most in need of guidance – are precisely the ones whose families have the least experience of post-secondary.  So if guidance fails, you get a Matthew Effect, with the already-advantaged receiving another leg-up.

(Secondary complaint: it is astonishing, if you believe students, how little guidance counselors want to talk to students about government student financial assistance.  On the other hand, they seem quite prepared to peddle stale chestnuts about how easy it is to get institutional aid because “millions of dollars go unclaimed every year because people don’t apply”.  I cringed every time I heard this.)

The way forward here is probably not to increase the number of guidance counselors so as to make it easier for them to know individual students.  The fact is, they will never get as close to students as senior-year teachers will.  Better, probably, to let those teachers do the advising (after some training, of course) and then build in time and rewards appropriately.

But it requires investment.  We have to stop preferring the provision of information over guidance because it’s cheap.  Good decisions require good guidance.  Skimp on it in schools serving richer areas if we must, but when it comes to serving low-income students it’s a false economy.

 

February 06

“Xenophobia”

Here’s a new one: the Canadian Federation of Students has decided, apparently, that charging international students higher tuition fees is “xenophobic”.  No, really, they have.  This is possibly the dumbest idea in Canadian higher education since the one about OSAP “profiting” from students.   But as we’ve seen all too often in the past year or two, stupidity is no barrier to popularity where political ideas are concerned.  So: let’s get down to debunking this.

The point that CFS – and maybe others, you never know who’s prepared to follow them down these policy ratholes – is presumably trying to highlight is that Canadian universities charge differential fees – one set for domestic students and another, higher, one for students from abroad.  Their argument is that this differential is unfair to international students and that fees should be lowered so as to equal those of domestic students.

It’s not indefensible to suggest that domestic and international tuition fees should be identical.  Lots of countries do it – Norway, Germany and Portugal, to name but three – and if I’m not mistaken, both Newfoundland and Manitoba have had such policies within living memory as well.  But the idea that citizens and non-citizens pay different amounts for a publicly-funded service is not a radical, let alone a racist, one.  A non-resident of Toronto wishing to borrow from the Toronto Public Library is required to pay a fee for a library card, while a resident is not.  This is not xenophobic: it is a way of ensuring that services go in priority to the people who pay taxes in that jurisdiction.  If an American comes to Canada and gets sick, they are expected to pay for their treatment if they visit a doctor or are admitted to hospital.  This is not xenophobic either: the price is the same for all, it’s just that we have all pre-paid into a domestic health insurance fund and foreigners have not.

It’s the same in higher education.  American public universities all charge one rate to students from in-state and another to those out-of-state.  Not xenophobic: just prioritizing local taxpayers.  In Ontario, universities are not allowed to use their tuition set-aside dollars – collected from all domestic tuition fees – to provide funding to out-of-province students.  Irritating?  Yes.  Xenophobic?  No.

International students are in the same position.  Their parents have not paid into the system, and only a minority of them will stay here in Canada to pay into it themselves.  So why on earth should they pay the same amount as domestic students?  And it’s not as if there’s massive profiteering going on: as I showed back here, in most of the country international fees are set below the average per-student cost of provision.  So international students are in fact being subsidized; just not very much.

In any event, even if we were charging international students over the going rate, that wouldn’t be evidence of xenophobia.  Perhaps it has escaped CFS’ notice, but there is not a single university in the country which is turning away undergraduate students.  According to every dictionary I’ve been able to lay my hands on, xenophobia means irrational fear and hatred of foreigners; yet now CFS has discovered some odd variant in which the xenophobes are falling over each other to attract as many foreigners as possible.

My guess is that most people at CFS can distinguish between “xenophobia” and “differential fees”.  What’s happened, though, is that part of the brain trust at head office simply decided to use an emotive word to try to stigmatize a policy with which their organization disagrees.  That kind of approach sometimes works in politics: just think of the success Sarah Palin had when she invented the term “death panels” to describe end-of-life counselling under American federal health care legislation.

But effectiveness is not the be-all and end-all of politics.  Sarah Palin is a cancerous wart on democracy.  You’d kind of hope our own student groups would try to avoid imitating her.

February 03

Four Megatrends in International Higher Education: Massification

A few months ago I was asked to give a presentation about my thoughts on the “big trends” affecting international education. I thought it might be worth setting some of these thoughts to paper (so to speak), and so every Friday for the next few weeks I’ll be looking at one major trend in internationalization and exploring its impact on Canadian PSE.

The first and most important mega-trend is that, all over the world, participation in higher education is going through the roof. Mostly that’s due to growth in Asia, which now hosts 56% of the world’s students, but substantial growth has been the norm around the world since 2000.  In Asia, student numbers have nearly tripled in that period (up 184%), but they have also more than doubled (albeit from lower bases) in Latin America (123%) and Africa (114%), and even in North America numbers increased by 50%. Only in Europe, where several major countries have begun seeing real drops in enrolment thanks to changing demographics (most notably the Russian Federation), has the enrolment gain been small – a mere 20%.

Tertiary Enrolments by Continent, 1999-2014:


Source: UNESCO Institute for Statistics

Now, what does this have to do with the future of international higher education?  Well, back in the day, international students were seen as “overflow” – that is, students forced abroad because there were not enough educational opportunities in their own countries. Many people therefore thought that the massification of higher education in Asia (and particularly China) would, over the long run, mean a decrease in internationalization, because students would have more options to choose from at home.

Clearly, the last decade and a half has put that idea to bed. Global enrolments have shot up, but international enrolments have risen even faster. And as all these national systems of higher education undergo massification, they are also undergoing stratification. That is to say: as higher education systems get larger, the positional advantage obtained simply from attending higher education declines, and the positional advantage of attending a specific, prestigious institution rises. And while higher education places are rising quickly around the world, the number of spaces in prestigious institutions is staying relatively steady in most countries (India, which is expanding its IIT system, is a partial exception). Take China, for example: over the last 20 years, the number of new undergraduate students admitted to Chinese universities has increased from about one and a half million to six million per year. In that same time, the intake of the country’s nine most prestigious universities (the so-called “C-9”) has barely increased at all (it currently stands at something like 50,000 per year).
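To make the squeeze concrete, here is the rough arithmetic implied by the figures above, treating C-9 intake as essentially flat at around 50,000 (as the paragraph suggests); the numbers are rounded and purely illustrative:

```latex
% Illustrative only, using the rounded figures cited above
\frac{\text{C-9 intake}}{\text{total intake, 20 years ago}} \approx \frac{50{,}000}{1{,}500{,}000} \approx 3.3\%
\qquad\longrightarrow\qquad
\frac{\text{C-9 intake}}{\text{total intake, today}} \approx \frac{50{,}000}{6{,}000{,}000} \approx 0.8\%
```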

Now if you’re a student in a country where there’s a very tight bottleneck at the top of the prestige ladder, what do you do if you don’t quite make it to the top? Do you settle for a second-best university in your own country?  Or do you look for a second-best university in another country, preferably one where people speak English, and preferably one which has a little bit of cachet of its own? Assuming money is not a barrier (though it often is) the answer is a no-brainer: go abroad.

So when we look ahead and think about what might affect student flows around the world, what we need to watch is not the growth of university or college places in countries like China and India, but rather the ratio of prestige spaces to total spaces. As long as that ratio keeps falling – and there’s no evidence at the moment that this process will reverse itself anytime soon – expect the demand for international education to remain high.

December 07

Two (Relatively) Good News Studies

A quick summary of two studies that came out this week which everyone should know about.

Programme for International Student Assessment (PISA)

On Tuesday, the results for the 2015 PISA tests were released.  PISA is, of course, that multi-country assessment of 15 year-olds in math, science and reading which takes place every three years and is managed by the Organization for Economic Co-operation and Development (OECD).  PISA is not a test of curriculum knowledge (in an international context that would be really tough); what it is instead is a test of how well individuals’ knowledge of reading, math and science can be applied to real-world challenges.  So the outcomes of the test can best be thought of as some sort of measure of cognitive ability in various domains.

In addition to taking the tests, students also answer questions about themselves, their study habits and their family background, and schools provide information about the kinds of resources they have and what kind of curriculum structure they use.  The result is an awful lot of background information about each student who takes the test, which permits some pretty interesting and detailed cross-national examination of the determinants of this cognitive ability.  And from this kind of analysis, the good folks at the OECD have determined that government policy is best focused on four areas.

But heck, nobody wants to hear about that; what everybody wants to know is “where did we rank”?  And the answer is: pretty high.  The short version is here and the long version here, but here are the headlines: Out of the 72 countries where students took the test, Canada came 2nd in Reading, 7th in Science and 10th in Math.  If you break things down to the sub-jurisdictional level (Canada vastly oversamples compared to other countries so that it can get results at a provincial level), BC comes first in the world for reading (Singapore second, Alberta third, Quebec fourth and Ontario fifth).  In Science, Alberta and British Columbia come second and third in the world (behind only Singapore which as a country came top in every category).  In Math, the story is not quite as good, but Quebec still cracks the top three.

CMEC also has a publication out which goes into more depth at the provincial level (available here).  The short story is our four big provinces do well across the board but the little ones less so (in some cases much less so).  Worth a glance if comparing provinces rather than countries is your thing.

One final little nugget from the report: the survey taken by students asks if the students see themselves heading towards a Science-based career in the future.  In Canada, 34% said yes, the second highest of any country in the survey (after the US).  I’d like to think this will put to rest all the snarky remarks about how kids aren’t sufficiently STEM-geared these days (<cough> Ken Coates <cough>), but I’m not holding my breath.

Statscan Report on Youth Employment

Statistics Canada put out some interesting data on youth employment, by René Morissette, on Monday.  It’s one of those half-full/half-empty stories: the youth unemployment rate is back down to 13%, where it was in 1976 (and hence lower than it has been for most of the intervening 40 years), but the percentage of youth working full-time has dropped.  The tricky part of this analysis – not really covered by the paper – is that the comparison in both periods excludes students.  That makes for a tricky comparison because there are proportionately about three times as many students as there were 40 years ago.  To put that another way, there are a lot fewer bright kids – that is, the kind likely to get and keep jobs – out of school now than in 1976.  So it’s not quite an apples-to-apples comparison, and it’s hard to know what having more young people in school actually does to the employment rate.

Aside from data on employment rates, the report (actually a condensation of some speaking notes and graphs from a presentation made earlier this year) also includes a mishmash of other related data, from differing recent youth employment trends in oil provinces vs. non-oil provinces (short version: they’re really different) to gender differences in graduate wage premiums (bigger for women than men, which may explain participation rate differences), to trends in overall graduate wage premiums.  Intriguingly, these rose through the 80s and 90s but are now declining back to 1980 levels, though whether that is due to an increase in the supply of educated labour or reflects broader changes in the labour market such as the “Great Reversal” in the demand for cognitive skills that UBC’s David Green and others have described is a bit of a mystery.

But don’t take my word for it: have a skim through the report (available here).  Well worth a few minutes of your time.
