Higher Education Strategy Associates

Category Archives: surveys

September 22

Where Do Students Want to Live?

Today, we at HESA released a paper called Moving On? How Students Think About Choosing a Place to Live After Graduation, which is based on a 2011 survey of 1,859 students from across the country. Obviously, you should go read the whole thing, but for the time-pressed, here are the highlights:

1) Part of the paper’s purpose is to examine the qualities students look for in a place to live. Turns out Richard Florida’s whole shtick about young educated types looking for cities that are “hip” or “creative” may be somewhat wide of the mark; in fact, students’ main priorities in finding a place of residence are access to good jobs, healthcare services, and a low crime rate. Access to cultural events and foodie culture ranks way, way down the list. To put that another way: what young educated people look for in a place to live is pretty much what everyone else looks for.

2) A solid majority of students intend to stay in the province in which they graduated. That said, just over 40% of students are at least open to the idea of moving. However, these students are not evenly distributed: students in the prairie provinces (including Alberta) are much more open to moving away than are students in British Columbia. And, equally, students are not open to moving just anywhere – of those open to a move, most have three or fewer potential destination provinces in mind and are not open to any others (the most commonly sought destinations are Ontario and British Columbia; Manitoba and Saskatchewan are the least sought). Only 7% are genuinely open to moving pretty much anywhere in the country.

3) Here’s perhaps the most important piece of news: financial incentives for graduates, such as the tax credits used by Saskatchewan, Manitoba, and New Brunswick, have almost no effect. We asked students what they expected to earn in their first job in the province they were most likely to call home. Then we asked them how much it would take to get them to move to each of the other provinces. For most provinces (BC was the outlier), about a quarter said “nothing could get me to go there” and another 25% said “I’d go for an extra $25,000 or more” (which is really just a polite way of saying “never”). But, intriguingly, between 13% (Manitoba) and 25% (British Columbia) of all students say they’d move to that province for either no extra money or even a cut in pay – just give them a job and they’d go. The percentage who say they’d move for an extra $2,000 (roughly the value of the tax credits in SK, MB and NB)? About 1%. Move the financial incentive up to $5,000 and you get another 1%. And that’s perfectly consistent, right across the country.
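For readers who like to see the mechanics, here is a minimal sketch – not HESA’s actual analysis code, and with invented field names – of how answers to the premium-to-move question can be binned into the bands described above.

```python
# Minimal illustrative sketch: bin the salary premium a respondent says they would
# need before moving to a given province. `premium` is in dollars; None stands in
# for "nothing could get me to go there". Field names and data are hypothetical.
from collections import Counter

def bin_premium(premium):
    """Map a required salary premium to the bands discussed in the paper."""
    if premium is None:
        return "would never move"
    if premium <= 0:
        return "no extra money needed (or would take a pay cut)"
    if premium <= 2000:
        return "up to $2,000 (roughly the value of the tax credits)"
    if premium <= 5000:
        return "$2,001 - $5,000"
    if premium < 25000:
        return "$5,001 - $24,999"
    return "$25,000+ (a polite 'never')"

def tabulate(responses):
    """responses: list of dicts like {'destination': 'MB', 'premium': 3000}."""
    counts = Counter(bin_premium(r["premium"]) for r in responses)
    total = sum(counts.values())
    return {band: round(100 * n / total, 1) for band, n in counts.items()}

# Purely hypothetical example data, for one destination province:
sample = [
    {"destination": "MB", "premium": None},
    {"destination": "MB", "premium": 0},
    {"destination": "MB", "premium": 30000},
    {"destination": "MB", "premium": 2000},
]
print(tabulate(sample))
```

Run per destination province, a tabulation like this is all it takes to see how few respondents sit in the band that a $2,000 tax credit could plausibly sway.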

The fact is, students are going to move where they’re going to move.  They are either tied to their present spot by networks of friends and family, or they are lured by money, jobs, and prosperity.  A couple of thousand bucks, in the grand scheme of things, just doesn’t seem to matter that much.

All of which raises the question: how come more provinces aren’t making like Nova Scotia and ditching these tax rebate programs?

January 17

Can’t Get No Satisfaction (Data)

Many of you will have heard by now that the Globe and Mail has decided not to continue its annual student survey, which we at HESA ran for the last three years.  The newspaper will continue publishing the annual Canadian University Report, but will now do so without any quantitative ratings.

Some institutions will probably greet this news with a yawn, but for others the development represents a real blow. A number of institutions based large parts of their marketing campaigns on the satisfaction data, and the loss of this data source makes it more difficult for them to differentiate themselves.

When the survey started a decade ago, many were skeptical about the relevance of satisfaction data. But slowly, as year followed year and schools more or less kept the same scores in each category, people began to realize that satisfaction data was pretty reliable, and might even be indicative of something more interesting. And as it became apparent that satisfaction scores correlated reasonably well with things like “student engagement” (basically: a disengaged student is an unhappy student), it also became clear that “satisfaction” was an indicator that was both simple and meaningful.

Sure, it wasn’t a perfect measure. In particular, institutional size clearly had a negative correlation with satisfaction. And there were certainly some extra-educational factors that tended to affect scores, whether students’ own personalities or just geography – Toronto students, as we know, are just friggin’ miserable, no matter where they’re enrolled. But, when read within its proper context (mainly, by restricting comparisons to similarly-sized institutions), it was helpful.
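As an illustration of that last point, here is a minimal sketch – with entirely made-up institutions and scores – of what reading the data “within its proper context” can mean in practice: each institution’s score is compared only against others in the same enrolment band.

```python
# Illustrative sketch only: compare satisfaction scores within enrolment bands.
# Institutions, enrolments and scores below are invented for the example.
from statistics import mean

results = [
    ("Small College A", 3_000, 8.4),
    ("Small College B", 4_500, 8.1),
    ("Big University C", 35_000, 7.2),
    ("Big University D", 42_000, 7.0),
]

def size_band(enrolment):
    if enrolment < 5_000:
        return "small"
    if enrolment < 20_000:
        return "medium"
    return "large"

# Group scores by band, then report each institution against its own peer group.
bands = {}
for name, enrolment, score in results:
    bands.setdefault(size_band(enrolment), []).append(score)

for name, enrolment, score in results:
    band = size_band(enrolment)
    print(f"{name}: {score} vs. {band}-institution average {mean(bands[band]):.1f}")
```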

Still, what made the data valid and useful to institutions was precisely what eventually killed it as a publishable product.  The year-to-year reliability assured institutions that something real was being measured, but it also meant that new results rarely generated any surprises.  Good headlines are hard to come by when the data doesn’t change much, and that poses a problem for a daily newspaper.  The Globe stuck with the experiment for a decade, and good on them for doing so; but in the end, the lack of novelty made continuation a tough sell.

So is this the end of satisfaction ratings? A number of institutions that use the data have contacted us to say that they’d like the survey to continue. Over the next week or so, we’ll be in intensive talks with institutions to see if this is possible. Stay tuned – or, if you’d like to drop us a note with your views, you can do so at info@higheredstrategy.com.

January 20

Graduate Surveys We’d Like to See

If there’s one type of Canadian educational survey where complete and utter stasis has set in, it’s graduate surveys. Questions like “are you employed,” “what are your earnings,” and “were you satisfied with your education” aren’t just boring; I think they’re actively making us stupider. There seems to be a general view that, because the answers to these questions don’t change very much from year to year, we’re doing as good a job as we ever have.

But labour market results aren’t achieved in a vacuum. Economic conditions (both global and local) play a role, as do demographics. Canada’s labour force, which has been increasing in size since WWII, is predicted to plateau in the next couple of years and then decline slightly thereafter. As employers get desperate for workers, they’ll take anyone (think about Alberta fast-food workers making $17/hour in the boom years); in those conditions, low levels of graduate unemployment can’t be taken as evidence of educational excellence.

In future, universities and colleges are going to be judged on how they make students more productive, not on whether their graduates are employed. That means institutions will need to dig a lot deeper to figure out how students acquire competencies and then put them to use. Surveys can be helpful in working out which elements of a student’s education proved useful and which didn’t. Graduates – even those from disciplines that aren’t vocationally oriented (e.g., the humanities) – have a pretty good sense of which of their courses were useful and which were decorative. Identifying the courses (and professors!) that graduates in the labour force rate highly can be an enormously powerful tool in curriculum revision.

So, here’s a suggestion for graduate surveys: let’s ease up on the strictly quantitative stuff. The next time you do a survey of graduates, don’t ask them if they were satisfied with their education – ask them which class contributed most to their success in the job market. Don’t ask whether they’d recommend a university to a friend – ask them what missing skills they most wished they’d got before leaving school. Trust me, the answers will be revealing.

Finally: stop wasting information, and link individual graduates’ surveys to their student records. It’s not as time-consuming and expensive as you think, and it vastly increases the explanatory power of the available data.
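To make the record-linkage point concrete, here is a minimal sketch, with invented identifiers and made-up numbers, of what linking graduate-survey responses to student records can look like; the only real requirement is a shared student identifier across the two files.

```python
# Illustrative sketch: join hypothetical graduate-survey responses to a
# hypothetical student-record extract on a common student_id.
import pandas as pd

# Hypothetical survey responses (in real life, read from the survey data file).
survey = pd.DataFrame({
    "student_id": [101, 102, 103],
    "employed": [True, True, False],
    "earnings": [52000, 61000, 0],
    "most_useful_course": ["STAT 201", "ENGL 310", "HIST 140"],
})

# Hypothetical student-record extract (program, entry average, co-op participation).
records = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "program": ["Economics", "English", "History", "Biology"],
    "entry_average": [84, 79, 88, 91],
    "coop": [True, False, False, True],
})

# One merge on the shared identifier and the survey suddenly explains much more:
linked = survey.merge(records, on="student_id", how="inner")
print(linked.groupby(["program", "coop"])["earnings"].median())
```

Once the link exists, labour-market outcomes can be read against what actually happened during the degree – program, grades, co-op and the rest – rather than floating free of it.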

As I mentioned Monday, Canadian institutions underwent a data revolution in the late 1990s and early 2000s, but unfortunately a benchmarking agenda took over and the discovery agenda was put to the side. But as we’ve shown over the past three days, it doesn’t have to be that way. Better surveys are possible; we just need to design and execute them.

Let’s do it!

January 19

Faculty Workload Surveys We’d Like to See

While we’re on a roll about surveys, let me muse about one that I think many people in this country would like to see: a survey of academic staff and their workloads.

There is a lot of talk about teaching loads, particularly in comparison with those of professors in other countries who get paid less (notably the United States). The problem is that we’re dealing in anecdotes; institutions unfortunately don’t publish hard data on teaching loads, either of the notional or the real variety (and there is a considerable difference between the two). It used to be that you could sort of work this out on your own, by hand, with a course calendar, but those aren’t published anymore, so we need different ways of getting at this information.

One solution would be for Statistics Canada to add one or two fields to its annual survey of full-time staff. Right now, all it asks about is field of study, age, rank, and salary. It wouldn’t be an enormous stretch to ask about the number of courses or total credit hours taught.

But why wait for Statscan? Here’s a simple little survey that VP Academics, or Provosts (or whatever they’re called in your neck of the woods) could fill out in a few minutes that would tell us everything we need to know:

[Table: Proportion of Tenured and Tenure-Track Professors Teaching Various Course Loads (rows to add to 100%)]

No big database searches or anything – just “what’s your best guess about proportions of faculty in each category?” It doesn’t need to be 100% accurate – just give a rough idea.
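To show how little is being asked for, here is a minimal sketch of that grid as a data structure, with hypothetical row and column categories (the post doesn’t specify them) and a simple check that each row adds to roughly 100%.

```python
# Illustrative sketch only: the proposed workload grid as a plain Python structure.
# Course-load bands and faculty groupings are invented for the example.
COURSE_LOAD_BANDS = ["0-1 courses", "2 courses", "3 courses", "4 courses", "5+ courses"]

# One hypothetical institutional response, one row per faculty/division,
# each cell a best-guess percentage of tenured/tenure-track professors.
response = {
    "Arts":    {"0-1 courses": 5,  "2 courses": 20, "3 courses": 40, "4 courses": 30, "5+ courses": 5},
    "Science": {"0-1 courses": 10, "2 courses": 35, "3 courses": 40, "4 courses": 10, "5+ courses": 5},
}

def check_rows(resp, tolerance=5):
    """Rough-idea data is fine: flag only rows that are far from 100%."""
    for row_name, cells in resp.items():
        total = sum(cells.get(band, 0) for band in COURSE_LOAD_BANDS)
        if abs(total - 100) > tolerance:
            print(f"{row_name}: adds to {total}%, please re-check")

check_rows(response)
```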

The results would tell us a lot, wouldn’t they? Especially if you could do it every couple of years and create a time series. Yet, if I were to actually send out that survey, and ask people to fill it in, it’s a dead certainty that almost none would do so, even if given a guarantee of institutional anonymity. Most would find reasonable-sounding rationales for refusing, but really they’d just be afraid of the backlash if the data were ever published.

That, I would suggest, is indicative of a much larger problem: universities are not confident of their ability to explain their labour practices to the general public. But that’s no reason someone shouldn’t try out what would definitely be a very cool survey.

January 18

Student Surveys We’d Like to See

Surveys of current students tend to focus on just a few areas. Apart from questions about demographics and time use, they ask a lot of specific questions about satisfaction with student services along with a few general questions about overall satisfaction.

This is odd because, at the end of the day, students don’t actually think student services are central to the overall quality of their PSE experience. What they care about first and foremost is the quality of the teaching they receive. Yet institutional surveys seem determined to avoid asking all but the most banal questions about teaching.

Sure, we have course evaluations. But these are used exclusively (if sparingly – but that’s another story) by departments. The data are never used to learn what kinds of teaching methods work better than others, and never linked to demographic data to see whether there are patterns connecting satisfaction or reported learning to student background, the amount of paid work a student does, and so on. They are, in short, a massive lost opportunity.

What about the National Survey of Student Engagement (NSSE)? Well, despite allegedly being outcome-related, NSSE insists on treating a student’s entire class schedule as a single observation. It does ask how often students “work in teams” or “make presentations in class,” but most students have a mix of classes, some of which have these elements and some of which don’t. If you’re trying to understand how different teaching styles affect students, this stuff needs to be broken out class by class. NSSE, for all its cost, is essentially useless for this purpose.
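In data terms, “broken out class by class” simply means one record per student-class pair rather than one per student. Here is a minimal sketch, with invented fields and made-up responses, of what that long-format structure allows.

```python
# Illustrative sketch: per-class records let a teaching practice be compared
# across the classes that do and don't use it, instead of being averaged over
# a student's whole schedule. All fields and values are hypothetical.
import pandas as pd

rows = [
    {"student_id": 1, "class": "ECON 101", "size": 400, "team_work": False, "presentations": False, "rating": 3},
    {"student_id": 1, "class": "HIST 220", "size": 35,  "team_work": True,  "presentations": True,  "rating": 5},
    {"student_id": 2, "class": "ECON 101", "size": 400, "team_work": False, "presentations": False, "rating": 2},
    {"student_id": 2, "class": "BIOL 150", "size": 120, "team_work": True,  "presentations": False, "rating": 4},
]
per_class = pd.DataFrame(rows)

# Average rating for classes with and without team work, something a single
# whole-schedule observation per student simply cannot show.
print(per_class.groupby("team_work")["rating"].mean())
```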

Getting this data isn’t rocket science. Here at HESA, we regularly do surveys that ask about the details of each of a student’s classes. That’s how we look at class size, how we found out about the characteristics of students’ favourite and least-favourite classes, and how we learned about the effectiveness of e-learning resources in Canadian universities. If we can do it, so can universities and colleges.

From a public policy perspective, the reluctance to look harder at what makes for good teaching looks deeply suspicious. Teaching is what actually matters. It’s why institutions receive tax dollars. Finding out what kinds of pedagogical strategies are most effective should be a moral imperative for anyone who benefits from that amount of public support. But even if that weren’t true, having better knowledge about what works and what doesn’t in teaching is potentially an enormous source of strategic advantage for institutions.

But you can’t act on that advantage until you start collecting the data. So, enough with the student services questions; let’s start asking students real questions about teaching and learning.

January 17

Applicant Surveys We’d Like to See

I’ve always been a bit intrigued by the continuing popularity of applicant surveys. What is it that people expect to see in this year’s results that wasn’t there last year?

There are basically three sets of research questions at the heart of current applicant surveys: who is applying (i.e., the social/ethnic composition of the applicant pool), what tools students are using to acquire information about institutions, and what students say they are looking for in an institution.

The “who applies” question is an important one, but it’s not one that needs to get asked more than about once every three or four years. The data simply doesn’t change that fast.

At the margin, the “information tools” question does vary a bit with changes in technology. But in essence, the answer to this question is always the same: parents and friends at the top, schools and guidance counselors in the middle, followed by institutional websites, Maclean’s and the Globe and Mail, in more or less that order.

(Which is a disaster, of course; some of our recent research on the related topic of student financial aid found that students who relied on information from parents and friends were actually less knowledgeable about the topics at hand than people who had not sought any information at all. Yikes.)

The important question isn’t “where did students get their information?” but “what is it that students think they know about institutions?” Institutions need to have a much better sense of their own brand’s state of play if they are going to do anything useful about it. Sure, you can ask applicants whether they think particular universities have “prestige researchers” or offer “a safe environment,” but their answers are mostly ex-ante rationalizations, so why bother?

Bluntly, we know virtually nothing about the process of choice formation. How do applicants develop interests in particular fields of study? How do they go about forming a choice set? How much do they understand about the differences between particular institutions? When do they become aware of institutional stereotypes (e.g., Western = “party school,” Trent = tree-huggers, etc.), and to what extent do these affect choice?

Admittedly, none of this is easy to get at through a survey. Some of these questions are inevitably qualitative (though analytical software is making post-survey coding ever easier), and even the material that lends itself to quantitative analysis would require a lot of focus-group work to make sense of.

But good things require work. When it comes to recruiting better students, getting a jump on competitors by gathering data that gets inside the decision-making process is a lot more productive than checking whether the proportion of applicants saying they are looking for universities with “an emphasis on teaching” has moved another percentage point.

January 16

No More Boring Surveys

As most of you probably know, we at HESA spend a lot of our time working on surveys. While doing so, we see a lot of different types of survey instruments, especially from governments and institutions. And we’ve come to a major conclusion:

Most of them are really boring.

There was a time – say, fifteen years ago – when doing surveys of applicants, graduates and alumni was relatively rare. There weren’t any surveys of satisfaction, or engagement, or anything else, really. We knew essentially nothing about the composition of the student body, their background, their finances or their experiences once they were there. Apart from the National Graduates Survey that Statistics Canada put out every four years, there was really almost nothing out there to tell us about students.

Things started to change in the late 1990s and early 2000s. Statscan and HRDC between them put the Youth in Transition Survey (YITS) into the field, along with the Post-Secondary Education Participation Survey (PEPS) and the Survey of Approaches to Educational Planning (SAEP) (the latter two have since been subsumed into the ASETS survey). A group of twenty or so institutions banded together to create the undergraduate survey consortium (CUSC); other institutions began signing on to a private-sector initiative (from the company that later became Academica) to look at data about applicants. Provincial governments began surveying graduates more regularly, too, and the Millennium Scholarship Foundation spurred some developments in looking at student financing and at students in private vocational colleges. That’s not to forget the post-2004 NSSE boom and the plethora of smaller institutional surveys now being conducted.

This explosion of activity – born of a combination of increasing policy interest in the field and decreasing survey costs – was all to the good. We learned a lot from these surveys, even if the insights they generated weren’t always used to improve programming and policy as much as they might have been.

But it’s getting really stale. Now that we have these surveys, we’ve stopped asking new questions. All anyone seems to want to do is keep re-running the same questions so we can build up time-series. Nothing wrong with time-series, of course – but since change in higher education comes at such a glacial pace, we’re wasting an awful lot of time and money measuring really tiny incremental changes in student engagement and the like rather than actually learning anything new. Where ten years ago we were discovering things, now we’re just benchmarking.

It doesn’t have to be this way. Over the next four days, we’ll be giving you our suggestions for how to change Canadian PSE surveys. It’s time to start learning again.