Higher Education Strategy Associates

Category Archives: satisfaction

August 11

Improving Career Services Offices

Over the last few years, what with the recession and all, there has been increased pressure on post-secondary institutions to ensure that their graduates get jobs.  Though that’s substantially the result of things like curriculum and one’s own personal characteristics, landing a job also depends on being able to get interviews and to do well in them.  That’s where Career Services Offices (CSOs) come in.

Today, HESA released a paper that looks at CSOs and their activities.  The study explores two questions.  The first deals specifically with university CSOs: what qualities and practices are associated with offices that receive high satisfaction ratings from their students?  The second deals with college career services – here we did not have an outcome measure like the Globe and Mail's satisfaction ratings, so we focussed on a relatively simple question: how do their structure and offerings differ from what we see in the university sector?

Let’s deal with that second question first: college CSOs tend to be smaller and less sophisticated than those at universities of the same size.  At first glance, that seems paradoxical – these are career-focussed organizations, aren’t they?  But the reason for this is fairly straightforward: to a large extent, the responsibility for making connections between students and employers resides at the level of the individual program rather than with some central, non-academic service provider – a lot of what takes place in a CSO at universities takes place in the classroom at colleges.

Now, to universities, and the question: what makes for a good career services department?  To answer it, we interviewed CSO staff at high-, medium- and low-performing institutions (as measured by the Globe and Mail’s pre-2012 student satisfaction surveys) to try to work out what practices distinguished the high-performers.  So what distinguishes the really good career services offices?  Turns out that the budget, staff size, and location of Career Services Offices aren’t really the issue.  What really matters are the following:

  • Use of Data.  Everybody collects data on their operations, but not everyone puts it to good use.  What distinguishes the very best CSOs is that they have an effective, regular feedback loop to make sure insights in the data are being used to modify the way services are delivered.
  • Teaching Job-seeking Skills.  Many CSOs view their mission as making as many links as possible between students and employers.  The very best-performing CSOs find ways to teach job search and interview skills to students, so that they can more effectively capitalize on any connections.
  • Better Outreach Within the Institution.  It’s easy to focus on making partnerships outside the institution.  The really successful CSOs also make partnerships inside it.  One of the key relationships to nurture is with academic staff.  Students, for better or for worse, view profs as frontline staff and ask them lots of questions about things like jobs and careers.  At many institutions, profs simply aren’t prepared for questions like that, and don’t know how to respond.  The best CSOs take the time to reach out to professors and partner with them, so that profs have the tools to answer those questions and to direct students to the right resources at the CSO.

If you want better career services, there’s your recipe.  Bonne chance.

March 18

Comparing Outcomes Across Credentials

I was doing some random websurfing the other day and I came across the BC Student Outcomes Page, which makes freely available an absolute cornucopia of data on its graduates.  BC has a seriously decent survey set-up, in that they do surveys of each graduating class, every year – universities, colleges, apprenticeships, you name it.  Actually, it’s probably overkill, but for data nerds like me it’s absolute heaven.

Anyways, BC surveys all its graduates between 9 and 20 months after graduation (not ideal, I know, because a lot can happen in that period), and asks them about their satisfaction with their program, how they rate the usefulness of the skills they gained, and their employment status.  Given all the talk going on about shifting labour markets, and the need for greater emphasis on skills training, yadda yadda, I thought I would line up the combined results for the 2009, 2010, and 2011 surveys – that is, the years that cover the recent period of elevated unemployment – to see how people with each of the three credentials rated their education.  (DataBC released the pre-compiled results, here.)

First, satisfaction: how happy are Bachelor’s graduates, Diploma/Associate Degree/Certificate graduates (which, for the sake of convenience, I’ll call “Sub-Bachelor’s” graduates), and apprenticeship graduates, with the education they received?  Well, the answer to that question is so dull I’m not even going to post a graph (lest I be accused of doing this).  Across the board, 94% said they were satisfied or very satisfied.  The only sub-group that stood out was Bachelor’s-level students in Education, where the percentage was 89%.

The percentage of students saying they gained useful skills and knowledge is a bit more interesting.

BC Graduates Rating Knowledge/Skills Received in Their Program as “Useful” or “Very Useful”, By Level of Credential, 2009-2011

That apprentices are most likely to rate the knowledge/skills obtained in their programs as being useful is unsurprising, but the other two sets of numbers are more interesting.  In Bachelor’s programs, all programs except Arts/Science (78%) and Visual/Performing Arts (81%) are in the high-80s or low-90s.  In sub-bachelor’s programs it’s a similar story, with General Arts and Science (53%) and Visual/Performing Arts (63%) pulling down the other averages, which sit in the high-70s to low-90s.  Takeaway: provision of useful skills in BC colleges is pretty uneven.

Now, here’s the killer stat: unemployment rates.

Unemployment Rates of Recent Graduates, By Level of Credential, 2009-2011

Yes, really.  Break it down even further and you get even more interesting numbers: Visual/Performing Arts BAs – 8%; Arts and Science BAs/BScs – 9%; Construction trades apprentices – 11%.

Now, it’s great – obviously – that people are taking apprenticeships and skilled trades more seriously these days.  But this meme about how undergraduate degrees are inferior to other forms of education in terms of skills and job outcomes?  It’s factually incorrect.  It’s got to stop.

January 17

Can’t Get No Satisfaction (Data)

Many of you will have heard by now that the Globe and Mail has decided not to continue its annual student survey, which we at HESA ran for the last three years.  The newspaper will continue publishing the annual Canadian University Report, but will now do so without any quantitative ratings.

Some institutions will probably greet this news with a yawn, but for a number of others, the development represents a real blow.  Several institutions based a large part of their marketing campaigns on the satisfaction data, and the loss of this data source makes it more difficult for them to differentiate themselves.

When the survey started a decade ago, many were skeptical about the relevance of satisfaction data.  But slowly, as year followed year and schools more or less kept the same scores in each category, people began to realize that satisfaction data was pretty reliable, and might even be indicative of something more interesting.  And as it became apparent that satisfaction scores correlated reasonably well with things like “student engagement” (basically: a disengaged student is an unhappy student), it also became clear that “satisfaction” was an indicator that was both simple and meaningful.

Sure, it wasn’t a perfect measure.  In particular, institutional size clearly had a negative correlation with satisfaction.  And there were certainly some extra-educational factors which tended to affect scores, be it students’ own personalities, or even just geography – Toronto students, as we know, are just friggin’ miserable, no matter where they’re enrolled.  But, when read within its proper context (mainly, by restricting comparisons to similarly-sized institutions), it was helpful.

Still, what made the data valid and useful to institutions was precisely what eventually killed it as a publishable product.  The year-to-year reliability assured institutions that something real was being measured, but it also meant that new results rarely generated any surprises.  Good headlines are hard to come by when the data doesn’t change much, and that poses a problem for a daily newspaper.  The Globe stuck with the experiment for a decade, and good on them for doing so; but in the end, the lack of novelty made continuation a tough sell.

So is this the end of satisfaction ratings?  A number of institutions that use the data have contacted us to say they’d like the survey to continue.  Over the next week or so, we’ll be in intensive talks with institutions to see if this is possible.  Stay tuned – or, if you’d like to drop us a note with your views, you can do so at info@higheredstrategy.com.

May 18

Size Matters

Did this make it through your spam filters? Here’s hoping.

Ok, so everyone knows from even a casual glance at student satisfaction data that there’s a correlation between institutional size and satisfaction. Students, on average, prefer small schools to big schools. Makes them feel more at home. Less of a leap from high school. The accompanying small class sizes don’t hurt either (the relationship between size and satisfaction is of course one reason why the Globe and Mail’s Canadian University Report takes care to separate institutions by size before comparing them).

As the graph below shows, on a nine-point scale, a university tends to lose about a tenth of a point for every extra 6,000 students it enrols. That sounds small, but when you consider that all but two institutions in Canada receive a score of between six and eight, even small differences can have a big effect on where schools place relative to one another.

Relationship between School Satisfaction and Institutional Enrolment

So, what would happen if you normalized satisfaction scores for size? That is to say, what if, instead of measuring an absolute value of satisfaction, we measured the distance from that nice little diagonal trend line?

The answer is that not much would change for some institutions – especially those that already have enrolments in the 10-15,000 range – but there would be some very big adjustments for some institutions at the top and the bottom.
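
Mechanically, that adjustment is nothing fancier than fitting the trend line and keeping each school's leftover. Here's a minimal sketch in Python; the school names, enrolments and scores are invented for illustration (only the roughly 0.1-points-per-6,000-students slope echoes the chart above), so don't read anything into the output:

```python
import numpy as np

# Illustrative data only -- not the actual CUR scores.
schools = ["Small U", "Mid-size U", "Big U"]
enrolment = np.array([5_000, 25_000, 70_000])
satisfaction = np.array([7.8, 7.1, 6.6])  # nine-point scale

# Fit the diagonal trend line: satisfaction as a linear function of
# enrolment.  With numbers like these the slope works out to roughly
# -0.1 points per 6,000 extra students.
slope, intercept = np.polyfit(enrolment, satisfaction, 1)

# A size-normalized score is the distance from that line:
# positive = happier than you'd expect for a school that size.
residuals = satisfaction - (slope * enrolment + intercept)

# Re-rank by residual instead of by raw score.
for name, raw, res in sorted(zip(schools, satisfaction, residuals),
                             key=lambda x: -x[2]):
    print(f"{name}: raw {raw:.1f}, size-adjusted {res:+.2f}")
```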

At one end of the scale, there are a number of institutions whose poor performance in satisfaction rankings can be ascribed mainly (but not entirely) to the size of the institution – they actually do as well as (or maybe even better than) expected, given their size. With size-normalized scores, the University of Toronto would jump 23 places in the table out of 62 institutions. UBC would rise 19 places, the University of Alberta 17, McGill 16, Concordia 15 and Ryerson 11.

At the other end of the scale it’s a different story: there are a number of institutions whose position at or near the top of the table is due, to a large degree, to their size alone. Normalized for enrolments, UPEI would drop 14 places; Cape Breton, Brandon and UNBC would drop 11 places; while Thompson Rivers and Nipissing would drop 10 places each.

You can make an argument either way about whether normalizing for size is the right way to display the data. Clearly, students like smallness, and that needs to be reflected somehow. But then again, you also want to display the data in a way that rates institutions based on what they do, not on how big they are. I’d be interested to hear from anyone with ideas on how to do both in the next CUR.

February 24

Looking Like You Care About Undergraduates

In our annual Globe Survey, we ask students to describe, using an 11-point scale, the extent to which their school is geared towards serving graduate students or undergraduates. As you’d expect, undergraduates tend to be slightly more satisfied (y-axis) with their schools the more undergraduate-focused they perceive them to be. It’s not a huge effect, and it’s presumably correlated to some degree with size, but it’s there.

Figure 1: Satisfaction as a Function of Perceived Grad-centricness

What’s really interesting, though, is that the degree to which students believe their institution to be “graduate centric” and the degree to which it actually is graduate-centric (as measured by the percentage of the student body that is enrolled in graduate studies) can differ substantially, as shown in Figure 2.

Figure 2: Perceptions of Grad-centricness vs. Proportion of Students Enrolled in Graduate Programs

The y-axis in Figure 2 is students’ perceptions of grad-centredness, the x-axis is the percentage of the student body enrolled in graduate programs, and the upward-sloping trend line indicates a general positive correlation of the two. Institutions below the line are institutions that are perceived as being less graduate-centric than they actually are; those above the line are perceived as being more graduate-centric than they actually are.
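
If you'd rather not eyeball the chart, the above/below call is easy to compute: fit the trend line, then compare each school's perceived score to what its actual graduate share predicts. A quick sketch with invented schools and numbers (nothing here is real CUR data):

```python
import numpy as np

# Invented numbers: perceived grad-centricness vs. the actual share of
# enrolment in graduate programs.
schools = ["School A", "School B", "School C", "School D"]
perceived = np.array([3.2, 6.5, 4.0, 7.8])
grad_share = np.array([0.04, 0.10, 0.18, 0.15])  # proportion of student body

# The upward-sloping trend line from Figure 2: predict perception from
# the actual graduate share.
slope, intercept = np.polyfit(grad_share, perceived, 1)
predicted = slope * grad_share + intercept

for name, actual, pred in zip(schools, perceived, predicted):
    position = "above" if actual > pred else "below"
    print(f"{name}: perceived {actual:.1f}, predicted {pred:.1f} -> {position} the line")
```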

A few intriguing points here:

1) Students equate size with a lack of concern for undergraduates, regardless of the actual presence of graduate students. Small schools almost without exception score below the line, whatever their graduate populations. This is spectacularly so in the case of Mount St. Vincent, which does an astonishing job of disguising the fact that it is the third most graduate-intensive school in the country (I know it’s mostly M.Ed. students, but they still count).

2) Concordia is the only really large school that lands well below the line (kudos to David Graham and his team!); Laurier and Sherbrooke, though somewhat smaller, also deserve mention for sitting a long way below it. Queen’s is the only U-15 school that makes it below the line.

3) Queen’s and Laurier apart, nearly all Ontario schools are above the line – even places like Brescia and OCAD. One wonders whether Ontario schools’ constant chasing of prestige via research isn’t actually hurting some school brands which could be better served by a more undergraduate focus.

4) The U of T suburban campuses are suffering from a serious disconnect. There’s very little graduate work being done on these campuses, yet students there are convinced that their institutions are focused on graduate students.

It is clearly possible to have lots of graduate students without alienating one’s undergraduates. Excellence in both areas is a sweet spot more institutions should aim for.

February 22

Best and Worst Student Experiences

Most of you know that we at HESA do the data collection and analysis for the Globe and Mail’s Canadian University Report. But what we do with that data is much more than just give scores to each institution. We also spend a lot of our time mining the data for all it’s worth, looking for insight into the student experience (and not just into those miserable Toronto students).

Today we’d like to look at how students describe their best and worst academic experiences.

Now, you might think that these two things are pretty similar: one being the obverse of the other. Good profs = best experience, bad profs = worst experience, etc. But it turns out it’s not quite so simple.

Take technology and facilities, for instance. Students almost never cite great tech as being part of their best experience; they do, however, routinely describe bad tech as a “worst experience.” Likewise, they almost never describe a well-organized course or professor as a best experience, but a disorganized one regularly shows up as a worst experience.

When students talk about what made their experience at university so great, they tend to describe things like personal and intellectual growth and forming relationships (both with other students and professors). When they talk about worst experiences, they often talk about particular episodes or incidents. In other words, the good stuff takes months or years to build, the bad stuff is quicker and more transactional.

It’s an old rule of customer service – a positive reputation can take a long time to cultivate, but a bad reputation can be gained in a minute. Turns out it’s the same thing in higher education. It’s a lesson that we should all take to heart.

November 25

Grades, Satisfaction and Miserable Toronto Students

It’s been noted many times (here, for instance) that professors who give easy As tend to do better on course evaluations than those who don’t. But does this work at the institutional level as well?

It’s hard to tell directly because all institutions essentially grade on the same curve. But we can get at it indirectly by looking at the gap between high school and university grades, which does vary significantly – at more selective institutions, students see a drop; at less selective ones their grades tend to get better.

For this analysis we use self-reported data on grades. Now, we know that skeptics say that this is bad methodology because asking students to self-report on grades is like asking men on a dating site to report on their height or income – all three tend to rise in the telling. But what we’re looking at here is change in reported grades. As long as any exaggeration is consistent across time, the exaggerations should cancel each other out. For math-heads, this can be expressed as:
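
(In rough notation, where G_hs and G_uni are a student's true high school and university grades, and b is whatever constant amount they pad both by when reporting them:)

\Delta_{\text{reported}} = (G_{\text{uni}} + b) - (G_{\text{hs}} + b) = G_{\text{uni}} - G_{\text{hs}} = \Delta_{\text{true}}

The padding drops out of the difference, which is the only quantity we use.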

Onwards to look at our sample from the Globe and Mail. We start by arranging the changes in reported grades between high school and university in bands and looking at average satisfaction in each band. It turns out that there is very little change in satisfaction levels unless students see a very large drop in marks (13% or worse).
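
If you want to replicate the banding, it's just a cut-and-group. A sketch with made-up grades and satisfaction scores (the 13%-or-worse cut-off comes from the analysis above; the other band edges here are arbitrary):

```python
import pandas as pd

# Made-up data: self-reported grades (%) and satisfaction on a nine-point scale.
df = pd.DataFrame({
    "hs_grade":     [88, 92, 85, 90, 95, 82],
    "uni_grade":    [80, 90, 72, 88, 78, 81],
    "satisfaction": [ 7,  8,  5,  8,  4,  7],
})

# Change in reported grades from high school to university, then band it
# and take average satisfaction within each band.
df["change"] = df["uni_grade"] - df["hs_grade"]
bands = pd.cut(df["change"],
               bins=[-100, -13, -7, 0, 100],
               labels=["-13 or worse", "-12 to -7", "-6 to 0", "improved"])
print(df.groupby(bands, observed=True)["satisfaction"].mean())
```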

Figure 1: Satisfaction by Change in Grades from High School to University

Loyal readers will know where this is going. Guess which city has an abnormally high proportion of students whose grades drop precipitously once they get to university?

Figure 2: Percentage of Students with a Drop in Grades of 13% or Worse

How big a difference does this make to satisfaction? Well, check out the gap in satisfaction between students with large grade drops and everyone else at Toronto institutions; at Mississauga, the difference is greater than one standard deviation.

Figure 3: Average Satisfaction, Students with Large Mark Drop-Offs vs. Others, Toronto

Oddly, when we look at the five institutions elsewhere in Canada with the most students experiencing large drops in marks, we don’t see anything like the same drop in satisfaction, to wit:

Figure 4: Average Satisfaction, Students with Large Mark Drop-Offs vs. Others, Not Toronto

It’s not quite a story about big fish from little ponds getting shocked by the jump to a larger pool. It’s that big fish from Toronto ponds are both likelier to feel a shock and likelier to feel a whole lot worse about the jump than fish from elsewhere. A simple case of Torontonians’ elevated sense of entitlement? Maybe. Or maybe Toronto is just a more ruthless environment, with higher social penalties for poor performance.

November 11

Why are Toronto Students so Friggin’ Miserable (Part 5)

Last week in our series on student satisfaction (and Toronto students’ lack thereof) we looked at how students’ perception of institutional character – specifically, things like having applied curricula, seeming open to new ideas and offering a supportive environment – correlated with student satisfaction. This week, we’re still on the issue of character, but students’ own characters rather than those of their institutions.

The 2012 Canadian University Report survey asked students how much they agreed, on a one to nine scale, with a series of statements about themselves (e.g. “I am an athlete”). There were eight statements in total, corresponding to “athlete,” “political junkie,” “environmental activist,” “artist,” “technological guru,” “career oriented,” “studious” and “I like to live it up.” (We were unable to ask the more direct question – whether or not students would describe themselves as “liking to party” – because no institution wants to be labelled a “party school” as a result of the CUR).

It turns out that only three of these eight statements have any important relationship to overall satisfaction (Figure 1). The ultimate trifecta of satisfaction? A studious, career oriented student who likes to live it up; the average satisfaction of a student who rates themselves as a nine on all of these measures is 7.5 out of nine.

This, by the way, makes you wonder why any institution would want to avoid being called a party school, since such a status is likely to be associated with high levels of satisfaction. To at least some extent, the University of Western Ontario’s long-term success in earning top satisfaction ratings (which it likes to talk about) is a product of its status as a party school (which it would prefer not to talk about). They’re two sides of the same coin.

But back to our longer-term question: does any of this explain why Toronto students are so miserable? Can Toronto institutions’ low satisfaction levels be explained because students there are less likely to describe themselves as “career oriented,” “studious” or “liking to live it up”? Well, maybe a little bit.

Figure 2 shows the percentage of students who rate themselves as an eight or a nine (on a nine-point scale) on each of these three traits. Clearly, Toronto students are less likely to strongly identify with these three traits, but the gap isn’t huge – certainly not enough to explain the big gaps in satisfaction we see each year. That said, it’s worth noting that students at Ryerson – the one Toronto school that does reasonably well on the CUR’s satisfaction measure – are also the ones with the highest average scores for being career oriented and “liking to live it up” and the second-highest average scores for studiousness (behind OCAD).

More next week. Stay tuned.

– Alex Usher and Jason Rogers

November 04

Why Are Toronto Students so Friggin’ Miserable? (Part Quatre)

To date, we’ve been musing about the causes of Toronto students’ dissatisfaction. But let’s put the shoe on the other foot for a bit: what causes student satisfaction to begin with?

One thing we ask about in the Globe Canadian University Report survey is students’ perceptions of several dimensions of their institution’s character. For instance, we ask them whether they think their institution’s curriculum is more theoretical or applied, whether the institution is broadly based or focused on a few areas of study, etc. (to save space, we’ll spare you the methodology and wording caveats; contact us directly if you want to know more).

It turns out that students’ perceptions of certain dimensions of institutional character are remarkably closely correlated with satisfaction:

Here’s what Figure 1 tells us: students are more satisfied if they think their institution isn’t specializing in certain areas, but rather is spreading the (monetary) love about evenly across different fields of study. If they think their curriculum is practical/applied, they’re happier than if they think it’s theoretical. Students who see their schools as expecting them to be self-sufficient, or as cautious about new approaches and ideas, are miserable; by contrast, those who see their institution as “nurturing and supportive” or “open to new approaches” are extremely satisfied. Indeed, the range of satisfaction from one end of the scale to the other for both of these measures is over three points: that’s larger than the entire range of institutional satisfaction results in the 2012 Canadian University Report.

So, might these factors explain our Toronto problem? Are Toronto institutions seen as too theoretical, closed to new approaches, etc.?

It turns out that Toronto institutions aren’t rated differently from institutions elsewhere in terms of spreading their wealth, and are barely different in terms of being open to new ideas.  A theoretical curriculum is a big factor at St. George, but not for the rest of Toronto on average (though there is variation – applied curricula at Ryerson and OCAD offset theory at York and the U of T satellites). Only on the issue of being insufficiently nurturing/supportive of students – a measure inversely correlated with institutional size – is there a clear difference between Toronto schools and those elsewhere. But even this is less than it seems: Toronto’s numbers are being driven by the three U of T campuses and OCAD (all of which do badly even once size is taken into account); York’s score is about average, while Ryerson does exceptionally well on this measure.

In short, institutional characteristics are an important driver of satisfaction generally, but can only partially explain our Toronto conundrum.

The search continues next week. Stay tuned.

– Alex Usher and Jason Rogers

October 28

Why are Toronto Students so Friggin’ Miserable (Part 3)

So, back to our favourite hobby of delving into the causes of Toronto students’ misery. Today we’re looking at the issue of institutional size and asking the question: are Toronto schools Too Big Not to Fail?

(For those of you tired of hearing about Toronto, bear with us: you can learn a lot about satisfaction generally by following this series.)

First, to put this all in perspective: this year’s Canadian University Report data shows that Toronto students are really unhappy (Figure 1). On a nine-point scale, they rate their satisfaction 0.75 points lower than students from elsewhere in Canada. Given that no school receives an overall satisfaction score below 5.8 or above 8.2, this is a rather substantial difference (1.5 standard deviations, if your unit of analysis is university means).

One obvious suspect is size. Toronto has some of the largest institutions in Canada, and smaller schools generally do better on the CUR, as a quick glance at the results charts in this year’s edition (or at Figure 2) will tell you.

So, given that a majority of students in our sample attend massive schools like University of Toronto or York University, is “Colossal U” the barrier to Toronto’s satisfaction? A closer look at the data suggests not. Toronto actually has institutions in all four CUR size categories. While the size hypothesis could account for low satisfaction grades at U of T’s St. George campus (B-) or York (C+) (Ryerson’s B actually hugs the national average for schools its size), it hardly explains sentiments at Mississauga (C+) or Scarborough (C+). OCAD’s B-minus is perhaps the ultimate proof; it’s not bad for a Toronto school, but still the lowest score nationally among very small institutions. That’s what teaching Torontonians will do to you.

In short, Toronto’s misery is not concentrated in any one institution or even one type of institution; it’s spread among the big and the small alike, as Figure 3 demonstrates.

So much for that obvious explanation. Maybe the problem is that we’re asking the wrong question: instead of looking at sources of dissatisfaction, we need to take a harder look at what factors (other than size) are associated with satisfaction – in particular, the institutional and student characteristics that seem to affect it positively. Stay tuned.

– Alex Usher and Jason Rogers.

For the record, no actual Torontonians work at Higher Education Strategy Associates.
