Can’t Get No Satisfaction (Data)

Many of you will have heard by now that the Globe and Mail has decided not to continue its annual student survey, which we at HESA ran for the last three years.  The newspaper will continue publishing the annual Canadian University Report, but will now do so without any quantitative ratings.

Some institutions will probably greet this news with a yawn, but for others, the development represents a real blow.  A number of institutions based large parts of their marketing campaigns around the satisfaction data, and the loss of this data source makes it more difficult for them to differentiate themselves.

When the survey started a decade ago, many were skeptical about the relevance of satisfaction data.  But slowly, as year followed year and schools more or less kept the same scores in each category, people began to realize that satisfaction data was pretty reliable, and might even be indicative of something more interesting.  And as it became apparent that satisfaction scores correlated reasonably well with things like “student engagement” (basically: a disengaged student is an unhappy student), it also became clear that “satisfaction” was an indicator that was both simple and meaningful.

Sure, it wasn’t a perfect measure.  In particular, institutional size clearly had a negative correlation with satisfaction.  And there were certainly some extra-educational factors that tended to affect scores, be they students’ own personalities or simple geography – Toronto students, as we know, are just friggin’ miserable, no matter where they’re enrolled.  But, when read within its proper context (mainly, by restricting comparisons to similarly-sized institutions), it was helpful.

Still, what made the data valid and useful to institutions was precisely what eventually killed it as a publishable product.  The year-to-year reliability assured institutions that something real was being measured, but it also meant that new results rarely generated any surprises.  Good headlines are hard to come by when the data doesn’t change much, and that poses a problem for a daily newspaper.  The Globe stuck with the experiment for a decade, and good on them for doing so; but in the end, the lack of novelty made continuation a tough sell.

So is this the end of satisfaction ratings?  A number of institutions that use the data have contacted us to say they’d like the survey to continue.  Over the next week or so, we’ll be in intensive talks with institutions to see if this is possible.  Stay tuned – or, if you’d like to drop us a note with your views, you can do so at info@higheredstrategy.com.
