Surveys of current students tend to focus on just a few areas. Apart from questions about demographics and time use, they ask a lot of specific questions about satisfaction with student services along with a few general questions about overall satisfaction.
This is odd because, at the end of the day, students don’t actually think student services are central to the overall quality of their post-secondary experience. What they care about first and foremost is the quality of the teaching they experience. Yet institutional surveys seem determined to avoid asking all but the most banal questions about teaching.
Sure, we have course evaluations. But these are used exclusively (if sparingly, though that’s another story) by departments. The data are never used to learn which teaching methods work better than others, and never linked to demographic data to see whether satisfaction or reported learning varies with student background, hours of paid work and so on. They are, in short, a massive lost opportunity.
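To make that lost opportunity concrete, here is a minimal sketch of the kind of linkage analysis course-evaluation data could support. Everything in it is hypothetical: the column names (student_id, eval_score, hours_paid_work), the toy records and the work-hour bands are all invented for illustration, and any real implementation would depend on an institution’s own data systems.

```python
import pandas as pd

# Hypothetical course-evaluation records: one row per student-course rating.
evals = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "course_id": ["HIST101", "MATH120", "HIST101", "CHEM110", "MATH120", "CHEM110"],
    "eval_score": [4.5, 3.0, 4.0, 2.5, 3.5, 3.0],  # satisfaction, 1-5 scale
})

# Hypothetical registrarial/demographic records: one row per student.
students = pd.DataFrame({
    "student_id": [1, 2, 3],
    "first_generation": [True, False, True],
    "hours_paid_work": [0, 20, 10],  # weekly hours of paid employment
})

# Link the two sources on a (pseudonymized) student identifier.
linked = evals.merge(students, on="student_id")

# Example question: does reported satisfaction differ by weekly paid work?
linked["work_band"] = pd.cut(
    linked["hours_paid_work"],
    bins=[-1, 0, 15, 40],
    labels=["none", "1-15 hrs", "16+ hrs"],
)
print(linked.groupby("work_band", observed=True)["eval_score"].agg(["mean", "count"]))
```

In practice this would require pseudonymized identifiers, ethics clearance and proper statistical controls, but the basic mechanics of the linkage really are this simple.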
What about the National Survey of Student Engagement (NSSE)? Well, despite allegedly being outcome-related, NSSE insists on treating a student’s entire class schedule as a single observation. It does ask how often students “work in teams” or “make presentations in class,” but most students have a mix of classes, some of which have these elements and some of which don’t. If you’re trying to understand how different teaching styles affect students, this stuff needs to be broken out class by class. NSSE, for all its cost, is essentially useless for this purpose.
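The unit-of-analysis point can be shown in a few lines. In the sketch below (all names and values are hypothetical, not NSSE’s actual items), a student-level record collapses pedagogy across the whole schedule, while a long format with one row per student-course pair keeps each class’s teaching features attached to the outcome for that class.

```python
import pandas as pd

# NSSE-style: one row per student, pedagogy summarized across all classes.
student_level = pd.DataFrame({
    "student_id": [1, 2],
    "often_works_in_teams": ["sometimes", "often"],  # whole-schedule impression
    "overall_satisfaction": [3.8, 4.1],
})

# Class-by-class: one row per student-course pair, so pedagogical
# features stay attached to the specific class they describe.
class_level = pd.DataFrame({
    "student_id": [1, 1, 2, 2],
    "course_id": ["BIO200", "ENG150", "BIO200", "PHIL101"],
    "uses_teamwork": [True, False, True, False],
    "uses_presentations": [False, True, True, False],
    "reported_learning": [4.2, 3.1, 4.5, 3.0],  # per-class outcome, 1-5 scale
})

# With per-class rows you can ask: do classes that use teamwork show
# higher reported learning? The student-level table cannot answer this.
print(class_level.groupby("uses_teamwork")["reported_learning"].mean())
```

The design choice is simply about granularity: once the observation is the student-course pair rather than the student, questions about teaching methods become answerable.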
Getting this data isn’t rocket science. Here at HESA, we regularly run surveys that ask about the details of each of a student’s classes. That’s how we look at class size; it’s how we found out about the characteristics of students’ favourite and least-favourite classes; and it’s how we learned about the effectiveness of e-learning resources in Canadian universities. If we can do it, so can universities and colleges.
From a public policy perspective, the reluctance to examine what makes for good teaching looks deeply suspicious. Teaching is what actually matters; it’s why institutions receive tax dollars. Finding out which pedagogical strategies are most effective should be a moral imperative for anyone who benefits from that level of public support. But even if that weren’t true, better knowledge about what works and what doesn’t in teaching is potentially an enormous source of strategic advantage for institutions.
But you can’t act on that advantage until you start collecting the data. So, enough with the student services questions; let’s start asking students real questions about teaching and learning.