One of the main struggles with measuring performance in higher education – whether of departments, faculties, or institutions – is how to measure the quality of teaching.
Teaching does not go entirely unmeasured in higher education. Students rate individual courses through course evaluation surveys at the end of each semester, and the results do have some bearing on hiring, pay, and promotion (though how much bearing varies significantly from place to place). But these data are never aggregated to allow comparisons of quality of instruction across departments or institutions. That’s partly because faculty unions are wary of individual professors’ performance data being used as an input for anything other than pay and promotion decisions. But it also suits the interests of research-intensive universities, which do not wish to see the creation of a metric that would put them at a disadvantage vis-à-vis their less research-intensive brethren (which is also why course evaluations differ from one institution to the next).
Some people try to get around the comparability issue by asking students about teaching generally at their institution. In European rankings (and Canada’s old Globe and Mail rankings), many of which have a survey component, students are simply asked about the quality of the courses they take. This gets around the issue of using course evaluation data, but it doesn’t address a more fundamental problem: a large proportion of academic staff believes the whole process is inherently flawed because students are incapable of recognizing quality teaching when they see it. There is a bit of truth here: it has been established, for instance, that teachers who grade more leniently tend to get better course satisfaction scores. But this is hardly a lethal argument. Just control for average class grade before reporting the score.
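To make the “control for average class grade” point concrete, here is a minimal sketch of one way it could be done: regress satisfaction scores on average grades and report the residuals, re-centred on the overall mean. The course data and scales below are invented for illustration; a real analysis would use many more observations and likely additional controls.

```python
# Hypothetical illustration: adjusting raw course-satisfaction scores
# for grading leniency. All numbers are made up.

courses = [
    # (average class grade, 0-100 scale; raw satisfaction score, 1-5 scale)
    (85.0, 4.6),
    (78.0, 4.1),
    (70.0, 3.8),
    (88.0, 4.4),
    (65.0, 3.5),
]

n = len(courses)
mean_grade = sum(g for g, _ in courses) / n
mean_score = sum(s for _, s in courses) / n

# Ordinary least-squares slope of satisfaction on average grade:
# how much of the score is explained by lenient grading.
cov = sum((g - mean_grade) * (s - mean_score) for g, s in courses)
var = sum((g - mean_grade) ** 2 for g, _ in courses)
slope = cov / var

# Adjusted score: remove the grade effect, keeping the familiar scale
# by centring on the overall mean.
adjusted = [s - slope * (g - mean_grade) for g, s in courses]
```

Courses whose high satisfaction scores were propped up by easy grading are pulled down, and harder-grading courses get credit back, which is the substance of the adjustment.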
It’s not as though there isn’t a broad consensus on what makes for good teaching. Is the teacher clear about goals and expectations? Does he or she communicate ideas effectively? Is he or she available to students when needed? Are students challenged to learn new material and apply that knowledge effectively? Ask students those kinds of questions and you can get valid, comparable responses. The results are more complicated to report than a simple satisfaction score, sure – but it’s not impossible to do so. And because of that, it’s worth doing.
And even simple questions like “was this a good course?” might be more indicative than we think. The typical push-back is “but you can’t really judge effectiveness until years later”. Well, OK – let’s test that proposition. Why not ask students about a course they took a few years ago, and compare their answers with the ones they gave in the course evaluation at the time? If the two are completely different, we can indeed start ignoring satisfaction-type questions. But we might find that a good result today is in fact a pretty good proxy for results a few years on, in which case we would be perfectly justified in using it as a measure of teaching quality.
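The proposed test boils down to a simple correlation: pair each student’s end-of-course rating with the rating they give the same course years later, and see how strongly the two agree. A minimal sketch, with entirely invented data:

```python
# Hypothetical illustration of the proposed test. Each pair is one
# student's rating of the same course at two points in time.

paired = [
    # (score at course end, retrospective score years later), 1-5 scale
    (4.5, 4.2),
    (3.0, 3.4),
    (4.0, 4.1),
    (2.5, 2.8),
    (5.0, 4.6),
    (3.5, 3.2),
]

def pearson(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
    return cov / (sx * sy)

r = pearson(paired)
# A correlation near 1 would suggest end-of-course scores are a usable
# proxy for later judgments; a correlation near 0 would support the critics.
```

The point is that the skeptics’ claim is empirical, and the machinery needed to check it is this trivial.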
Students may be inexperienced, but they’re not dumb. We should keep that in mind when dismissing the results of teaching quality surveys.