How to Measure Teaching Quality

One of the main struggles with measuring performance in higher education – whether of departments, faculties, or institutions – is how to measure the quality of teaching.

Teaching does not go entirely unmeasured in higher education.  Individual courses are rated by students through course evaluation surveys, which occur at the end of each semester.  The results of these evaluations do have some bearing on hiring, pay, and promotion (though how much bearing varies significantly from place to place), but these data are never aggregated to allow comparisons of quality of instruction across departments or institutions.  That’s partly because faculty unions are wary of using individual professors’ performance data as an input for anything other than pay and promotion decisions, but it also suits the interests of research-intensive universities, which do not wish to see the creation of a metric that would put them at a disadvantage vis-à-vis their less-research-intensive brethren (which is also why course evaluations differ from one institution to the next).

Some people try to get around the comparability issue by asking students about teaching generally at their institution.  In European rankings (and Canada’s old Globe and Mail rankings), many of which have a survey component, students are simply asked questions about the quality of courses they are in.  This gets around the issue of using course evaluation data, but it doesn’t address a more fundamental problem, which is that a large proportion of academic staff essentially believes the whole process is inherently flawed because students are incapable of knowing quality teaching when they see it.  There is a bit of truth here: it has been established, for instance, that teachers who grade more leniently tend to get better course satisfaction scores.  But this is hardly a lethal argument.  Just control for average class grade before reporting the score.
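
To make that concrete, here’s a minimal sketch of the adjustment in Python (the data and column names are invented purely for illustration): regress raw evaluation scores on average class grade, and report the residual, i.e. the part of the score that grading leniency doesn’t explain.

```python
import numpy as np
import pandas as pd

# Hypothetical course-evaluation data: one row per course section.
# Column names and numbers are made up for illustration.
df = pd.DataFrame({
    "course":     ["A", "B", "C", "D", "E"],
    "eval_score": [4.2, 3.1, 4.6, 3.8, 4.0],  # mean satisfaction, 1-5 scale
    "avg_grade":  [82, 68, 88, 75, 79],       # mean final grade in the section
})

# Fit eval_score = b0 + b1 * avg_grade, then take the residual:
# the portion of the satisfaction score not explained by lenient grading.
b1, b0 = np.polyfit(df["avg_grade"], df["eval_score"], deg=1)
df["adjusted_score"] = df["eval_score"] - (b0 + b1 * df["avg_grade"])

print(df.sort_values("adjusted_score", ascending=False))
```

Residualizing against average grade is the crudest version of the control; a real exercise would presumably also want to account for class size, course level, required-versus-elective status, and so on.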

It’s not as though there isn’t a broad consensus on what makes for good teaching.  Is the teacher clear about goals and expectations?  Does he or she communicate ideas effectively?  Is he or she available to students when needed?  Are students challenged to learn new material and to apply that knowledge effectively?  Ask students those kinds of questions and you can get valid, comparable responses.  The results are more complicated to report than a simple satisfaction score, sure – but it’s not impossible to do.  And because of that, it’s worth doing.

And even simple questions like “was this a good course?” might be more indicative than we think.  The typical push-back is “but you can’t really judge effectiveness until years later”.  Well, OK – let’s test that proposition.  Why not ask students about a course they took a few years ago, and compare their answers with the ones they gave in the course evaluation at the time?  If the two are completely different, we can indeed start ignoring satisfaction-type questions.  But we might find that a good result today is in fact a pretty good proxy for results a few years on, in which case we would be perfectly justified in using it as a measure of teaching quality.
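
Running that test doesn’t require anything elaborate.  Here’s a minimal sketch in Python (the numbers are invented purely for illustration): pair each course’s contemporaneous evaluation score with its retrospective rating, and look at the correlation.

```python
import numpy as np

# Hypothetical paired ratings for the same six courses: the evaluation
# score each received at the time, and the rating alumni give it years
# later.  All numbers are invented for illustration.
at_the_time = np.array([4.5, 3.2, 4.0, 2.8, 4.8, 3.5])
years_later = np.array([4.3, 3.0, 3.7, 3.1, 4.6, 3.2])

# A high correlation means today's score is a decent proxy for the
# retrospective judgement; a correlation near zero means it isn't.
r = np.corrcoef(at_the_time, years_later)[0, 1]
print(f"test-retest correlation: r = {r:.2f}")
```

If that correlation came out high across a decent sample of courses, the “you can’t judge until years later” objection would lose most of its force.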

Students may be inexperienced, but they’re not dumb.  We should keep that in mind when dismissing the results of teaching quality surveys.


10 responses to “How to Measure Teaching Quality”

  1. “These data are never aggregated to allow comparisons of quality of instruction across departments or institutions.” I think you may have your higher-ed-equals-university goggles on again. At Humber College, and I expect at all the colleges in Ontario, data from student feedback questionnaires (SFQs) are indeed aggregated and comparisons across departments are made. Key Performance Indicators (KPIs), including questions asking about “The overall quality of the learning experiences in this program,” are also used across colleges: http://collegesontario.org/outcomes/key-performance-indicators.html

  2. Is it fair to evaluate professors on something they have not been taught to do?

    In my mind, if we want to improve professor performance in the classroom, then we need to take proactive steps to teach professors how to teach. It’s no surprise that the lecture model dominates when we ask people with no training to teach well. Evaluating someone on education and pedagogical tactics as part of a comprehensive plan to develop them is one thing, but using that unfiltered evaluation to determine how much pay, benefits, etc. that person should get is ludicrous. In my eyes that seems analogous to using the results of a first-day pop quiz as the final mark in a class. Other industries train their staff on an ongoing basis – why don’t we?

    Administrators: spend the resource money you would spend on this program on enrolling professors in mid-level education classes offered at your own institution, and I’m willing to bet you’d see a bigger improvement than from evaluating people on something you’ve never taught them how to do. Only once you’ve got a trained workforce should you start implementing evaluation to see where they’re at – and consequently where you should be focusing your efforts.

  3. “a large proportion of academic staff essentially believes the whole process is inherently flawed because students are incapable of knowing quality teaching when they see it”

    Actually, there is a portion of academic staff that believes the process is flawed because studies have been done on it, and their conclusion is that it’s inherently flawed. It’s not because students are dumb. It’s because they’re as susceptible as anyone else to charm and the like, and that is what these surveys measure.

    1. Thanks for commenting.

      You really think they’re *inherently* flawed? Or that there are some bad instruments out there? Or that there are mitigating factors which aren’t usually properly controlled for? I’m not sure how you could come up with the conclusion that measurements are *inherently* flawed unless you had some absolute standard against which you could compare them. Which we don’t.

      1. Indeed we don’t. Which is why there is no way to test their validity, and they should be denounced as pseudo-science. I mean that in the Popperian way: there’s no way to disprove their claims, at least not in fields where we don’t have an agreed body of knowledge.

        One might add that not only are students as susceptible to “charm and the like” as the rest of us, but they’re also susceptible to widely shared social beliefs. Are they really learning more in a class, or are they just declaring to be effective what they have always been told is effective? If they’ve always been told that small-group discussion is more effective than lecturing, will they always punish lecturers, regardless of how much they learned? If the lecturer is charismatic, do they learn, or do they merely feel like they’re learning while reinforcing their prejudices?

        As an example, I used to enjoy extraordinarily high scores for “organization” when I was the only member of my department using PowerPoint. They’ve since dropped off, and for that matter, I’ve lost interest in projection software. If anything, I’m a better instructor now that I make less use of PowerPoint — certainly, I’m more experienced and know more about the material — but I no longer look organized. Thank God I now have tenure and can be fearlessly conscientious.

      2. You can compare students’ performance in a course where one year’s version has instructor A, and the next year’s version has instructor A after he has taken a course in presentation skills. This has been done. Evaluations shot up. The only difference was that instructor A had learned to modulate his tone, gesticulate, walk around the room, smile, tell jokes, etc. Students’ performance in the course stayed the same – it was a standard course, same exam, etc. Reasonable conclusion: asking students what they think of the “quality” of professors’ teaching *is* inherently flawed.

