The Government of Ontario, in its ongoing quest to reform its funding formula, continues to insist that one element of the formula needs to relate to “teaching quality” or the “quality of the undergraduate experience”. Figuring out how to do this is, of course, a genuine puzzle.
Some, of course, believe that quality can only be measured in terms of inputs (i.e. funding) and not through outputs (hi, OCUFA!). Some like the idea of sticking with existing instruments like the National Survey of Student Engagement (NSSE); others want to measure quality through “hard numbers” on post-graduate outcomes like employment rates, average salaries and the like. Still others are banging away at solutions involving testing of graduates; HEQCO’s Essential Adult Skills Initiative seems like an interesting experiment in this respect.
But there are obvious defects with each of these approaches. The problem with the “let’s-measure-inputs-not-outputs” position is that funding tells you what institutions spend, not what actually happens in their classrooms. Post-graduate outcomes like employment rates and salaries reflect labour markets and fields of study at least as much as they reflect teaching. And wide-scale testing of graduates is expensive and intrusive.
That leaves the old survey stalwarts like NSSE and CUSC. These, to be honest, don’t tell us much about quality or paths to improvement. They did when they were first introduced, 15-20 years ago, but each successive administration adds less and less. At this point, pretty much the only reason we still use them is that nobody wants to break the time-series. But that’s an argument against particular surveys rather than against surveys in general. Surveys are good because they are cheap and easily replicable. We just need a better survey, one that measures quality more directly.
Here’s my suggestion. What we really need to know is how many students are being exposed to good teaching practices, and how often. We know from various lines of research what good teaching practices are (e.g. Chickering & Gamson’s classic Seven Principles for Good Practice in Undergraduate Education). So why not simply ask students, class by class, how frequently they encounter those practices?
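To make that concrete, here is one hypothetical sketch of what class-level items keyed to the Seven Principles might look like. The question wording and the frequency scale are invented for illustration; a real instrument would need proper design and validation.

```python
# Hypothetical class-level survey items, one per principle in
# Chickering & Gamson's Seven Principles. Wording is illustrative only.

FREQUENCY_SCALE = ["Never", "Rarely", "Sometimes", "Often", "Very often"]

SURVEY_ITEMS = {
    "student_faculty_contact": "How often did the instructor interact with you outside of lectures?",
    "cooperation_among_students": "How often did this class involve group work?",
    "active_learning": "How often did you take part in interactive discussion?",
    "prompt_feedback": "How often did you receive feedback within a week of submitting work?",
    "time_on_task": "How often did assignments come with clear expectations about time and effort?",
    "high_expectations": "How often did the instructor communicate high expectations for your work?",
    "diverse_talents": "How often could you choose ways of learning that suited you?",
}
```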
Think about it: at an aggregate faculty or institutional level – which is all you would need to report publicly or to government – the results of such a survey would instantly become a credible source of data on teaching quality. But more importantly, they would provide institutions with incredible data on what’s going on inside their own classrooms. Are certain teaching practices associated with elevated levels of dropping out, or with an upward shift in grades? By tying the survey to individual student records on a class-by-class basis, you could answer exactly that. A Dean could ask intelligent questions about why one department in her faculty seems less likely to involve group work or interactive discussions than others, and see how that plays into student completion or choice of majors. One could see how teaching patterns vary by instructor age (are blended-learning classes only the preserve of younger profs?). Or, by matching descriptions of classes to more satisfaction-based instruments like course evaluations, it would be possible to see whether certain modes of teaching or types of assignment result in higher or lower student satisfaction – and whether the relationship between practices and satisfaction holds true across different disciplines (my guess is it wouldn’t in some cases, but there’s only one way to find out!).
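As a minimal sketch of the record linkage described above – assuming hypothetical tables of per-class survey responses and student records, with invented IDs and column names – the basic analysis could be as simple as a merge and a group-by:

```python
# Toy illustration, not real data: per-class survey responses
# (one row per student per class) linked to student records.
import pandas as pd

responses = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "class_id":   ["BIO101", "HIS200", "BIO101", "HIS200", "BIO101", "HIS200"],
    "department": ["Biology", "History"] * 3,
    "group_work": [4, 1, 5, 2, 4, 1],  # reported frequency, 1 = never ... 5 = very often
})

records = pd.DataFrame({
    "student_id":  [1, 2, 3],
    "dropped_out": [False, False, True],
    "gpa":         [3.2, 3.8, 2.1],
})

# Tie survey responses to individual student records...
linked = responses.merge(records, on="student_id")

# ...then ask the questions from the text: do students who drop out
# report different exposure to group work than those who persist?
print(linked.groupby("dropped_out")["group_work"].mean())

# And the Dean's question: which departments use group work least?
print(linked.groupby("department")["group_work"].mean().sort_values())
```

The same pattern extends to the other questions above: swap in instructor age, course evaluation scores, or discipline as the grouping variable.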
So there you go: a student-record-linked survey focused on classroom experiences, administered on a class-by-class basis, could conceivably get us a system which a) provides reliable data for accountability purposes on “learning experiences” and b) provides institutions with vast amounts of new, appropriately granular data to help them improve their own performance. And it could be done much more cheaply and less intrusively than wide-scale testing.
Worth a try, surely.