The Government of Ontario, in its ongoing quest to reform its funding formula, continues to insist that one element of the formula needs to relate to “teaching quality” or the “quality of the undergraduate experience”. Figuring out how to do this is, of course, a genuine puzzle.
There are some, of course, who believe that quality can only be measured in terms of inputs (i.e. funding) and not through outputs (hi, OCUFA!). Some like the idea of sticking with existing instruments like the National Survey of Student Engagement (NSSE); others want to measure quality through “hard numbers” on post-graduate outcomes like employment rates, average salaries and the like. Still others are banging away at solutions involving the testing of graduates; HEQCO’s Essential Adult Skills Initiative seems like an interesting experiment in this respect.
But there are obvious defects with each of these approaches. The problem with the “let’s-measure-inputs-not-outputs” approach is that funding levels tell us nothing about what actually happens in classrooms. Post-graduate outcomes like employment rates and salaries are shaped as much by labour markets and program mix as by anything that happens in the classroom. And wide-scale testing of graduates is expensive and intrusive.
That leaves the old survey stalwarts like NSSE and CUSC. These, to be honest, don’t tell us much about quality or about paths to improvement. They did when they were first introduced, 15-20 years ago, but each successive administration adds less and less. Pretty much the only reason we still use them is that nobody wants to break up the time-series. But that’s an argument against particular surveys, not against surveys in general. Surveys are good because they are cheap and easily replicable. We just need a better survey, one that measures quality more directly.
Here’s my suggestion. What we really need to know is how many students are being exposed to good teaching practices, and at what frequency. We know from various types of research what good teaching practices are (e.g. Chickering & Gamson’s classic Seven Principles for Good Practice in Undergraduate Education). So why not simply ask students, class by class, whether and how often they encounter those practices?
Think about it: at an aggregate faculty or institutional level – which is all you would need to report publicly or to government – the results of such a survey would instantly become a credible source of data on teaching quality. More importantly, they would provide institutions with incredibly rich data on what’s going on inside their own classrooms. Are certain teaching practices associated with elevated drop-out rates, or with an upward shift in grades? By tying the survey to individual student records on a class-by-class basis, you could find out. A dean could ask intelligent questions about why one department in her faculty seems less likely to use group work or interactive discussion than others, and see how that plays into student completion or choice of majors. One could see how teaching patterns vary by instructor age (are blended-learning classes only the preserve of younger profs?). Or, by matching descriptions of classes to more satisfaction-based instruments like course evaluations, one could see whether certain modes of teaching or types of assignment result in higher or lower student satisfaction – and whether the relationship between practices and satisfaction holds true across disciplines (my guess is that in some cases it wouldn’t, but there’s only one way to find out!).
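By way of illustration, here is a minimal sketch of what that record linkage and aggregation might look like. To be clear, the file names, column names and practice categories below are hypothetical stand-ins, not a description of any real survey or student-record system:

```python
# A sketch only: the files and columns are hypothetical stand-ins for
# whatever a real practice survey and registrarial system would contain.
import pandas as pd

# One row per student per course: how often the student reports each
# practice occurring (e.g. 0 = never ... 3 = most weeks).
survey = pd.read_csv("practice_survey.csv")    # student_id, course_id, group_work, discussion
records = pd.read_csv("student_records.csv")   # student_id, course_id, dept, completed

linked = survey.merge(records, on=["student_id", "course_id"])

# Department-level profile of reported practices -- the aggregate level
# at which results could be reported publicly or to government.
print(linked.groupby("dept")[["group_work", "discussion"]].mean())

# Internal diagnostic: completion rates by reported frequency of group work.
# (Association only; causal claims would need controls for class size,
# course level, discipline and so on.)
print(linked.groupby("group_work")["completed"].mean())
```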
So there you go: a student-record-linked survey focused on classroom experiences, administered on a class-by-class basis, could conceivably get us a system which a) provides reliable data for accountability purposes on “learning experiences” and b) provides institutions with vast amounts of new, appropriately granular data to help them improve their own performance. And it could be done much more cheaply and less intrusively than wide-scale testing.
Worth a try, surely.
I am doing exactly this in my faculty. I got a small teaching grant to ask my colleagues across the faculty how they spend class time, and now we will ask the students themselves what they experience/what they think we do in class. Should be very interesting. Results will be kinda like asking married couples, separately, how much sex they have.
Not a perfect measure of quality, but a much better measure than the ones we use now.
(Previous dean suggested I drop this line of inquiry. I obeyed; now, we have a new dean… and I have tenure.)
How would a survey like this account for extrinsic constraints on the choice of pedagogical methods? Not all pedagogical methods are feasible in large classes, for instance, especially in the absence of substantial TA support.
Also, are students really going to be any good at estimating how much time their class spends on pedagogical technique X? In general, people who aren’t formally tracking their time are terrible at estimating how they allocate it. And we know from surveys of *faculty* that they routinely mis-estimate how they allocate class time across pedagogical techniques (https://bioscience.oxfordjournals.org/content/61/7/550.full.pdf). Why should students be any better? And I wouldn’t assume that those estimation errors are random (unbiased) and so cancel out in aggregate (see the sketch below). One way to deal with this would be to ask students for only very rough estimates of how class time is allocated–2/3 or something. Or just ask whether particular pedagogical techniques were used at all.
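To make the worry about non-random errors concrete, here is a toy simulation (every number in it is invented for illustration): averaging over many students washes out random estimation error, but a bias shared across students survives aggregation intact.

```python
# Toy simulation -- every parameter here is invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
true_share = 0.30           # true fraction of class time spent on technique X
n_students = 200

# Case 1: noisy but unbiased estimates -- errors mostly cancel in the average.
noisy = np.clip(rng.normal(true_share, 0.15, n_students), 0, 1)

# Case 2: a shared over-reporting bias -- averaging does NOT remove it.
biased = np.clip(rng.normal(true_share + 0.20, 0.15, n_students), 0, 1)

print(f"truth:         {true_share:.2f}")
print(f"unbiased mean: {noisy.mean():.2f}")   # close to 0.30
print(f"biased mean:   {biased.mean():.2f}")  # close to 0.50
```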
Why is quality of experience solely related to quality of teaching? What about the ease of navigating institutional systems, and expedient, timely bureaucratic responses to student concerns instead of the “runaround”: unanswered student emails and voicemail messages, students getting bounced between different offices instead of receiving the help they need to switch programs, and so on? These kinds of stresses influence a student’s experience and academic performance too. Does it make sense for the government, and our taxpayers’ money, to support institutions that make higher education more difficult and burdensome for our students?
Not sure that the ‘large class excuse’ holds anymore (thank goodness!), but I do have one small concern. It certainly doesn’t take away from the merit of your proposal; it just adds a factor that would need consideration. My research supports the notion that students tend to rate courses in comparison to the others they are taking at the time. A course that employs even a tiny bit of active learning will therefore score higher if the student is taking a suite of courses in which active learning does not otherwise appear.
Just a tiny thought.