Hi everyone, and welcome back.
The best education story of the winter break was almost certainly the Globe piece on program reviews at Canadian universities. Despite an inane headline (when it comes to declaring a policy unsuitable, nothing unites Canadian bien-pensants faster than the claim that it has American origins), it’s an important piece about a useful process now underway at universities across Canada.
HESA has directly contributed to two of these exercises (you can see some of our work here), and with that experience in mind I think there are a couple of points in the article that bear greater exposition. In particular, while the idea of conducting program reviews has received recent impetus from the writings of Robert Dickeson, the notion that Canadian institutions have adopted Dickeson holus-bolus is simply not true.
Dickeson’s key insights – the ones that attract everyone to his book – are that spending decisions at universities should be informed by evidence about the relative quality and efficiency of different programs, and that this, in turn, requires the creation of an indicator set that allows like-to-like comparisons. He also suggests a number of possible indicators, the specifics of which one can fiddle with as local needs require. So far, so good.
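To make the idea concrete, here is a minimal sketch of what such an indicator set might look like, in Python. The indicator names and all the figures are invented for illustration; this is not Dickeson’s actual list, and not anyone’s real numbers.

```python
# A hypothetical common indicator set, so that unlike units can be
# compared like-to-like. Indicator names and figures are invented.
indicators = {
    "History":     {"enrolment": 410, "cost_per_student": 9_200,  "completion_rate": 0.78},
    "Chemistry":   {"enrolment": 350, "cost_per_student": 14_800, "completion_rate": 0.81},
    "Kinesiology": {"enrolment": 620, "cost_per_student": 7_600,  "completion_rate": 0.85},
}

# Recording the same fields for every unit is what makes comparison possible.
for unit, row in sorted(indicators.items(), key=lambda kv: kv[1]["cost_per_student"]):
    print(f"{unit}: ${row['cost_per_student']:,} per student, "
          f"{row['completion_rate']:.0%} completion")
```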
Dickeson’s work is a less useful guide, however, when it comes to his espousal of Jack Welch-style rank-and-yank. In his system, every indicator gets weighted and scored, all scores are aggregated, and the worst-performing units are cut. Conceivably, this approach makes sense at cash-strapped American state institutions, where shared governance is less in evidence than it is in Canada – although I’d argue there are still better ways to use the data. Up here, this approach would lead to a faculty revolt within minutes.
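For the mechanically minded, that scoring step reduces to something like the sketch below. The min-max normalization, the specific weights, and the 20% cut line are all my illustrative assumptions, not parameters Dickeson prescribes.

```python
# A minimal sketch of the weight-score-aggregate-and-cut mechanic.
# The weights, the min-max normalization, and the 20% cut line are
# illustrative assumptions, not Dickeson's actual parameters.

def normalize(column):
    """Min-max scale one indicator's raw values across units to [0, 1]."""
    lo, hi = min(column.values()), max(column.values())
    return {unit: (v - lo) / (hi - lo) if hi > lo else 0.5
            for unit, v in column.items()}

def rank_and_yank(indicators, weights, cut_fraction=0.2):
    """Score each unit as a weighted sum of normalized indicators,
    then flag the worst-scoring fraction of units for cuts."""
    columns = {name: normalize({unit: row[name] for unit, row in indicators.items()})
               for name in weights}
    scores = {unit: sum(weights[name] * columns[name][unit] for name in weights)
              for unit in indicators}
    ranked = sorted(scores, key=scores.get)           # worst performers first
    n_cut = max(1, round(len(ranked) * cut_fraction))
    return scores, ranked[:n_cut]

if __name__ == "__main__":
    indicators = {  # same invented figures as the table above
        "History":     {"enrolment": 410, "cost_per_student": 9_200,  "completion_rate": 0.78},
        "Chemistry":   {"enrolment": 350, "cost_per_student": 14_800, "completion_rate": 0.81},
        "Kinesiology": {"enrolment": 620, "cost_per_student": 7_600,  "completion_rate": 0.85},
    }
    # Negative weight on cost: spending more per student lowers the score.
    weights = {"enrolment": 0.4, "cost_per_student": -0.3, "completion_rate": 0.3}
    scores, on_the_block = rank_and_yank(indicators, weights)
    print(scores, "-> flagged for cuts:", on_the_block)
```

Note how much is riding on the weights: change the sign or size of a single one and a different unit lands on the chopping block, which is precisely why this step is where the faculty revolts would begin.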
Another problem with Dickeson’s approach is his insistence on data at the level of programs rather than departments. Most of the data he advocates collecting simply doesn’t exist at the program level in most institutions, and trying to work around this can waste a lot of energy. What would make more sense in Canada is to measure quality at the departmental level (which can be done relatively easily), and reserve program-level analysis for economics (i.e., how much money each program gains or loses).
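Sketched in the same spirit, the split I’m suggesting would look something like this – quality indicators keyed by department, economics keyed by program. Again, every name and figure here is invented for illustration.

```python
# Illustrative split: quality data at the department level,
# economics at the program level. All names and figures are invented.

# Department-level quality indicators (data most institutions already have).
dept_quality = {
    "English": {"student_satisfaction": 4.1, "retention_rate": 0.88},
    "Physics": {"student_satisfaction": 3.9, "retention_rate": 0.83},
}

# Program-level economics: tuition and grant revenue in, teaching costs out.
programs = [
    {"name": "BA English",  "dept": "English", "revenue": 2_400_000, "cost": 2_100_000},
    {"name": "MA English",  "dept": "English", "revenue":   450_000, "cost":   620_000},
    {"name": "BSc Physics", "dept": "Physics", "revenue": 1_900_000, "cost": 2_050_000},
]

for p in programs:
    margin = p["revenue"] - p["cost"]
    print(f'{p["name"]}: {"gains" if margin >= 0 else "loses"} ${abs(margin):,}')
```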
Taking a more rigorous approach to analyzing the academy won’t please everybody; there will always be those who say the data can’t capture everything, or who whine about the need for an appeals process, or some such thing. But bills have to be paid. Decisions need to be taken. And it’s better to make those decisions based on real data than on the squeaky-wheel basis that has historically predominated in Canadian universities.