As most of you probably know, there’s been a lot of recent interest in the program prioritization process (PPP); specifically, a model based on the work of a fellow called Robert Dickeson. However, Program Prioritization, and Dickeson, have come in for a bit of a beating lately, some of it deserved and some not.
(Caveat lector: I make money from consulting on program prioritization. I’m going to be talking my own book here. Read on with that in mind.)
Stripped to essentials, Dickeson advocates distributing resources according to how strong each program is, with “strength” determined by a set of indicators measuring how each program performs along several dimensions. In a nutshell: get comparable data on all programs, score the results, aggregate the scores, and make decisions accordingly. Top programs get more resources; bottom ones face cuts.
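To make the mechanics concrete, here is a minimal sketch of that score-aggregate-rank logic. The programs, indicators, weights, and cut-off are all invented for illustration; none of them come from Dickeson.

```python
# A minimal sketch of Dickeson-style "score, aggregate, rank and yank".
# All programs, indicators, and weights below are hypothetical.

# Indicator scores per program, each already normalized to a 0-10 scale.
programs = {
    "History":     {"enrolment": 4, "cost_efficiency": 6, "completion": 7},
    "Nursing":     {"enrolment": 9, "cost_efficiency": 5, "completion": 8},
    "Philosophy":  {"enrolment": 3, "cost_efficiency": 7, "completion": 6},
    "Engineering": {"enrolment": 8, "cost_efficiency": 4, "completion": 7},
}

# Hypothetical weights attached to each indicator.
weights = {"enrolment": 0.4, "cost_efficiency": 0.3, "completion": 0.3}

# Aggregate: collapse each program's indicators into one composite score.
totals = {
    name: sum(weights[k] * score for k, score in indicators.items())
    for name, indicators in programs.items()
}

# Rank, then "yank": the bottom quartile faces cuts, the top gets more.
ranked = sorted(totals, key=totals.get, reverse=True)
cut = max(1, len(ranked) // 4)
print("More resources:", ranked[:cut])
print("Facing cuts:   ", ranked[-cut:])
```

Note how much the final line depends on the weights: nudge them slightly and a different program falls below the cut, which is precisely what makes the aggregation step so contentious.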
Almost no one disagrees with the idea of making decisions based on better, more comparable data. There are, however, legitimate disagreements about which indicators best measure program efficiency and quality, and about how well any indicator can capture the stuff that matters, but that’s par for the course. The real disagreements start when administrators ask what they should do with all that information. And, increasingly, there’s a backlash against Dickeson’s approach of aggregating the scores, ranking programs, and yanking funding from the bottom ones.
This has led to a lot of predictable fire from faculty (understandably nervous about the results of such exercises), but also from some senior administrators – Windsor’s Leo Groarke recently wrote quite a thoughtful piece on this for OCUFA’s “Academic Matters”. Do read his whole piece; but for now, just know that his main critiques of the PPP (apart from some of the methodological concerns noted above) are that: i) good administrators should know which programs are weak anyway; ii) the costs of data-gathering are too high; and iii) the rank-and-yank approach is disastrous.
As I say, it’s a good piece, but there are answers to all three of those objections. The first is that even if deans know which programs are good and which are bad, a little external data validation doesn’t go amiss when making big resource decisions. The second can be true, but a more discerning approach to indicators (you don’t need the 60-odd ones that Dickeson suggests) can cut costs; and if you set them up right the first time, you can create a permanent set of indicators that can be consulted at low cost in future.
As for the third point, I agree completely with Groarke about rank-and-yank – and I would go further and say that I am mystified as to why senior administrators would elect to surrender their decision-making to an algorithm. But while that’s the essence of Dickeson’s approach, PPP need not work that way. If you scrap the aggregation and ranking elements, what PPP leaves you with is a very rich, multidimensional dataset, which can be used as a basis for complex and fraught decisions. And in the end, this is what everyone wants, right?
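For contrast, here is what that alternative looks like in the same hypothetical terms as the sketch above: identical data, but with the aggregation and ranking steps scrapped, leaving a per-program profile for decision-makers to weigh.

```python
# The same hypothetical data as in the earlier sketch, but with the
# aggregation and ranking scrapped: the output is a profile per program,
# not a league table.
programs = {
    "History":     {"enrolment": 4, "cost_efficiency": 6, "completion": 7},
    "Nursing":     {"enrolment": 9, "cost_efficiency": 5, "completion": 8},
    "Philosophy":  {"enrolment": 3, "cost_efficiency": 7, "completion": 6},
    "Engineering": {"enrolment": 8, "cost_efficiency": 4, "completion": 7},
}

for name, indicators in programs.items():
    profile = ", ".join(f"{k}={v}" for k, v in indicators.items())
    print(f"{name:<12} {profile}")
# No composite score, no cut line: each program's strengths and weaknesses
# stay visible, and the hard trade-offs are left to human judgement.
```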
Intelligent strategy isn’t just data, and it isn’t just judgement; it’s a mix of the two. Program prioritization, done properly, should recognize and achieve that.