Better Thinking About Program Prioritization

As most of you probably know, there’s been a lot of recent interest in the program prioritization process (PPP); specifically, in a model based on the work of a fellow called Robert Dickeson.  However, Program Prioritization, and Dickeson, have come in for a bit of a beating lately, some of it deserved and some not.

(Caveat lector: I make money from consulting on program prioritization. I’m going to be talking my own book here.  Read on with that in mind.)

Stripped to essentials, Dickeson advocates distributing resources according to how strong each program is, with “strength” determined by a series of indicators that measure how a program performs in a number of different dimensions.  In a nutshell: get comparable data on all programs, score the results, aggregate the scores, and make decisions accordingly.  Top ones get more resources; bottom ones face cuts.
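Just to make the mechanics concrete, here’s a rough sketch of what that score-aggregate-rank step looks like.  Every program, indicator, score, and weight below is invented purely for illustration, not drawn from any actual exercise:

```python
# Toy sketch of the "score, aggregate, rank" logic described above.
# All programs, indicators, scores, and weights are invented for illustration.

programs = {
    "English": {"enrolment_trend": 4, "cost_per_credit": 2, "research_output": 3},
    "Physics": {"enrolment_trend": 2, "cost_per_credit": 1, "research_output": 5},
    "Nursing": {"enrolment_trend": 5, "cost_per_credit": 3, "research_output": 2},
}

weights = {"enrolment_trend": 0.4, "cost_per_credit": 0.3, "research_output": 0.3}

def aggregate_score(indicators: dict) -> float:
    """Collapse a program's indicator scores into a single weighted number."""
    return sum(weights[name] * value for name, value in indicators.items())

# Rank programs from strongest to weakest on the single aggregate score:
# the top of the list gets more resources, the bottom faces cuts.
ranked = sorted(programs, key=lambda p: aggregate_score(programs[p]), reverse=True)
print(ranked)
```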

Almost no one disagrees with the idea of making decisions based on better, more comparable data.  There are, however, legitimate disagreements about which indicators best measure program efficiency and quality, about how well any indicator can capture the things that matter, and so on, but that’s par for the course.  The real disagreements start when administrators ask what they should do with all that information.  And, increasingly, there’s a backlash against Dickeson’s approach of aggregating the scores, ranking programs, and yanking funding from the bottom ones.

This has led to a lot of predictable fire from faculty (understandably nervous about the results of such exercises), but also from some senior administrators – Windsor’s Leo Groarke recently wrote quite a thoughtful piece on this for OCUFA’s “Academic Matters”.  Do read his whole piece; but for now, just know that his main critiques of PPP (apart from some of the methodological issues noted above) are that: i) good administrators should know which programs are weak anyway; ii) the costs of data-gathering are too high; and iii) the rank-and-yank approach is disastrous.

As I say, it’s a good piece, but there are answers to all three of those objections.  The first is that even if deans know which programs are strong and which are weak, a little external data validation doesn’t go amiss when making big resource decisions.  The second can be true, but a more discerning approach to indicators (you don’t need the 60-odd that Dickeson suggests) cuts costs considerably.  And if you set the indicators up right the first time, you end up with a permanent set that can be consulted at low cost in future.

As for the third point, I agree completely with Groarke about rank-and-yank – and I would go further and say that I am mystified as to why senior administrators would elect to surrender their decision-making to an algorithm.  But while that’s the essence of Dickeson’s approach, PPP need not work that way.  If you scrap the aggregation and ranking elements, what PPP leaves you with is a very rich, multidimensional dataset, which can be used as a basis for complex and fraught decisions.  And in the end, that is what everyone wants, right?
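To illustrate, here’s an equally rough sketch of that no-ranking alternative: keep the dimension-level profile and simply flag where a program falls short of some review threshold, leaving the actual resource decision to human judgement.  Again, every name and number here is made up for the sake of the example:

```python
# Sketch of the no-ranking alternative: keep each program's dimension-level
# profile and flag where it falls below a review threshold, rather than
# collapsing everything into one rank. All names and numbers are invented.

programs = {
    "English": {"enrolment_trend": 4, "cost_per_credit": 1, "research_output": 3},
    "Physics": {"enrolment_trend": 2, "cost_per_credit": 1, "research_output": 5},
}

thresholds = {"enrolment_trend": 3, "cost_per_credit": 2, "research_output": 3}

for program, indicators in programs.items():
    # Which dimensions fall below the review threshold for this program?
    weak_spots = [name for name, value in indicators.items() if value < thresholds[name]]
    if weak_spots:
        print(f"{program}: worth a closer look at {', '.join(weak_spots)}")
```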

Intelligent strategy isn’t just data, and it isn’t just judgement; it’s a mix of the two.  Program prioritization, done properly, should recognize and achieve that.


6 responses to “Better Thinking About Program Prioritization”

  1. What I wonder about this PPP approach to academic program management is how well it accounts for change over time. Yes, presumably one could construct a set of benchmark measures and comparator data to assess relative program strengths at a given moment (though in practice it seems very complex to do this well across all the areas of learning in a university). But replicating particular areas of strength across decades and changes in personnel is not something universities do very well, as far as I can tell. Why? Because the most important variable in that excellence is the professoriate, each position turns over infrequently, and the employer’s capacity at hiring time to predict how an academic career path will turn out is very limited. Indeed, sometimes excellence in a particular field or subfield is heavily linked to an individual professor, and it will clearly _not_ be possible to replicate that excellence with junior hires.

    Clearly resource decisions need to be made, but for future flexibility and for meeting student demand, is it better to balance program prioritization with the maintenance of intellectual infrastructure and curriculum in areas that may not be the strongest today but may become so in the future?

  2. The residue from your and Groarke’s pieces seems to be this: it’s a good thing for administrators to have information. Yes, it is! But this isn’t a defence of PPP, it’s a defence of—having information. PPP adds to this the loopy idea that there’s some one algorithm you can use to “rank” lab-intense sciences, faculty-intense humanities, food services, parking etc. And the ultimate irony behind it, of course, is that despite all the lip service paid to evidence, Dickeson’s book is entirely lacking in evidence *that PPP works*. There’s a list of places that have used the process — mostly undistinguished institutions — but nothing about whether those implementations were successful.

    The problems with implementation are massive. One is this: admin units pretty much get to say what their roles are, and to tailor this in such a way that — surprise! — they are carrying out those roles very well. Academic units do not get to draw their own bullseyes in this way; their roles are defined as teaching, research and service. Unsurprisingly, then, admin units do bizarrely well in these exercises. Another problem is that the idea of ordinal ranking mesmerizes people, to the point where they start tweaking the metrics in order to exaggerate differences among programs, to “get a good spread” (a phrase used by a senior admin at my institution). Sadly, I could go on, and on, and on…

    1. Good points. My responses would be: 1) Yes, data is good. Sometimes you need a cross-campus process to get that data, though. That, to me, is what PPP should be. 2) I agree completely that reviewing admin and non-admin units in the same data-driven kind of process is wrong. There’s no possible set of metrics where it makes sense to include them in the same comparison. Better by far to say upfront “we’re going to drive an extra x% of savings from these units” and do it using whatever different processes make sense for each service. Comparing English to Physics is difficult enough; comparing English to Parking is silly.
