Critical Friends

A few times a year I get asked to help with the drafting of a university or college’s strategic plan.  Usually nothing major: a little environmental scanning, talking about industry trends, that kind of thing.  But I do enough of it to get a decent sense of where the pain points are in academic and strategic planning.  The most important one I wrote about back here – the fact that strategic planning is often done around academics rather than with them, mainly because academics’ identification with their discipline and personal research agendas is often much stronger than their corporate identification with their institution.  But there’s another big problem: how to assess progress in areas where quality can’t easily be reduced to a quantitative indicator.

We live in what some people call the “Audit Society”.  In a big, complex society, people are – to some degree rationally – not prepared to trust that everyone is doing what they are supposed to be doing.  Society thus adopts processes in which people who want public trust or money have to regularly explain what their objectives are, and then show that they are meeting those objectives.  University and college strategic plans work in precisely this way: they provide not only goals and directions, but in many cases targets against which performance can be measured.  In principle, there’s nothing wrong with any of this; it’s how society combats free-riders.  But there is a tension here.  To be effective, the chosen targets have to be measurable in ways that actually permit an observer to determine whether or not a target is being met.  This tends to restrict the kinds of targets that get chosen.

It’s one thing to pick a target like “we will graduate more students” – one that’s easily measurable by the number of graduates, the graduation rate, and so on.  But not all educational targets work that way: if your goal is, for instance, to make the institution more accepting of diversity, or to “internationalize” the university experience, things are a little tougher, because you must use proxies which don’t always capture the true nature of the goal.  Do the results of an employee or student survey actually capture the nature of an institution’s “diversity”?  Does the number of students going abroad for a term capture “internationalization”?  And that’s not even going into more complex educational goals like “producing leaders” or something of that nature.

It’s not that educational leaders are unaware that such proxies are problematic.  It’s just that they don’t see any other options.  We live in a world of indicators: if there is no indicator, how can we demonstrate a commitment to an outcome?  So the problem is really the limited nature of indicators, and our automatic reliance on quantitative measures.

There is another option, though, one which doesn’t get anywhere near enough attention or take-up.  And that is simply to adopt more indicators whose results are reported in a non-quantitative fashion.

I know that sounds a bit goofy, but hear me out.  Let’s take internationalization as an example.  Simply counting the number of people moving from point X to point Y is simplistic.  Ditto counting MOUs signed with foreign universities, or the number of course curricula that have been “internationalized”, or the number of scientific papers co-authored with foreign authors – it’s all the same thing.  Kind of jejune.

But that doesn’t mean that independent observers, using an agreed-upon rubric and looking at a variety of processes and outcomes, couldn’t take all that information and more and come to some kind of rational decision about whether an institution’s commitment to and progress in internationalization is good/bad/indifferent (or some other easily-interpretable coded metric).
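
To make that concrete, here is a minimal sketch in Python.  It is purely illustrative: the dimensions, the three-point scale, and the median aggregation are my assumptions, not an existing instrument.  It shows how rubric-based judgements could be recorded alongside the narratives that justify them, and then collapsed into a single coded verdict.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical three-point ordinal scale; any agreed-upon coding would work.
SCALE = {"weak": 0, "adequate": 1, "strong": 2}
LABELS = {v: k for k, v in SCALE.items()}

@dataclass
class DimensionScore:
    dimension: str   # e.g. "curriculum", "partnerships", "student mobility"
    reviewer: str    # which Critical Friend gave the rating
    rating: str      # one of SCALE's keys
    rationale: str   # the narrative justifying the rating: the real payload

def overall_judgement(scores: list[DimensionScore]) -> str:
    """Collapse per-dimension ratings into one coded verdict via the
    median, which is robust to a single outlying reviewer."""
    return LABELS[round(median(SCALE[s.rating] for s in scores))]

scores = [
    DimensionScore("curriculum", "friend_1", "adequate",
                   "Outlines reference global cases, but coverage is uneven."),
    DimensionScore("student mobility", "friend_2", "strong",
                   "Exchange participation is broad-based, not just elite."),
    DimensionScore("partnerships", "friend_3", "weak",
                   "Many MOUs signed; little evidence any are active."),
]
print(overall_judgement(scores))  # -> "adequate"
```

The point of the sketch is the rationale field: the coded verdict is easy to read at a glance, but the narrative behind it is what separates this from just another quantitative indicator.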

I can envision three possible objections here.  The first is that anything that comes out of such a system is not an “objective” measure of quality.  To which the answer simply is: it’s a form of grading, based on rubrics.  This is literally the foundation of assessment in PSE.  Give your head a shake.  The second is that the data required to form some kind of qualitative assessment is deeper and more complicated than simple indicators, and therefore more costly to produce.  To which the answer is: Yup, good analysis costs money.  Next?

The third one, of course, is the crucial one: who could be trusted to form such an opinion?  Part of the value of simple indicators is that their simplicity makes them replicable, and therefore trustworthy.  The problem is less with the notion of qualitative assessment than with the question of who provides it.  Fair question.  Such advice would need to be given by a person or persons who i) possessed experience in post-secondary education and/or public service, ii) had some understanding of the institution being judged and its history/context, but iii) were clearly seen as independent of that institution.  In other words, what it requires is a set of “Critical Friends”.

There are some kinks that would need to be worked out here, but think about it this way: an institution could appoint a group of, say, five Critical Friends, whose job it was to look at institutional data and processes in a few key areas related to the institution’s mission and strategic plan, to ask tough questions about them, and then to provide some kind of judgement.  Different stakeholders within the institution might even be able to nominate one of the Friends as a way to avoid homogeneity in the Friends’ points of view (obviously they could not vote for one of their own, as this would violate the independence rule – but think perhaps of the way students in Scotland vote for their rectors).

I don’t think this would be a simple thing to set up, but it would certainly reduce the use of bad proxy indicators in post-secondary planning, and it would increase the quality (and hence importance) of feedback to post-secondary institutions.  All in all, I think it would be an interesting experiment in institutional quality assurance.  Someone should try it.

5 responses to “Critical Friends”

  1. Isn’t that sort of what we already do with external reviews?

    I think you aren’t harsh enough towards quantitative measures: they all suffer under Goodhart’s law. If your goal is to graduate more students, then the number of students graduating ceases to be a measure of the quality of instruction, and just becomes a measure of how much pressure was placed on faculty to prostitute all standards and not fail anyone. If your goal is to increase “faculty productivity” you’ll just get lots of not terribly good papers, “least publishable units.” And so forth.

    The idea of critical friends partly solves the problem, but only if they’re kept away from quantitative data, so that their narratives don’t just become stealth collections of indicators.

    1. It is in the same spirit as external reviews, but those are for academic units, not cross-cutting issues across the institution (e.g., indigenization).

  2. Oh, and isn’t providing an outside view what a Board of Governors is supposed to do?

    1. Not really. They are an external check at a very high level (mainly on financial matters). They are not (usually) meant to delve into and evaluate operational processes.
