Just a brief thought today on how the increasing interconnectedness of research efforts is making evaluation of institutional outputs harder.
One of the things about academia that governments have a hard time conceptualizing is that “universities,” as singular entities, are to some extent a fiction. Governments treat them as discrete entities with agency of their own. What is never very well understood is the extent to which university agency is restricted by the professional norms of the institution’s main inhabitants – the academics who work there. And those academics’ behaviour is significantly constrained by the habits and organizing logics of the disciplines in which they trained. To a considerable extent, institutions are holding companies for local branches of those international scholarly societies called disciplines. There is no singular University of Windsor or Lethbridge; there is merely a collection of scholarly associations which happen to exist in Windsor/Lethbridge, which happen to share some co-working space, and which for administrative convenience get paid by the same entity. Each discipline/department has its own agenda(s), in which institutional interference is meant to be minimal, and the main job of the institution is to butt out of those agendas and get more money from governments/donors/students to make sure everything works smoothly.
(I am exaggerating a bit, but you get the idea).
Most people inside universities understand this logic; almost nobody outside the university does. A lot of what you see in university branding and university strategic plans is, to some degree, covering up this problem: the external world (governments especially, but also philanthropists) assumes that institutions are unitary entities with agency, and so it is important that the university present itself this way. If it presented itself as a set of cats fighting in a bag – arguably nearer the truth – things would be unlikely to go well on the whole “go get more money” front. But this accentuates the problem from an accountability perspective. The more institutions paint themselves as having agency (as opposed to the bagged cats they contain), the more external bodies – governments, rankers, whatever – are going to focus on institutions rather than programs or departments as the locus of accountability. This is arguably both unfair and unhelpful.
‘Twas ever thus. And to some extent the way you deal with this issue is by having external quality assurance procedures focused on individual disciplines or programs, which in North America happens primarily in the professional disciplines. But if, as I suggested a couple of days ago, academic research is becoming less focussed on disciplines and more focussed on challenges, that makes judgements about what constitutes excellence somewhat harder. And let’s just add to that: as academic units respond to public pressure to work jointly and collaboratively on these issues (and you’d better believe this pressure is only going to grow over the next decade or so), it’s going to get harder by several orders of magnitude to separate out and evaluate the work of any individual or unit.
Put simply: public authorities think evaluating universities is a good thing; in some cases, they want to tie money to some of those evaluations. They also think universities should work jointly on “big societal problems”. The latter complicates the former enormously, because it becomes ever less clear where the real locus of activity lies. Certainly, it will become harder to measure through pure metrics and algorithms. More nuanced analyses – some even containing things like (shock horror) expert judgement – might be required.
Additionally, it would probably help if we could complement assessments of programs and institutions with evaluations of how well our system(s) of higher education are working. This might be difficult in the short term, since very few Canadian governments can articulate what they actually want out of the system, particularly on research (Quebec probably could; the rest would struggle). We used to do these pretty regularly, albeit on an ad hoc basis. My colleague Yves Pelletier and I did one for Manitoba colleges a few years ago, but I don’t believe there have been any since then. It may be ten years since the last one for universities anywhere in Canada (the O’Neill Report in Nova Scotia). We need more of these, with specific mandates to look at how academics are contributing to overall growth in human knowledge through networked research – because otherwise that contribution will never get considered.
(Yes, I know there is a post-secondary review going on in Newfoundland, but it was announced over two years ago, and I am starting to doubt that it will ever actually report. One rumour I heard recently is that the panel wants to punt on the issue of whether or not to end the 20-year-long freeze on tuition fees, which is bananas if true, because the whole point of the panel was to give weak-kneed politicians cover for doing what everyone knows needs to be done.)
In fact, it might even be more cost-effective and generally useful for provinces to contribute to a genuine, pan-Canadian exploration of how Canadian universities are collectively advancing knowledge. Run it through the Council of Ministers of Education, Canada. Difficult, I know, because provincial education ministries dislike working with each other almost as much as they dislike working with the feds, but again: if it’s the only way to get at the problem, why not take it?
Or even: why wouldn’t Canadian universities themselves commission periodic reports by credible independent observers on the state of system contributions? There is a precedent: the Commission of Inquiry on Canadian University Education (I realize this is now going back 30 years, but it is a precedent nonetheless). After all, the biggest barrier to re-organizing research around interdisciplinary challenges is common to all universities: disciplines have permanence in the form of departmental bureaucracies and tend to outlast occasional initiatives aimed at greater trans-disciplinary coherence. It might be worth a collective examination of how to overcome that.
I think you’re missing one of the great strengths of interdisciplinarity: it provides refuges for those who aren’t at home in their departments. If one were to find oneself the only Vasily Grossman scholar in a Department of Russian otherwise populated by formalists and linguists, perhaps it would be possible to offer classes through a center for Second World War literature.
The real strength of interdisciplinarity isn’t to make us all sing from the same hymnbook, but to let each of us pursue our own interests, as indeed we ought. Both so-called “accountability” and an insistence on focusing on “challenges” seem calculated to have the opposite effect.