“Academic Freedom” or “Freedom from Evaluation”?

So, you may have heard that the University of Manitoba Faculty Association (UMFA) is threatening a strike, starting tomorrow.  What you may not have grasped is just how thin the grounds for the strike are.

You can see the university’s full bargaining position here; UMFA, in contrast, has publicly issued only a single note (responding to a missive from the administration, which it felt was misleading) and an open letter to students published in the Free Press.  Frankly, for a group threatening to disrupt the lives of tens of thousands of students, this is pretty poor form (St. FX’s faculty union was admirably communicative during its strike last year).  But we’ll let that pass for the moment.

Refreshingly, UMFA says the strike is not about money.  Instead, they say it’s about academic freedom.  One key issue is the desire to enshrine the right to criticize the administration in the collective bargaining agreement – which, you know, is fair enough, though it’s not obvious that there are extant cases of intimidation or retaliation that would make this grounds for a walkout.

The more important file, according to UMFA, is performance evaluation.  What UMFA wants is – and I quote – with respect to tenure, pay, and promotion, “no prescribed journals, venues, enumeration of publications or dollar amounts of research funding may be established or taken into consideration.”

Now, I do understand the objection to prescribed lists of journals, but there are easy solutions here that would still ensure that professors are publishing in rigorous journals.  External reviewers could assess individual cases (I understand Waterloo does this), journal impact factors could be used, or, if that’s too passé, one could use citation counts as evidence that at least other scholars find your work useful.  I can also see that prescribing dollar amounts of grants might be problematic, though in many fields it’s not too much to ask for a number greater than zero.

And the administration apparently sees this, too – they’ve already conceded on both of those points (see: page 9 of the admin response). What the admin hasn’t given in on is the bit about enumeration.  Read the passage above again… UMFA does not want enumeration of publications to count for pay, promotion, or apparently even tenure, for God’s sake.

Mind-blowing, huh? I understand the arguments against “publish or perish”, but this is bananas.

From UMFA’s public communications, it’s difficult to escape the impression that this strike is really about redefining “academic freedom” as “freedom from evaluation”.  That’s not something any reputable university can accept, and it’s a terrible reason to disrupt students’ semesters.  Hopefully, everybody will return to their senses before tomorrow’s strike deadline.


23 responses to ““Academic Freedom” or “Freedom from Evaluation”?”

  1. I’m not an expert (obviously) but your reading seems tendentious. What they seem to be opposed to is the idea that there would be a list of journals you must have published in or a set number of publications or dollars from grants. That’s not “freedom from evaluation” but a constraint on how evaluation proceeds. It doesn’t mean that you can’t evaluate someone on how much money they brought in, but that there is no prescribed amount defined for tenure. Same with the enumeration of publications (5 journal articles for tenure, say).

    Actually the argument in favor of this sort of thing is typically transparency of tenure standards. When there is no template for tenure, decisions can start to seem arbitrary and junior faculty often become more nervous than when there is a nice clear “assignment” to accomplish before tenure.

  2. I’d agree with C. It’s not about not evaluating people, it’s how you evaluate. Difficult to see a metrics guy like Alex agreeing I suppose, but a one size fits all yardstick, even at the disciplinary level, to me seems a bit restrictive, unless used as a very vague guideline. It tends to reinforce the notion that there is only one way to receive peer evaluation, which is something I have a particular problem with.

    I can appreciate the certainty argument C makes, but I’ve had too much experience with the attitudes and blinkered perspectives of people on tenure and promotion committees to be happy with giving some people an ax to wield to support them.

  3. I echo C’s comment. And, if some kind of metric is going to be applied to tenure and promotion on the basis of outputs (how neoliberal), it should be done in consultation with faculty, not by administrative fiat. Using some kind of standardized, one-size-fits-all model would disadvantage some disciplines, and so direct consultation and adjustments on a discipline by discipline basis would be required.

    Also, relying on citation counts is not an ideal metric either given the rise of “citation cartels” that attempt to game the system. Anyone evaluating citation counts would have to dig deep and not just react to a raw number on a piece of paper as the basis to make a decision – not that a t&p ctte would be so careless as to skimp on that.

  4. I acknowledge the journalistic imperative to have a “hook” but bargaining is a complex process specific to the parties involved, difficult to reduce to a catchy blog post by someone outside the process.

    As for the assertion that UMFA wants to avoid being evaluated, I think the truth is quite the opposite. They appear to be asking to be fully and properly evaluated, based on their individual files and in the context of their disciplines, rather than having someone with a calculator running the numbers while failing to engage with the substance. Of course it is more “efficient” to do a quick count, but where the quality of teaching and research is at stake, surely we ought to go the extra mile and forgo the cookie-cutter “metrics.”

  5. This piece grossly over-simplifies. True, UMFA has proposed “no prescribed lists etc” at the bargaining table, which this article takes as the only acceptable outcome from UMFA’s perspective. But that is just not the case.

    The bottom line is that UMFA is concerned about what is already happening in at least two faculties (Nursing and Arts) and has been implemented for non-UMFA academics in Medicine with the clear intent based on the Dean of Medicine’s blog that it be extended to UMFA members as well. There is clear evidence that the administration wants to replace the current system of academic accountability (Annual Reports of Academic Staff etc.) by a quantitative and categorized system that can remove academic factors completely from the process. It is the idea that bureaucrats might both establish the benchmarks for academic evaluation AND ultimately perhaps carry out the assessments themselves as a simple exercise in bean-counting that concerns me, and concerns UMFA.

    To imply that UMFA does not want to be accountable is both incorrect and insulting. Academics have always been accountable to our academic peers, to our funding agencies, to the University, and most importantly to our students. UMFA wants this to continue, and believes that the best way to establish and monitor academic job performance standards is through MEANINGFUL CONSULTATION. The emphasis on “meaningful” is important. As a Senator at U of Manitoba for 10 years I have seen again and again that “consultation” consists mostly of presenting fait accompli and demanding Senate’s approval. It has been clear that criticisms are not taken well, even when meant constructively, and the number of proposed policies that have been amended based on Senate objections during my time has been very small indeed.

    So why did UMFA propose “no prescribed lists etc etc…”? This is bargaining, right? You always propose more than you are willing to settle for, with the expectation that you will have to concede something to get something. UMFA entered this round of contract talks assuming that both sides wanted to forge a collective agreement through negotiations. Instead, the Administration has refused to discuss the issues. Dr. Barnard’s statement that “we have not proposed anything relating to performance management” is true in the narrow sense, but is misleading since his administration is currently working on quantitative performance management initiatives. The Administration doesn’t need to propose anything in the contract to let it replace the current Collegial Governance model by the top-down Corporate model favoured by groups like HESA (among others).

    1. Reasonable points. Let me push back a bit on a couple of them:

      1) I understand your concerns re: prescribed lists of journals, and even suggested a couple of ways this can be dealt with. What I’m unclear on (wrt journals and funding) is this: the admin has withdrawn its proposals in this area. To me it looks as if you’re trying not to take yes for an answer: can you explain to me why it matters what’s going on in Nursing and Arts if the admin signs a deal to this effect?

      2) I understand the desire for meaningful consultation – I am unclear why your aim seems to be to prevent any meaningful consultation from concluding that maybe an enumeration of journals, publications or funding amounts might be the correct conclusion in some faculties/depts. Is it really meaningful consultation if certain outcomes are ruled out from the start?

      3) The “quantitative performance management initiatives” to which you refer in your final paragraph…are these initiatives at the level of the individual professor or at the unit-level? If the latter, would it be fair to say that what you’re really trying to achieve is for UMFA to have influence over the general resource-allocation process?

      4) I get that this is bargaining. What I would respectfully suggest to you is that when you combine bargaining with rhetoric appealing to universal values (e.g. academic freedom), you will appear very strident. I do not assume that academic freedom is something to be bargained away. I would, rather, assume that an academic union which genuinely believed academic job performance standards really were equivalent to “academic freedom”, which is what UMFA’s single public missive quite clearly implies, is not likely to compromise about it. If you’re telling me otherwise, that’s a bit of a shock, to be honest.

      1. The administration is technically correct in saying that it has not proposed performance management, but that is not the same thing as saying “we have no intention of putting performance management in place”. The Trial Balloons that have been floated in various faculties are a clear indication of where our administration is heading. We want language that ensures that academic performance is assessed by academics on academic grounds, and the administration has flatly refused to consider this. They are NOT saying “Yes” at all: and if they would agree that any changes to how faculty job performance is assessed had to be approved by the relevant Faculty Council, I for one would be happy to sign. But they are not doing this. I have to wonder, if they don’t intend to push for performance management, why they won’t put that into the contract.

        And, I did not and do not suggest that meaningful consultation could not lead to bean-counting as a method of performance assessment. That was certainly the first proposal that UMFA made – “no journal lists” – but see my later remark about bargaining. Personally, if a department or faculty wants to be judged by those standards and can get a majority in their Faculty Council to agree, that’s ok by me. The point is, academics are best positioned to assess academic performance. We want it to stay that way, our administration won’t agree to put language to that effect into the contract, or indeed even to discuss it from what I hear.

        UMFA should not influence resource allocation, nor did I say so. However, during contract negotiations UMFA is the only voice that faculty have to preserve the idea that academic decisions are made by academics through Faculty Councils and ultimately the Senate of the university, as demanded by the Act.

        Finally, academic freedom is not “rhetoric”. It is one of the most important aspects of the University concept in Western society. Job performance standards are not intrinsically infringements of academic freedom – we already have quite respectable methods for assessing academic performance and we are quite content to be judged on that basis by our peers. Until now, even our administration was content that we be judged on this basis. The concern is that a journal list (for example) is not value-free. If you must publish in journals X, Y or Z, but none of these journals will accept papers about your sub-discipline, you are being told effectively that you must not research that sub-discipline. If you must raise a certain number of dollars in research funding, this clearly forces you to pursue donor-friendly research in place of other directions.

        There has been a lot of pressure on universities to become “Knowledge Factories”, coming from governments and big business. If we become a factory, run by factory-style rules, we cease to be a University in the Western tradition. I think we should fight that kind of transformation.

        1. Look, I’m still a bit lost by the argument in your first and last paragraphs. The admin has *agreed* to the conditions about no prescribed journals and no prescribed $. Why are you still arguing that point as if it’s relevant? What are these “trial balloons,” exactly? And why do you think these trial balloons have more force than the language in a signed CBA?

          As for your second para – *nowhere*, as far as I can see, has UMFA said “academics should judge academics”. The only thing I’ve seen them say is “no prescribed journals, $, or enumeration of pubs”. The admin position paper seems to say that UMFA did put forward an additional proposal to cut Deans out of the pay/promotion process, but there’s no way to verify that. Maybe they did, maybe they didn’t – this is part of the problem of UMFA not having uttered a single word in public about this until about 2 weeks ago.

          The reason I asked about resource allocation is that I can see that mgmt plans which involve distribution of resources (program evaluations such as the ones at Guelph and Regina, for instance) necessarily contain quantitative elements, but would be about entire units, not individual faculty members. I was wondering if these were the kinds of trial balloons you were talking about.

          As for rhetoric, look: the kind of interview given here is suggesting that the fight is about something wholly other than performance standards:
          main.mp4.cbc.ca/prodVideo/news/254/538/mb-uofmstrike-131021_M__181267.mp4 …

          As for sub-disciplines…it seems to me the university has entirely the right to determine what sub-disciplines it wants its staff to pursue, doesn’t it? Otherwise it fundamentally abandons the right to determine curriculum. Deans exist precisely to plan what palette of sub-disciplines the institution will offer. Academics hired to teach, say, English history do not have the right to unilaterally start doing their scholarly work in Irish history, or Mesopotamian history, or what have you. That’s operational anarchy, not academic freedom.

  6. What would be really neat is if someone proposing these metrics could cite examples of world-leading universities that use them. Compare that to the list of world-leading places that *don’t* use them. And then… count! ‘Cause we’re all about cardinality, right?

    1. You’d have to go dept. by dept., I think. I don’t know of any institution that sets pan-faculty criteria like this.

      1. “You’d have to go dept. by dept, I think.” You seem to be buying into this idea, albeit its less stupid variant. But is there any evidence that it’s a *good* idea? That was the point of my comment. If people have radical proposals about changing how academia works, it’s not asking a lot to want some evidence that they are good ideas. Why make up ideas out of thin air instead of asking how they do things at Princeton, say?

        1. That’s getting into issues around explicit/implicit standards. Princeton probably doesn’t need to write anything down in order to have its scholars produce research which in quantity and quality is well above anything most Cdn universities could match. The issue is whether setting more explicit standards for universities below Ivy level is a way to maintain/improve quality. I believe it is quite common in the UK to require minimum publication #s for promotion, but I admit to not knowing much about it. This is why I was suggesting you’d need research to answer your question.

          1. I think the people who “need research” are the administrators who are proposing this Taylorite scheme with no evidence of its being any good.

          2. Well, equally, I’m not sure there’s evidence that the present scheme “works”, is there? The value of the status quo is just a hunch, too.

  7. I don’t know this university or situation, but based on my experience at my institution, that’s the whole point, Alex – the admin wants to impose across the board a model that may work in some disciplines, eg the sciences, but would be completely unfit for many humanities disciplines. So this sounds to me like a fight against pan-faculty criteria, though I take your point that unions have a responsibility, even – especially – during bargaining, to explain their positions well and fully.

    1. That would be a problem if true. I see no evidence that this is the case, though. Even the union is only claiming this is being done on a faculty-by-faculty basis.

  8. Wow, you’re seriously suggesting that there’s a lack of evidence that the method of peer evaluation long employed in academic decisions about hiring and promotion works? Almost all intellectual and scientific progress made in the last 150 years can be credited to higher education researchers. On my planet, anyway… I don’t know which one you are thinking of.

      1. Alex at this point I need some help understanding your thinking. We have a system, peer review and evaluation, which is an important part of the modern research university, which generates knowledge better than any institution ever known. That’s not a hunch, it’s a fact. So if someone proposes majorly changing this important component of a highly successful system they’d better have, oh, SOME evidence that what they’re proposing is better. To me that seems reasonable to ask. Or is your idea that we just set up some natural experiments and look at the results 20 years down the road?

        1. Look, even the cartoon version of what U of M was suggesting didn’t affect peer review in the way you’re describing it here. Enumerated publications in prescribed journals? You can bet they’d all be peer-reviewed, and with high journal impact factors, I bet, meaning (in theory at least) the quality of the reviewers would be higher, too. To the extent grants come from places like NSERC and SSHRC, there’s a form of peer review there, too. Indeed, one could argue it’s better peer review than what one would get from relying on one’s colleagues in one’s own institution, because the reviewers are further away, more dispassionate, not affected by departmental infighting, etc. So the issue here isn’t peer review, it’s how you set standards for tenure, pay, and promotion.

          What I was saying was that I don’t think there’s any particular evidence that refusing to set minimum publication standards for tenure, pay, and promotion is a better way of encouraging scholarly output than actually setting them. Both sides of the debate are operating on hunches.

          1. My point is that one side of this debate has a successful working implementation to point to. The other side has… I still don’t know what. The UK’s RAE? Jury’s still out on that. (Luckily for us the Brits *have* set up a natural experiment and we can look at it 10 years from now and see how it’s worked out. So far it looks pretty ridiculous in my discipline at least, with all the playing-to-the-numbers one would expect. Sad.)

          2. The issue isn’t so much RAE as how institutions react to the incentives of RAE. If some have come up with new pay, promotion criteria (tenure doesn’t really exist there) in order to improve research outputs and others not, then you’d have an experiment.

            What field are you in?

  9. “The issue isn’t so much RAE as how institutions react to the incentives of RAE.” Sorry, I don’t understand how the latter is a separate issue from the former. If RAE sets up incentives for institutions to do stupid things, and they do stupid things, surely the problem is the RAE. I really don’t see what sort of separation you envision given that the whole point of the thing is to change incentives.

    I’m in the humanities. And for what it’s worth, if you asked me, given how well-paid Canadian professors are, why Canadian universities aren’t *way* better than they are, in the humanities at least (where external funding is far less a constraint than in the sciences), the main culprit in my view is the longstanding (until recently) strong preference mandated by the government to hire Canadians. That cut down the hiring pool by 95% or so in many searches. It’s a good thing it’s gone. The main beneficiaries of its demise will be Canadian students.

    Fiddling with evaluation procedures and such is closing the barn door after the horse is gone. Your profs are in place, they have tenure; mostly what the effort will achieve is to demoralize them. All the action is at the hiring stage but at that stage no number of metrics come close to replacing professional judgement, although that judgement is often clouded by the pernicious impulse not to hire someone better than you. Addressing *that* problem is the main issue, in my judgement, now that the preference for Canadians is gone.
