Are NSERC decisions “skewed” to bigger institutions?

That’s the conclusion reached by a group of professors from – wait for it – smaller Canadian universities, as published recently in PLOS One. I urge you to read the article, if only to understand how technically rigorous research without an ounce of common sense can make it through the peer-review process.

Basically, what the paper does is rigorously prove that “both funding success and the amount awarded varied with the size of the applicant’s institution. Overall, funding success was 20% and 42% lower for established researchers from medium and small institutions, compared to their counterparts at large institutions.”

They go on to hypothesize that:

“…applicants from medium and small institutions may receive lower scores simply because they have weaker research records, perhaps as a result of higher teaching or administrative commitments compared to individuals from larger schools. Indeed, establishment of successful research programs is closely linked to the availability of time to conduct research, which may be more limited at smaller institutions. Researchers at small schools may also have fewer local collaborators and research-related resources than their counterparts at larger schools. Given these disparities, observed funding skew may be a consequence of the context in which applicants find themselves rather than emerging from a systemic bias during grant proposal evaluation.”

Oh my God – they have lower success rates because they have weaker research records?  You mean the system is working exactly as intended?

Fundamentally, this allegedly scientific article is making a very weird political argument. The reason profs at smaller universities don’t get grants, according to these folks, is because they got hired by worse universities – which means they don’t get the teaching release time, the equipment and whatnot that would allow them to compete on an even footing with the girls and boys at bigger schools. To put it another way, their argument is that all profs have inherently equal ability and are equally deserving of research grants; it’s just that some, by sheer random chance, got allocated to weaker universities, which have put a downer on their careers, and if NSERC doesn’t actively ignore actual outputs and perform some sort of research grant affirmative action, then it is guilty of “skewing” funding.

Here’s another possible explanation: yes, faculty hired by bigger, richer, more research-intensive institutions (big and research-intensive are not necessarily synonymous, but they are in Canada) have all kinds of advantages over faculty hired by smaller, less research-intensive universities. But maybe, just maybe, faculty research quality is not randomly distributed. Maybe big rich universities use their resources mainly to attract faculty deemed to have greater research potential. Maybe they don’t always guess quite right about who has that potential and who doesn’t, but on the whole it seems likelier than not that the system works more or less as advertised.

And so, yes, there is a Matthew effect (“for unto every one that hath shall be given, and he shall have abundance”) at work in science: the very top of the profession gets more money than the strata below them, and that tends to increase the gap in outcomes (salary, prestige, etc.). But that’s the way the system was designed. If you want to argue against that, go ahead. But at least do it honestly and forthrightly: don’t use questionable social science methods to accuse NSERC of “bias” when it is simply doing what it has always been asked to do.

5 responses to “Are NSERC decisions “skewed” to bigger institutions?”

  1. Paul, I don’t think this engages the core of the analysis, which is that scores on all three NSERC Discovery evaluation criteria co-vary with institutional size, and seem to do so irrespective of the career stage/record of the researcher. This looks like a halo effect of affiliation, raising a perfectly reasonable question of what the results would look like if the assessment criteria didn’t invite such an effect.*

    The authors’ argument for a bias, from the abstract onward, is not simply that there is a difference in overall award rates, as you suggest, but that the criterially distinct evaluation categories internal to the process are all swayed by institution size. Maybe for the HQP (highly qualified personnel) criterion this is unsurprising. But for EoR (excellence of researcher) and (perhaps especially) MoP (merit of proposal) it is *not* a direct consequence of the operations of a larger university. So it looks like a bias applying across these categories; and since there is a difference in award rate by university size, this is evidence that the difference in award rates is driven by the bias. To what extent it is driven by bias is an open question (and one the authors do too little to address; the title is probably the least warranted part of the paper). But to depict this as a call for “research grant affirmative action” is largely to overlook the authors’ focus, both on the analytical side and on the advocacy side of things.

    The load-bearing portion of your critique is the conjecture that researchers’ inherent abilities, and their dispositions to write meritorious scientific proposals, are in fact indirectly correlated with the size of the institutions at which they are hired — and in just the right manner to generate the results measured. This is not much more than an assumption, though, and it does not obviously vitiate the point of the Murray et al paper.

    A general worry is that it is a Just-So Story, of the sort apt to dismiss the appearance of bias in any observed result. It is always possible to hypothesize a hidden distribution of merit that rationalizes the observed distribution of goods. But with what evidence?

    More specifically, it misses that there are two issues here, one about the reliability of the process (the analysis suggests size-driven halo effects spreading across the evaluative criteria, making the process inappropriately sensitive to affiliation), and one about whether the process is getting things wrong (you suggest mechanisms in the hiring process that contingently link university size to researcher ability, so that it all works out fine anyhow). The heart of the paper is its linking the first issue with the second. But this is nowhere acknowledged in your objections.

    Nor is much other subtlety. I agree that it would be surprising if research ability altogether failed to make itself felt in the hiring processes that large research-intensive universities deploy. But there’s no need to suppose that either the difference in success rates is all problematic bias, or it’s no problematic bias, with nothing in between. Again, the authors themselves could be clearer about this fact, to be sure; but they seem far less committed to the false dichotomy than your move-along-nothing-to-see-here critique is. In fact, I can’t find anything in their paper that remotely answers to your astonishing gloss:

    “The reason profs at smaller universities don’t get grants, according to these folks, is because they got hired by worse universities – which means they don’t get the teaching release time, the equipment and whatnot that would allow them to compete on an even footing with the girls and boys at bigger schools. To put it another way, their argument is that all profs have inherently equal ability and are equally deserving of research grants…”

    Rather, the authors argue that researchers’ applications should be judged on more direct measures of their “future research prospects, capabilities, and accomplishments”. That doesn’t seem terribly unreasonable. The question, then, is whether judging them directly or indirectly on how big a university they are affiliated with is a sufficiently good proxy for their research prospects, capabilities, and accomplishments. You point out mechanisms by which affiliation could reflect these things; and that is a fair point as far as it goes. But how far it goes is not at all clear. And certainly it’s no argument against using better proxies.

    As far as I can see, at no point do the authors claim that using other criteria would eliminate *all* differences in success rates; presumably that is an empirical question. They really have to say more about what a reasonable baseline difference would be — and whether or not they think it’s zero, I would want to see their reasoning! But on the advocacy side, anyhow, their focus is not on having zero difference. It’s on using a process less sensitive to (mere) institutional size, to mitigate significant decreases in small-institution success rates like that observed from 2007 to 2011, and projected again for the future. To simply assume that such decreases would reflect an underlying distribution of merit is a lot more empirically dubious than anything I can find in the Murray et al. paper.

    To be clear, I am not here taking a view on how significant a concern the putative bias is. My point is just that your post does not really reflect what the paper argues. In particular, what you’ve written falls far short of justifying the serious insinuation that the authors have not acted “honestly and forthrightly,” and have used “questionable social science methods”. There are things here about which reasonable people may disagree reasonably; but these characterizations of the paper just don’t withstand scrutiny.

    * Wherever I talk about what the paper says, please just read an implicit “or so it seemed to me, as I quickly read it this morning while doing a bunch of other things.” Caveat lector!

  2. A few non-Toronto-centric thoughts:
    * The best faculty are not all at the big universities. New faculty go to where the few job openings are.
    * Not all research requires the most expensive toys, so the size of a university is often irrelevant to the potential for success. The authors of the article should have compared disciplines where university resources were not a factor. That being said, many researchers at small universities set up arrangements to do their lab work at other institutions, and thus the resource issue can often be negated.
    * Not all areas of expertise are housed at the big universities. I would argue there may be more of a bias on the type of research that gets done than on individuals. The thought here being that if it’s not being done at the big universities, it’s not worth doing.
    * Human nature dictates that people will support that which is closest to their own experience. Having programs (e.g. grad student education, disciplinary vs. interdisciplinary research, having their own equipment vs. accessing that of others) set up in the same familiar way tends to get more support than that which requires a different model. It also inhibits new ideas.

    On the whole, the NSERC system works, and it’s much more fair than most. I completely disagree with the idea of giving funding to those who are not deserving, regardless of where they are. However, as a former NSERC program officer, I saw instances where researchers from large universities were given the benefit of the doubt on crap proposals partly because of their institutions’ reputations, not because of their individual abilities (old-boys’-club analogy). The same benefit was not given to those from small universities, which didn’t have the same clout at the table.

  3. Certainly in my own sub-field (which is not in the sciences), there’s a job advertised about every five years. At that rate, where a faculty member ends up is a product of arbitrary circumstance, not merit.

    The wider problem, iwis, is that we’re judged so much on our ability to attract funding. This makes sense for the sciences, but only because funding is so necessary an evil and we have faith that, eventually, empirical results will trump scientific fashion. It makes no sense for anything else.
