Sometimes, I read research reports which are fascinating but probably wouldn’t make for an entire blog post (or at least a good one) on their own. Here are two from the last couple of weeks.
Research vs. Teaching
Much of the rhetoric about universities’ superiority over other educational providers rests on the claim that their teachers are also at the forefront of research (true, if you ignore sessionals, but you’d need a biblically-sized beam in your eye to miss them). On the other hand, research and teaching present (to some extent, at least) rival claims on an academic’s time, so surely if more people “specialized” in either teaching or research, you’d get better productivity overall, right?
Anyone trying to answer this question runs up pretty quickly against the problem of how to measure excellence in teaching. Research is easy enough: count papers, citations, or whatever other bibliometric outcome takes your fancy. But measuring teaching is hard. One line of research tries to relate research productivity to things like student evaluations and peer ratings; meta-analyses show zero correlation between the two, meaning high research output has no relationship with perceived teaching quality. Another line of research compares research output with teaching output measured in contact hours; no surprise there, the two are in conflict. The problem with these studies is that their definitions of quality are trivial or open to challenge, and few of them do much to control for things like discipline, institutional type, class size, or stage of academic career.
So now along comes a new study by David Figlio and Morton Schapiro of Northwestern University, which has a much cleverer way of identifying good teaching. They look specifically at professors teaching first-year courses and ask two questions: how do each professor’s students fare, grade-wise, in follow-up courses in the same subject, and how many students go on from each professor’s first-year class to major in the subject? The first is meant to measure “deep learning”; the second, how well professors inspire their students. Both measures are certainly open to challenge, but they are still probably better than those used in earlier studies.
Yet the result is basically the same as in those earlier studies: a better publishing record is uncorrelated with these teaching-quality measures. That is, some good researchers are also good teachers, while others aren’t.
Institutions should pay attention to this result. It matters for staffing and tenure policies. A lot.
The Opportunity Costs of Incubators
Christos Kolympiris of Bath University and Peter Klein of Baylor University have done the math on university incubators, and what they’ve found is that these come with some interesting opportunity costs. The paper is gated, but a summary can be found here. The main finding is that, on average, universities see a decrease in both patent quality (as measured by patent citations) and licensing revenues after establishing an incubator. Intriguingly, the effect is larger at institutions with lower research income, which suggests that the more constrained an institution’s resources, the likelier it is that incubator funding is drawn from other areas of the institutional research effort, which then suffer as a result.
(My guess, FWIW, is that it also has to do with limited managerial attention. At smaller institutions, there are fewer people to provide oversight, so a new initiative takes away managerial focus in addition to money.)
This intriguing result is not an argument against university or polytechnic incubators; rather, it’s an argument against viewing such initiatives as purely additive. The extent to which they take resources away from other parts of the institution needs to be considered as well. To be honest, that’s probably true of most university initiatives, but as a sector we aren’t hardwired to think that way.
Perhaps we should be.