HESA

Higher Education Strategy Associates


February 21

Two Studies to Ponder

Sometimes, I read research reports which are fascinating but probably wouldn’t make for an entire blog post (or at least a good one) on their own.  Here are two from the last couple of weeks.

Research vs. Teaching

Much of the rhetoric around universities’ superiority over other educational providers is that their teachers are also at the forefront of research (which is true if you ignore sessionals, but you’d need a biblically-sized beam in your eye to miss them).  But on the other hand, research and teaching present (to some extent, at least) rival claims on an academic’s time, so surely if more people “specialized” in either teaching or research, you’d get better productivity overall, right?

Anyone trying to answer this question will come up pretty quickly against the problem of how to measure excellence in teaching.   Research is easy enough: count papers or citations or whatever other kind of bibliometric outcome takes your fancy.  But measuring teaching is hard.  One line of research tries to measure the relationship between research productivity and things like student evaluations and peer ratings.  Meta-analyses show zero correlation between the two: high research output has no relationship with perceived teaching quality.  Another line of research looks at research output versus teaching output in terms of contact hours.  No surprise there: these are in conflict.  The problem with those studies is that the definitions of quality are trivial or open to challenge.  Also, very few studies do very much to control for things like discipline type, institutional type, class size, stage of academic career, etc.

So now along comes a new study by David Figlio and Morton Schapiro of Northwestern University, which has a much cleverer way of identifying good teaching.  They look specifically at professors teaching first-year courses and ask two questions: first, how do each professor’s students subsequently perform in follow-up courses in the same subject?  And second, how many students actually go on from each professor’s first-year class to major in the subject?  The first is meant to measure “deep learning”; the second, how well professors inspire their students.  Both measures are certainly open to challenge, but they are still probably better than the ones used in earlier studies.
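To make the first measure concrete, here is a minimal sketch of the idea in Python.  This is an illustration of the general approach, not the authors’ actual econometric specification (which controls for many covariates); the professor names and grades are invented.

```python
from collections import defaultdict

def deep_learning_scores(records):
    """records: list of (professor, followup_grade) pairs, one per student.

    Returns each first-year professor's mean deviation of their students'
    follow-up-course grades from the cohort-wide mean.  A positive score
    suggests students leave that professor's class better prepared than
    average; a negative score, worse.
    """
    overall = sum(grade for _, grade in records) / len(records)
    by_prof = defaultdict(list)
    for prof, grade in records:
        by_prof[prof].append(grade)
    return {prof: sum(gs) / len(gs) - overall for prof, gs in by_prof.items()}

# Toy data: follow-up-course grades on a 0-4 scale.
records = [
    ("Smith", 3.4), ("Smith", 3.0), ("Smith", 3.2),
    ("Jones", 2.6), ("Jones", 2.8), ("Jones", 2.4),
]
scores = deep_learning_scores(records)  # Smith ≈ +0.3, Jones ≈ -0.3
```

The point of using *follow-up* grades rather than grades in the professor’s own course is that it sidesteps grade inflation: a professor can’t raise this score by marking leniently, only by leaving students genuinely better prepared.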

Yet the result is basically the same as in those earlier studies: a better publishing record is uncorrelated with the teaching quality measures.  That is, some good researchers are good teachers, and some aren’t.

Institutions should pay attention to this result.  It matters for staffing and tenure policies.  A lot.

Incubator Offsets

Christos Kolympiris of Bath University and Peter Klein of Baylor University have done the math on university incubators, and what they’ve found is that these come with some interesting opportunity costs.  The paper is gated, but a summary can be found here.  The main finding is that, on average, universities see a decrease in both patent quality (as measured by patent citations) and licensing revenues after establishing an incubator.  Intriguingly, the effect is larger at institutions with lower research income, suggesting that the more constrained an institution’s resources, the likelier it is that incubator funding is drawn from other areas of the institutional research effort, which then suffer as a result.

(My guess, FWIW, is that it also has to do with limited management attention span.  At smaller institutions, there are fewer people to do oversight and hence a new initiative takes away managerial focus in addition to money).

This intriguing result is not an argument against university or polytechnic incubators; rather, it’s an argument against viewing such initiatives as purely additive.  The extent to which they take resources away from other parts of the institution needs to be considered as well.  To be honest, that’s probably true of most university initiatives, but as a sector we aren’t hardwired to think that way.

Perhaps we should be.

August 04

Summer Updates from Abroad (2): The UK Teaching Excellence Framework

The weirdest – but also possibly most globally consequential – story from this year’s higher education silly season comes from England.  It’s about something called a “Teaching Excellence Framework”.

Now, news of nationally-specific higher education accountability mechanisms doesn’t often travel.  Because, honestly, who cares?  It’s enough trouble keeping track of accountability arrangements in one’s own country.  But there are few in academia, anywhere, who have not heard about the UK’s Research Excellence Framework (or its nearly-indistinguishable predecessor, the Research Assessment Exercise).  There is scarcely a living British academic who has travelled abroad in the last two decades without regaling foreign colleagues with tales of this legendary process, usually using words like “vast”, “bureaucratic”, “walls full of filing cabinets”, etc.  So news that the country may be looking at creating a second such framework, related to teaching, is sure to strike many as some sort of Orwellian joke.

But no, this government is serious.  It’s fair to say that the government was somewhat disappointed that its de-regulation of tuition fees did not force institutions to focus more on teaching quality.  With the market having failed in that task, they seem to be retreating to good old-fashioned regulation, mixed with financial incentives.

The idea – and, at the moment, it’s still just a pretty rough idea – is rather simple: institutions should be rated on the quality of their teaching.  But there are two catches: first, how do you measure it?  And second, what are the rewards for doing well?

The first of these seems to be up in the air.  Although the government has committed to the principle of assessing teaching at the institutional level, it genuinely seems not to have thought through how it intends to achieve this.  There are a lot of options here: one could simply look at use of resources and presence of qualifications: student/teacher ratios, the number of profs who have actually sought teaching qualifications, etc.  One could go the survey route, and ask students how they feel about teaching; one could also go the peer assessment route, and have profs rate each other’s teaching.  Or there’s the “learning gain” model, used by the Collegiate Learning Assessment, which was part of the AHELO system (from which, by the way, the UK has now officially withdrawn).  Of course, everyone knows that most of these measurements are either untested or can be gamed, so there’s some fear that what the government really wants to do is to rely on – what might generously be called – lowest-common-denominator statistics; namely, employment and income data.

Why might they want to do something this bell-ended, when everyone knows income is tied most closely to field of study?  Well, the clue is in the rewards.  British universities have – as universities do – recently been clamouring for more money.  But according to this government, there is no more money to be had; in fact, at about the same time they announced the new excellence framework, they also announced a £150 million cut to the basic teaching grant, spread over two years.  So the proposed reward for good teaching is the ability to charge higher fees (so much for de-regulation…).  But as I explained a couple of weeks back, raising tuition doesn’t help much because, thanks to high debt and a generous loan forgiveness system, somewhere between 60 and 80% of any extra charges at the margin will end up on the public books circa 2048 anyway.

But… if you only increase tuition at schools whose graduates earn the most, the likelihood is that you will get a higher proportion of graduates earning enough to pay back their loans over time.  And hence less money will need to be forgiven.  And hence this might not actually cost so much.  Which is why there is an incentive for government to do the wrong thing here.
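The arithmetic behind that incentive can be sketched in a few lines.  The figures below are purely illustrative, not official estimates: the 70% sector-wide non-repayment share sits in the middle of the 60–80% range cited above, and the 30% figure for high-earning institutions is an invented assumption for the sake of the example.

```python
def public_cost_of_fee_rise(extra_fee, non_repayment_share):
    """Long-run cost to the public purse of a marginal tuition increase
    financed through income-contingent loans: the share of the extra
    borrowing that is never repaid is eventually written off."""
    return extra_fee * non_repayment_share

# A hypothetical £1,000 fee rise.
# Sector-wide, roughly 60-80% of marginal fees end up forgiven:
sector_wide = public_cost_of_fee_rise(1000, 0.70)   # ≈ £700 written off
# At institutions whose graduates earn the most, far more is repaid
# (assumed 30% non-repayment here):
high_earning = public_cost_of_fee_rise(1000, 0.30)  # ≈ £300 written off
```

Letting only the high-graduate-income institutions raise fees thus roughly halves the write-off per pound of extra tuition in this toy example – which is exactly the budgetary temptation described above.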

Still, on the off-chance the government gets this initiative at least partially right, the impact could be global.  Governments all over the world are trying to get institutions to pay more attention to teaching; expect a lot of imitators if the results of this exercise look even half-promising.  Stay tuned.