HESA

Higher Education Strategy Associates

Author Archives: Paul

April 17

British Columbia: Provincial Manifesto Analysis

On May 9th, our left-coasters go to the polls.  What are their options as far as post-secondary education is concerned?

Let’s start with the governing Liberals.  As is often the case with ruling parties, some of their promises are things that are both baked into the fiscal framework and will take longer than one term to complete (e.g. “complete re-alignment of $3 billion in training funds by 2024”), or are simply re-announcements of previous commitments (pages 85-86 of the manifesto appear to simply be a list of all the SIF projects the province already agreed to co-fund), or take credit for things that will almost certainly happen anyway (“create 1000 new STEM places”… in a province which already has 55,000 STEM seats and where STEM spots have been growing at a rate of about 1,700/year anyway… interestingly, the Liberals didn’t even bother to cost that one).

When you throw those kinds of promises away, what you are left with is a boatload of micro-promises, including: i) making permanent the current BC Training Tax Credit for employers, ii) creating a new Truck Logger training credit (yes, really), iii) spending $10M on open textbooks over the next 4 years, iv) reducing interest rates on BC student loans to prime, v) making minor improvements to student aid need assessment, vi) providing a 50% tuition rebate to Armed Forces Veterans, vii) creating a centralized province-wide admission system and viii) allowing institutions to build more student housing (currently they are restricted from doing so because any institutional debt is considered provincial debt and provincial debt is more or less verboten… so this is a $0 promise just to relax some rules).  There’s nothing wrong with any of those, of course, but only the last one is going to make any kind of impact, and as a whole it certainly doesn’t add up to a vision.  And not all of this appears to be new money: neither the student loan changes nor the centralized application system promises are costed, which suggests funds for these will be cannibalized from elsewhere within the system.  The incremental cost of the remaining promises?  $6.5 million/year.  Whoop-de-do.  Oh, and they’re leaving the 2% cap on tuition rises untouched.

What about the New Democrats?  Well, they make two main batches of promises.  One is about affordability, and consists of matching the Liberal pledge on a tuition cap, slightly outdoing them on provincial student loan interest (eliminating it on future and past loans, which is pretty much the textbook definition of “windfall gains”), and getting rid of fees for Adult Basic Education and English as a Second Language Program (which, you know, GOOD).  There’s also an oddly-worded pledge to provide a $1,000 completion grant “for graduates of university, college and skilled trades programs to help pay down their debt when their program finishes”: based on the costing and wording, I think that means the grant is restricted to those who have provincial student loans.

The NDP also has a second batch of policies around research – $50M over two years to create a graduate scholarship fund and $100M (over an unspecified period, but based on the costing, it’s more than two years) to fund expansion of technology-related programs in BC PSE institutions.  There is also an unspecified (and apparently uncosted) promise to expand tech-sector co-op programs.  Finally, they are also promising to match the Liberals on the issue of allowing universities to build student housing outside of provincial controls on capital spending.

Finally, there are the Greens, presently running at over 20% in the polls and with a real shot at achieving a significant presence in the legislature for the first time.  They have essentially two money promises: one, “to create a need-based grant system” (no further details) and two, an ungodly bad idea to create in BC the same graduate tax credit rebate that New Brunswick, Nova Scotia and now Manitoba all have had a shot at (at least those provinces had the excuse that they were trying to combat out-migration; what problem are the BC Greens trying to solve?).

Hilariously, the Greens’ price tag for these two items together is… $10 million.  Over three years.  Just to get a sense of how ludicrous that is, the Manitoba tax credit program cost $55 million/year in a province a quarter the size.  And within BC, the feds already give out about $75M/year in up-front grants.  So I think we need to credit the Greens with being more realistic than their federal cousins (remember the federal Green manifesto?  Oy.), but they have a ways to go on realistic budgeting.

(I am not doing a manifesto analysis for the BC Conservatives because a) they haven’t got one and b) I’ve been advised that if they do release one it will probably be printed in comic sans.)

What to make of all this?  Under Gordon Campbell, the Liberals were a party that “got” post-secondary education and did reasonably well by it; under Christy Clark it’s pretty clear PSE can at best expect benign neglect.  The Greens’ policies focus on price rather than quality, one of their two signature policies is inane and regressive, and their costing is off by miles.

That leaves the NDP.  I wouldn’t say this is a great manifesto, but it beats the other two.  Yeah, their student aid policies are sub-optimally targeted (they’re all for people who’ve already finished their programs, so not much access potential), but to their credit they’ve avoided going into a “tuition freezes are magic!” pose.  Alone among the parties, they are putting money into expansion and graduate studies and even if you don’t like the tech focus, that’s still something.

But on the whole, this is a weak set of manifestos.  I used to say that if I was going to run a university anywhere, I’d want it to be in British Columbia.  It’s the least-indebted jurisdiction in Canada, has mostly favourable demographics, and has easy access both to Asia (and its students) and to the well-off American northwest.  And it’s got a diversified set of institutions which are mostly pretty good at what they do.  Why any province would want to neglect a set of institutions like that is baffling; but based on these manifestos it seems clear that BC’s PSE sector isn’t getting a whole lot of love from any of the parties.  And that’s worrying for the province’s long-term future.

April 10

Evaluating Teaching

The Ontario Confederation of University Faculty Associations (OCUFA) put out an interesting little piece the week before last summarizing the problems with student evaluations of teaching.  It contains a reasonable summary of the literature and I thought some of it would be worth looking at here.

We’ve known for a while now that the results of student evaluations are statistically biased in various ways.  Perhaps the most important way they are biased is that professors who mark more leniently get higher ratings from their students.  There is also the issue of what appears to be discrimination: female professors and visible minority professors tend to get lower ratings than white men.  And then there’s the point that OCUFA makes with respect to the comments section of these evaluations being a hotbed of statements which amount to harassment.  These points are all well worth making.

One might well ask: given that we all know about the problems with teaching evaluations, why in God’s name do institutions still use them?  Fair question.  Three hypotheses:

  1. Despite flaws in the statistical measurement of teaching, the comments actually do provide helpful feedback, which professors use to improve their teaching.
  2. When it comes to pay and promotion, research is weighted far more highly than teaching, so unless someone completely tanks their teaching evals – and by tanking I mean doing so much below par that it can’t reasonably be attributed to one of the biases listed above – they don’t really matter all that much (note: while this probably holds for tenured and tenure-track profs, I suspect the stakes are higher for sessionals).
  3. No matter how bad a measurement instrument they are, the idea that one wouldn’t treat student opinions seriously is totally untenable, politically.

In other words, there are benefits despite the flaws, the consequences of flaws might not be as great as you think, and to put it bluntly, it’s not clear what the alternative is.  At least with student evaluations you can maintain the pretense that teaching matters to pay and promotion.  Kill those, and what have you got?  People already think professors don’t care enough about teaching.  Removing the one piece of measurement and accountability for teaching that exists in the system – no matter how flawed – is simply not on.

That’s not to say there aren’t alternative ways of measuring teaching.  One could imagine a system of peer evaluation, where professors rate one another.  Or one could imagine a system where the act of teaching and the act of marking are separated – and teachers are rated on how well their students perform.  It’s not obvious to me that professors would prefer such a system.

Besides, it’s not as though the current system can’t be redeemed.  Solutions exist.  If we know that easy markers get systematically better ratings, then normalize ratings based on the class average mark.  Same thing for gender and race: if you know what the systematic bias looks like, you can correct for it.  And as for ugly stuff in the comments section, it’s hardly rocket science to have someone edit the material for demeaning comments prior to handing it to the prof in question.
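To make the leniency correction concrete, here is a minimal sketch of one way it could work.  All of the numbers are invented, and the method (residualizing evaluation scores against the class average mark with a simple least-squares fit, then re-centring) is just one illustrative approach, not anything OCUFA or any institution has endorsed:

```python
# Illustrative sketch: adjusting evaluation scores for grade leniency.
# Idea: regress section-level evaluation scores on the section's average
# mark, then rate instructors on the residual (what's left after the
# leniency effect is stripped out), re-centred on the overall mean.

sections = [
    # (avg_class_mark, avg_eval_score) -- all values are made up
    (62.0, 3.4), (68.0, 3.6), (74.0, 4.0), (80.0, 4.3), (86.0, 4.5),
]

n = len(sections)
mean_mark = sum(m for m, _ in sections) / n
mean_eval = sum(e for _, e in sections) / n

# Ordinary least-squares slope of eval score on class average mark
cov = sum((m - mean_mark) * (e - mean_eval) for m, e in sections)
var = sum((m - mean_mark) ** 2 for m, _ in sections)
slope = cov / var  # positive slope = easier markers score better

def adjusted_score(avg_mark, avg_eval):
    """Evaluation score net of the leniency effect, re-centred on the mean."""
    predicted = mean_eval + slope * (avg_mark - mean_mark)
    return avg_eval - predicted + mean_eval

for mark, ev in sections:
    print(mark, ev, round(adjusted_score(mark, ev), 2))
```

In this toy data, a 4.3 earned alongside an 80% class average shrinks after adjustment, while a 3.6 earned alongside a 68% average grows: exactly the correction the leniency finding calls for.  The same residualization logic extends to gender and race once you have an estimate of the systematic bias.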

There’s one area where the OCUFA commentary goes beyond the evidence, however, and that’s in trying to translate the findings on student teaching evaluations (i.e. how did Professor X do in Class Y?) to surveys of institutional satisfaction.  The argument they make here is that because the one is known to have certain biases, the other should never be used to make funding decisions.  Now, without necessarily endorsing the idea of using student satisfaction as a funding metric, this is terrible logic.  The two types of questionnaires are entirely different, ask different questions, and simply are not subject to the same kinds of biases.  It is deeply misleading to imply otherwise.

Still, all that said, it’s good that this topic is being brought into the spotlight.   Teaching is the most important thing universities do.  We should have better ways of measuring its impact.  If OCUFA can get us moving along that path, more power to them.

September 28

International Rankings Round-Up

So, the international rankings season is now more or less at an end.  What should everyone take away from it?  Well, here’s how Canadian universities did in the three main rankings (the Shanghai Academic Ranking of World Universities, the QS Rankings and the Times Higher Rankings).

[Table: Canadian universities’ 2016 results in the ARWU, QS, and Times Higher rankings]

Basically, you can paint any picture you want out of that.  Two rankings say UBC is better than last year and one says it is worse.  At McGill and Toronto, it’s 2-1 the other way.  Universities in the top 200?  One says we dropped from 8 to 7, another says we grew from 8 to 9 and a third says we stayed stable at 6.  All three agree we have fewer universities in the top 500, but they disagree as to which ones are out (ARWU figures it’s Carleton, QS says it’s UQ and Guelph, and for the Times Higher it’s Concordia).

Do any of these changes mean anything?  No.  Not a damn thing.  Most year-to-year changes in these rankings are statistical noise: but this year, with all three rankings making small methodological changes to their bibliometric measures, the year-to-year comparisons are especially fraught.

I know rankings sometimes get accused of tinkering with methodology in order to get new results and hence generate new headlines, but in all cases, this year’s changes made the rankings better, either making them more difficult to game, more reflective of the breadth of academia, or better at handling outlier publications and genuine challenges in bibliometrics.  Yes, the THE rankings threw up some pretty big year-to-year changes and the odd goofy result (do read my colleague Richard Holmes’ comments on the subject here) but I think on the whole the enterprise is moving in the right direction.

The basic picture is the same across all of them.  Canada has three serious world-class universities (Toronto, UBC, McGill), and another handful which are pretty good (McMaster, Alberta, Montreal and then possibly Waterloo and Calgary).  16 institutions make everyone’s top 500 (the U-15 plus Victoria and Simon Fraser but minus Manitoba, which doesn’t quite make the grade on QS), and then there’s another handful on the bubble, making it into some rankings’ top 500 but not others (York, Concordia, Quebec, Guelph, Manitoba).  In other words, pretty much exactly what you’d expect in a global ranking.  It’s also almost exactly what we here at HESA Towers found when doing our domestic research rankings four years ago.  So: no surprises, no blown calls.

Which is as it should be: universities are gargantuan, slow-moving, predictable organizations.  Relative levels of research output and prestige change very slowly; the most obvious sign of a bad university ranking is rapid changing of positions from year to year.   Paradoxically, of course, this makes better rankings less newsworthy.

More globally, most of the rankings are showing rises for Chinese universities, which is not surprising given the extent to which their research budgets have expanded in the past decade.  The Times threw up two big surprises; first by declaring Oxford the top university in the world when no other ranker, international or domestic, has them in first place in the UK, and second by excluding Trinity College Dublin from the rankings altogether because it had submitted some dodgy data.

The next big date on the rankings calendar is the Times Higher Education’s attempt to break into the US market.  It’s partnering with the Wall Street Journal to create an alternative to the US News and World Report rankings.  The secret sauce of these rankings appears to be a national student survey, which has never been used in the US before.  However, in order to get a statistically significant sample (say, the 210-student-per-institution minimum we used to use in the annual Globe and Mail Canadian University Report) at every institution currently covered by USNWR would imply an astronomically large sample size – likely north of a million students.  I can pretty much guarantee THE does not have this kind of sample.  So I doubt that we’re going to see students reviewing their own institution; rather, I suspect the survey is simply going to ask students which institutions they think are “the best”, which amounts to an enormous pooling of ignorance.  But I’ll be back with a more detailed review once this one is released.
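The back-of-envelope arithmetic behind that “north of a million” claim goes roughly as follows.  The institution count and response rate here are my own assumptions (USNWR ranks on the order of 1,500 US institutions, and online surveys of this kind rarely clear a 25% response rate), not figures from THE:

```python
# Back-of-envelope sample-size arithmetic. Both inputs below are
# assumptions for illustration, not published THE/USNWR figures.

institutions = 1500      # rough count of USNWR-covered institutions (assumption)
min_completes = 210      # per-institution floor from the Globe and Mail survey
response_rate = 0.25     # hypothetical online survey response rate (assumption)

# Completed surveys needed across all institutions
completed_surveys = institutions * min_completes

# Students you'd actually have to invite to net that many completes
invitations_needed = completed_surveys / response_rate

print(completed_surveys)        # 315000 completed surveys
print(int(invitations_needed))  # 1260000 students invited
```

Even with these fairly generous assumptions, you need well over a million invitations to get a usable per-institution sample, which is why a “which schools are best?” opinion poll is the more plausible design.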

August 10

Ontario’s Quiet Revolution

Last year, the Government of Ontario announced it was moving to a new and more generous system of student grants.  Partly, that was piggybacking on new and enhanced federal grants, and partly it was converting its own massive system of loan forgiveness and tax credits into a system which – more sensibly – delivers them upfront to students.  For most students from low-income backgrounds, this means they will receive more in grants than they pay in tuition.

Now, while the new federal grants came into place last week (yay!), the new provincial program isn’t due to be introduced until 2017-18.  But the *really* important piece of the Ontario reform actually won’t kick in until even later.  As I noted back here, it’s the move to “net billing” (that is, harmonizing the student aid and institutional application systems) which has the most interesting potential because now students will see net costs at the time of acceptance rather than just sticker costs.  It has been generally appreciated (in part because I keep banging on about it) that this will be revolutionary for students and their perceptions of cost.  What is not as well appreciated is how revolutionary this change will be for institutions.

Currently, Ontario universities use merit scholarships as a major tool in enrolment management.  At the time students are accepted, institutions offer them money based on their grades.  The scale differs a bit between institutions (an 85% might get you $1,000 at one university and $2,000 at another), but the basic idea is that over two-thirds of entering students receive some kind of financial award, usually for one year.  It’s a total waste of money for institutions, but everybody does it – so no institution feels it can stop doing it.

But the effect this money has on students is predicated on the fact that the institutional award offer is the first time anyone has talked about money with them.  In our current system of student aid, you have to be accepted at an institution before you can apply for student aid.  Even $1,000 is a big deal when nobody else is offering you any money.  But as of early 2018, students will learn about their institutional award at exactly the same time as they find out their student aid award.  How will that affect the psychology of the money being offered?  No one knows. How should universities therefore adjust their policy?

Bigger questions abound.  “Net billing” implies that institutions will know the outcome of a student’s provincial need assessment before the student does.  Will they be allowed to adjust their own aid offers as a result?  Could the province stop them from doing so even if they wanted to?

What will new letters of acceptance look like?  When an institution tells a student about tuition, aid, and “net cost”, will they be required to lump all aid together, or will they be allowed to label their own portion of the aid separately?  You would think institutions would fight hard to keep the label on their own money, but prohibiting labelling might be the best way to cut down on these scholarships and re-direct the money to better use, something I advocated a couple of years ago.  With no labelling, there would be no incentive to spend on this item, and institutions could back away from it with no opprobrium.  We’ll see if institutions are actually that shrewd or not.

Even if they do retain the right to separate labeling, what will the effect on students be?  Getting an offer of a $1,000 merit scholarship is undoubtedly psychologically different than receiving a $1,000 scholarship on top of a $6,000 need-based grant.  And when placed in context with a tuition fee, the effect may vary again.  In other words, we’re heading into a world where Ontario universities – who collectively spend tens of millions of dollars a year on these scholarships – have literally no idea what effect they will have in the minds of the people they are trying to attract.  I suspect we may see one or two institutions re-profile their aid money and head out in very new strategic directions as a result of this.

Universities have a lot of business-process work to do to make net billing work over the next 12 months or so.  But more importantly, they have some big strategic decisions to make about how to dish out money to students in the absence of much hard intelligence.  How they react will be one of the more interesting stories of 2017-18.