A few months ago, I wrote a very harsh review of a paper written by the former head of the Canadian Council on Learning, Paul Cappon. I was mostly cheesed off by Cappon’s mindless (and occasionally mendacious) cheerleading on behalf of an expanded role for the federal government in education. But in one respect, Cappon had a point: though I disagree with him about what level of government should be doing it, we need someone in Canada setting goals for our systems of higher education. Because as it stands, we effectively have none.
Take Ontario (please). Here we have a government that will sanctimoniously tell you how much it cares about access. My God, they love access. They love access so much that they will hand out money to just about anyone in its name, no matter how preposterous. But ask yourself: if the government cares so much about access, why does it not have a measurable access goal against which to evaluate progress? In the absence of such a goal, one gets the sense that the Ontario government measures progress by how much money it spends, not by what it actually achieves.
Ontario’s Ministry of Training, Colleges and Universities publishes an annual “Results-based Plan”, which contains a list of “goals” that are laughable in their vagueness (the most specific goal being: “raise Ontario’s post-secondary attainment rate to 70%”, without either defining what is meant by post-secondary attainment rate, or attaching a date to the goal). But none of the provinces to Ontario’s east have any targets for access, either; the closest we get is Quebec, where the ministry does have quite a list of annual targets, but they tend toward the picayune – there are no targets on access either in terms of participation rates as a whole, or for under-represented groups.
Heading west, things hardly get better. Ministries of Advanced Education in Manitoba and Saskatchewan have annual reports that track trends on key educational goals, but that offer no associated targets to meet. British Columbia does have a target for “increasing participation and successful completion of all students”, but bizarrely, the indicator chosen for this is not participation rates, but rather unemployment rates for graduates (and the “target” – if we can call it that – is to have unemployment rates less than or equal to high school graduates, a bar so low it may actually be underground). Alberta alone actually publicly sets itself goals on participation rates.
It’s more or less the same for other policy areas. Retention? Quebec has a couple of commitments with respect to doctoral students, but that’s about it. Research? Again, only Alberta. Post-graduate employment rates? Only those seriously unambitious ones from British Columbia.
Does it have to be this way? I direct your attention to this very useful document from the European Commission on the "Modernisation of Higher Education in Europe" (which in this case covers issues of access, retention, flexibility of studies, and transitions to the labour market). It shows quite clearly how many governments are adopting specific, measurable targets in each of these areas. Ireland has set a target of 20% of new university entrants being mature students. Finland wants to increase male participation to equal that of women by 2025. A few years ago, France set a target of having 31.5% of its undergraduates come from disadvantaged socioeconomic groups by 2015. Similarly, Slovenia set a target of reducing non-completion rates from 35% to 12% by 2020.
Goal-setting is important. It encourages a focus on outcomes and not activities and, as a result, makes governments more open to experimentation. But it’s also hard: it exposes failure and mediocrity. Canadian policymakers, for reasons that I think are pretty deeply etched in our national character, prefer a model of “do your best” or “let’s spend money and see what happens”. It’s a model where there can never be failure because no one is asked to stretch, and no one is held accountable for results.
Policy-making in higher education doesn’t have to be this way. We could do better; we choose not to do so. What does that say about us?
So, if I, as an Ontarian and a college faculty member, wanted to do something about this, which is largely outside of my area of expertise, where would I push to get the most (or any) leverage? And how would you suggest I push?
I couldn’t agree more! I’ve often found it fascinating how some of us, as well-recognised, large research institutions, can operate in a way that is not based on market research or statistics of any kind, and are definitely not held accountable for some of the spending that is done. I would often see money going to waste on programs or plans based on questionable policies, on decisions made on a whim rather than on real information about what we can reasonably expect out of a planned policy change. I admire the US a lot in that regard. Even in things like diversity, they acknowledge cultural diversity as well as mental and physical challenges and socioeconomic background. In Canada I find the conversation is only about multiculturalism or internationalisation bringing diversity to campus. We have promoted programs designed to bring inner-city kids onto campus in the hope of attracting them to study here, after some central admin staff learned that many of them do not go on to university. But the program did not recognise the social barriers that make it difficult for these kids to get in, or even to complete high school, and it did not provide supports through the rest of their school years to prepare and equip them to succeed in a post-secondary environment. It would seem naive if smaller organisations and businesses were to operate this way, but in a university it seems almost deliberately neglectful, and that’s what I find most frustrating.
There are two major problems with measurable goals:
1. You can have the wrong goals;
2. You can achieve the measures, but not the goals.
The first of these explains why the Luftwaffe failed: Hitler wanted to build lots of great airplanes, and so neglected training new crews and supplying existing air units with spare parts. It also explains why the designers of the RMS Titanic built a huge, luxurious, high-speed ship that couldn’t turn sharply enough to avoid an iceberg.
The second of these explains why the recession occurred: corporate well-being was measured by stock prices, which were, of course, then inflated. The general rule in finance, I’m told, is called Goodhart’s Law: once a measure becomes a target, it ceases to be a good measure.
For an academic example of the first, we could think about a place that builds pyramids for its donors or football stadiums for its alumni. It achieves its goals, but becomes less of a university insofar as it does.
For an academic example of the second, think of professors inflating grades in hopes of inflating their class evaluations, or institutions squandering resources to attract Nobel laureates in order to increase their Shanghai scores. They aren’t actually doing better teaching or research, just improving their measures.
You yourself have written about how international rankings are kind of bunk, yet now we’re to submit our institutions to similar quantitative measures as a matter of principle. God help us.