The Mediocrity of Canadian Quality Assurance

We have two more provinces looking at Performance-Based Funding (PBF): Manitoba and New Brunswick.  The former isn’t a huge surprise – it was something recommended by the Manitoba College Review (conducted in part by yours truly), and it was clear that if PBF were enacted, it would need to include colleges as well.  We can only hope that they follow the part of the recommendation which limited performance funding to graduation outcomes, rather than trying to jam in every ludicrous concept under the sun the way, say, Ontario and Alberta have done.  New Brunswick – well, we’ll see.  Minority governments are tricky.

There are alternatives to PBF, which both provinces should consider before running headlong into these processes.  Notwithstanding that PBF is (in principle at least) A Good Thing, much of what seems to get governments freaked out about universities and colleges is often better solved through beefed-up external quality assurance.  Now, I have said this a few times before and I always get the same response: but we have external quality assurance – look at external program accreditation!  To which I think there are several responses, but the main one is this: there is a world of difference between program accreditation/quality assurance and institutional quality assurance.

Canada has three kinds of external program quality assessment.  In regulated professions – medicine, law, etc. – we have something close to genuine external quality assurance, where an external body rates the quality of institutional programs and occasionally actually passes a negative judgement.  For new programs – particularly if they are being delivered by non-traditional providers – there is an external quality process focussed mainly on inputs.  For existing programs – particularly those at established institutions (Political Science at Western, say) – we have very little external oversight.  Rather, what we have are internal quality assessments, spiced up with a couple of external advisers, which are conducted in a manner where the likelihood of at least partial co-optation is pretty high.

Ontario and British Columbia have tried to go a step further by instituting a meta-quality process for these program reviews through the Ontario Universities Council on Quality Assurance (OUCQA) and the Quality Assurance Process Audit (QAPA), respectively.  Basically, what these do is examine institutional program review processes, though not the individual program reviews themselves.  This is theoretically positive, but it falls short in two very significant ways.

The first is the ludicrous amount of secrecy involved.  Here is a typical institutional audit summary from the OUCQA.  It is five pages long and contains almost nothing of substance about what was found during the review process (here’s a QAPA report for comparison – it’s really no better).  Because – I mean, God forbid anyone say out loud that our institutions might have faults or deficiencies.  But this is in keeping with general practice for these reviews.  Institutions keep their own departmental/program reviews a secret, too.  In fact, even though the whole OUCQA process is based on “a culture of continuous improvement and support for a vision of a student-centred education based on clearly articulated program learning outcomes,” most university departments keep their learning outcome statements a secret: not only are they not available to the public, they often aren’t even available to people inside universities for whom knowledge of their existence might be beneficial (like, say, librarians).

It is not surprising that institutions engage in this kind of cover-your-ass secrecy, but it is a bit weird that external bodies would collude in it.  American accreditation agencies tend not to publish reports on institutions, but they do make accreditation decisions based on published self-study documents and believe me, these are *way* more comprehensive than anything we ask for in Canada.  Compare UBC’s 40-page self-study for QAPA with the massive self-study report (and appendices) Simon Fraser University did when it voluntarily sought US accreditation as part of its drive for NCAA status.  See also the accreditor’s response and compare it to the OUCQA and QAPA ones: American processes are simply a lot more thorough and transparent than ours.

(To be clear: Simon Fraser voluntarily signed up for this more demanding quality assurance process, which is completely brilliant.  That place is gold, as far as I’m concerned.)

Actually, look at the Simon Fraser study again, because you might notice something pretty quickly: it covers the entire organization, and not just programs.  This is a massive difference from Canada.  When the OUCQA and QAPA processes say they provide “institutional” quality assurance, they are being quite misleading, unless you start from the premise that a university is nothing more than a collection of departments and programs.  Literally none of the bits of the university which join it all together – student services, libraries, advising, student success, basic and contract research, interactions with employers, etc. – get touched in the Canadian approach, whereas they do in the US.  It’s like we’ve taken the most caricatured, professor-centric, curriculum-focused, don’t-give-a-toss-about-anything-else vision of the university and said “yup, that’s what we want our Quality Assurance processes to promote!”

Or, take Sweden as an example.  When the government there does institutional reviews, it looks at six things: i) governance and organization; ii) preconditions (not quite sure what this is, but I have a feeling it means things like standards of faculty and facilities); iii) program design, implementation, and outcomes (that is, what we do in Canada, only actually looking at outcomes); iv) student and doctoral student perspectives (which in Canada we more or less ignore COMPLETELY); v) working life and collaboration; and vi) gender equality.

Don’t those things sound like they might be important? 

The frequent comeback here is that accreditation and other forms of external quality assurance are too time-consuming, adversarial, contrary to institutional autonomy, etc.  The answer is: they certainly are in some places (the English system comes to mind).  But there are also a lot of examples of these systems working to promote “quality enhancement” in a collaborative fashion with institutions (Scotland’s system is usually spoken of quite highly in this regard).

Think about it.  Say you’re in the position of the Alberta government, which is very concerned about the role that institutions are playing with respect to economic growth.  Which approach do you think will get you faster and better results – some performance-based funding system where a poorly measurable statistic like graduate salaries is one of a dozen different metrics?  Or a system where external quality assurance requires institutions to adhere to certain processes in maintaining program advisory councils (e.g., including entrepreneurs, not just big legacy businesses), report on their industry liaisons, and follow processes of quality management to fuel continuous improvement in the area?

Long story short: external quality assurance could be an important means to improve Canadian post-secondary education.  Too bad we seem not to want to use it.

5 responses to “The Mediocrity of Canadian Quality Assurance”

  1. In the Maritimes there is what is supposed to be a quality assurance process mandated by the MPHEC; I would be interested in your thoughts on that. We are certainly asked to do program reviews periodically.

  2. Are US regionally accredited institutions better because they have more rigorous quality assurance than Canadian institutions? US regional accrediting bodies are not instruments for implementing the priorities of state legislatures or governors.

    1. With apologies for answering a question with a question (two questions, actually):
      Why equate accreditation with quality assurance? Accreditation only indicates compliance with a minimum standard.
      Whether American-style accreditation does or doesn’t assure quality, why measure quality only at the institutional level? Whatever one thinks about the effectiveness of quality assurance protocols in Canada, they focus, for good reason, much more on program quality than on institutional quality.

  3. My colleagues at UBC are very keen both on declaring learning outcomes and on publishing them. I don’t agree with this focus, but I do think that they should be credited with their commitment.

    My worry about external quality assurance isn’t just that it’s expensive and time-consuming – and anything that expensive and time-consuming effectively sabotages what we ought to be doing instead – but that it implies that universities cannot govern themselves.

    In any case, if you want quick results in making universities more responsive to market forces, you probably don’t really want universities in the first place. Defining features like the disinterested pursuit of knowledge are contrary to such a narrow goal, tenure stands in the way of flexibility, and so forth. Technical institutes would probably do.

    I suspect, in fact, that any quality assurance system from such a perspective would just be a pretense to burn it all to the ground.

  4. Two points:
    1. Quality assurance and quality enhancement are not the same. Quality enhancement is a series of steps taken over time to improve student learning. Quality assurance, in contrast, is an inherently static snapshot in time. In both, review occurs periodically and retrospectively at given times according to pre-determined schedules and procedures. Quality enhancement, however, is not only a process, it is a continuous process and, at least notionally, an infinite one with no fixed point of reference. For example, a student seeking a signal about quality would be misled by a signal about enhancement. This is not to say that one is superior to the other, but they are different.
    2. Agreed: performance funding in Alberta and Ontario is in an embarrassing shambles. However, even if it were not, we should not assume that it is a replacement for quality assurance. As the (Ontario) Broadhurst report on university accountability, to its credit, pointed out, not all performance indicators are about quality. It listed nine out of 25 as measures of quality. Who knows where Ontario, for example, will end up, but of the ten metrics so far proposed, at most three are directly related to quality.
