Yesterday, we considered how provincial governments could get serious about higher education. Today, I want to start talking about how institutions can get serious about their most important function: teaching.
When it comes to provincial goal-setting and making institutions accountable, measurement is the key to improvement. I am not convinced this is entirely the case with teaching, because frankly no one knows how to measure it holistically. There are things that can be learned by having students write tests like the Collegiate Learning Assessment or AHELO. But that data is not really fine-grained enough to look at actual learning outcomes that matter at a program/disciplinary level (one might suggest something like Brazil’s ENADE system, which does look at disciplinary outcomes, but it has the converse problem of not being particularly good at the transversal stuff).
No, at the end of the day, being serious about instruction and assessment is a matter of program structure and the assessment of skills (assessment is measurement of a sort, though it’s probably best dealt with through rubrics rather than tests, per se). And those are things which are rightfully the province of professors and assessors. So, the key to improvement in teaching lies in the rigour with which academic standards and processes are applied at the program level.
Conceptually, we start with two key premises, neither of which is consistently applied in Canadian post-secondary education. First, the correct unit of analysis for teaching and accountability is the program, not the individual professor. The second is that we should never think of students as clients; rather, they are the product. We admit students into programs, and what should come out are graduates who are prepared for thoughtful lives and good jobs (the balance of those two may legitimately differ by program, but both always matter). We should therefore always be very clear as to what knowledge graduates should possess and what attributes/competencies they must demonstrate in order to be deemed “finished products”. Once that is decided, one can move backwards to design two things: curricula that ensure the necessary material is covered, and assessment methods which demonstrate that students have both relevant knowledge (which is mostly subject-specific) and attributes/competencies (which should be mostly transversal).
Now, there are plenty of examples in Canadian institutions where we do this well. My favourite is CanMEDS, the educational framework used by the Royal College of Physicians and Surgeons of Canada for all of its medical specialist programs, but there are lots of others. Most college programs and university programs in regulated fields have the curriculum design and disciplinary knowledge transfer parts covered reasonably well; the competencies/skills less so (though we are seeing some interesting experiments in this area, such as the way the Minerva Project deals with what it calls “habits of mind”). But in any event, success in this area starts with a deep commitment from program instructors and professors collectively teaching towards certain commonly-agreed “learning objectives”.
One of the problems we have is that “learning objectives” are dismissed as a fad by many professors in unregulated fields. Although many departments are now formally required to have them, they are often treated as a box-ticking exercise to be done once and then ignored. (Here’s a fun challenge: go to the web pages of a random Arts faculty in Canada, where programs are all supposed to have learning objectives, and see how many such statements you can find posted publicly. I’ll wait.) Sometimes, this is run-of-the-mill faculty obstreperousness with respect to any new idea coming down the pike from the administration, but it also reflects a professional preference for a smorgasbord approach to curriculum in which individual professors are sovereign over their own atomised course content and assessment methods. This is likely less an ideological position than a preference for putting courses together individually rather than collaboratively with other members of their departments, because let’s face it, colleagues can be a drag.
(Yes, yes, I know there are exceptions & not everyone is like this, don’t @me.)
Quality teaching in higher education is not simply a matter of each professor bringing their “A” game to class and being a good lecturer, scholar, mentor, etc. That’s important, but it’s not enough. Quality comes from the consistent, collective discipline of turning out successful graduates. That means a) having a clear conception at the program/departmental level of what successful graduates look like, along with a collective theory of how to produce them; b) curricula designed to achieve these goals; c) an assessment system in place to ensure that students are progressing towards those goals; and d) a constant scanning of student and (especially) alumni experiences (and, ideally, those of their employers) to confirm that goals are being met and that they remain relevant.
In a system where teaching matters, the program/department is required to execute these four steps. The job of the university/college is to ensure that programs/departments are getting the resources needed to execute them (including assistance with the scanning functions), and to take swift and firm action to ensure that standards are being met. The job of government – preferably through some arm’s-length agency – is to monitor the institutional execution of this function.
Do we do this in Canada? Well, partially, and certainly not uniformly. Strong curricula built around learning outcomes are common in colleges and regulated programs but weak elsewhere. Assessment of transversal skills is weaker still. Assessment of teaching is focussed almost exclusively on individual professors rather than on collective outcomes. Institutional oversight/follow-up of program reviews is uneven (some institutions do it well, others less so). And as for external oversight, at the university level anyway, this process is weak to non-existent by the very conscious design of institutions themselves. The closest we really come to this system at the university level in Canada is Ontario with the Ontario Universities Council on Quality Assurance (OUCQA) at the apex, but honest provosts will tell you this new system hasn’t shifted the culture much with respect to looking at learning outcomes. And even a quick glance at OUCQA’s institutional audit summaries will show you this is a pretty milquetoast system by international quality assurance standards.
In sum: quality in higher education is a process, and it’s hard. It requires constant internal collaboration, focus, work, and attention to data. And it requires not just a willingness to change and adapt curriculum in the face of changing evidence, but an accountability system designed to make it impossible not to change when the data says change is warranted. Too often under the present system, units and institutions can avoid change – using that particularly Canadian form of passive-aggressiveness we like to tell foreigners is actually politeness – and do only half of what’s necessary when deep-rooted change is needed.
If we were serious about higher education, every institution would have stronger internal quality assurance systems and we would have external quality assurance systems worthy of the name. What we have now is not good enough. And that’s a problem.
Tomorrow: being serious about federalism.