HESA

Higher Education Strategy Associates

Tag Archives: Learning Outcomes

October 04

New Quality Measurement Initiatives

One of the holy grails in higher education – if you’re on the government or management side of things, anyway – is to find some means of actually measuring institutional effectiveness.  It’s all very well to note that alumni at Harvard, Oxford, U of T (pick an elite university, any elite university) tend to go on to great things.  But how much of that has to do with them being prestigious and selective enough to only take the cream of the crop?  How can we measure the impact of the institution itself?

Rankings, of course, were one early way to try to get at this, but they mostly looked at inputs, not outputs.  Next came surveys of student “engagement”, which were OK as far as they went but didn’t really tell you anything about institutional performance (though they did tell you something about curriculum and resources).  Then came the Collegiate Learning Assessment and later the OECD’s attempt to build on it, which was called the Assessment of Higher Education Learning Outcomes, or AHELO.  AHELO was of course unceremoniously murdered two years ago by the more elite higher education institutions and their representatives (hello, @univcan and @aceducation!) who didn’t like its potential to be used as a ranking (and, in fairness, the OECD probably leant too hard in that direction during the development phase, which wasn’t politically wise).

So what’s been going on in quality measurement initiatives since then?  Well, two big ones you should know about.

The first is one being driven out of the Netherlands called CALOHEE (which is, sort of, short for “Measuring and Comparing Achievements of Learning Outcomes in Higher Education in Europe”).  It is being run by more or less the same crew that developed the Tuning Process about a decade ago, and who also participated in AHELO, though they have broken with the OECD since then.  CALOHEE builds on Tuning and AHELO in the sense that it is trying to create a common framework for assessing how institutions are doing at developing students’ knowledge, skills, and competencies.  It differs from AHELO in that if it is successful, you probably won’t be able to make league tables out of it.

One underlying assumption of AHELO was that all programs in a particular area (e.g. Economics, Civil Engineering) were trying to impart the same knowledge, skills, and competencies – this was what made giving them a common test valid.  But CALOHEE assumes that there are inter-institutional differences that matter at the subject level.  And so while students will still get a common test, the scores will be broken up in ways that are relevant to each institution given the set of desired learning outcomes at each institution.  So Institution X’s overall score in History relative to Institution Y’s is irrelevant, but their scores in, for instance, “social responsibility and civic awareness” or “abstract and analytical thinking” might be, if they both say that’s a desired learning outcome.  Thus, comparing learning outcomes in similar programs across institutions becomes possible, but only where both programs have similar goals.

The other big new initiative is south of the border and it’s called the Multi-State Collaborative to Advance Quality Student Learning (why can’t these things have better names?  This one’s so bad they don’t even bother with an initialism).  This project still focuses on institutional outcomes rather than program-level ones, which reflects a really basic difference of understanding of the purpose of undergraduate degrees between the US and Europe (the latter caring a whole lot less, it seems, about well-roundedness in institutional programming).  But, crucially in terms of generating acceptance in North America, it doesn’t base its assessment on a (likely low-stakes) test.  Rather, samples of ordinary student course work are scored according to various rubrics designed over a decade or more (see here for more on the rubrics and here for a very good Chronicle article on the project as a whole). This makes the outcomes measured more authentic, but implicitly the only things that can be measured are transversal skills (critical thinking, communication, etc) rather than subject-level material.  This will seem perfectly fine to many people (including governments), but it’s likely to be eyed suspiciously by faculty.

(Also, implicitly, scoring like this on a national scale will create a national cross-subject grade curve, because it will be possible to see how an 80 student in Engineering compares to an 80 student in History, or an 85 student at UMass to an 85 student at UWisconsin.  That should be fun.)

All interesting stuff and worth tracking.  But notice how none of it is happening in Canada.  Again.  I know that after 25 years in this business the lack of interest in measurable accountability by Canadian institutions shouldn’t annoy me, but it does.   As it should anyone who wants better higher education in this country.  We can do better.

February 26

MOOCs vs. Learning Outcomes

If you’ve been paying attention at all to higher ed stories in the past year or so, you’ll recognize that, apart from cutbacks, people are mainly talking about two things: Massive, Open, Online Classes (MOOCs), and Learning Outcomes.

MOOCs weren’t invented to respond to cutbacks, but policymakers sure seem to treat them as if they were.  The idea that someone out there is giving away courses for FREE just seems like manna from heaven.  Good someones, too: Harvard, MIT, Duke, Toronto, UBC – if they’re prestigious, they’ve either been signed up by Coursera or set up their own platform (like EdX).

Learning Outcomes approaches (which also come up under names like competency-based learning, the Tuning process, etc.) don’t get the same level of attention, because they don’t feed into a techno-fetishist disruption meme.  But they’re pretty important all the same.  Their spread indicates that institutions are taking quality more seriously, and setting actual objectives for courses of study.  Some people take that even further, and suggest that setting demonstrable outcomes obviates the need even for classes; as long as you can prove you have the competencies, you should be able to get the degree (South Korea has a system like this for certain courses of study).

The “fad” crowd in education – the ones who advocate management-by-headline, or who think Glen Murray was on to something – tend to view all these things as interconnected, and part of a general “newness” to which higher education must inevitably bow.  But the fact is, the move to embrace MOOCs is actually completely incompatible with the idea of a move to a stricter learning outcomes regime.   They are opposed, not complementary.

MOOCs, by design, are classes, not programs (otherwise, they would be MOOPs, which would be hilarious).  They are designed to be one-offs.  Degrees full of MOOCs will inevitably be even more of a mish-mash than the degrees we give out today.  Learning outcomes, properly done, are about the exact opposite – they aim to partially reverse the smorgasbord approach, ensuring that knowledge and skills are built upon in a consistent way throughout a student’s course of study.

Learning outcomes matter because, increasingly, the public, employers, and students all need to be reassured that a degree signifies the acquisition of a particular body of knowledge and skills rather than sitting through a particular number of hours of classes.  A wider, forced adoption of MOOCs actively hinders that goal because they cannot be co-ordinated with the rest of a program.

So: do we want coherent degrees, or do we want free MOOCs?  Time to decide.

January 08

Left Behind Again

One of the most interesting phenomena in global higher education these days is a movement known as the Tuning Process.  And, surprise, surprise, Canada’s allegedly-globally-linked-in, ultra-internationalized universities are nowhere to be found.

The Tuning Process is a process of detailing learning outcomes at the program-of-study level – a mostly faculty-driven process to determine what students should know, and be able to do, by the end of their degree.  What distinguishes Tuning from the kind of learning outcomes process we see at Canadian universities, such as Guelph, is that determining outcomes statements isn’t the responsibility of faculty members at a single institution; rather, the statements emerge from the collaborative effort of multiple institutions.

The original Tuning was designed to come up with Europe-wide outcomes statements in a few fields of study.  Since then, it has spread around the world: to Latin America, Russia, and Japan.  More recently, it has expanded to places such as China (where, to be honest, it seems hard to believe there was much practical difference in learning outcomes between institutions, anyway) and Africa (where the degree of faculty particularism makes it really hard to imagine this process taking off).  Globally, Tuning has been at the heart of the OECD’s AHELO project, which aims to compare general cognitive skills and specific subject knowledge.

But perhaps the biggest surprise is what’s happening in the United States.  There, the Lumina Foundation launched a Tuning project about three years ago with a number of US states (Indiana, Minnesota, Texas, and Utah) in a variety of subjects; more recently, they have attempted to do a Tuning process nationally, on a single subject area, through a partnership with the American Historical Association.

Tuning is a big deal.  Though institutional participation in Tuning is everywhere voluntary, the speed at which it is spreading around the world means that within a relatively short space of time degrees that are “tuned” (that is, come complete with widely accepted learning outcomes statements) will be the norm.  Once that’s the case, there will be implications for the ability of the “untuned” to attract students.  In professional programs, this isn’t a huge deal because accreditation serves more or less the same function.  But in other disciplines, while a few institutions are stepping up to the plate, we haven’t yet got to the point where we can have grown-up, national conversations about program outcomes.

We’ll pay for this, eventually, if we don’t board this train.  Someone needs to kick-start this discussion here in Canada.  But who?

November 09

Modularization vs. Learning Outcomes

If you’ve been near education conferences in the last year or so, the chances are that you’ve heard at least one of the two following propositions.

1)      “Modularization is the Future”.  People don’t need full degrees, they need knowledge in bite-size chunks, and they need it “on-demand”.  That means that learning needs to come in tiny little bits, and certification for learning needs to come in tiny, bite-size pieces, too.  This is partly what’s pushing the enthusiasm behind certain MOOCs and ideas like “Open badges”, but even within mainstream institutions, you’re seeing this as well.  In the US, parts of the Michigan community college system are giving out “micro-credits” for as little as two hours’ worth of classes.

2)      “Learning Outcomes are the Future”.  Part of the general movement for accountability in higher education is going to require institutions to describe expected student outcomes and figure out ways to credibly certify that students who have passed a given course of studies have in fact mastered the competencies and skills linked to those outcomes.

There’s something to both of these propositions.  The problem is, they can’t both be right, because they contradict each other in one very fundamental way.

The whole point of learning outcomes is to allow institutions to certify with some degree of precision what kind of knowledge and skills a person who has finished a particular program of studies has.  That logic necessarily leads program design away from the smorgasbord-buffet approach to course selection that is so frequently prevalent in arts and sciences in North America, and towards programs with larger core curricula.

Basically, the more “core” courses there are, the more curriculum planners can be sure that particular skills and knowledge are being taught (and, presumably, learned as well).  If learning outcomes are difficult to ensure with a smorgasbord curriculum, they’re well-nigh impossible with a fully modularized one.  The point of the modularization agenda is very much to make credentials easier to obtain, and the explicit trade-off is the coherence of the degree being offered.

To put this another way: the learning outcomes agenda is based on a human capital vision of higher education; the modularization agenda is very much about credentialism.  The public policy rationale is probably stronger for the former, but there’s clearly a strong market rationale for the latter.  Both are important; neither will trump the other.

Anyone who says either “the future is learning outcomes” or “the future is modularization” without offering any qualifications should be ignored.  Different institutions with different missions serving different populations are – quite appropriately – going to favour different strategies.  Grown-up, pluralistic education systems are capable of having trends moving in several directions at once.