Higher Education Strategy Associates

Tag Archives: Curriculum

June 14

Two Approaches to Student Success

I’ve been doing a little bit of work recently on student success, and I am struck by the fact that there are two very different approaches to it, depending on which side of the Atlantic you are sitting on.  I’m not sure one is actually better than the other, but they speak to some very different conceptions of where student success happens within an institution.

(To be clear, when I say “student success” I mostly mean “degree/program completion”.  I recognize that there are evolving meanings of this which mean something more/different.  Some are extending the notion to not just completion but career success – or at least success in launching one’s career; others suggest completion is overrated as a metric since some students only attend to obtain specific skills and never intend to complete, and if these students drop out in order to take a job, that’s hardly a failure.  I don’t mean to challenge either of these points, but I’m making a point about the more traditional definition of the term).

What I would call the dominant North American way of thinking about student success is that it is an institutional and, to a lesser extent, a faculty matter, rather than something dealt with at the level of the department or the program.  We throw resources from central administration (usually from Institutional Research) at identifying “at-risk” students.  We use central resources to bolster students’ mental health, and hire counsellors, tutors and academic support centrally as well.  Academic advisors tend to be employed by faculties rather than the central admin, but the general point still stands – these are all things that are done on top of, and more or less without reference to, the actual academic curriculum.

The poster child for this kind of approach is Georgia State University (see articles here and here).  It’s an urban university with very significant minority enrolments, one that at the turn of the century had a completion rate of under 30%.  It has since invested heavily in data analytics and – more importantly – in academic tutors and advisors (I’ve heard, but can’t verify, that its ratio of students to advisors is 300:1 or less, which is pretty much unimaginable at a Canadian university).  Basically, they throw bodies at the problem.  Horrible, dreaded, non-academic staff bloat bodies.  And it works: their retention rates are now up over 50 percent, and the improvement among minority students has been a whopping 32 percentage points.

But what they don’t seem to do is alter the curriculum much.  It’s a very North American thing, this.  The institution is fine; it’s the students who have to make adjustments, and we have an army of counsellors to help them do so.

Now, take a gander at a fascinating little report from the UK called What Works: Student retention and success change programme phase 2.  In this project, a few dozen individual retention projects were put together across 13 participating institutions, piloted, and evaluated.  The projects differed from place to place, but they were built on a common set of principles, the first and most important of which was as follows: “interventions and approaches to improve student retention and success should, as far as possible, be embedded into mainstream academic provision”.

So what got piloted were mostly projects that involved some adjustment to curriculum, either in terms of the on-boarding process (e.g. “Building engagement and belonging through pre-entry webinars, student profiling and interactive induction”) or the manner in which assessments are done (e.g., “Inclusive assessment approaches: giving students control in assignment unpacking”) or simply re-doing the curriculum as a whole (e.g. “Active learning elements in a common first-year engineering curriculum”).

That is to say, in this UK program, student success was not treated as an institutional priority dealt with by non-academic staff.  It was treated as a departmental-level priority, dealt with by academic staff.

I would say that at most North American universities this approach is literally unimaginable.  Academic staff are not “front-line workers” who deal with issues like academic preparedness; in fact, professors who do try to work with a student and refer them to central academic or counselling services will often discover they cannot follow up on an individual case with central services, because the latter see it as a matter of “client confidentiality”.  And outside of professional faculties, our profs teach individual courses of their own choosing rather than jointly managing and delivering a set curriculum which can be tweaked.  Making a curriculum more student-friendly assumes there is a curriculum to alter, rather than simply a basket of courses.

Part of this is a function of how university is conceptualized.  In North America, we tend to think that students choose an institution first and a program of study later (based on HESA’s research on student decisions, I think this is decreasingly the case, but that’s another story). So, when we read all the Vince Tinto-related research (Tinto being the guru of student retention studies, most of which is warmed-over Durkheim) about “belonging”, “fit” and so on, we assume that what students are dropping out of is the institution not the program, and assign responsibilities accordingly.  But in Europe, where 3-year degrees are the norm and they don’t mess around with things like breadth requirements, the assumption is you’re primarily joining a program of study, not an institution.  And so when Europeans read Tinto, they assume the relevant unit is the department or program, not the institution.

But also I think the Europeans – those interested in widening access and participation, anyway – are much less likely to think of the problem as being one of how to get students to adapt to university and its structures.  Quite often, they reverse the problem and say “how can the institution adapt itself to its students”?

It’s worth pondering, maybe, whether we shouldn’t ask that question more often, ourselves.  I think particularly when it comes to Indigenous students, we might be better served with a more European approach.


April 11

Populists and Universities, Round Two

There is a lot of talk these days about populists and universities.  There are all kinds of thinkpieces about “universities and Trump”, “universities and Brexit”, etc.  Just the other day, Sir Peter Scott delivered a lecture on “Populism and the Academy” at OISE, saying that over the past twelve months it has sometimes felt like universities were “on the wrong side of history”.

Speaking of history, one of the things that I find a bit odd about this whole discussion is how little it is informed by the last time this happened – namely, the populist wave of the 1890s in the United States.  Though the populists never took power nationally, they did capture statehouses in many southern and western states, many of which had relatively recently taken advantage of the Morrill Act to establish important state universities.  And so we do have at least some historical record to work from – one that was very ably summarized by Scott Gelber in his book The University and the People.

The turn-of-the-20th-century populists wanted three things from universities. First, they wanted them to be accessible to farmers’ children – by which they meant both laxer admissions standards and “cheap”.  That didn’t necessarily mean they wanted to increase expenditures on university budgets substantially (though in practice universities did OK under populist governors and legislators); what it meant was they wanted tuition to remain low and if that entailed universities having to tighten their belts, so be it.  And the legacy of the populists lives on today: average state tuition in the US still has a remarkable correlation to William Jennings Bryan’s share of the vote in the 1896 Presidential election.


Fig 1: 2014-15 In-State Tuition Versus William Jennings Bryan’s Vote Share in 1896

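The correlation claimed above is straightforward to check in principle.  As a purely illustrative sketch – the state labels and all numbers below are invented for demonstration, not the real data – the exercise is just a Pearson correlation between Bryan’s 1896 state vote shares and current in-state tuition:

```python
# Purely hypothetical illustration of the relationship in Fig 1.
# State labels and all numbers are invented; the real exercise would use
# 2014-15 in-state tuition and Bryan's actual 1896 state-level vote shares.
from math import sqrt

# state: (Bryan 1896 vote share %, 2014-15 in-state tuition $)
states = {
    "A": (55.0, 6500),
    "B": (48.0, 8200),
    "C": (30.0, 12400),
    "D": (60.0, 5900),
    "E": (25.0, 13800),
}

shares = [v[0] for v in states.values()]
tuition = [v[1] for v in states.values()]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(shares, tuition)
print(r)  # strongly negative in this toy data: higher Bryan share, lower tuition
```

In the toy data the relationship is deliberately monotone, so r comes out close to -1; the real state-level relationship is noisier, but the direction is the same.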


The second thing populists wanted was more “practical” education.  They were not into learning for the sake of learning; they were into learning for the sake of material progress and making life easier for workers and farmers.  In many ways, one could argue that their attitude about the purpose of higher education was pretty close to that of Deng/Jiang-era China.  And to some extent they were pushing on an open door, because the land-grant universities – particularly the A&Ms – were already supposed to have that mandate.

But there was a tension in the populists’ views on curriculum.  They weren’t crazy about law and humanities programs at state universities (too much useless high culture that divided the masses from the classes), but they did grasp that an awful lot of people who were successful in politics had gone through law and humanities programs and – so to speak – learned the tricks of the trade there (recall that rhetoric was one of the seven liberal arts, and still played a role in 19th-century curricula).  And so there was also concern that if public higher education were made too vocational, its beneficiaries would still be at a disadvantage politically.  There were various solutions to this problem, not all of which were to the benefit of humanities subjects, but the key point was this: universities should remain places where leaders are made.  If that meant reading some Marcus Aurelius, so be it: universities were a ladder into the ruling class, and the populists wanted to make sure their kids were on it.

And here, I think is where times have really changed. The new populists are, in a sense, more Gramscian than their predecessors.  They get that universities are ladders to power for individuals, but they also understand that the cultural function of universities goes well beyond that.  Universities are – perhaps even more so than the entertainment industry – arbiters of acceptable political discourse.  They are where the hegemonic culture is made.  And however much they may want their own kids to get a good education, today’s populists really want to smash those sources of cultural hegemony.

This is, obviously, not good for universities.  We can – as Peter Scott suggested – spend more time trying to make universities “relevant” to the communities that surround them.  Nothing wrong with that.  We can keep plugging away at access: that’s a given no matter who is in power.  But on the core issue of the culture of universities, there is no compromise.  Truth and open debate matter.  A commitment to the scientific method and free inquiry matter.  Sure, universities can exist without these things: see China, or Saudi Arabia.  But not here.  That’s what makes our universities different and, frankly, better.

No compromise, no pasarán.

April 07

Innovation to Watch at the University of Sydney

Australian universities seem to do “Big Change” a lot better than universities elsewhere.  A few years ago, the University of Melbourne radically overhauled its entire curriculum in the space of about two years partly to create a more North American-like distinction between undergraduate and professional degrees and partly to reduce degree clutter by winnowing the number of different degrees from over a hundred to just six.  (For a refresher, I wrote about this back here).

If you read press reports about the University of Sydney’s new strategic plan (read the full document here; it’s completely worth it), you might think Sydney is just aping Melbourne: it is culling degrees from 120 to 20, mostly by wiping out five-year “double degrees”, and also reducing the number of faculties from 16 to 6.

But the reduction in the number of degrees is actually a much less interesting story than what Sydney plans to do in terms of its curriculum.  From 2018, every program is to have two courses in third year: one to integrate and apply disciplinary skills, and another to apply disciplinary knowledge and skills in context.  Every degree will culminate in a final-year project or practicum.  Every program will have cultural competency embedded within it, and support for international studies will rise so that (hopefully) the proportion of students with an international experience will rise from 19% to 50%.  A strong framework to support career transitions, involving both curricular and co-curricular efforts, will also be set up.

Here’s the most interesting bit: an entirely new “open learning environment” will be created within the university, which will provide short, on-demand courses in areas such as entrepreneurship, ethics, project management, and leadership (you know, all the employability-related skills universities usually claim students pick up by osmosis).  Some of these courses will be online, while some will be blended online/workshop; some will be non-credit and some will be small-credit.

Did I mention they are going to develop a university-wide approach to measuring how desired graduate qualities such as disciplinary depth, interdisciplinary effectiveness, communication ability and cultural competence have been attained?  Yes, really.

What makes this kind of change deeply impressive – and potentially highly significant – is that it is not coming from a second-tier, ambitious institution trying to catch attention by doing something new.  This is the country’s oldest university.  This is a big, old, prestigious institution taking serious steps to actually change the undergraduate degree structure in order to provide students with better skills without sacrificing academic rigour.  It’s a research university that cares enough about undergraduate learning outcomes that it will measure them in some way beyond graduation rates and immediate employment rates.

This is cutting edge stuff.  It may even be a world first.  We should all hope it is not the last; this kind of approach needs to spread quickly.

June 09

STEM and STEAM: The “Two Cultures” and Academic Incentives

About a month ago, I wrote about whether institutions would adjust their program mix if it would help improve economic growth.  Nearly everyone who wrote me implicitly assumed that the “right” mix for economic growth implied a switch to a more STEM-heavy system, before going on to say something like “but what about the humanities?”  I found this kind of amusing, because I don’t automatically assume that STEM (Science, Technology, Engineering, and Mathematics) degrees are where it’s at in terms of growth, and there are a couple of quite high-powered papers out that support this view.

The first, Revisiting the STEM Workforce, comes from the National Science Board in the US.  This publication makes a couple of sensible points, the most important being that STEM skills and STEM degrees are not the same thing.  Lots of STEM graduates end up in non-STEM employment; conversely, many STEM-field jobs are held by people who are not themselves STEM graduates (Steve Jobs, famously, went to Reed College and was self-taught as far as computers went).  Basically, the link between higher education credentials and labour market skills is nowhere near as tight as people tend to assume.

The second new STEM report, from the Council of Canadian Academies, makes an even more important point: namely, that STEM skills are a necessary condition for innovation, but not a sufficient one.  The panel that wrote the report (led by former Bank of Canada Governor David Dodge) did not go quite as far as Don Tapscott did in his plea to replace a focus on STEM degrees with a focus on STEAM degrees (i.e. STEM + Arts).  They did, however, point to a number of other types of skills, such as communication, teamwork, leadership, creativity, and adaptability, which they felt were at least as important as narrow STEM skills.  The panel also made the point that the best way to meet future human resource challenges is to focus more broadly on skill acquisition from pre-primary to higher education, across a range of subjects – because, frankly, you never know what kind of labour market you’re going to need.

Both reports say we need to get over our obsession with STEM, a conclusion that typically brings cheers from the humanities’ defenders.  But be careful here: even if you buy the “more STEAM” conclusion, it says nothing about the number of Arts degrees that should be produced.  Companies are not dying to hire more Arts grads so they can add that little something of creativity and communication to existing teams of STEM workers.  What they are looking for are individuals who can integrate all of those skills.  It’s a call for more crossover degrees involving both Arts and STEM.  It’s a call to get beyond C.P. Snow’s Two Cultures.

The real problem is that universities genuinely do not know how to deliver programs like this.  Fundamentally, they are designed to focus on degrees rather than skills. Sure, programs can cross departmental lines; however, programs that cross faculty lines are the red-headed step-children of higher education.  As a result, “real” programs – read: prestigious programs – more or less follow disciplinary lines.  Within universities, faculties count success by how many students are “theirs”, but cross-faculty programs exist in a kind of no-man’s-land: they simultaneously belong to everyone and no one.  With no incentives, there’s simply no pressure from below – that is, from faculty – to embark on the arduous journey of creating a curriculum, and working it through the academic approval process.  In other words, STEAM only works for Arts at a resource level (and hence a political level) if it means more Arts degrees; if not, then forget it.

It would all be so much easier if institutions were built around what we wanted students to learn; instead, they are organized by academic disciplines that are necessary guardians of research quality, but in many respects actively hinder the development of balanced graduates who can succeed in work and society.  Finding ways to mitigate this problem is one of the most important questions facing higher education, but we can’t seem to talk about it openly.  That’s a problem that needs solving.

December 10

The History of the Smorgasbord

One of the things that clouds mutual understanding of higher education systems across the Atlantic is the nature of the Arts curriculum – in particular, the degree to which they actually have one in Europe, and we don’t over here.

When students enroll in a higher education program in Europe, they have a pretty good idea of the classes they’ll be taking for the next three years.  Electives are rare; when you enter a program, the required classes are in large part already laid out.  Departments simply don’t think very much in terms of individual courses – they think in terms of whole programs, and share the teaching duties required to get students through the necessary sequence of courses.

If you really want to confuse a European-trained prof just starting her/his career in Canada, ask: “what courses do you want to teach?”  This is bewildering to them, as they assume there is a set curriculum, and they’re there to teach part of it.  As often as not, they will answer: “shouldn’t you be telling me what courses to teach”?  But over here, the right to design your own courses, and have absolute sovereignty over what happens within those courses, is the very definition of academic freedom.

And it’s not just professors who have freedom.  Students do too, in that they can choose their courses to an extent absolutely unknown in Europe.  Basically, we have a smorgasbord approach to Arts and Sciences (more the former than the latter) – take a bunch of courses that add up to X credits in this area, and we’ll hand you a degree.  This has huge advantages in that it makes programs flexible and infinitely customizable.  It has a disadvantage in that it’s costly and sacrifices an awful lot of what most people would call curricular consistency.

So why do we do this?  Because of Harvard.  Go back to the 1870s, when German universities were the envy of the world.  The top American schools were trying to figure out what was so great about them – and one of the things they found really useful was this idea called “academic freedom”.  But at Harvard, they thought they would go one better: they wouldn’t just give it to profs, they’d give it to students, too. This was the birth of the elective system.  And because Harvard did it, it had to be right, so eventually everyone else did it too.

There was a brief attempt at some of the big eastern colleges to try and put a more standard curriculum in place after World War II, so as to train their budding elites for the global leadership roles they were expected to assume.  It was meant to be a kind of Great Books/Western Civ curriculum, but profs basically circumvented these attempts by arguing for what amounted to a system of credit “baskets”.  Where the university wanted a single course on “drama and film in modern communication” (say), profs argued for giving students a choice between four or five courses on roughly that theme.  Thus, the institution could require students to take a drama/film credit, but the profs could continue to teach specialist courses on Norwegian Noir rather than suffer the indignity of having to teach a survey course (not that they made their case this way – “student choice” was the rallying call, natch).

Canadian universities absorbed almost none of this before WWII – until then, our universities were much closer to the European model.  But afterwards, with the need to get our students into American graduate schools, and so many American professors being hired thereafter (where else could we find so many qualified people to teach our burgeoning undergrad population?), Canadian universities gradually fell into line.  By the 1970s, our two systems had coalesced into their present form.

And that, friends, is how Arts faculties got their smorgasbords and, to a large extent, jettisoned a coherent curriculum.

November 24

The Arts Problem(s)

There’s no polite way to say this: Canadian universities have an Arts problem.

At the heart of institutions’ looming fiscal problems is their inability to convince major customer groups (government, students) to pay the desired price for the product they’re offering.  The reason for this, mainly, is the perception that the product on offer is not value-for-money.  Part of this is due to our ludicrously opaque student aid systems, which lead students, families, and politicians into thinking that net tuition is a heck of a lot higher than it actually is (see here for more on that, or here for the full report).  But part of it also has to do with the fact that people are under the impression that returns on education ain’t what they used to be.

That’s not entirely fair, of course.  The recession is responsible for most of the downturn in graduate jobs, not some sudden change in what the market “wants” in terms of skills.  And it’s not even true that returns are falling for all fields of study: some have held up relatively well in recent years.  But it is a problem in Arts.  Look at what the data from the annual survey of Ontario graduates says: though employment rates remain high, the actual monetary returns are very bad at the moment – down roughly 20% in real terms over the past few years.

Figure 1: Average Income (in $2013) Two Years After Graduation, Ontario Graduating Classes from 2003-2011, Selected Disciplines

Not surprisingly, students are voting with their feet.  Look at the pattern of applications by program in Ontario: after a series of small declines in Arts, last year saw a decline of 10%.

Figure 2: Share of Total Applications to Ontario Universities, by Selected Fields of Study, 2003-14


The point here is that, increasingly, the perception of Arts is that they aren’t very useful.  And yes, it’s annoying that people want to reduce education to considerations of short-term employment, but it is what it is.  When we ask people to pay so much (either privately or via tax dollars), people expect results, and they aren’t seeing them.

So something has to change in the Arts; not just for their own sake, but for the sake of all of higher education, which is being tarred with the same brush.  And that something is a greater focus on employability.

Now, even saying something like that causes paroxysms among some: “I’m not going to create cannon-fodder for the knowledge economy”, etc.  But as I’ve said before, it shouldn’t be beyond the wit of talented academics to devise a curriculum that meets the traditional aims of a liberal arts degree while placing more emphasis on employability skills (what are the ability to critically appraise arguments, to appreciate complex chains of causation, and to write clearly and effectively, if not employability skills?).  Indeed, I’ve even suggested there are some good models available from fields like medicine to do exactly this.

But if fixing the Arts were as simple as that, it probably would have happened already.  The biggest problem with Arts isn’t that the curriculum is difficult to alter; it’s that, to a large extent, curriculum simply doesn’t exist.  For decades, Arts faculties in North America have been heading inexorably towards a “buffet”: take a few courses from column A, a few from column B, and we’ll call it a degree as long as the credit hours line up.  Or, more bluntly, there is no curriculum, there’s just a bunch of courses.  This is completely unlike Arts faculties in the rest of the world, where course choice is more limited and degrees are much more structured.

So here’s the real issue: the preliminary work required to improve curriculum – that is, getting folks to realize there’s a curriculum in the first place – is pretty massive.  And this is why, even though Arts needs to improve quickly to stem declining enrolments, change is unlikely to occur quickly.

In the best of all worlds, this is a task people should have started working on years ago.  But as they say, the second-best time to start anything is now.  We should roll up our sleeves and get cracking.

October 10

A Miracle in Melbourne

Today, I want to tell you about one of the most amazing stories in recent higher education history.  It happened at the University of Melbourne about eight years ago, and it involved having the country’s leading university completely up-end its entire curriculum – every single degree program – in the space of about 24 months.  Ladies and Gentlemen, I give you: the Melbourne Model.

The basic story is this: a decade ago, Melbourne – like all Australian universities – had three-year undergraduate degrees, with law and medicine being direct-entry first degrees (a bit like how Quebec allows direct entry to these programs for select CEGEP grads), and with a fourth year acting as an honours year for those wishing to pursue graduate (mainly doctoral) studies.  Then in 2005, a new Vice-Chancellor (Glyn Davis) came to office, vowing to make Melbourne a more research-intensive kind of place.  In the first draft of a widely-circulated strategic plan, Davis suggested it might be time to “examine the possibility” of moving to what he called a “US graduate-school model”, with a much more generalist three-year undergraduate program, followed by graduate/professional studies (it was referred to internally as a 3+2+3 system, which implied a much larger role for Master’s programs).  The proposal was seen as useful both because it might increase research-intensiveness and because a major re-design might force the Melbourne community to think harder about graduate outcomes and what it actually meant to be a “Melbourne Graduate”.

The professional model was by no means the centrepiece of the strategic plan, but it generated curiously little comment, and eventually ended up in the final version in February 2006 without having been subject to much debate.  Having got that far, Davis and his team went for broke: all faculties were told to re-design their curricula in time for implementation in January 2008.

It was at this point, of course, that people freaked.  Much of the Arts faculty thought it was going to be sold down the river – until then, many of its students took joint courses with professional programs (e.g. law/history), and many reckoned that without the professional link, they’d be sunk.  It took a while to sink in that, with law now inaccessible to direct entrants (a fact that enraged many parents), more students had time to take three years to study something purely for interest.  History – and most of the rest of the Faculty – in fact did just fine.

One of the most interesting decisions was to limit the number of Bachelor’s degrees being offered to just six – Arts, Science, Bioscience, Environments, Music, and Commerce – and to some extent de-link the degrees from the faculties in which professors resided (there were 12 faculties).  These degrees were also designed to have common elements between them regarding program depth and what they called “knowledge transfer” (what we would probably call experiential learning).  They didn’t achieve this goal perfectly, but then, when you’re trying to re-vamp every single degree in a university with 40,000 students in the space of under 18 months, you can tolerate the odd imperfection.

There still remained the trick of selling the idea to government and students.  The former was important for financial reasons because Australia doesn’t fund non-research graduate degrees, so the switch to a “professional” model theoretically put money for all those students at risk – but since allowing the switch didn’t cost the government anything (it would spend what it had always spent on those students) it was a relatively quick sell.  A more serious issue was convincing students that this was a good idea.  After all, students bent on law or medicine would now have to go through three years of undergraduate study first, while other institutions could still offer it to them straight out of high school.  Partly through effective marketing, and partly because of the institution’s own brand power (Melbourne is essentially Australia’s U of T) this fear never materialized.  Applications from top students held up, and in some fields the institution was able to become even more selective.

Try, if you will, to imagine a Canadian institution that could re-jig all of its curricula from top to bottom in less than 24 months, not because a government told them to, but simply because it seemed like a good way to make the university a better place.  I can’t, but I wish I could.  What Melbourne achieved here is proof positive that universities can change, and at speed, if they wish to do so.  And that’s news everyone needs to hear.

May 20

Re-Imagining an Arts Curriculum

Basically, Canadian higher education programs can be divided into two types.  First are programs that must be accredited, and where graduates’ ability to work in their field is tied to accreditation (e.g. Law, Medicine, Engineering).  These programs start with desired outcomes and work backwards to make sure that graduates have the skills required to meet professional certification.  Second are the Arts and Sciences, which more closely resemble a free-for-all, in which curriculum is driven at least as much by faculty research interests as by any sense of ensuring students graduate with a coherent body of knowledge and skills.

(Yes, yes, there are in-between cases, like business, where accreditation occurs but is not tied to professional practice, and so isn’t quite as rigorous on the outcomes side. Leave these aside for the moment.)

It wasn’t always this way.  Back before World War II, Arts and Science degrees had much more standardized curricula.  They weren’t by any means designed according to 21st-century learning-outcomes lingo, but a lot of thought did go into what specific body of knowledge Arts students would master.  It was only in the 1950s and 1960s that the smorgasbord approach to arts education became standard (a story entertainingly told by Louis Menand in his book, The Marketplace of Ideas).

But it doesn’t need to remain this way.  There’s actually a lot that Arts can learn from some of the regulated professions, in particular from medicine, where the Canadian curriculum is considered one of the world’s best.

Take a look, if you will, at something called CanMEDS.  It’s a curriculum framework that applies to the residency training of all the medical specialties under the auspices of the Royal College of Physicians and Surgeons of Canada.  The College started by working out what kind of people they wanted doctors to be.  What they decided was that, in addition to being a medical expert, it was important for doctors to play six other roles – those of: i) communicator, ii) collaborator, iii) professional, iv) manager, v) health care advocate, and vi) scholar.  All medical residency training since 2005 has incorporated not just training to those ends, but frequent assessment activities as well.  And it’s so well-regarded internationally that countries like the Netherlands (no slouches in professional education) have adopted it wholesale for their own medical training.

Now, there’s absolutely no reason one couldn’t structure an Arts curriculum the same way.  The expected roles graduates should play wouldn’t be the same, obviously (though keeping subject-matter expert, communicator, and collaborator seems like a no-brainer to me).  Then you’d work to build in activities and assessment for all of these intended roles/outcomes in every single course.  And if you couldn’t do that (it would likely be difficult in larger classes), you’d find ways to create new mandatory modules that develop and assess more specific competencies.

There’s no really good reason – other than inertia and intransigence – why it couldn’t work.  And the upside would be enormous.  Students who graduated from such a program would have actual tangible evidence of skills and competencies, as opposed to all the “we’re-not-sure-how-but-boy-the-Arts-are-important” hand-waving we do now.

Who’s first?

February 28

The Future of Massive Open Online Courses (MOOCs)

The extent to which MOOCs will be a genuinely revolutionizing force in higher education is going to depend on three things:  their pedagogy, their ability to convert learning into useful credentials, and their business model.  At the moment, it’s hard to see how MOOCs are succeeding on any of those criteria.

Take pedagogy.  The techno-fetishist crowd wants people to believe that, just because a course is online, it must be interactive.  But this is simply false.  Though some MOOCs are genuinely interactive, in the sense of having online tutorials and tests that provide genuine feedback on learning, many – particularly those from Coursera – are literally nothing more than traditional lectures, uploaded to the web.  As Tony Bates and others have pointed out, not only is there nothing new here, it is remarkable how much weaker the new online providers are in terms of online pedagogy compared to established online education providers like Athabasca and the Open University.  Strike one.

Next, let’s take credentials.  No MOOC provider is in the business of offering degrees.  For the most part, they aren’t even providing courses for credit, although in the US the American Council on Education has now certified a handful of courses as being “of college standard”, meaning that traditional universities may now start accepting them as a form of transfer credit (though this will require a fee, and possibly some extra Prior Learning Assessment work).  For adults who are looking to pick up some knowledge or skills, or just be entertained, this is not a big deal.  For traditional undergraduates looking for a degree to start their career, it’s a deal-breaker.  Strike two.

(Actually, the best defense universities have against MOOCs is precisely the signaling power of their degrees.  But they don’t like to say this too loudly because their claim on public money stems from an understanding that the value of a degree lies not in its signaling function but rather in the human capital formation it embodies.  Rhetorical contortions ahoy.)

Then there’s the lack of a business model.  edX is currently financed by large wealthy universities doing stuff for free, Udacity is powered by VC money, and Coursera uses a bit of both.  For MOOCs to be sustainable, money has to start changing hands at some point.  Once that happens, students are going to want assistance and services which don’t exist in the current MOOC model, with predictable effects on enrolment.  One University of Washington course saw enrolment fall from 25,000 to 5 when a fee was charged.

Now, this isn’t immutable.  Maybe MOOCs will eventually get around all these problems and become wildly successful.  But until they do, tales of revolutionizing the undergraduate experience will remain just that: stories.

November 09

Modularization vs. Learning Outcomes

If you’ve been near education conferences in the last year or so, the chances are that you’ve heard at least one of the two following propositions.

1)      “Modularization is the Future”.  People don’t need full degrees, they need knowledge in bite-size chunks, and they need it “on-demand”.  That means that learning needs to come in tiny little bits, and certification for learning needs to come in tiny, bite-size pieces, too.  This is partly what’s pushing the enthusiasm behind certain MOOCs and ideas like “Open Badges”, but you’re seeing this within mainstream institutions as well.  In the US, parts of the Michigan community college system are giving out “micro-credits” for as little as two hours’ worth of classes.

2)      “Learning Outcomes are the Future”.  Part of the general movement for accountability in higher education is going to require institutions to describe expected student outcomes and figure out ways to credibly certify that students who have passed a given course of studies have in fact mastered the competencies and skills linked to those outcomes.

There’s something to both of these propositions.  The problem is, they can’t both be right, because they contradict each other in one very fundamental way.

The whole point of learning outcomes is to allow institutions to certify with some degree of precision what kind of knowledge and skills a person who has finished a particular program of studies has.  That logic necessarily leads program design away from the smorgasbord approach to course selection which is prevalent in arts and sciences in North America, and towards programs with larger core curricula.

Basically, the more “core” courses there are, the more curriculum planners can be sure that particular skills and knowledge are being taught (and, presumably, learned as well).  If learning outcomes are difficult to ensure with a smorgasbord curriculum, they’re well-nigh impossible with a fully modularized one.  The point of the modularization agenda is very much about making credentials easier to obtain, and the explicit trade-off is the coherence of the degree being offered.

To put this another way: the learning outcomes agenda is based on a human capital vision of higher education; the modularization agenda is very much about credentialism.  The public policy rationale is probably stronger for the former, but there’s clearly a strong market rationale for the latter.  Both are important; neither will trump the other.

Anyone who says either “the future is learning outcomes” or “the future is modularization” without offering any qualifications should be ignored.  Different institutions with different missions serving different populations are – quite appropriately – going to favour different strategies.  Grown-up, pluralistic education systems are capable of having trends moving in several directions at once.