Higher Education Strategy Associates

Tag Archives: Quality

October 04

New Quality Measurement Initiatives

One of the holy grails in higher education – if you’re on the government or management side of things, anyway – is to find some means of actually measuring institutional effectiveness.  It’s all very well to note that alumni at Harvard, Oxford, U of T (pick an elite university, any elite university) tend to go on to great things.  But how much of that has to do with them being prestigious and selective enough to only take the cream of the crop?  How can we measure the impact of the institution itself?

Rankings, of course, were one early way to try to get at this, but they mostly looked at inputs, not outputs. Next came surveys of student "engagement", which were OK as far as they went but didn't really tell you anything about institutional performance (though they did tell you something about curriculum and resources). Then came the Collegiate Learning Assessment and later the OECD's attempt to build on it, the Assessment of Higher Education Learning Outcomes, or AHELO. AHELO was of course unceremoniously murdered two years ago by the more elite higher education institutions and their representatives (hello, @univcan and @aceducation!), who didn't like its potential to be used as a ranking (and, in fairness, the OECD probably leant too hard in that direction during the development phase, which wasn't politically wise).

So what’s been going on in quality measurement initiatives since then?  Well, two big ones you should know about.

The first is one being driven out of the Netherlands called CALOHEE (which is, sort of, short for "Measuring and Comparing Achievements of Learning Outcomes in Higher Education in Europe"). It is being run by more or less the same crew that developed the Tuning Process about a decade ago, and who also participated in AHELO, though they have since broken with the OECD. CALOHEE builds on Tuning and AHELO in the sense that it is trying to create a common framework for assessing how institutions are doing at developing students' knowledge, skills and competencies. It differs from AHELO in that, if it is successful, you probably won't be able to make league tables out of it.

One underlying assumption of AHELO was that all programs in a particular area (e.g. Economics, Civil Engineering) were trying to impart the same knowledge, skills and competencies – this was what made giving them a common test valid. But CALOHEE assumes that there are inter-institutional differences that matter at the subject level. And so while students will still get a common test, the scores will be broken up in ways that are relevant to each institution, given its own set of desired learning outcomes. So Institution X's overall score in History relative to Institution Y's is irrelevant, but their scores in, for instance, "social responsibility and civic awareness" or "abstract and analytical thinking" might be, if they both say that's a desired learning outcome. Thus, comparing learning outcomes in similar programs across institutions becomes possible, but only where both programs have similar goals.

The other big new initiative is south of the border and it's called the Multi-State Collaborative to Advance Quality Student Learning (why can't these things have better names? This one's so bad they don't even bother with an initialism). This project still focuses on institutional outcomes rather than program-level ones, which reflects a really basic difference of understanding of the purpose of undergraduate degrees between the US and Europe (the latter caring a whole lot less, it seems, about well-roundedness in institutional programming). But, crucially in terms of generating acceptance in North America, it doesn't base its assessment on a (likely low-stakes) test. Rather, samples of ordinary student course work are scored according to various rubrics designed over a decade or more (see here for more on the rubrics and here for a very good Chronicle article on the project as a whole). This makes the outcomes measured more authentic, but it also means that, implicitly, the only things that can be measured are transversal skills (critical thinking, communication, etc.) rather than subject-level material. This will seem perfectly fine to many people (including governments), but it's likely to be eyed suspiciously by faculty.

(Also, implicitly, scoring like this on a national scale will create a national cross-subject grade curve, because it will be possible to see how an 80 student in engineering compares to an 80 student in history, or an 85 student at UMass to an 85 student at UWisconsin. That should be fun.)

All interesting stuff and worth tracking.  But notice how none of it is happening in Canada.  Again.  I know that after 25 years in this business the lack of interest in measurable accountability by Canadian institutions shouldn’t annoy me, but it does.   As it should anyone who wants better higher education in this country.  We can do better.

October 08

The War Between Universities and Disciplines

From the outside, universities look like a single united entity, with many administrative subdivisions – kind of the organizational equivalent of the United States.  However, the closer political analogy is actually early 1990s Yugoslavia: at a very basic level, universities are the sites of permanent civil wars between central authorities and the disciplines whose interests they purportedly serve.

Disciplines – which, except for law and theology, mostly started their existence outside universities – allowed themselves to be subsumed within universities over the course of the early 20th century. They did so for administrative reasons, not intellectual ones; in short, it seemed like a good bargain because universities offered a way for disciplines to obtain much larger amounts of money than they could get on their own. Governments (and to a lesser degree, philanthropists) found it easier to do business with universities than with, say, random groups of anthropologists or chemists. Similarly, banding together within a university made it easier for disciplines to attract the ever-growing number of students and, with them, their tuition dollars. The deal was that the anthropologists and chemists would lend their prestige to universities, and in return the university would take care of raising the money necessary to meet academics' need for space, students, and steady pay cheques. The idea that the university had any corporate interests that superseded those of the disciplines at an intellectual level was simply a non-starter: disciplinary interests would remain supreme. As far as academics were concerned that was – and is – the deal.

Thus, there is little that drives academics crazier than the idea that a university might deign to choose between the various disciplines when it dispenses cash. If an institution, in response to an external threat (e.g. a loss of government funding), says something like, "hey, you know what? We should stop spending so much money on programs that lose money, and redistribute it to programs that might attract more outside money", it is immediately pilloried by academics, because who the hell gave the university the right to decide which disciplines are more valuable than others? When you hear people bemoaning universities acting "corporately", this is usually what they're on about.

This attitude, which seems so normal to academics, provokes absolute bewilderment from the outside world (particularly governments and philanthropists), who believe universities are a single corporate entity.  But they’re not.  As ex-University of Chicago President Robert Hutchins said, universities are a collection of warring professional fiefdoms, connected by a common steam plant.  A more recent formulation, from the excellent New America Foundation analyst Kevin Carey, is that the modern university is just a holding company for a group of departments, which in turn are holding companies for a group of individual faculty research interests.  In other words, Yugoslavia.

But the actual point of a university, the reason for its existence beyond sheer administrative convenience, is that it serves the advancement of knowledge by getting the disciplines to act together to tackle problems in ways they would not do independently. And that means that the university's raison d'etre is, in fact, to continually make choices about resource allocation across disciplines, directing money to the areas that make the most sense, both financially and intellectually.

Yet there exists within universities a substantial and determined constituency that claims it is immoral for institutions to make such choices. Much of the incoherence, idiocy, and sheer weenieness of university "strategy" documents comes from senior managers trying to square this circle: appearing to make choices, while acting in deference to the autonomous disciplinary republics, to avoid actually making any.

In short, strong disciplines are necessary and important to ensure academic quality. But letting them run the university is madness.

July 07

How to Measure Teaching Quality

One of the main struggles with measuring performance in higher education – whether of departments, faculties, or institutions – is how to measure the quality of teaching.

Teaching does not go entirely unmeasured in higher education. Individual courses are rated by students through course evaluation surveys, which occur at the end of each semester. The results of these evaluations do have some bearing on hiring, pay, and promotion (though how much bearing varies significantly from place to place), but these data are never aggregated to allow comparisons of quality of instruction across departments or institutions. That's partly because faculty unions are wary about using individual professors' performance data as an input for anything other than pay and promotion decisions, but it also suits the interests of research-intensive universities, which do not wish to see the creation of a metric that would put them at a disadvantage vis-à-vis their less-research-intensive brethren (which is also why course evaluations differ from one institution to the next).

Some people try to get around the comparability issue by asking students about teaching generally at their institution. In European rankings (and Canada's old Globe and Mail rankings), many of which have a survey component, students are simply asked questions about the quality of the courses they are in. This gets around the issue of using course evaluation data, but it doesn't address a more fundamental problem, which is that a large proportion of academic staff essentially believes the whole process is inherently flawed because students are incapable of recognizing quality teaching when they see it. There is a bit of truth here: it has been established, for instance, that teachers who grade more leniently tend to get better course satisfaction scores. But this is hardly a lethal argument. Just control for average class grade before reporting the score.
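To make the point concrete, here is a minimal sketch of what such an adjustment could look like: regress mean course satisfaction against mean class grade, and report the residual as a "grade-adjusted" score. The data and the specific method (ordinary least squares) are my own illustration, not anything drawn from an existing evaluation system.

```python
# Illustrative sketch only: adjusting satisfaction scores for grading leniency.
# The numbers below are invented; a real exercise would use per-course data.
import numpy as np

# Hypothetical per-course data: mean class grade (%) and mean satisfaction (1-5)
mean_grade = np.array([68.0, 72.0, 75.0, 80.0, 83.0, 87.0])
satisfaction = np.array([3.4, 3.6, 3.9, 4.1, 4.4, 4.6])

# Fit satisfaction = a + b * mean_grade by ordinary least squares
b, a = np.polyfit(mean_grade, satisfaction, 1)

# The grade-adjusted score is how much better (or worse) a course rates
# than its grading profile alone would predict
adjusted = satisfaction - (a + b * mean_grade)

for g, s, adj in zip(mean_grade, satisfaction, adjusted):
    print(f"mean grade {g:4.1f}   raw {s:.2f}   grade-adjusted {adj:+.2f}")
```

A positive adjusted score means a course is rated better than its grading leniency would predict; that, rather than the raw satisfaction number, is the figure you would report.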

It's not as though there isn't a broad consensus on what makes for good teaching. Is the teacher clear about goals and expectations? Does she or he communicate ideas effectively? Is she or he available to students when needed? Are students challenged to learn new material and apply this knowledge effectively? Ask students those kinds of questions and you can get valid, comparable responses. The results are more complicated to report than a simple satisfaction score, sure – but it's not impossible to do so. And because of that, it's worth doing.

And even simple questions like "was this a good course" might be more indicative than we think. The typical push-back is "but you can't really judge effectiveness until years later". Well, OK – let's test that proposition. Why not just ask students about a course they took a few years ago, and compare their answers with the ones they gave in the course evaluation at the time? If the two are completely different, we can indeed start ignoring satisfaction-type questions. But we might find that a good result today is in fact a pretty good proxy for results a few years down the road, and therefore that we would be perfectly justified in using it as a measure of teaching quality.
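For what it's worth, the statistics behind that test are trivial; something like the following sketch (all values invented, purely for illustration) is all it would take to see whether ratings given at the time line up with ratings given years later.

```python
# Illustrative sketch: do end-of-term course ratings predict how the same
# students rate the course years later? All values here are invented.
import numpy as np

rating_then = np.array([4.5, 3.0, 4.0, 2.5, 5.0, 3.5, 4.0, 2.0])  # at course end
rating_now = np.array([4.0, 3.5, 4.5, 2.0, 4.5, 3.0, 4.5, 2.5])   # years later

# A correlation near zero would support "you can't judge until years later";
# a strong positive correlation would justify using current ratings as a proxy.
r = np.corrcoef(rating_then, rating_now)[0, 1]
print(f"Correlation between contemporaneous and retrospective ratings: {r:.2f}")
```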

Students may be inexperienced, but they’re not dumb.  We should keep that in mind when dismissing the results of teaching quality surveys.

February 14

Chinese Higher Education: Where to From Here?

So, now that China has 30 million students and a half-dozen “world class universities”, where to next?

Well, the first thing to note is that the system hasn't finished growing. While the major metropolitan areas of the north and centre have PSE attendance rates that approach those in Canada, outside of those very small areas the average is less than half that. Even in fairly prosperous coastal provinces like Zhejiang and Guangdong, participation rates are less than half of what they are in the rich metropolitan zones. Now, some of the growth in participation rates will be taken care of via demography. China's 20-24 age cohort will shrink by about a third between 2011 and 2016 thanks to the one-child policy (demographically, think of China as just a really big Cape Breton), so participation rates will rise significantly even if China does no more than stay steady in terms of enrolments. But it's still a safe bet that in most of the country we can expect more universities to open and existing second-tier institutions to expand.
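The arithmetic behind that claim is worth spelling out (a back-of-the-envelope sketch using the post's own rough figures): if the participation rate is enrolments divided by cohort size, then

$$\text{participation rate} = \frac{E}{C}, \qquad \frac{E}{\tfrac{2}{3}C} = 1.5 \times \frac{E}{C},$$

so a cohort that shrinks by about a third pushes the participation rate up by roughly 50 per cent even if enrolments do no more than hold steady.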

[Figure: Participation rates by Region, 2008]
So with demand for education set to rise, and the country still struggling to absorb all the graduates of the past few years, I suspect the bigger issue going forward for China is going to be quality. China has worked out how to expand its system. What it hasn't quite worked out yet is how to spread excellence beyond its top research schools (the Chinese equivalent of the Ivy League is the C-9: Peking, Tsinghua, Fudan, Zhejiang, Nanjing, Harbin Tech, the University of Science and Technology of China, and the Jiao Tongs at Shanghai and Xi'an).

And even at these schools, some of the excellence is only skin deep: they might be able to butt into world league tables based on publication counts, by doing things like requiring all graduate students at 985 universities to get two publications in Thomson ISI-indexed journals (seriously… can you imagine doing that here? The system would totally collapse), but those articles’ citation counts are much lower than at large western universities, indicating that the rest of the scientific world doesn’t think they’re up to much.  But changing that means changing academic cultures – some of which have become sclerotic and corrupt (this stinging editorial in Science magazine [link is to an ungated copy] by two Chinese academics who had returned home from academic careers in the US, Shi Yigong and Rao Yi, is one of many pieces of evidence that could be cited here).  As we know here in North America, there is very little that is harder than changing academic cultures.  If China works out how to fix that problem, then there’s genuinely no reason it won’t lead the world at pretty much everything.  But I have my doubts.

The outlook in China, then, is pretty simple – mostly continuations of recent trends, with a greater emphasis on quality and employability. And until they get that sorted out, there will continue to be opportunities for western institutions seeking to poach those who can't get into 985 universities.

May 17

Trying to Have it Both Ways

Everyone should check out this story from the Guardian on Tuesday, which nicely encapsulates the way universities have rhetorically boxed themselves in on the student experience.

Some background: in late 2010, the UK government decided to cut operating grants to universities by 41%, and to allow tuition fees at universities in England and Wales to rise to £9,000 (+150% or so).  Even though the policy change hasn’t had a huge effect on access, students are clearly now paying a lot more for essentially the same experience.  Last week, one student consumer group published a report showing exactly how little has changed: despite the massive fee rise, students are only getting an extra 18 minutes a week of contact time with professors.

This brings us to the story I've linked to at the beginning of this post, in which Universities UK CEO Nicola Dandridge dismissed the findings about the 18 extra minutes by saying, "It is misleading to make a crude assumption that time spent in lectures and seminars can be equated with university course quality".

Hmm.

Hmmmmmm.

I get the point Dandridge is trying to make – it's not just course hours that matter, but also teacher quality, infrastructure, curriculum, etc. But would she have said the same thing if the journalist had been asking about MOOCs vs. traditional universities? Not in a million years. Unless you're employed by a university that owns shares in edX, the standard response is that "MOOCs are a great product for some, but most students still want the irreplaceable experience of being in a classroom with a great teacher".

Universities can’t simultaneously say, “the in-class experience is brilliant and irreplaceable”, and “it really doesn’t matter how many contact hours you have”.  That’s called “trying to have it both ways”.

It's absolutely true, of course, that the way learning occurs at universities is only partly dependent on what happens in classrooms; a lot of the benefits of university learning come from the serendipity that occurs when you cram lots of young, curious people into the same physical space for four years and let them rip. And MOOCs are low on serendipity.

The problem is that we don't know much about measuring serendipity (which is why we fall back on measures like class time and contact hours), and where we do have an inkling, universities often avoid presenting this information (at least in a format accessible to outsiders). And yet, when it comes to undergraduate education, this very serendipity is what gives universities their genuinely unique value proposition – it's what they can do that no one else does.

If universities genuinely want to prove value, they need to focus on measuring serendipity, and working relentlessly to increase it.  That’s their UVP.  That’s the ballgame.

March 13

Apprenticeships: Time for Quality over Quantity

We have a problem with skilled trades and apprenticeships in Canada.  At the root of it are three things: short-term thinking, bad forecasting, and a training schedule driven by money over pedagogy.

Short-termism is embedded in Canadian apprenticeships. When the economy is booming, we take on more apprentices; when it's in the tank, we cut back. This is because "enrolment" is based entirely on decentralized private-sector demand for young, cheap labour. Given that finishing an apprenticeship takes four or five years (roughly the length of an economic cycle), this more or less guarantees that the supply of apprentices is always going to be out of whack: not enough when the economy is at full tilt, and too many when the economy is slowing down. Yet, for some reason, this simple fact is never acknowledged in policy circles, let alone made the subject of any serious reform measures.

Aggravating this situation is the fact that industry is pounding on government's doors, asking for financial help to expand apprenticeships (read: the supply of inexpensive labour). Usually, these requests are couched in terms of looming "shortages" of people in the skilled trades, but some of these predictions depend on overly optimistic views about future demand. As a recent – and quite brilliant – paper from the Certified General Accountants of Canada noted, we really only have a skills shortage if we assume future demand will increase at the pace it did between 2003 and 2007. If we assume a more moderate pace – say, the 2001-2011 average – the only shortage one sees is in carpentry, which has plenty of new apprentices but the highest discontinuation rate of any trade in the country. Heal thyself, guys.

And then, of course, there's the problem that Canada's apprenticeship system is among the world's least coherent, pedagogically speaking. Other countries send apprentices on day release for in-class instruction, because theory and practice are best integrated that way. We, on the other hand, use the pedagogically dubious block-release system for in-class training – which is basically a ruse to suck money out of the EI system (apprentices are technically unemployed while studying, and hence are EI-eligible). Might that be one reason our apprentices take longer than any in the world to reach journeyman status? Maybe. I don't see anyone rushing to find out, though.

Sadly, our federal and provincial governments these days seem to reach instinctively for skilled trades and apprenticeships as the answer to most issues of youth employment and human resource development. What we actually need is a better and more efficient apprenticeship system, one that smooths out the swings in supply and demand. But all the clamour right now is in favour of more of the same. Expect our skilled trades problem to fester.

December 17

The Beagles Have Landed

How do you run a business when profit is meaningless?

This is a key question confronting every university administration. Our PSE institutions are businesses – complex organizations which require enormous amounts of money, from diverse sources, in order to succeed. For many reasons, it is a blessing that they are not oriented towards profit. But without a clear bottom line, how do you actually know when to spend, and when not to spend? What replaces the discipline of the market, come budget time?

At one level, the answer is the same as it is for every other non-profit: priorities are governed partly by mission, and partly by the availability of funds. But universities' mandates are unbelievably broad – almost limitless, actually. Take "meeting students' needs", which is a common enough mandate. What could that encompass? Better seating in the cafeteria? A new gym, or hockey arena? What about an international airport? Similarly, consider a university's mission to "advance the frontiers of knowledge". Does that just entail upgrading computer labs, or could it also include, say, purchasing a space flight simulator? What about an actual space station?

Here's the problem. The only bar universities set for themselves is "quality", and virtually anything can be justified in the name of "quality", so long as the money is available to pay for it. So if you give universities money – any amount of money at all – they will spend it, because they have no instinct to tell themselves not to. The polite term for this is "the revenue theory of expenditures". Another way to think about it is that universities – like beagles and hamsters – lack a brain mechanism that tells them to stop eating.

Basically, universities are good at growing, but terrible at shrinking. The answer to every problem is always “more”. If more comes from government, great. If it comes from alumni or business, great. If it comes from students, great. Universities are agnostic about sources of money, but fundamentalist about aggregate incomes.

But we seem to be entering a phase in which governments aren't in a position to provide more money, and students and parents seem more reluctant than usual to pay or donate more. And if more revenue is out of the question, then diets are the order of the day. How will universities react? Nick Rowe has painted one particularly nasty, but plausible, scenario, which bears pondering. His specifics are a bit different than mine would be, but the basic point – that universities on a diet are incredibly difficult to manage – deserves a lot more attention than it currently receives.

University income has been flying upward for most of the last decade; when these beagles land, it will be with a heck of a thud.

October 31

Reforming J-Schools

I see that a number of foundations – including the Knight, McCormick and Scripps Howard foundations – have written an open letter to American university presidents, urging them to make journalism schools "more like medical schools" and to teach journalism through immersion in "clinical, hands-on, real-life experience". From a historical perspective, this is a deeply weird development.

Foundations have played a significant role in changing the course of professional education on a couple of occasions. In 1910, the American Medical Association and the Carnegie Foundation teamed up to pay Abraham Flexner to report on the state of medical education in North America. His finding – that medical schools were too vocational and insufficiently grounded in scientific disciplines such as biology – was a key development in the history of medicine. It was only after Flexner that university-based medical schools decisively ousted the proprietary medical schools as the primary locus of training future doctors, and turned the medical profession into one which mixed practice with research.

Forty-five years later, widespread dissatisfaction with American business schools led the Carnegie and Ford Foundations to instigate reports and programs designed to transform business schools into more research-oriented units with intimate links to various branches of the social sciences such as sociology, anthropology and economics. These had substantial short- and medium-term impacts, if more ambiguous long-term ones.

(On this, btw, I recommend The Roots, Rituals, and Rhetorics of Change: North American Business Schools After the Second World War by Augier and March. It ends flabbily, but the first 200 pages are excellent intellectual history.)

In both cases professional education was to be improved by making it more “disciplinary” and more concerned with “fundamental knowledge”. It’s therefore more than passing strange that Foundations are now telling universities to make their J-schools less concerned with fundamental knowledge and more concerned with day-to-day experience.

Partly, it's a difference in the nature of the Foundations involved (Carnegie and Ford were considerably more removed from the worlds of medicine and business than the Scripps Howard Foundation is from journalism). Part of it, too, might be the nature of journalism itself; its practitioners may simply not need fundamental knowledge in order to be effective in the way doctors need biology and business-folks need econ/finance. (By extension, maybe journalism shouldn't be taught in a university setting at all.)

What is unmistakable, though – and more than slightly worrying – is the flat-out threatening tone taken by the Foundations in their letter, telling presidents that institutions which don't get with the program "will find it difficult to raise funds from Foundations concerned with the future of news". Apart from being classless, the letter could stand a touch more humility about proclaiming any given educational model as the One True Way.

October 24

Bologna – The Real Lessons

Europe’s Bologna Process may be winding down, but that’s not to say it was a failure. In fact, one could argue that one of the reasons Bologna is not quite so front-and-centre as it used to be is that it did its job spectacularly well and that barriers to both educational and labour market mobility have fallen significantly in the last decade.

There are some lessons for Canada here. Briefly, these are:

1) Improving Mobility Means Paying Attention to Quality. This is a fairly simple concept. Credits are a form of currency. If I'm going to take my credits from institution A to institution B, the folks at B are going to need some kind of exchange rate to make that work. No reliable exchange rate, no exchange. The problem in Canada is that we find actual discussions about quality, level and intensity to be, for lack of a better word, icky. Heck, we might have to say things out loud that would be upsetting to certain groups of institutions or students. For example – why do some Ontario universities require 24 contact hours for a half-credit while others require 39? There may well be reasons to consider them equivalent, but unless they are made explicit, it's hard to imagine how real, universal exchange rates are possible.

2) Improving Mobility Means More External Assessment. At the end of the day, any currency is based on trust. For one institution to accept credits from another, it has to believe that the other is credible. Within small groups of institutions, that works. But it's ludicrous to think that anyone at (say) Memorial really has a sense of how (say) Kwantlen is handling the transition from polytechnic to university, and hence whether credits from the latter are equivalent to their own. The role of external quality agencies is precisely to provide a neutral "seal of approval." No seal of approval, no trust, no mobility. Simple as that.

3) Improving Quality and Harmonizing Outcomes Means More Inclusive Policy-Making. Possibly the most interesting thing about Bologna is that it wasn’t exclusively or even primarily an inter-governmental process. To do a Bologna means building a table that includes not just governments, but professional bodies, universities and students as well; it also means moving ahead with less than full consensus when necessary to preserve forward momentum. In Canada no mechanism exists to call these parties together, and important bodies like CMEC and AUCC get queasy without consensus.

Doing a Bologna in Canada would thus require overcoming some deep-set habits. Yet, it’s something we may need to do, and soon. More tomorrow.

August 26

Trust

It’s a big day at HESA, as it’s release day for our final report on the Consultation on the Expansion of Degree-granting in Saskatchewan that we’ve been working on for a few months (available here). I can’t tell you what it says before it comes out – but I would like to talk about one of the key themes of the report: trust.

If you issue degrees, people need to be able to trust that the degree means something. In particular, students need to know that a degree from a given institution will be seen as a mark of quality by employers; otherwise, the degree is worthless. Worldwide, the function of quality assurance agencies – third parties giving seals of approval either to individual programs or to institutions generally (either by looking directly at quality or by looking at an institution's internal quality control mechanisms) – is to assure the public that degrees are trustworthy.

In Canada, many people have looked askance at these bodies, seeing them as unnecessary bureaucratic intrusions. “We never needed these before,” people grumble. “Why do we need them now?”

To an extent, the grumblers have a point. Trust is usually earned through relationships. People in, say, Fredericton, trust UNB not because some agency tells them to trust it but because it's been granting degrees for going on 200 years now; they've seen the graduates and can gauge the quality for themselves. This is true of most universities in Canada: they're old, solid and hardly fly-by-night, and people know who they are. And there tend not to be more than four in any given urban area, so pretty much everyone knows someone who went to school "X" and can thus gauge an institution's quality directly.

But what happens when you let new players, like private universities or community colleges, into the degree-granting game? What happens when universities start having to look abroad for students? How can employers in Canada trust new players? How can employers in Turkey or Vietnam trust any Canadian university they’ve never heard of?

Canada was able to get away without quality assurance for so long mainly because our system of giving a relatively small number of large public universities a monopoly over degree-granting was well-suited to engendering trust – especially when 90% of their students were local. But open up degree-provision, or widen the scope of your student base, and suddenly trust isn’t automatic anymore. You need a third-party to give a seal of approval to replace the trust that used to come naturally.

Quality assurance isn’t anyone’s idea of fun. But it isn’t the frivolous, makework bureaucracy the grumblers criticize, either. Rather, it’s a rational response to changing patterns in the provision and consumption of higher education.