HESA

Higher Education Strategy Associates

Category Archives: Worldwide PSE

Post-secondary education issues and policy in countries other than Canada.

February 25

Rankings in the Middle East

If you follow rankings at all, you’ll have noticed that there is a fair bit of activity going on in the Middle East these days.  US News & World Report and Quacquarelli Symonds (QS) both published “Best Arab Universities” rankings last year; this week, the Times Higher Education (THE) produced a MENA (Middle East and North Africa) ranking at a glitzy conference in Doha.

The reason for this sudden flurry of Middle East-oriented rankings is pretty clear: Gulf universities have a lot of money they’d like to use on advertising to bolster their global status, and this is one way to do it.  Both THE and QS tried to tap this market by making up “developing world” or “BRICs” rankings, but frankly most Arab universities didn’t do too well on those metrics, so there was a niche market for something more focused.

The problem is that rankings make considerably less sense in MENA than they do elsewhere. In order to come up with useful indicators, you need accurate and comparable data, and there simply isn’t very much of this in the region.  Let’s take some of the obvious candidates for indicators:

Research:  This is an easy metric, and one which doesn’t rely on local universities’ ability to provide data.  And, no surprise, both US News and the Times Higher Ed have based 100% of their rankings on this measure.  But that’s ludicrous for a couple of reasons.  First, most MENA universities have literally no interest in research.  Outside the Gulf (i.e. Oman, Kuwait, Qatar, Bahrain, UAE, and Saudi Arabia), there’s no money available for it.  Within the Gulf, most universities are staffed by expats teaching 4 or even 5 classes per term, with no time or mandate for research.  The only places where serious research is happening are one or two of the foreign universities that are part of Education City in Doha, and some of the larger Saudi universities.  Of course the problem with Saudi universities, as we know, is that at least some of the big ones are furiously gaming publication metrics precisely in order to climb the rankings, without actually changing university cultures very much (see for example this eyebrow-raising piece).

Expenditures:  This is a classic input variable used in many rankings.  However, an awful lot of Gulf universities are private, and won’t want to talk about their expenditures for commercial reasons.  Additionally, some are personal creations of local rulers who spend lavishly on them (for example, Sharjah and Khalifa Universities in UAE); they’d be mortified if the data showed them to be spending less than the Sheikh next door.  Even in public universities, the issue isn’t straightforward.  Transparency in government spending isn’t universal in the area, either; I suspect that getting financial data out of an Egyptian university would be a pretty unrewarding task.  Finally, for many Gulf universities, cost data will be massively wonky from one year to the next because of the way compensation works.  Expat teaching staff (in the majority at most Gulf unis) are paid partly in cash and partly through free housing, the cost of which swings enormously from one year to the next based on changes in the rental market.

Student Quality: In Canada, the US, and Japan, rankings often focus on how smart the students are, based on average entering grades, SAT scores, etc.  But grades and admissions tests aren’t comparable across national borders, so measures like these simply don’t work in a multi-national ranking; they’re out.

Student Surveys: In Europe and North America, student surveys are one way to gauge quality.  However, if you are under the impression that there is a lot of appetite among Arab elites to allow public institutions to be rated by public opinion then I have some lakeside property in the Sahara I’d like to sell you.

Graduate Outcomes:  This is a tough one.  Some MENA universities do have graduate surveys, but what do you measure?  Employment?  How do you account for the fact that female labour market participation varies so much from country to country, and that many female graduates are either discouraged or forbidden by their families from working? 

What’s left?  Not much.  You could try class size data, but my guess is most universities outside the Gulf wouldn’t have an easy way of working this out.  Percent of professors with PhDs might be a possibility, as would the size of the institution’s graduate programs.  But after that it gets pretty thin.

To sum up: it’s easy to understand commercial rankers chasing money in the Gulf.  But given the lack of usable metrics, it’s unlikely their efforts will amount to anything useful, even by the relatively low standards of the rankings industry.

February 23

Demand Sponges

If you’ve ever spent any time looking at the literature on private higher education around the world – from the World Bank, say, or the good folks at SUNY Albany who run the Program for Research on Private Higher Education (PROPHE) shop – you’ll know that private higher education is often referred to as “demand-absorbing”; that is, when the public sector is tapped-out and, for structural reasons (read: government underfunding, unwillingness to charge tuition), can’t expand, private higher education comes to the rescue.

To readers who haven’t read such literature: in most of the world, there is such a thing as “private higher education”.  For the most part, these institutions don’t look like US private colleges, in that (with the exception of a few schools in Japan, and maybe Thailand) they tend not to be very prestigious.  But they aren’t all bottom-feeding for-profits, either.  In fact, for-profits are fairly rare.  Yes, there are outfits like Laureate with chains of universities around the world, but most privates were either set up by academics who didn’t want to work in the state system (usually the case in East-central Europe), or by religious groups trying to do something for their community (the case in most of Africa).

Anyways, privates as “demand-absorbers” – that’s still the case in Africa and parts of Asia.  But what’s interesting is what’s happening to private education in countries heading into demographic decline, such as those in East-central Europe.  There, it’s quite a different story.

Let’s start with Poland, which is probably the country in the region that got private education regulation the least wrong.  There was a massive explosion of participation in Poland after the end of socialism, and not all of it could be handled by the public sector, even though public institutions could charge tuition fees (which they sort of did, and sort of didn’t).  The private sector went from almost nothing in the mid-90s to over 650,000 students by 2007.  But since then, private enrolments have been in free-fall.  Overall, enrolments are down by 20%, or close to 400,000 students.  But that drop has been very unequally distributed: in public universities the drop was 10%; in privates, it was 40%.
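
The arithmetic can be checked with a quick sketch.  The 10% and 40% sector declines are from the text; the peak enrolment levels below are illustrative guesses consistent with it, not official Polish statistics.

```python
# Rough check on the Polish enrolment figures. Declines per sector are
# from the text; peak levels are assumed for illustration.
peak = {"public": 1_300_000, "private": 650_000}   # assumed peak enrolments
drop = {"public": 0.10, "private": 0.40}           # declines cited in the text

lost = {sector: round(peak[sector] * drop[sector]) for sector in peak}
total_lost = sum(lost.values())
overall_rate = total_lost / sum(peak.values())

print(lost)          # {'public': 130000, 'private': 260000}
print(total_lost)    # 390000 -- "close to 400,000"
print(round(overall_rate, 2))  # 0.2 -- the ~20% overall decline
```

With these assumed peaks, the privates account for two-thirds of the total loss despite being the smaller sector, which is the whole point of the post.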

Tertiary Enrolments by Sector, Poland, 1994-2013


In Romania, the big picture story is the same as in Poland, but as always, once you get into the details it’s a bit more on the crazy side – this is Romania, that’s the way it is.  Most of the rise in private enrolments from 2003 to 2008 was due to a single institution named Spiru Haret (after a great 19th century educational reformer), which eventually came to have 311,000 students, or over a third of the entire country’s enrolment.

Eventually – this is Romania, these things take time – it occurred to people in the quality assurance agency that perhaps Spiru Haret was closer to a degree mill than an actual university; they started cracking down on the institution, and enrolments plummeted.  And all this was happening at the same time as: i) the country was undergoing a huge demographic shift (abortion was illegal under Ceausescu, so in 1990 the birthrate fell by a third, which began to affect university enrolments in 2008); and, ii) the national pass-rate on the baccalaureate (which governs entrance to university) was halved through a combination of a tougher exam and stricter anti-cheating provisions (which I described back here).  Anyways, the upshot of all this is that while public universities have lost a third of their peak enrolments, private universities have lost over 80% of theirs.
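
A quick consistency check on these figures; the sector peaks below are invented for illustration, and only the shares and decline rates come from the text.

```python
# Spiru Haret's 311,000 students were "over a third" of the country's
# enrolment, which bounds the national total from above.
spiru_haret = 311_000
implied_total_max = spiru_haret * 3   # total enrolment was under this
print(implied_total_max)              # 933000

# Sector declines cited: publics lost a third of peak, privates over 80%.
public_peak, private_peak = 600_000, 400_000   # assumed peaks
public_now = round(public_peak * (1 - 1/3))
private_now = round(private_peak * (1 - 0.8))
print(public_now, private_now)  # 400000 80000
```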

Tertiary Enrolments by Sector, Romania, 2003-2013


There’s a lesson here: just as private universities expanded quickly when demand was growing, they can contract quickly as demand shrinks.  The fact of the matter is that with only a few exceptions, they are low-prestige institutions and, given the chance, students will pick high-prestige institutions over low-prestige ones most of the time.  So as overall demand falls, demand at low-prestige institutions falls more quickly.  And when that happens, private institutions get run over.

So maybe it’s time to rethink our view of private institutions as demand-absorbing institutions.  They are actually more like sponges: they do absorb, but they can be wrung out to dry when their absorptive capacities are no longer required.

February 19

Performance-Based Funding (Part 3)

As I noted yesterday, the American debate on PBF has more or less ignored evidence from beyond its shores; and yet, in Europe, there are several places that have very high levels of performance-based funding.  Denmark has had what it calls a “taximeter” system, which pays institutions on the basis of student progression and completion, for over 20 years now, and it currently makes up about 30% of all university income.  Most German Länder have some element of incentive-based funding on either student completion or time-to-completion; in some cases, they are also paid on the basis of the number of international students they attract (international students pay no tuition in Germany).  In the Netherlands, graduation-based funding makes up over 60% of institution operating grants (or, near as I can tell, about 30% of total institutional income).  The Czech Republic now gives out 20% of funding to institutions on a quite bewildering array of indicators, including internationalization, research, and student employment outcomes.
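
As a sketch of how such a system works: a taximeter-style grant is just base funding plus a per-completion payment.  The figures below are invented so that the performance share comes out near the Danish ~30%; they are not actual Danish rates, which vary by field of study.

```python
# Minimal sketch of a taximeter-style allocation: non-performance base
# funding plus a payment per completed student. All figures are invented.
def operating_grant(base: float, completions: int, rate: float) -> float:
    """Base funding plus a fixed payment for each completion."""
    return base + completions * rate

grant = operating_grant(base=700_000_000, completions=3_000, rate=100_000)
performance_share = (3_000 * 100_000) / grant
print(grant)                        # 1000000000.0
print(round(performance_share, 2))  # 0.3 -- roughly the taximeter's share
```

The design choice worth noticing is that the institution’s marginal revenue depends entirely on completions, which is exactly what creates the efficiency pressure discussed below.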

Given this, you’d think there might be a copious literature about whether the introduction of these measures actually “worked”, in terms of changing outcomes on the indicators in question.  But you’d be wrong.  There’s actually almost nothing.  That’s not to say these programs haven’t been evaluated.  The Danish taximeter system appears to have been evaluated four times (I haven’t actually read these – Danish is fairly difficult), but the issue of dropouts doesn’t actually seem to have been at the core of any of them (for the record, Danish universities have relatively low levels of dropouts compared to other European countries, but it’s not clear if this was always the case, or if it was the result of the taximeter policy).  Rather, what gets evaluated is the quite different question of: “are universities operating more efficiently?”

This is key to understanding performance indicators in Europe.  In many European countries, public funding makes up close to 100% of institutional income.  PBF has therefore often been a way of trying to introduce a quasi-market among institutions so as to induce competition and efficiency (and on this score, it usually gets fairly high marks).  In North America, where pressures for efficiency are exerted through a competitive market for students, the need for this is – in theory at least – somewhat less.  This largely explains the difference in the size of performance-based funding allocations: in Europe, these funds are often the only quasi-competitive mechanism in the system, and so (it is felt) they need to be on the scale of what tuition is in North America in order to achieve similar competitive effects.

Intriguingly, performance-based funding in Europe is at least as common with respect to research as it is to student-based indicators (a good country-by-country summary from the OECD is here).  Quite often, a portion of institutional operating funding will be based on the value of competitive research won, a situation made possible by the fact that many countries in Europe separate their institutional grants into funding for teaching and funding for research in a way that would give North American universities the screaming heebie-jeebies.  Basically: imagine if the provinces awarded a portion of their university grants on the same basis that Ottawa hands out the indirect research grants, only with less of the questionable favouritism towards smaller universities.  Again, this is less about “improving overall results” than it is about keeping institutions in a competitive mindset.

So, how to interpret the evidence of the past three days?  Tune in tomorrow.

February 18

Performance-Based Funding (Part 2)

So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states have some kind of PBF criteria built into their overall funding system, or are planning to do so).  Basically, one side says they work, and the other says they don’t.

Let’s start with the “don’t” camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here.  To determine whether PBFs affect institutional outcomes, they look mostly at a single output – degree completion.  This makes a certain amount of sense, since it’s the one most states try to incentivize, and they use a nice little quasi-experimental research design comparing changes in completion rates in states with PBF and those without.  Their findings, briefly, are: 1) there are no systematic benefits to PBF – in some places, results were better than in non-PBF systems, in other places they were worse; and, 2) where PBF is correlated with positive results, said results can take several years to kick in.

Given the methodology, there’s no real arguing with the findings here.  Where Hillman & Tandberg can be knocked, however, is that their methodology treats all PBF schemes as though they were the same “treatment”.  But as we noted yesterday, the existence of PBF is only one dimension of the issue.  The extent of PBF funding, and the extent to which it drives overall funding, must matter as well.  On this, Hillman and Tandberg are silent.

The HCM paper does in fact give this issue some space.  Turns out that of the 26 states examined, 18 have PBF systems that account for less than 5% of overall public funding.  Throw in tuition and other revenues, and the amount of total institutional revenue accounted for by PBF drops by 50% or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects.  Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two, so whatever PBF arrangements exist there likely aren’t changing the distribution of funds very much).  The authors then point to a number of advances made in some of these states on a variety of metrics, such as “learning gains” (unclear what that means), greater persistence for at-risk students, shorter times-to-completion, and so forth.
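
The dilution at work here is simple multiplication: PBF’s effective share of total institutional revenue is its share of public funding scaled by public funding’s share of revenue.  A sketch, with illustrative inputs:

```python
# PBF's effective stake in total revenue. Both inputs below are
# illustrative, not figures for any particular state.
def pbf_share_of_revenue(pbf_share_of_public: float,
                         public_share_of_revenue: float) -> float:
    return pbf_share_of_public * public_share_of_revenue

# PBF at 5% of public funding, public funding at half of total revenue:
# the effective stake halves, to 2.5% -- the "drops by 50% or more" above.
print(pbf_share_of_revenue(0.05, 0.50))  # 0.025
```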

But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes. There’s nothing like Hillman and Tandberg’s experimental design at work here; instead, what you have is an unscientific group of anecdotes about positive things that have occurred in places with PBF.  So as far as advancing the debate about what works in performance-based funding, it’s not up to much.

So what should we believe here?  The Hillman/Tandberg result is solid enough – but if most American PBF systems don’t change funding patterns much, then it shouldn’t be a surprise to anyone that institutional outcomes don’t change much either.  What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see if increasing incentives actually does matter.

Such places do exist – but oddly enough neither of these reports actually looks at them.  That’s because they’re not in the United States, they’re in Europe.  More on that tomorrow.

January 29

Universities and Economic Growth

If you read the OECD/World Bank playbook on higher education, it’s all very simple: if you raise investment in higher education and research, growth will follow.

At the big-picture national level, this is probably true.  But it’s maddeningly unspecific.  What is the actual mechanism by which higher spending on a set of institutions translates into growth?  Is it the number of trained graduates produced?  Is it the quality or type of education they receive?  Does concentrating research in certain areas mean greater growth?  What about the balance between “pure” and “applied” research (insofar as those are useful distinctions)?  What about technology transfer strategies?

Most importantly for a country like Canada: what about geography?  Is a strategy of widely distributing funds better than a strategy of concentration for spurring economic growth?  Should urban universities – nearer the centres of economic production – get more than universities in smaller conurbations?

Anyone telling you they have the definitive answer to these questions is lying.  Fact is, the literature on most of these topics is embarrassingly thin and provides little to no guidance to governments.  And the literature as it pertains to individual universities is even thinner.  Say you want an institution to “do better” at helping deliver regional economic growth: what do you ask it to do, exactly?  Here, the literature mainly consists of anecdotes of success parading as universally-applicable rules for university conduct (this European Union document is an example).  Which of course is tosh.

One solution you often see to the problem of decreased regional economic growth in smaller cities is for PSE institutions to “work more with industry”.  But if your local industry is in decline, there are limits to this strategy.  You can educate more people in a given field in order to lower the price of skilled labour.  You can get profs to work on upstream blue-sky research that will revolutionize the field, but the spillovers are enormous and the likelihood they will be captured by local business is small.  You can get your profs to work on downstream innovation with local business, but that’s not foolproof. Many companies won’t have the receptor capacity to work with you, either because they are too small or because they are too big and rely on a centralized R&D system, which more often than not is located outside the country (usually the US).

From a PSE point of view, there are two ways you can go from here.  There’s the route of “give us more money and we’ll give the local workforce a broader set of skills”.  But the fact that a local population has high levels of relatively generic skills does not necessarily make a region a particularly attractive place for investment.  I’m not an economic geographer, but it seems to me that one of the driving forces of the modern era is that the most profitable companies and industries are those that effectively capitalize on agglomerations of very specific types of talent.  And by and large, to get agglomerations of very specific types of talent, you tend to need a large population to begin with, which is why big cities keep getting bigger.

The other option is a “place your bets” approach.  For emerging industries to find the right kinds of skills in a particular region, you have to place bets.  You have to say: “we’re going to invest in training and facilities to produce workers for X, Y, and Z industries, which at the moment do not exist in our region, and indeed may never do so.”  Cape Breton University’s emphasis on renewable energy is a good example of this strategy.  It’s a bet: if they get good at this and produce enough graduates, maybe within a few years there will be enough of a talent agglomeration that business will go there and invest.

Maybe.  And maybe not.  Problem is, public universities and their government paymasters get nervous about “maybes”.  Higher education is a risk-averse industry.

Tomorrow, we’ll look at a case study in this: Southwestern Ontario.

January 28

Another Australian De-regulation Update

So the last time we tuned into the antics in Canberra, the government was trying to pass a fairly ambitious piece of legislation that would completely de-regulate tuition fees while (more or less) maintaining the HECS system, which means graduates’ contributions are always tied to income, and thus never become too onerous.  The government was also going to cut institutional grants by about 20%, but keep the “demand-driven” system in which government dollars follow students, no matter how many attend.

The problem with this, politically, is that the government does not control the Senate.  With Labor and the Greens opposed to the coalition, the government needs to attract six of the eight votes belonging to independents and minor parties to get anything done in the upper chamber.  Late last year, four of those senators were making positive noises.  Two others (including the delightful Jacqui Lambie – seriously, click that link, it’s totally worth it) were implacably opposed, while the final two in play – Dio Wang of the vaguely Ford-ist (minus the drugs) Palmer United Party, and Independent Nick Xenophon – were opposed, but perhaps open to passing a deal with amendments.

So the government switched into deal-making mode, right?  Wrong.  For reasons that defy rational explanation, the coalition opted for a snap December Senate vote on the package, as-was.  Cue a two-vote loss.  Frank Underwood would not have been amused.  In Australian parlance: the coalition whips were clearly a few sheep short of a full paddock.

But that wasn’t the end of the story.  In response, the lower house simply adopted the same bill again, and sent it back to the Senate – only this time the government was ready to deal.  The 20% cut in government grants, those “necessary” savings that required the government to introduce de-regulation?  Turns out they’re negotiable – de-regulation apparently now matters more than fiscal probity.  The universities can hardly believe their luck: full freedom to increase fees, and no loss in government grant?  I don’t know exactly what’s going through vice-chancellors’ heads right now, but I’m betting that the words “eating”, “too”, “having”, and “cake” are all in there somewhere.

Problem is the government isn’t negotiating with vice-chancellors, it’s negotiating with Wang and Xenophon.  Wang has now said he is personally for de-regulation, but will abide by a caucus decision on the party line when voting (the question remains open as to how this would work, given that the only other Palmer senator is dead-set against the bill).  Xenophon wants the whole matter of university funding tossed over to a big bipartisan commission, a position which manages to be both sensible and absurd.  Sensible in that yes, changes of this magnitude are best done in bipartisan fashion, because otherwise institutions get policy whiplash if the opposition comes to power and undoes the change (a point I made back here).  Absurd in that there have been quite a few inquiries and commissions into higher education in the last few years; there are few secrets about the current system’s strengths and weaknesses.  Also, bipartisan commissions take two to tango, and there’s precious little sign the opposition Labor Party has any interest in handing the government a way out of this debacle (even if, as is whispered, some of them actually approve of the policy).  Xenophon presumably knows this, which is why his position – in Canadian parlance – is classic ragging the puck.

The clock is ticking on this proposal.  Universities need to know what to tell incoming students about their fees, so the question pretty much has to be solved by March.  That means another vote in the next five weeks or so.  Given the government’s ham-handedness in handling the file to date, odds are it’s not going to pass, though stranger things have happened.  But given the vaulting ambitions of Australian universities, and the seemingly limited desire of Australian government to fund higher education (per-student government funding is 30-40% lower than in Canada), it’s hard to imagine this option not coming back in some form in 2016.

January 26

King Abdullah bin Abdulaziz al-Saud

King Abdullah bin Abdulaziz al-Saud, King of Saudi Arabia for the past ten years (after effectively being regent for the ten years before that, due to his brother King Fahd’s incapacitation from stroke), died last week.  There can have been very few individuals who have had a greater effect on their country’s system of higher education.

Perhaps his best-known initiative was the creation of his eponymous institution, the King Abdullah University of Science and Technology. Opened in 2009 near Jeddah, KAUST made headlines because of its lavish construction and endowment ($20 billion worth – third in the world after Harvard and Yale) and its attempts to recruit star faculty from around the world.  KAUST to date hasn’t set the world on fire – for all the money on offer, there aren’t a lot of serious scientists interested in moving to Saudi Arabia, even if women are allowed to drive and be unveiled within the university’s heavily guarded compound – and there is some truth to the jibe that there are more buildings than professors.  But it’s early days yet, and KAUST remains an interesting strategy to try to build the elements of a knowledge economy to help the country eventually transition away from petroleum.

The outside world focused on the KAUST story because it was a big, single institution, a contained story that fit the money-to-burn Gulf Arab stereotype.  But Abdullah’s agenda wasn’t simply about KAUST.  Over the course of his (effective) 20-year reign, 36 new universities were built in Saudi Arabia.  Enrolment in universities rose nearly sixfold, from a little over 200,000 to 1,200,000, the majority of them women.  Gross enrolment rates went from 18% to 50%.  Few countries anywhere can match that kind of record.  Perhaps even more significant for the outside world was his creation of the King Abdullah Scholarship Program (KASP), which funded Saudis to get their education in the west, primarily in the United States.  The program sends 125,000 students abroad every year at a cost of about $2 billion (Canadian universities collectively receive about 9% of those students).
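
A back-of-envelope sketch of what those KASP numbers imply; the division and the 9% share are the only calculations here, and nothing below is an official figure.

```python
# Implied per-student spending and the Canadian share of KASP students,
# derived purely from the approximate figures cited in the text.
students = 125_000
annual_budget = 2_000_000_000

per_student = annual_budget / students
in_canada = round(students * 0.09)
print(per_student)   # 16000.0 -- implied spend per student per year
print(in_canada)     # 11250 -- students at Canadian institutions
```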

With Abdullah’s death, many people wonder what will become of KASP and KAUST under King Salman.  My guess is that KAUST is considerably more vulnerable than KASP.  That may seem counterintuitive, since the former has an endowment while the latter’s funding is recurring, but the politics are different.  KASP benefits a lot of middle-class Saudi families, there is much demand to go abroad for school, and in the wake of the Arab Spring, Gulf monarchs tend not to cut back on popular subsidies.  KAUST may have its own massive endowment, but it still faces financial challenges if corporate donations slow.  Saudi Arabia doesn’t tax corporations, but companies that work there do get a lot of “suggestions” about what causes they should support, and in what amounts.  What causes get supported has a lot to do with the interests of whoever’s running the country; hence, a change of regime is likely to affect patterns of philanthropy.  Unlike KASP, KAUST is seen to benefit foreigners as much as Saudis, and would therefore make an easier target.

How do we evaluate such a legacy?  The problem is that Saudi Arabia challenges many western notions about higher education.  Though we shrink from talking openly about universities’ “civilizing” function because it’s deeply uncool in a post-colonial world, the fact is most of us in the west still implicitly believe in that function.  Yet despite all these impressive increases in educational attainment – even western education – Saudi Arabia remains in our eyes a deeply uncivilized kind of place: the beheadings, the floggings, the misogyny all evoke notions of barbarity.  The spread of higher education in the country has done precious little to change that.  Whether that says more about Saudis or about higher education is an exercise for the reader.

January 16

Some Interesting New Models of Student Representation

Historically, the development of student movements has been heavily linked with nationalism, anti-colonialism, modernity, and the development of the welfare state (i.e. they were pro all four of those).  However, as higher education has become massified around the world, students have by and large become less concerned with larger social issues, and more concerned with narrower, student-based concerns.  That hasn’t always led to a loss of radicalism (viz. the carré rouge), but it’s broadly true that over time student leadership has become increasingly demure.

Arguably, this trend actually began in Canada.  The Ontario Undergraduate Student Alliance and Canadian Alliance of Student Associations – both formed in the early/mid-90s – were possibly the first student groups anywhere in the world that viewed themselves as interest groups rather than “movements”.  This is an important distinction: interest groups are prepared to act as insiders in order to gain benefits for their members, while movements resist working with insiders for fear of losing “purity”.

But I would argue there are a couple of other student organizations that have taken things considerably further.  The first is the European Students’ Union (ESU), which is a federation of various national unions.  Their focus is to lobby Brussels, which might sound like a pretty easy job since education policy is still mostly decided in individual countries (though of course our own Canadian Federation of Students [CFS] has managed to lobby Ottawa on tuition for 35 years without realizing fees are under provincial jurisdiction).  But by adjusting its work to mirror the rather technocratic work done by the European Commission, the ESU has turned into one of the nerdiest and best-spoken student groups in the world.

Want proof?  ESU talks intelligibly (arguably more so than some national governments) about quality assurance and the role of students in ensuring it (do take a look at their series of publications on the subject).  It also has done a lot of work looking at graduate employability and how to improve it.  This is really good stuff.

But the UK’s National Union of Students has perhaps gone even further, in that it seems to have made a strategic decision to become partners with institutions, so as to drive improvements in the student experience.  It co-sponsors the National Student Survey (which is kind of a cross between NSSE and the old Globe and Mail) and the Student Engagement Partnership, which acts as a resource for institutional practitioners across the country.  It also creates tools to help students at individual member institutions benchmark and improve teaching quality.  And while I can understand people being upset that NUS has chosen to focus on this stuff rather than lead a fire-and-brimstone attack on the Tory government for fee hikes, the fact remains: this is a really impressive contribution to improving educational quality and the student experience.

Could these kinds of innovations happen here?  I’d say it’s a pretty solid no on the CFS side, where this stuff would look too much like giving in to The Man.  For the non-CFS schools, it’s possible, though unlikely.  Organizations like OUSA and CASA are, for the moment, quite focused on lobbying government on financial issues rather than dealing with institutions.  The real innovator lately has been Students NS, whose members have launched an independent governance review of… themselves.  More self-centred than the UK and European initiatives, perhaps, but still a novel and welcome step to protect students’ interests.

Bon weekend.

January 12

That Obama Free Community College Proposal

I was going to start on a series about growth in non-academic staff numbers today, but the news out of Washington late last week was too spectacular, so I’m bumping it.  Did Obama really say he wanted to make community college free?

Well, yes he did.  But he might not have meant it the way we all heard it.  And whatever happens, it’s unlikely to occur any time soon.

Let’s start with what he actually said (White House fact sheet, here).  He said he would make tuition free for “responsible students” (read: on course to graduate on time, with a 2.5 GPA) attending community colleges and taking courses towards a 4-year degree, or an occupational training course in an “in-demand” field.  But there were some catches.  Only institutions that adopt “promising and evidence-based” programs to improve graduation rates will qualify.  States also have to agree to participate, kick in 25% (or thereabouts) of the funding without cutting any other higher education programs, and adopt a new outcome-based formula-funding system that funds completions rather than enrolments.  It’s not clear how many states will agree to this (nor, indeed, is there much likelihood that a Republican Congress would agree to those kinds of state spending mandates).

There are obviously a whole bunch of questions that weren’t answered in the initial announcement.  The main one was whether Obama meant “free”, or if he actually meant “government would cover the cost”.  That makes a big difference; Pell grants already cover the cost of tuition for nearly half of all community college students.  If that were the standard, it would imply that all of the new money would be going to students currently considered wealthy enough not to need grants.  That would make the new program very similar in distributional consequences to the notionally universal $1,500 refundable tax credit that Bill Clinton introduced in his second term, but which in fact was only available to those receiving less than $1,500 in Pell.

Another question, not raised much in the US, is: if the initiative is in fact successful at increasing the number of students at 2-year institutions (some of whom, to be fair, could simply be people switching from 4-year to 2-year), where are they all going to study?  In many states – California, for example – the systems are already at breaking point.  Who funds the growth required to make this system successful?

A lot of people seem to think that the President really did mean “free tuition” (i.e. no displacement of Pell grants, which are income-based), based on a comment made last week by his spokesman.  But on the other hand, the spokesman also said the program had been costed at $60 billion over ten years, or $6 billion per year, or about $666 per community college student.  Given that average tuition is about $3,800, it’s hard to see how this plan makes sense unless the administration: a) doesn’t expect most states to participate; b) doesn’t think many students will qualify; and, c) doesn’t in fact mean free tuition, but rather just “cover the cost”.  Or maybe the administration threw together a bunch of nonsense numbers that don’t matter.  Regardless, the likelihood of this becoming policy anytime soon is pretty low; its value is mainly rhetorical, as a marker for policy initiatives by future Presidents.
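The spokesman’s numbers can be sanity-checked with a quick back-of-envelope calculation.  The enrolment figure used below (roughly nine million community college students) is my own assumption, implied by the administration’s per-student number, not something stated in the announcement:

```python
# Back-of-envelope check of the White House costing figures quoted above.
# The ~9 million community college enrolment figure is an assumption,
# not part of the announcement.
total_cost = 60e9          # $60 billion over ten years
years = 10
enrolment = 9_000_000      # assumed US community college headcount
avg_tuition = 3_800        # approximate average annual tuition

per_year = total_cost / years
per_student = per_year / enrolment

print(f"${per_year / 1e9:.0f} billion per year")                    # $6 billion per year
print(f"${per_student:.0f} per student")                             # $667 per student
print(f"covers {per_student / avg_tuition:.0%} of average tuition")  # 18%
```

On these assumptions, the budget covers less than a fifth of average tuition per student, which is the arithmetic behind the suspicion that the administration either expects low take-up or means something narrower than “free”.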

As I said last year, free tuition in community colleges makes a fair bit of sense.  The main rationales for fees are that: a) there are substantial private benefits; and b) the clientele is mainly better-off and doesn’t need all the subsidies.  But neither holds true in community colleges the way it does in universities.  So while there might be some better ways to use that amount of money, this is still a generally worthwhile and positive initiative.  Would that a Canadian government could be so bold.

December 10

The History of the Smorgasbord

One of the things that clouds mutual understanding of higher education systems across the Atlantic is the nature of the Arts curriculum – and in particular, the degree to which European universities actually have one, and ours don’t.

When students enroll in a higher education program in Europe, they have a pretty good idea of the classes they’ll be taking for the next three years.  Electives are rare; when you enter a program, the required classes are in large part already laid out.  Departments simply don’t think very much in terms of individual courses – they think in terms of whole programs, and share the teaching duties required to get students through the necessary sequence of courses.

If you really want to confuse a European-trained prof just starting her/his career in Canada, ask: “what courses do you want to teach?”  This is bewildering to them, as they assume there is a set curriculum, and that they’re there to teach part of it.  As often as not, they will answer: “shouldn’t you be telling me what courses to teach?”  But over here, the right to design your own courses, and to have absolute sovereignty over what happens within those courses, is the very definition of academic freedom.

And it’s not just professors who have freedom.  Students do too, in that they can choose their courses to an extent absolutely unknown in Europe.  Basically, we have a smorgasbord approach to Arts and Sciences (more the former than the latter): take a bunch of courses that add up to X credits in this area, and we’ll hand you a degree.  This has huge advantages in that it makes programs flexible and infinitely customizable.  It has a disadvantage in that it’s costly, and sacrifices an awful lot of what most people would call curricular consistency.

So why do we do this?  Because of Harvard.  Go back to the 1870s, when German universities were the envy of the world.  The top American schools were trying to figure out what was so great about them – and one of the things they found really useful was this idea called “academic freedom”.  But at Harvard, they thought they would go one better: they wouldn’t just give it to profs, they’d give it to students, too. This was the birth of the elective system.  And because Harvard did it, it had to be right, so eventually everyone else did it too.

There was a brief attempt at some of the big eastern colleges to put a more standard curriculum in place after World War II, so as to train their budding elites for the global leadership roles they were expected to assume.  It was meant to be a kind of Great Books/Western Civ curriculum, but profs basically circumvented these attempts by arguing for what amounted to a system of credit “baskets”.  Where the university wanted a single course on “drama and film in modern communication” (say), profs argued for giving students a choice between four or five courses on roughly that theme.  Thus, the institution could require students to take a drama/film credit, but the profs could continue to teach specialist courses on Norwegian Noir rather than suffer the indignity of having to teach a survey course (not that they made their case this way – “student choice” was the rallying cry, natch).

Canadian universities absorbed almost none of this before WWII – until then, our universities were much closer to the European model.  But afterwards, with the need to get our students into American graduate schools, and so many American professors being hired thereafter (where else could we find so many qualified people to teach our burgeoning undergrad population?), Canadian universities gradually fell into line.  By the 1970s, our two systems had coalesced into their present form.

And that, friends, is how Arts faculties got their smorgasbords and, to a large extent, jettisoned a coherent curriculum.
