Category Archives: grad students

April 05

Student/Graduate Survey Data

This is my last thought on data for a while, I promise. But I want to talk a little bit today about the increasing misuse of student and graduate surveys.

About 15 years ago, the relevant technology for email surveys became sufficiently cheap and ubiquitous that everyone started using them. I mean, everyone. What has happened over the last decade and a half is a proliferation of surveys and, with it – surprise, surprise – a steady decline in survey response rates. We know that these low-participation surveys (response rates are nearly all below 50%, and most are below 35%) are reliable, in the sense that they give us similar results year after year. But we have no idea whether they are accurate, because we have no way of dealing with response bias.

Now, every once in a while you get someone with the cockamamie idea that the way to deal with low response rates is to expand the sample. Remember how we all laughed at Tony Clement when he claimed the (voluntary) National Household Survey would be better than the (mandatory) Long-Form Census because the sample size would be larger? Fun times. But this is effectively what governments do when they decide – as the Ontario government did in the case of its sexual assault survey – to carry out what amounts to a (voluntary) student census.
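
To see why a bigger voluntary sample doesn’t help, here is a minimal simulation sketch. Everything in it – the 30% “true” satisfaction rate, the assumption that satisfied students are twice as likely to respond – is invented purely for illustration, not taken from any actual survey:

```python
# Minimal sketch: a large self-selected sample stays biased, while a small
# random sample does not. All numbers below are hypothetical assumptions.
import random

random.seed(1)

# Hypothetical population: 100,000 students, 30% of whom are "satisfied" (1).
population = [1] * 30_000 + [0] * 70_000

def voluntary_sample(pop, n):
    """Collect n respondents, assuming satisfied students answer twice as often."""
    respondents = []
    while len(respondents) < n:
        person = random.choice(pop)
        propensity = 0.4 if person == 1 else 0.2  # assumed response propensities
        if random.random() < propensity:
            respondents.append(person)
    return respondents

def share_satisfied(sample):
    return sum(sample) / len(sample)

small_random = random.sample(population, 1_000)       # small, representative panel
big_voluntary = voluntary_sample(population, 50_000)  # huge, self-selected "census"

print("True share satisfied:         0.30")
print(f"1,000-person random panel:    {share_satisfied(small_random):.2f}")
print(f"50,000-person voluntary run:  {share_satisfied(big_voluntary):.2f}")  # lands near 0.46
```

Run it a few times: the voluntary “census” gives the same wrong answer over and over (reliable!), while the small random sample hovers around the true value. That, in one script, is the reliability-versus-accuracy problem.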

So we have a problem: even as we want to make policy on a more data-informed basis, the quality of student data is declining (this also goes for graduate surveys, but I’ll come back to those in a moment). Fortunately, there is an answer to this problem: interview fewer students, but pay them.

What every institution should do – and frankly what every government should do as well – is create a balanced, stratified panel of about 1,000 students, and pay them maybe $10 per survey to complete questionnaires throughout the year. That way, you’d have good response rates from a panel that actually represents the student body, as opposed to the crapshoot that currently reigns. Want accurate data on student satisfaction, library/IT usage, or the incidence of sexual assault/harassment? This is the way to do it. And you’d also be doing the rest of your student body a favour by not spamming them with questionnaires they don’t want.

(Costly?  Yes.  Good data ain’t free.  Institutions that care about good data will suck it up).
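
For the curious, here is a rough sketch of what the allocation step for such a panel could look like. The strata and headcounts are invented for illustration; a real institution would build its strata from its own enrolment records:

```python
# Rough sketch: proportional allocation of a 1,000-student panel across strata.
# The strata and headcounts below are hypothetical.
from math import floor

enrolment_by_stratum = {
    "first-year undergrad": 6_000,
    "upper-year undergrad": 14_000,
    "master's": 3_000,
    "doctoral": 1_500,
    "international (all levels)": 2_500,
}

PANEL_SIZE = 1_000
total = sum(enrolment_by_stratum.values())

# Proportional shares, rounded down, with leftover seats handed out by
# largest remainder so the panel sums to exactly 1,000.
exact = {s: PANEL_SIZE * n / total for s, n in enrolment_by_stratum.items()}
panel = {s: floor(x) for s, x in exact.items()}
leftover = PANEL_SIZE - sum(panel.values())
for s in sorted(exact, key=lambda s: exact[s] - panel[s], reverse=True)[:leftover]:
    panel[s] += 1

for stratum, seats in panel.items():
    print(f"{stratum:28s} {seats:4d} panellists")
print(f"{'total':28s} {sum(panel.values()):4d}")
```

At $10 a survey and, say, ten surveys a year, that works out to roughly $100,000 annually in incentives – which is the “costly” part, and also the point.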

It’s a slightly different story for graduate surveys. Here, you also have a problem of response rates, but with the caveat that, at least as far as employment and income data are concerned, we aren’t going to have that problem for much longer. You may be aware of Ross Finnie’s work linking student data to tax data to work out long-term income paths. An increasing number of institutions are now doing this, as indeed is Statistics Canada for future versions of its National Graduates Survey (I give StatsCan hell, deservedly, but for this they deserve kudos).

So now that we’re going to have excellent, up-to-date data on employment and income, we can re-orient our whole approach to graduate surveys. We can move away from attempted censuses with a couple of not-totally-convincing questions about employment and re-shape them into what they should be: much more qualitative explorations of graduate pathways. Any day of the week, give me a stratified sample of 2,000 graduates explaining in detail how they went from being a student to having a career (or not) three years later, rather than 50,000 graduates answering a closed-ended question about whether their job is “related” to their education. The latter is a boring box-checking exercise; the former offers the potential for real understanding and improvement.

(And yeah, again: pay your survey respondents for their time. The U.S. Department of Education does it on its surveys, and it gets great data.)

Bottom line: We need to get serious about ending the Tony Clement-icization of student/graduate data. That means getting serious about constructing better samples, incentivizing participation, and asking better questions (particularly of graduates).  And there’s no time like the present. If anyone wants to get serious about this discussion, let me know: I’d be overjoyed to help.

May 19

PhDs in the Humanities

I had the good fortune earlier this week of speaking to the Future of the Humanities PhD conference at Carleton University. It was an interesting event, full of both faculty and students who are thinking about ways to reform a system that takes students far too long to navigate. They asked me for my thoughts, so I gave them. Here’s a précis.

One of the most intractable problems with the PhD (and not just in the humanities) is that it serves a dual purpose. First, it’s a piece of paper that says “you’re a pretty good researcher”; second, it’s a piece of paper that says “you too can become a tenured university professor! Maybe.”

The problem is that “maybe”: lots of people can meet the standard of being a good researcher, but that doesn’t mean there will be university professor jobs for them. Simply, more people want to be professors than there are available spots; eventually, the system says yes to some and no to others. Right now we let the job market play that role. But what if those in charge of doctoral programs themselves played a more active role? What if there was more selectivity at the end of – say – the first or second year of a doctorate? Those deemed likeliest to get into academia would then end up on one track, and the others would be told (gently) that academia was unlikely for them, and offered a place in a (possibly shorter-duration) “professional PhD” track designed to train highly skilled workers destined for other industries. Indeed, some might want to be put on a professional PhD track right from the start.

If you’re going to be selective early on in doctoral programs, then you probably want to re-design their front-ends so they’re not all coursework and – especially – comps. Apparently in some programs, it is not unusual to take three years to complete coursework and comps – and this is after someone has done a master’s degree. This is simply academic sadism. It is certainly important for students heading into the teaching profession to have a grasp of the overall literature. But is it necessary for this work to be placed entirely at the beginning of a program, acting as a barrier to students doing what they really want to do, which is research?

Instead, why not have students and their supervisors jointly work out at the start of a program what literature needs to be covered, and agree to a structured program of covering it over the length of the program? Ideally, students would have their tests at the end, close to the time when they would be going on the job market. You’d need to front-end load the more methodological stuff (so those who end up in the professional stream get it too), but apart from that this seems perfectly feasible.

Of course, that implies that departments – and more importantly individual doctoral supervisors – are prepared to do the work to create individual degree plans with students and actually stick to them. There is really no reason why a five- or even a four-year doctorate – that is, one whose length roughly coincides with the funding package – should be out of reach. But expectations have to be clear and met on both sides. Students should have a clear roadmap telling them roughly what they’ll be doing each term until they finish; professors need to hold them to it, but equally, professors also need to be held to their responsibilities in keeping students on track.

Many people think of PhDs as “apprenticeships”. But that doesn’t imply just hanging around and watching the “masters”. Go read any apprenticeship standards document: employers have a very detailed list of skills that they need to impart to apprentices over a multi-year apprenticeship, and both the apprentice and the journeyman have to sign off every once in a while that those skills have been taught.

Or, take another form of knowledge-worker apprenticeship: medicine. When medical students pick a specialty like internal medicine or oncology, they are embarking on a form of multi-year apprenticeship – but one which is planned in detail, containing dozens of assessment points every year. Their assessments cover not just subject matter expertise, but also the aspiring medic’s communication skills, teamwork skills, managerial skills, etc. All things that you’d think we’d want our doctoral students to acquire.

So how about it, everyone? Medical residencies as a model for shorter, more intensive PhDs? Can’t be worse than what we’re doing now.

October 16

A Simple Solution for Statistics on Doctoral Education

Higher education statistics in Canada are notoriously bad. But if you think general stats on higher education are hard to come by, try looking at our statistical systems with respect to doctoral education and its outcomes.

Time-to-completion statistics are a joke. Almost no one releases this data; when it is released, it often appears to be subject to significant “interpretation” (there’s a big difference between actual time-to-completion and “registered” time-to-completion: if you want to keep the latter down, just tell students entering their sixth year to de-register until they’re ready to submit a thesis). Employment statistics are even scarcer – and as for statistics on PhDs getting jobs in academia? Ha!

It’s that last piece of data that students most want published; it’s also the one viewed with the most trepidation by directors of graduate programs, who are a bit worried about what the stats might reveal. They would probably argue that this isn’t a fair measure of their programs’ success since, after all, they don’t control the hiring market. That’s fair enough, though a reasonable person might ask in return why, if that is the case, PhD intakes are staying high or even increasing.

Personally, I think the idea that everyone in a PhD program should aspire to an academic career is demented, and professors who peddle that idea to their students (and there are thousands of them) should be ashamed of themselves.  But sadly, a lot of students do buy into this myth, and when it doesn’t come true, they’re fairly upset.

There is, however, a simple way to address this problem. Every department in the country should be required to maintain a webpage with statistics on graduation rates, times-to-completion (real ones, from time of enrolment to degree), and the outcomes of the last five years’ worth of graduates. How many are in tenure-track positions? How many are in post-docs? How many are temping? Etc. No big survey, no nightmare of involving StatsCan and the provinces, and whatnot. Just every department, publishing its own stats. And no guff about response burdens: between Academia.edu and LinkedIn, this shouldn’t take more than a day to do.
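
Here is a minimal sketch of the kind of tally such a page boils down to. The five records are invented; a real department would pull enrolment and convocation dates from its own files and current positions from LinkedIn or Academia.edu:

```python
# Minimal sketch: real time-to-completion (from enrolment to degree) plus a
# tally of graduate outcomes. All records below are hypothetical.
from collections import Counter
from datetime import date

graduates = [  # (first enrolled, degree conferred, current position)
    (date(2011, 9, 1), date(2017, 6, 15), "tenure-track"),
    (date(2012, 9, 1), date(2018, 10, 20), "post-doc"),
    (date(2012, 9, 1), date(2019, 6, 15), "sessional/temp"),
    (date(2013, 9, 1), date(2019, 10, 20), "industry/government"),
    (date(2013, 9, 1), date(2020, 6, 15), "post-doc"),
]

# Elapsed years from first enrolment to degree, regardless of any terms the
# student spent de-registered (i.e. the "real" figure, not the registered one).
years = sorted((done - start).days / 365.25 for start, done, _ in graduates)
print(f"Median time-to-completion: {years[len(years) // 2]:.1f} years")

# Outcomes of the last five years' worth of graduates.
for outcome, n in Counter(pos for _, _, pos in graduates).most_common():
    print(f"{outcome:22s} {n}")
```

That’s the whole exercise: a couple of dates and a current-position field per graduate, published annually.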

“Sure”, I hear you saying, “and which departments will volunteer for this, exactly?” But there’s actually a very simple way to get departments to fall into line. The granting councils – who, as much as anyone, should be concerned about these issues – should simply make publication of such information a prerequisite for obtaining any funding for graduate students. Period.

Faced with this, I’m fairly certain that departmental objections would melt like butter. So how about it, SSHRC and NSERC? Want to strike a blow for good data? Give this plan a try. You’ll be heroes to grad students from coast to coast.

 

September 23

SSHRC and its Mission

There was a great story by the Globe’s James Bradshaw in July – one that didn’t get the attention it deserved on account of coming out too close to the Canada Day weekend – on the fate of the $17.5 million of SSHRC’s budget that the Government of Canada set aside for “business-related degrees” in the 2009 federal budget. Basically, it revolved around the assertion by Rotman’s Roger Martin that the program was an “abject failure” because the money went to almost everyone except MBA students.

What apparently eluded the scheme’s proponents was the fact that the “R” in SSHRC actually stands for “research.” And since MBAs tend not to do a lot of primary research, most of that money – not unreasonably – went to students doing “business-related” research in a variety of other fields.

 
What’s most interesting to me about Bradshaw’s story – apart from the obvious stuff about how the same community that whined about the PMO “directing” SSHRC to put aside a new pot of money for (horrors) business had no qualms at all about accepting PMO-directed money when it came under the label of the “digital economy” – is how the scheme’s authors (including Martin) came to think their original plan was a good idea.

Martin, apparently, was under the impression that SSHRC operates as a giant slush fund for grad students, and the story implies that what Martin thought he was getting with the 2009 budget was a pot of money that MBA students could use to offset their ever-heftier tuition fees. It’s easy to scoff at the naivete of this view, but it’s easy enough to see where he might have got this impression; directly or indirectly, something like two-thirds of SSHRC’s budget ends up with graduate students.

It makes you wonder: is SSHRC’s primary purpose to relieve institutions of the burden of funding all those social science graduate students? If institutions had to fund their own graduate students, would we have anything like as many graduate students in the arts as we do? Have we implicitly federalized the Social Sciences and Humanities, and if so, what should the implications of this be?