I had the good fortune earlier this week of speaking to the Future of the Humanities PhD conference at Carleton University. It was an interesting event, full of both faculty and students who are thinking about ways to reform a system that takes students far too long to navigate. They asked me for my thoughts, so I gave them. Here’s a précis.
One of the most intractable problems with the PhD (and not just in the humanities) is that it serves a dual purpose. First, it’s a piece of paper that says “you’re a pretty good researcher”; second, it’s a piece of paper that says “you too can become a tenured university professor! Maybe.”
The problem is that “maybe”: lots of people can meet the standard of being a good researcher, but that doesn’t mean there will be university professor jobs for them. Put simply, more people want to be professors than there are available spots; eventually, the system says yes to some and no to others. Right now we let the job market play that role. But what if those in charge of doctoral programs themselves played a more active role? What if there were more selectivity at the end of – say – the first or second year of a doctorate? Those deemed likeliest to get into academia would then end up on one track, and the others would be told (gently) that academia was unlikely for them, and offered a place in a (possibly shorter-duration) “professional PhD” track designed to train highly skilled workers destined for other industries. Indeed, some might want to be put on a professional PhD track right from the start.
If you’re going to be selective early on in doctoral programs, then you probably want to re-design their front-ends so they’re not all coursework and – especially – comps. Apparently in some programs, it is not unusual to take three years to complete coursework and comps – and this is after someone has done a master’s degree. This is simply academic sadism. It is certainly important for students heading into the teaching profession to have a grasp of the overall literature. But is it necessary for this work to be placed entirely at the beginning of a program, acting as a barrier to students doing what they really want to do, which is research?
Instead, why not have students and their supervisors jointly work out at the start of a program what literature needs to be covered, and agree to a structured plan for covering it over the length of the program? Ideally, students would have their tests at the end, close to the time when they would be going on the job market. You’d need to front-load the more methodological material (so those who end up in the professional stream get it too), but apart from that this seems perfectly feasible.
Of course, that implies that departments – and more importantly individual doctoral supervisors – are prepared to do the work to create individual degree plans with students and actually stick to them. There is really no reason why a five- or even a four-year doctorate – that is, one whose length roughly coincides with the funding package – should be impossible. But expectations have to be clear and met on both sides. Students should have a clear roadmap telling them roughly what they’ll be doing each term until they finish; professors need to hold them to it, but equally, professors also need to be held to their responsibilities in keeping students on track.
Many people think of PhDs as “apprenticeships”. But that doesn’t imply just hanging around and watching the “masters”. Go read any apprenticeship standards document: employers have a very detailed list of skills that they need to impart to apprentices over a multi-year apprenticeship, and both the apprentice and the journeyman have to sign off periodically that those skills have been taught.
Or, take another form of knowledge-worker apprenticeship: medicine. When medical students pick a specialty like internal medicine or oncology, they are embarking on a form of multi-year apprenticeship – but one which is planned in detail, containing dozens of assessment points every year. Their assessments cover not just subject matter expertise, but also the aspiring medic’s communication skills, teamwork skills, managerial skills, etc. All things that you’d think we’d want our doctoral students to acquire.
So how about it, everyone? Medical residencies as a model for shorter, more intensive PhDs? Can’t be worse than what we’re doing now.
Thank you for your blog posting. I enjoy getting them.
While I agree that the doctoral programs need to be reconceived, I would oppose (in the strongest fashion) any attempt to develop “two streams.” If by the 2nd year a student demonstrates limited ability in the doctoral program, then they should not be in a doctoral program at all.
Academics might be great at some things, but they can’t see into the future. To ask them to decide which students are going to “make it” in the academy, and on that basis guide them into or away from an academic stream, would be misguided. Quite frankly, I probably would not be a professor today if some of my graduate professors had made the call about whether I could or should be an academic. What some of my colleagues might think of as lacking in ability is sometimes little more than a different perspective: a different approach, a different theoretical perspective, not theoretical enough, etc.
Your proposal sets up a two-tier system for those seen as “bright” and those seen as “less promising.” I’m afraid, given the state of affairs in this world, we need bright and promising PhDs both in and out of the academy.
Instead of the two-tier system, we should have a doctoral program that trains students for both academic life and contributions in other professional contexts. At the end of the degree, one should be able to research, write and think at a high intellectual level AND one should also be able to take those skills and apply them outside of the academy. If one gets an academic job at the end of it, then wonderful. If there are no jobs to be had, then students will be prepared to contribute in other ways. This will bring us closer to bridging the gap between the knowledge produced by academics and “real world” applications.
The PhD is designed to prepare future professors. We don’t need so many people prepared in that way these days. The conclusion I draw from this is that we need fewer PhDs. We still have the MA for people who want to learn more about the subject they love, and develop marketable skills in doing so. So I have a hard time seeing a function for a PhD somehow re-engineered to do something other than what it’s designed to do. The MA already does that other thing.
In my experience, admittedly in Science and not Humanities, the big problem is not the advisors but the central administration, which assembles piles of micromanaging rules that prevent personalization. Just one example of wasting student and faculty time that comes from above: requiring even students who have an extensive portfolio of peer-reviewed research publications to write a long and heavily rule-guided thesis, which will typically not be read by anyone except the student’s PhD examiners (assembling your papers into one document doesn’t satisfy the rules). Another – again coming from administrators – is exactly the point alluded to above: forcing students to complete all or most courses in year one. Many advisors do not want this and would gladly have the students do research ASAP, but administrators like their ducks arranged in neat rows… BTW, neither set of rules is universal – plenty of universities outside Canada allow paper assemblies as theses, and have very lax requirements on when courses are taken.