I’ve recently been doing a little bit of work on student success, and I am struck by the fact that there are two very different approaches to it, depending on which side of the Atlantic you are sitting on. I’m not sure one is actually better than the other, but they speak to some very different conceptions of where student success happens within an institution.
(To be clear, when I say “student success” I mostly mean “degree/program completion”. I recognize that the term has evolving meanings. Some extend the notion beyond completion to career success – or at least success in launching one’s career; others suggest completion is overrated as a metric, since some students attend only to obtain specific skills and never intend to complete – and if these students drop out in order to take a job, that’s hardly a failure. I don’t mean to challenge either of these points; I’m making a point about the more traditional definition of the term.)
What I would call the dominant North American way of thinking about student success is that it is an institutional and, to a lesser extent, a faculty matter rather than something dealt with at the level of the department or the program. We throw resources from central administration (usually, from Institutional Research) at identifying “at-risk” students. We use central resources to bolster students’ mental health, and hire counsellors, tutors and academic support staff centrally as well. Academic advisors tend to be employed by faculties rather than the central admin, but the general point still stands – these are all things that are done on top of, and more or less without reference to, the actual academic curriculum.
The poster child for this kind of approach is Georgia State University (see articles here and here). It’s an urban university with very significant minority enrolments, one that at the turn of the century had a completion rate of under 30%. It has invested heavily in data analytics and – more importantly – in academic tutors and advisors (I’ve heard but can’t verify that their ratio of students to advisors is 300:1 or less, which is pretty much unimaginable at a Canadian university). Basically, they throw bodies at the problem. Horrible, dreaded, non-academic staff bloat bodies. And it works: their retention rates are now up over 50 percent – the improvement among minority students has been a whopping 32 percentage points.
But what they don’t seem to do is alter the curriculum much. It’s a very North American thing, this. The institution is fine, it’s students that have to make adjustments, and we have an army of counsellors to help them do so.
Now, take a gander at a fascinating little report from the UK called What Works: Student retention and success change programme phase 2. In this project, a few dozen individual retention projects were put together across 13 participating institutions, piloted and evaluated. The projects differed from place to place, but they were built on a common set of principles, the first and most important being as follows: “interventions and approaches to improve student retention and success should, as far as possible, be embedded into mainstream academic provision”.
So what got piloted were mostly projects that involved some adjustment to curriculum, either in terms of the on-boarding process (e.g. “Building engagement and belonging through pre-entry webinars, student profiling and interactive induction”) or the manner in which assessments are done (e.g., “Inclusive assessment approaches: giving students control in assignment unpacking”) or simply re-doing the curriculum as a whole (e.g. “Active learning elements in a common first-year engineering curriculum”).
That is to say, in this UK program, student success was not treated as an institutional priority dealt with by non-academic staff. It was treated as a departmental-level priority, dealt with by academic staff.
I would say at most North American universities this approach is literally unimaginable. Academic staff are not “front-line workers” who deal with issues like academic preparedness; in fact, professors who do try to work with a student and refer them to central academic or counselling services will often discover they cannot follow up on an individual case with those services, because the latter see it as a matter of “client confidentiality”. And outside of professional faculties, our profs teach individual courses of their own choosing rather than jointly manage and deliver a set curriculum which can be tweaked. Making a curriculum more student-friendly assumes there is a curriculum to alter, rather than simply a basket of courses.
Part of this is a function of how university is conceptualized. In North America, we tend to think that students choose an institution first and a program of study later (based on HESA’s research on student decisions, I think this is decreasingly the case, but that’s another story). So, when we read all the Vince Tinto-related research (Tinto being the guru of student retention studies, most of which is warmed-over Durkheim) about “belonging”, “fit” and so on, we assume that what students are dropping out of is the institution not the program, and assign responsibilities accordingly. But in Europe, where 3-year degrees are the norm and they don’t mess around with things like breadth requirements, the assumption is you’re primarily joining a program of study, not an institution. And so when Europeans read Tinto, they assume the relevant unit is the department or program, not the institution.
But also I think the Europeans – those interested in widening access and participation, anyway – are much less likely to think of the problem as being one of how to get students to adapt to university and its structures. Quite often, they reverse the problem and say “how can the institution adapt itself to its students”?
It’s worth pondering, maybe, whether we shouldn’t ask that question more often, ourselves. I think particularly when it comes to Indigenous students, we might be better served with a more European approach.
I would tend to agree with your conclusion here. After working at my university for a number of years, I decided to go back and do another undergrad, this time a BEd. In classes discussing inclusive education and alternatives to traditional assessment, there is much research describing the benefits of these, but we often discussed with our professors how universities teach these concepts (at least in Education faculties) but don’t practice what they teach. If these practices are great for K-12, aren’t they also great for university students? So yes, I think there is something to be said for questioning curricular approaches to increasing student success.
It’s not always easy to ascertain to what extent universities in Canada fall into one of the two approaches to student success you’re describing in your article, Alex. Take peer tutoring, for instance. If you browse institutional websites across the country, you may well come to believe that tutoring services are often administered centrally, but that’s not necessarily the case. Based on the research that I’ve done on the subject (and on an EAB study conducted on behalf of the Université de Montréal), most tutoring services are located in individual faculties and schools. They pick the tutors, they select the courses and they evaluate the impact of the services they offer (when they do, which is not always the case). When central authorities are involved, it’s mainly to establish minimal ethical standards and/or to provide standardized training to would-be tutors.
In my own institution (the UdeM), we work very hard to support the work of faculties. For instance, we recently developed a dashboard that relies on student administrative data to support student success, but it is meant to be used by individual academic units. We only provide the training and the technical support required to keep them going. However, I will readily concede that not all faculties have endorsed the notion that they should make supporting student success one of their core missions.
Thanks for bringing What Works 2 to our attention. There’s a lot in there that I plan to borrow (or just plainly steal) in planning new initiatives to support student success at the UdeM.
I definitely agree with your conclusion. I worked at two institutions that were very different in their approach, and since I was responsible for retention, I worked collaboratively to change policies and programs that did not fit the needs of the student population. Now, at a public institution, there is a lot of talk but no strategic plan for the institution to meet the needs of students at every turn. We leave out the most useful collaborator, the student, when devising plans. We also leave out alums – people who made it through the system and who can inform our strategy.
But the problem is so much deeper. It has to be fixed with deliberate analysis of everything, beginning with the instructors who teach first-year classes and classes with high DFW rates. Do they know what they are doing? Do they take the high-touch approach? Are they available to discuss a student’s impediment to success and direct them to the resource needed? Is that resource a competent one – are we vetting peer tutors properly? Do we have enough of them? Is the space big enough to accommodate all visitors, and are the hours appropriate? And, to your point, what about the advisors? How is the advising process set up, and does it work? Are the right people with the right backgrounds advising? If students have faculty advisors, are those advisors engaged and available, and do they know their impact? And finally – this is an unpopular topic – is everyone paid appropriately? Are they recognized for their innovation and creativity? Are they heard?
The zeitgeist of student success in the US is far different than described by the author. Let’s even talk about Georgia State: the claim “these are all things that are done on top of, and more or less without reference to, the actual academic curriculum” bears little resemblance to reality. Of five highlighted programs at http://completecollege.org/wp-content/uploads/2014/07/TRenick-Breakout-Session-PowerPoint.pdf, four are revisions to the academic curriculum. “Summer Success Academy” (commonly called ‘Summer Bridge’) is administrative with an academic component, but agreed: largely outside the programs of study. Supplemental Instruction is also outside the program of study. The other three, however, are not.
Freshman Learning Communities involve faculty interaction with student groups outside the lecture classroom and are departmentally controlled. College algebra went through a massive course redesign: the very sort of allegedly non-American change discussed. And the nursing major also went through a huge redesign (including moving ACTUAL nursing experience far earlier in the curriculum). GSU gets a lot of attention around their advising because advising resources are so expensive, and being able to use them efficiently is a ‘holy grail’ of sorts. But it’s one small part of what they’ve done.
I’d argue this focus is roughly the same at other campuses thinking hard about student success: 40% administrative, 60% classroom. For instance, Purdue Impact (http://www.purdue.edu/impact/) is a program with huge emphasis on course redesign, led by faculty, expressly to help further student success.
Alex, I think your assumption about where and how “student success” is happening is quite inaccurate. You have conflated all of North America into one experience, which is not the case – not even close! This is particularly true when you consider the enormous difference in sector sizes between Canada and the US. The US has upwards of 10 times the number of colleges and universities as Canada. How can you even imagine trying to treat these two things as the same? Clearly, we can’t forget regional/geographic differences, institutional type differences (research university, college, medical school, etc.), enrollment demographics, and so on.
Then you need to factor in historical challenges at certain institutions and in certain regions. Are you trying to say that a southern HBCU and an Atlantic Canadian community college manage student success and academic curricula the same way?
Next, you haven’t considered the individual agendas of executive administrations (presidents, etc.), strategic plans, research plans/agendas, marketing strategies, funding/budgetary challenges, and so on.
Therefore, trying to conclude or assume that what you believe is true for all institutions is potentially dangerous to higher-ed strategy, because no strategic analysis appears to have been applied here. Indeed, at my own institution, what you assume to be true of European schools is the normative reality. Moreover, I’ve done a fair bit of research on GSU’s work, and your characterization appears to be flatly contradicted by what I’ve learned.