The empirical consensus on the question of barriers to access in Canadian education is pretty clear: among the few secondary school graduates who don’t go on to post-secondary education, affordability is very much a secondary issue (not non-existent, but secondary). The primary issue is that most of these young people simply aren’t motivated by the idea of spending more years in a classroom. It’s a vicious circle: these students don’t identify with education, so they don’t work at it, so they receive poor grades and become even more demotivated.
The problem is that it is easier to identify problems than solutions. Big national datasets like the Youth in Transition Survey (YITS) can help identify relationships between input and output factors, but they are of little use for examining the effects of interventions because they simply don’t capture the necessary information. What is needed is more small-scale experimentation with various types of interventions, along with rigorously designed research to help understand their impacts.
This, by the way, is the tragedy of Pathways to Education. It ought to work, because it ticks nearly all the boxes that the literature suggests should improve access. But for some reason there has yet to be any serious attempt to evaluate its outcomes (my bet is that Pathways board members prefer anecdotes to data for fundraising purposes – and given their fundraising success to date, it’s hard to blame them). That’s a shame, because if they are on to something it would be useful to know what it is so that it can be replicated.
Now, one shouldn’t pretend that these evaluations are easy. In the United States, a top-notch research company’s multi-year, multi-million-dollar evaluation of the Upward Bound program is currently the subject of intense controversy because of a dispute regarding how data from different intervention sites were weighted. Do it one way (as the evaluators did) and there’s no significant result; do it another, and a significant effect appears.
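To see how a weighting choice alone can flip a result, consider a toy sketch (the numbers below are entirely hypothetical, not the actual Upward Bound data): one large site showing no effect alongside several small sites showing gains. Weighting sites by statistical precision lets the big null site dominate; weighting them equally lets the small positive sites carry the result past the conventional significance threshold.

```python
import math

# Hypothetical site-level results (illustrative only, not real program data):
# one large site with a null effect and three small sites with apparent gains.
effects = [0.00, 0.08, 0.08, 0.08]   # per-site effect estimates
ses     = [0.01, 0.04, 0.04, 0.04]   # per-site standard errors

def weighted_z(effects, ses, weights):
    """z-statistic of the weighted-average effect across sites."""
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / total
    se = math.sqrt(sum((w * s) ** 2 for w, s in zip(weights, ses))) / total
    return est / se

# Precision (inverse-variance) weighting: the large null site dominates,
# and the pooled effect falls short of the 1.96 significance cutoff.
z_precision = weighted_z(effects, ses, [1 / s ** 2 for s in ses])

# Equal site weighting: the very same data clear the cutoff comfortably.
z_equal = weighted_z(effects, ses, [1.0] * len(effects))
```

Neither choice is obviously wrong, which is exactly why such disputes are hard to adjudicate after the fact.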
The Upward Bound controversy is a shame because of its likely chilling effect on research in this area. Governments might well question the point of funding research if the results are so inconclusive. But the nature of social interventions is such that there are hundreds of factors that can affect outcomes and hence research is always going to be somewhat tentative.
So what’s the way forward? Research can’t be abandoned, but it probably needs to go small-scale. Aggregating many small experimental results through meta-analysis will in the end probably yield far more insight than mega-experiments or further large-scale surveys. It might take a little longer, but it’s both more financially feasible and more likely to deliver durable results.
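The pooling step behind that proposal can be sketched in a few lines. The standard fixed-effect approach weights each study by the inverse of its variance; studies that are individually too noisy to be conclusive can combine into a precise pooled estimate. A minimal sketch, with made-up numbers for three hypothetical small trials:

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect meta-analysis.

    estimates  -- per-study effect estimates (e.g. gain in enrolment rate)
    std_errors -- matching standard errors
    Returns (pooled_estimate, pooled_standard_error).
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical small trials, each too noisy to be decisive on its own
# (individual z-statistics of roughly 1.3, 1.75, and 2.0):
effects = [0.04, 0.07, 0.05]     # e.g. percentage-point enrolment gains
ses     = [0.03, 0.04, 0.025]    # standard errors

est, se = pool_fixed_effect(effects, ses)
z = est / se  # the pooled estimate is tighter than any single study's
```

The caveat, of course, is that pooling assumes the studies are measuring broadly the same intervention effect; a random-effects model would be the more cautious choice when sites differ substantially.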