The Heinous Difficulty in Understanding What Works

The empirical consensus on the question of barriers to access in Canadian education is pretty clear: among those few secondary school graduates who don’t go on to post-secondary education, affordability is very much a secondary issue (not non-existent, but secondary). The primary issue is that most of these young people don’t feel very motivated by the idea of spending more years in a classroom. It’s a vicious circle: these students don’t identify with education, so they don’t work at it, so they receive poor grades and become even more demotivated.

The problem is that it is easier to identify problems than solutions. Big national datasets like the Youth in Transition Survey (YITS) can help identify relationships between input and output factors, but they are useless for examining the effects of interventions because they simply don’t capture the necessary information. What is needed is more small-scale experimentation with various types of interventions, along with rigorously designed research to help understand their impacts.

This, by the way, is the tragedy of Pathways to Education. It ought to work because it ticks nearly all the boxes that the literature suggests should improve access. But for some reason there has yet to be any serious attempt to evaluate its outcomes (my bet is that Pathways board members prefer anecdotes to data for fundraising purposes, and given their fundraising success to date, it’s hard to blame them). That’s a shame, because if they are on to something it would be useful to know what it is so that it can be replicated.

Now, one shouldn’t pretend that these evaluations are easy. In the United States, a top-notch research company’s multi-year, multi-million-dollar evaluation of the Upward Bound program is currently the subject of intense controversy because of a dispute over how data from different intervention sites were weighted. Do it one way (as the evaluators did) and there is no significant result; do it another and a significant effect appears.
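
To see how a weighting decision alone can flip a finding, here is a minimal sketch in Python. The site-level numbers are invented for illustration (they are not the actual Upward Bound data): each site reports an estimated effect and a standard error, and the pooled result is computed two ways, once with equal site weights and once with enrolment-proportional weights.

```python
import math

# Invented site-level results (NOT the actual Upward Bound data):
# (site label, estimated effect in percentage points, standard error, enrolment)
sites = [
    ("A", 6.0, 3.0,  200),
    ("B", 5.0, 2.8,  250),
    ("C", 4.5, 2.5,  300),
    ("D", 0.5, 1.2, 2500),  # one large site with essentially no effect
]

def pooled(raw_weights):
    """Return the weighted pooled effect, its standard error, and z."""
    total = sum(raw_weights)
    w = [x / total for x in raw_weights]  # normalize so weights sum to 1
    est = sum(wi * effect for wi, (_, effect, _, _) in zip(w, sites))
    se = math.sqrt(sum((wi * s) ** 2 for wi, (_, _, s, _) in zip(w, sites)))
    return est, se, est / se

schemes = {
    "equal site weights": [1.0] * len(sites),
    "enrolment weights":  [n for _, _, _, n in sites],
}
for label, raw in schemes.items():
    est, se, z = pooled(raw)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"{label}: effect={est:.2f} pts, se={se:.2f}, z={z:.2f} -> {verdict}")
```

With these invented numbers, equal weighting yields a significant pooled effect while enrolment weighting does not. Neither scheme is obviously wrong: equal weights answer “does the typical site work?” while enrolment weights answer “does the typical student benefit?” Which question is the right one is exactly what the dispute turns on.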

The Upward Bound controversy is a shame because of its likely chilling effect on research in this area. Governments might well question the point of funding research if the results are so inconclusive. But the nature of social interventions is such that hundreds of factors can affect outcomes, and hence research of this kind is always going to be somewhat tentative.

So what’s the way forward? Research can’t be abandoned, but it probably needs to go small-scale. Aggregating lots of small experimental results through meta-analysis will, in the end, probably yield far better evidence than mega-experiments or more large-scale surveys. It might take a little longer, but it’s both more financially feasible and more likely to deliver durable results.
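
As a rough sketch of why this aggregation works, here is a standard fixed-effect (inverse-variance) meta-analysis in Python, again with invented numbers: ten small studies, none significant on its own, pool into a single precise and significant estimate.

```python
import math

# Ten invented small studies: (effect estimate, standard error).
# Each has z < 1.96, so none is significant on its own.
studies = [(3.1, 2.6), (2.4, 2.8), (3.8, 2.5), (1.9, 2.7), (3.3, 2.9),
           (2.8, 2.6), (4.1, 2.8), (2.2, 2.5), (3.6, 2.7), (2.9, 2.6)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/se^2.
weights = [1 / se ** 2 for _, se in studies]
pooled_est = sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled_est / pooled_se

for e, se in studies:
    print(f"study: effect={e:.1f}, z={e / se:.2f} (not significant)")
print(f"pooled: effect={pooled_est:.2f}, se={pooled_se:.2f}, z={z:.2f}")
```

One caveat: fixed-effect pooling assumes all the studies estimate the same underlying effect. With heterogeneous interventions a random-effects model would be more appropriate, but the basic logic, that many noisy estimates compound into one precise one, is the same.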


2 responses to “The Heinous Difficulty in Understanding What Works”

  1. Completely agree re. the heinous difficulty. We had some folks from the Social Research and Demonstration Corporation come in to assess our programs’ “data-readiness.” This Millennium-funded initiative ended up concluding that none of the assessed programs (at 5 Ontario and Manitoba institutions) had the data or the means to evaluate themselves properly.

    Curious, though — what do you mean when you say Pathways has made no “serious attempt to evaluate its outcomes”? They have published results right on its web site (http://www.pathwaystoeducation.ca/sites/default/files/pdf/Results%202009-10%20Summary.pdf)

    Maybe you mean that their evaluations are not sufficiently rigorous (as in, no randomized controlled trials)? I’m not affiliated with Pathways; I just had the impression that they were running top-notch, well-evaluated programs.

  2. Hi Rachelle. Thanks for reading our stuff.

    You’re right that Pathways has published reports which purport to look at their effectiveness. But they are nearly always comparing apples to oranges; specifically, “post-intervention” only includes students who enrol in their program, whereas their “pre-intervention” comparator includes *all* students in an area. Even first-year polisci students should be able to spot that selection bias is an enormous issue here, especially where the proportion of students who did not enrol exceeds the size of the alleged “effect” (a back-of-envelope illustration follows below the comments).

    Random assignment would be nice, but I’d settle for a competent, apples-to-apples pre/post test.
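
Here is that back-of-envelope illustration, in Python. The rates are entirely hypothetical (they are not Pathways’ actual figures); they are chosen only to show how an enrollees-only “post” number set against an all-students “pre” baseline can manufacture an effect.

```python
# Entirely hypothetical rates -- NOT Pathways' actual figures.
pre_dropout_all = 0.56        # "pre": dropout rate among ALL students in the area
enrolled_share = 0.60         # share of area students who enrol in the program
post_dropout_enrolled = 0.20  # "post": reported rate among enrollees only
post_dropout_nonenrolled = 0.80  # assumed rate among (higher-risk) non-enrollees

# The apples-to-oranges comparison: all-students "pre" vs enrollees-only "post".
claimed = (pre_dropout_all - post_dropout_enrolled) * 100
print(f"claimed effect: {claimed:.0f} percentage points")  # 36

# The apples-to-apples comparison: all students in the area, pre vs post.
post_dropout_all = (enrolled_share * post_dropout_enrolled
                    + (1 - enrolled_share) * post_dropout_nonenrolled)
area_wide = (pre_dropout_all - post_dropout_all) * 100
print(f"area-wide effect: {area_wide:.0f} percentage points")  # 12
```

Under these assumed rates, the non-enrolled share (40 per cent) exceeds even the alleged 36-point effect, which is exactly the situation flagged in the reply above, and the honest area-wide number is a third of the claim: the enrollees-only figure mostly measures who signed up, not what the program did.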
