How to Answer Questions About WIL

Yesterday, I looked at some reasons why WIL works.  Today, I would like to talk about how we might answer larger questions about the extent to which WIL works (or, more accurately, what the impacts of individual aspects of WIL experiences look like).

The case for WIL “working” in terms of labour market outcomes largely rests on data for co-op placements, plus an assumption that WIL is “co-op lite” (which is sort of true, sometimes). C.D. Howe Institute’s Rosalie Wyonch put out a nice piece on co-op outcomes a few months ago, which shows a number of positive impacts.  However, because a main data source is the National Graduates Survey, which is notorious for telling you almost nothing about a student’s life prior to graduation, it cannot control for things like how strong the students were academically (which, you’d think, might have some bearing on their post-graduate success).  To actually answer questions about efficacy, you’d need to build some kind of new data set which includes NGS-like longitudinality, a better set of control data, and a better set of WIL descriptors than a simple WIL/not-WIL flag.

(In principle, I suppose you could do some randomized controlled trials, but those are expensive to do well.  I’m talking here mainly about how to do this cheaply.)

More precisely, what you need is a set of linked databases containing the following information for a group of students (both those who experienced WIL and those who did not).  A sketch of what a single linked record might look like follows the list.

  • A measurement of outcomes. This sounds basic, but it’s worth spelling out.  To measure effectiveness, you need some definition of effectiveness, and then you need to measure against that definition.  For the sake of argument, let’s say that the purpose of WIL is to get students into the labour market faster, and at a higher salary, than would otherwise be the case (other outcomes are possible).  That means you need a graduate survey in which you can distinguish between graduates who participated in WIL and those who did not.  One could use Statistics Canada’s ELMLP, but that only yields data on annual income.  It would be better to survey students a year out from graduation and ask how quickly each student got a job, whether the job was with someone they had encountered through WIL, etc.
  • Control variables.  Of course, just looking at outcomes by WIL/non-WIL is not that helpful.  There are structural reasons why some people get hired faster than others that have nothing to do with WIL.  Some of these are demographic (location, gender, race) while others are related to WIL itself (field of study) and academic factors (grades).  A useful study on the longer-term effects of WIL would need to include these kinds of variables to get a more accurate sense of the “pure” effects of WIL.  Institutions already have this data – they just need to link it to other sources of data to make this work.
  • WIL descriptors.  Not all WIL experiences are the same.  Just having outcomes and control variables allows you to answer questions about the effects of WIL overall, but tells you nothing about what works within WIL, or whether there might be some aspects of WIL which have little to no effect.  WIL experiences differ significantly: degree of integration in the curriculum, length (weeks), intensity (hours/week), degree and quality of supervision, location (outside the university/college or not), etc., as well as the perceived fit and quality of the experience, which can be measured at least in part through student surveys.
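
To make the shape of this concrete, here is a minimal sketch (in Python) of what a single linked record might look like.  Every field name is an illustrative assumption, not an existing standard; the real coding scheme would have to be negotiated among participating institutions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkedGraduateRecord:
    """One graduate in a hypothetical linked WIL dataset.

    All field names are illustrative placeholders, not a standard.
    """
    # -- Control variables (from institutional records) --
    student_id: str                 # anonymized linkage key
    institution: str
    field_of_study: str
    gpa: float
    gender: str
    region: str

    # -- WIL descriptors (left as None for non-WIL students) --
    wil_type: Optional[str] = None            # e.g. "co-op", "internship", "capstone"
    wil_weeks: Optional[int] = None           # length of placement
    wil_hours_per_week: Optional[int] = None  # intensity
    wil_sector: Optional[str] = None          # "public", "private", "nonprofit"
    wil_supervised: Optional[bool] = None     # formal workplace supervision?
    wil_fit_score: Optional[float] = None     # perceived fit/quality, from student survey

    # -- Outcomes (from a follow-up survey ~1 year after graduation) --
    weeks_to_first_job: Optional[int] = None
    starting_salary: Optional[float] = None
    hired_via_wil_contact: Optional[bool] = None
```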

Now, if multiple institutions choose to work together and share samples and – crucially – code their data and offer common surveys to students, then what you will have is a database with tens of thousands of graduates, their academic backgrounds, their WIL experiences *and* their first jobs.  It’s a sample with millions of data points.  This won’t help generate answers about the efficacy of specific programs at specific institutions, but it will tell you about the success of all programs with similar characteristics across all institutions.
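
The pooling step itself is not technically exotic.  Here is a sketch of what it might look like, assuming each institution can export three tables keyed on an anonymized student ID (the table layout is my assumption, not an existing format):

```python
import pandas as pd

def build_pooled_dataset(institution_exports):
    """Pool per-institution exports into one analysis file.

    institution_exports: list of (controls_df, wil_df, outcomes_df)
    tuples, each keyed on an anonymized student_id column.
    """
    pooled = []
    for controls, wil, outcomes in institution_exports:
        linked = (
            controls
            # left join: non-WIL students keep NaN in the WIL columns
            .merge(wil, on="student_id", how="left")
            # inner join: keep only students who answered the follow-up survey
            .merge(outcomes, on="student_id", how="inner")
        )
        pooled.append(linked)
    return pd.concat(pooled, ignore_index=True)
```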

So, say an institution has only a few students in two-week full-time internships at a public sector organization, and it wants to know whether this is better or worse than other types of WIL intervention, because it is considering an expansion.  Using this method of data-sharing, you would have immediate access to evidence on a) how the length of an internship is related to final outcomes (perhaps two weeks is too short?), b) the relative value of placements in public sector institutions (versus private sector ones), etc.  Or, to put it another way, you could know how much faster (on average) a WIL graduate who did a three-month placement gets hired than one who only did two weeks, or what the difference in starting salaries is between people who did private sector work placements and those who did in-school capstone projects. Because of the thousands of observations and the complete nature of the database, it would be easy to control for all sorts of factors (field of study, gender, grades) and come up with very strong estimates not just of “the benefits of WIL” but of the specific benefits of different types of WIL.
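
For instance, here is a sketch of the kind of estimate this would permit, assuming the pooled DataFrame from the sketch above and using a plain ordinary-least-squares model with controls (variable names are the illustrative ones from the record sketch, not real fields):

```python
import statsmodels.formula.api as smf

# Does placement length predict speed of hiring, once field of study,
# grades, and gender are controlled for?  WIL participants only.
wil_grads = pooled[pooled["wil_type"].notna()]

model = smf.ols(
    "weeks_to_first_job ~ wil_weeks + C(wil_sector)"
    " + C(field_of_study) + gpa + C(gender)",
    data=wil_grads,
).fit()

# The coefficient on wil_weeks estimates how each extra week of
# placement shifts time-to-first-job, holding the controls fixed.
print(model.summary())
```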

And of course, if you design the surveys right, you could more or less answer the question I asked yesterday about why WIL works.  Just ask students in the follow-up survey how they got their first job: whether it was with the same employer as their WIL placement, whether it was based on a WIL employer’s letter of recommendation, and so on.  Then compare to non-WIL students, who presumably also get jobs based on earlier (non-WIL, often summertime) work experiences.
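
The follow-up items this implies might look something like the sketch below; the wording and question codes are my assumptions, purely to show the kind of instrument involved:

```python
# Illustrative follow-up survey items (wording and codes are assumptions,
# sketched only to show the kind of instrument involved).
FIRST_JOB_ITEMS = {
    "Q1": "How many weeks after graduation did you start your first job?",
    "Q2": "Was that job with an employer you met through a WIL placement? (yes/no)",
    "Q3": "Did a WIL supervisor provide a reference or recommendation? (yes/no)",
    "Q4": "If you did not do WIL: was the job with an employer from earlier"
          " (e.g. summer) work experience? (yes/no)",
}
```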

Now, what would it take to get something like this going?  Basically, you’d need to get a group of institutions – probably no more than 15 or so (though, you know, the more the merrier) – to do the following.

  1. Develop a common coding scheme for WIL experience characteristics at each institution, along with a common post-WIL follow-up survey of students (a sketch of such a scheme appears below),
  2. Help co-ordinate long-term follow-up surveys of WIL students for whom such coded data exists, and
  3. Agree to enable linkage of the outcomes survey data with the WIL descriptor variables and the control variables.

Of these, the toughest is probably the first – not necessarily because institutions couldn’t agree on a common coding scheme but because within institutions there is not even a single person responsible for WIL data collection. Faculties each want to do their own thing, which makes intra-institutional co-operation that much harder.
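
As flagged in step 1, a common coding scheme could be as lightweight as a shared controlled vocabulary that each institution validates its exports against before submission.  A sketch (every category here is a placeholder, not a proposed standard):

```python
import pandas as pd

# Sketch of a shared controlled vocabulary for step 1.  The categories
# are illustrative placeholders; the real scheme would be negotiated.
WIL_CODING_SCHEME = {
    "wil_type": {"co-op", "internship", "practicum", "capstone", "field-placement"},
    "wil_sector": {"public", "private", "nonprofit", "on-campus"},
    "wil_supervised": {True, False},
}

def validate_export(df: pd.DataFrame) -> None:
    """Reject an institutional export that uses out-of-vocabulary codes."""
    for column, allowed in WIL_CODING_SCHEME.items():
        bad = set(df[column].dropna()) - allowed
        if bad:
            raise ValueError(f"{column}: non-standard codes {bad}")
```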

How much would all this cost?  It depends on how much money you want to throw into the survey to boost response rates, and how much analysis you want to pay for, but a bare bones version of this could be done for low six figures (I could no doubt get this into seven figures if Statscan were doing the surveys, but that’s probably unnecessary).  Have each participating institution throw in $5,000, get a couple of different sources to match it, and you’d be there. Given how much the feds and provinces rave about WIL, you’d think they wouldn’t mind throwing a little bit of money at working out how to evaluate effectiveness, right?
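
For what it’s worth, the back-of-envelope arithmetic behind that claim, using the post’s own illustrative figures:

```python
# Back-of-envelope budget for the bare-bones version.  All figures are
# the post's illustrative numbers, not a costed proposal.
institutions = 15          # "probably no more than 15 or so"
contribution = 5_000       # per-institution buy-in
matching_sources = 2       # "a couple of different sources to match it"

institutional_total = institutions * contribution        # $75,000
matched_total = institutional_total * matching_sources   # $150,000
budget = institutional_total + matched_total             # $225,000: low six figures
print(f"${institutional_total:,} + ${matched_total:,} matched = ${budget:,}")
```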

(n.b. this exact same approach would work for Study Abroad as well.  Just replace the WIL experience descriptors with study abroad experience descriptors: length of study abroad, nature of learning experience, degree of immersion in a foreign language, plus a post-study experience survey similar to the one described for WIL.  Again, given the millions the feds are spending on this, a couple of hundred thousand to measure overall effectiveness shouldn’t be a big ask.) 

Anyways, this kind of project is very do-able.  People just have to want to do it.  Anyone?

2 responses to “How to Answer Questions About WIL”

  1. There is a side of the discussion of work integrated learning that is being overlooked. Since the early 1970s, beginning with the research of Ivar Berg, there has been evidence of how misinformed employers are about the skills and academic preparation that their jobs require. The discussion so far seems to presume that colleges and universities, and government “skills” initiatives seeking to better match preparation with the demands of the workforce, can rely on employers for information about job requirements. This should remind us of Donald Rumsfeld’s infamous explanation of “unknown unknowns,” except that in this case it is actually applicable: neither party has complete or accurate information, and neither party knows it. This is not an argument against work integrated learning. It is, however, a word of caution, especially about “control variables, measurements, and descriptors,” which should not be over-simplified or presumed to be accurately informed.

    The discussion has been critical of the ESDC COPS projection. The criticism may be justified as far as the projection goes. COPS, however, has another dimension: the categorization of “skill levels,” ranging from on-the-job learning to university and post-graduate education. The stream of research that began in the 1970s, when applied to the current discussion, indicates that the importance of work-related learning relative to conventional measures of attainment declines as “skill levels” rise, and vice versa. This, again, is a word of caution about overlooking differences among skill levels and the efficacy of work integrated learning.

  2. “The case for WIL “working” in terms of labour market outcomes largely rests on data for co-op placements”

    In a nutshell, I think this is the problem. Co-op programs are competitive. Graduates of co-op programs *should* demonstrate better indicators, however you want to measure them, than students in non-co-op programs, in part because they are already the ‘best’ students. Taking these indicators as pure evidence of co-op success without factoring that in seems disingenuous. Your comments yesterday resonated: a filtering/matching system that reduces time to employment might be the main co-op result. Extending the logic from ‘co-op = success’ to ‘roll it out for everyone’ seems to ignore the fact (which you raised) that this does not change the number of jobs available. My guess is that the governments, businesses, etc. spending millions of dollars on this probably already know this, but the matching system is worth it if you can shorten hiring processes and increase retention. In WIL programs where students have to find their own placement, I wonder how much this perpetuates reliance on existing connections (over skills). The cynic in me says this funding serves to keep the connected connected, and let’s just not look at this too closely, lest that unintended outcome reveal itself.
