Higher Education Strategy Associates

Category Archives: grade inflation

November 25

Grades, Satisfaction and Miserable Toronto Students

It’s been noted many times (here, for instance) that professors who give easy As tend to do better on course evaluations than those who don’t. But does this work at the institutional level as well?

It’s hard to tell directly because all institutions essentially grade on the same curve. But we can get at it indirectly by looking at the gap between high school and university grades, which does vary significantly – at more selective institutions, students see a drop; at less selective ones their grades tend to get better.

For this analysis we use self-reported data on grades. Now, we know that skeptics say that this is bad methodology because asking students to self-report on grades is like asking men on a dating site to report on their height or income – all three tend to rise in the telling. But what we’re looking at here is change in reported grades. As long as any exaggeration is consistent across time, the exaggerations should cancel each other out. For math-heads, this can be expressed as:
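If we write each reported grade as the true grade plus a personal exaggeration term $b$ (assumed, purely for illustration, to be the same in high school and in university), the change in reported grades is

$$\Delta_{\text{reported}} = (G_{\text{univ}} + b) - (G_{\text{HS}} + b) = G_{\text{univ}} - G_{\text{HS}} = \Delta_{\text{true}},$$

so the exaggeration term drops out of the difference.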

Onwards, then, to our sample from the Globe and Mail. We start by grouping the changes in reported grades between high school and university into bands and looking at average satisfaction in each band. It turns out that satisfaction barely budges unless students see a very large drop in marks (13% or worse).
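For those who want to see the mechanics, here is a minimal sketch of that banding exercise in pandas; the column names (hs_grade, univ_grade, satisfaction) and the cut points are illustrative assumptions, not the actual fields or bands of the Globe and Mail survey.

import pandas as pd

def satisfaction_by_grade_change(df: pd.DataFrame) -> pd.Series:
    """Mean satisfaction within bands of (university - high school) grade change."""
    out = df.copy()
    out["grade_change"] = out["univ_grade"] - out["hs_grade"]

    # Band the changes; the leftmost band is a drop of 13 or more,
    # the cut-off used in Figure 2.
    bands = pd.cut(
        out["grade_change"],
        bins=[-100, -13, -7, 0, 7, 100],
        labels=["-13 or worse", "-12 to -7", "-6 to 0", "+1 to +7", "+8 or better"],
    )
    return out.groupby(bands, observed=True)["satisfaction"].mean()

Figure 1 below is, in essence, that kind of series plotted against the bands.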

Figure 1: Satisfaction by Change in Grades from High School to University

Loyal readers will know where this is going. Guess which city has an abnormally high proportion of students whose grades drop precipitously once they get to university?

Figure 2: Percentage of Students with a Drop in Grades of 13% or Worse

How big a difference does this make to satisfaction? Well, check out the gap in satisfaction between students with large grade drops and everyone else at Toronto institutions; at Mississauga, the gap between students whose grades have fallen a long way and the rest is greater than one standard deviation.

Figure 3: Average Satisfaction, Students with Large Mark Drop-Offs vs. Others, Toronto

Oddly, when we look at the five institutions elsewhere in Canada with the most students experiencing large drops in marks, we don’t see anything like the same drop in satisfaction, to wit:

Figure 4: Average Satisfaction, Students with Large Mark Drop-Offs vs. Others, Not Toronto

It’s not quite a story about big fish from little ponds getting shocked by the jump to a larger pool. It’s that big fish from Toronto ponds are likelier to feel that shock, and they feel a whole lot worse about the jump than fish elsewhere do. A simple case of Torontonians’ elevated sense of entitlement? Maybe. Or maybe Toronto is just a more ruthless environment, with higher social penalties for poor performance.

October 14

Teaching, Testing, Grading

In the last couple of months, some very interesting memes have started to take shape around the role of the professoriate.

Grade inflation – or grade compression as some would have it – is of course quite real. Theories for it vary; there’s the “leave-us-alone-so-we-can-do-research theory,” and also the “professors-are-spineless-in-the-face-of-demanding-students theory.” Regardless of the cause, the agent is clear: professors. They simply haven’t held a consistent standard over time, and that’s a problem.

About two months ago, the Chronicle put together a very interesting article on Western Governors University and how they’ve managed to avoid grade inflation. Simply put: they don’t let teachers grade. Rather, they leave that job to cohorts of assessors, all of whom possess at least a Master’s degree in the subject they are grading, and who are quite separate from the instructors.

This kind of makes sense: teachers are subject matter experts, but they aren’t expert assessors, so why not bring in people who are? Unlike professors, who have to put up with course evaluations, independent assessors have no incentive to skew grades.

One could take this further. Not only are professors not experts at grading, they aren’t necessarily experts at devising tests, either. The solution? Step forward Arnold Kling of George Mason University, who recommends improving testing by having outside professionals take a professor’s lecture notes and course readings and fashion a test on the basis of them.

Are there good reasons to try these ideas? On grading, the gains might be in quality rather than cost. Informally, TAs already do a lot of the grading in large classrooms anyway, so it’s not as if we aren’t quasi-outsourcing this stuff already. But TAs have no more assessment expertise than professors do, so professionalizing the whole thing might be beneficial. On testing, you probably wouldn’t get a cost advantage without some economies of scale (i.e., you’d need multiple participating institutions to make it worthwhile), though again there may be quality advantages.

Of course, to get any cost savings at all on either of these, you’d need to get professors to explicitly trade their testing and marking responsibilities for greater class loads: have them teach three courses a term instead of two, but do less in each of them. It’s hard to say if anyone would bite on that one, but given the coming funding crunches, it might be worth somebody at least trial-ballooning these ideas during their next collective bargaining round.

August 17

Data Point of the Week: These are Fantastic Graphs

…from U.S. researchers Stuart Rojstaczer and Christopher Healy on the subject of grade inflation. So fantastic, in fact, that I think I’ll mostly let them speak for themselves.


And this, of course, is at a time when institutions are becoming less selective, not more. Interestingly, though, it’s U.S. private universities – generally speaking more selective than publics – that are leading the grade inflation charge.


Since there’s no data to suggest that students are working harder than they used to, this is pretty much a straight-up change in grading practices. But what’s causing the change?

Canadian commentators James Côté and Anton Allahar would probably have you believe that grade inflation (or grade compression, as they more accurately dub it) is all due to the way that larger classrooms, disengaged students and manipulable teacher-evaluation schemes have given professors incentives to reduce standards – “they pretend to learn, we pretend to evaluate,” so to speak.

But this data – which shows that grade inflation/compression was more severe at the smaller and more selective institutions – suggests something different may be going on. In fact, Rojstaczer and Healy posit that the causality runs the other way – that diminished faculty expectations might be leading to student disengagement.

Food for thought.