Businesses have a pretty good way of knowing when to offer more or less of a good. It’s encapsulated in the equation MC = MR, and shown in the graphic below.
Briefly, in the production of any good, unit costs fall at first as economies of scale kick in. Eventually, however, if production is expanded far enough you get diseconomies of scale, and the marginal cost begins to rise. Where the marginal cost of producing one more unit of a good rises above the marginal revenue one receives from selling it (Q1 in the above diagram), that's the point where you start losing money, and hence where you stop producing the good.
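The stopping rule is simple enough to sketch in a few lines of code. The U-shaped marginal cost curve below is invented purely for illustration (as is the flat per-unit revenue): costs fall with early economies of scale, then rise with diseconomies, and you keep producing up to the last unit that still pays for itself.

```python
def marginal_cost(q):
    # Hypothetical U-shaped curve: cheapest unit to produce is the 20th.
    return (q - 20) ** 2 / 40 + 2

def optimal_quantity(mr, mc, q_max=200):
    # Keep the largest q whose marginal cost does not exceed marginal
    # revenue; on a U-shaped curve this is the rising-branch crossing,
    # i.e. the Q1 of the diagram.
    best = 0
    for q in range(1, q_max + 1):
        if mc(q) <= mr:
            best = q
    return best

print(optimal_quantity(6.0, marginal_cost))  # → 32
```

Scanning for the largest qualifying quantity (rather than stopping at the first crossing) matters because a U-shaped curve crosses a flat revenue line twice; only the crossing where costs are rising is the profit-maximizing stopping point.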
(This gets more complicated for products like software or apps where the marginal cost of production is pretty close to zero, but we’ll leave that aside for the moment.)
Anyway, when it comes to delivering educational programs, you'd ideally like to think you're not doing so at a loss (otherwise, you eventually have a bit of a problem paying employees). You want each program, over time, to more or less come close to paying for itself. It's not the end of the world if it doesn't; cross-subsidization of programs is, after all, a core function of a university. But it would be nice if it did. In other words, you really want each program to have a production function where the condition MC = MR is fulfilled.
But here's the problem. Marginal revenue is relatively easy to understand: it's pretty close to average revenue, after all, though it gets a bit more complicated in jurisdictions where government grants are not provided on a formula basis, and there's some trickiness in calculating domestic versus international fees, etc. But the number of universities that genuinely understand marginal cost at a program level is pretty small.
Marginal costs in universities are a bit lumpy. Let's say you have a class of twenty-five students and a professor already paid to teach it. The marginal cost of the twenty-sixth student is essentially zero – so grab that student! Free money! Maybe the twenty-seventh student, too. But after a while, costs do start to build. Maybe at the thirtieth student there's a collective bargaining provision that says the professor gets a TA, or assistance in marking. Whoops! Big spike in marginal costs. Then when you get to forty, the class overfills and you need to split the course into two, get a new classroom, and a new instructor, too. The marginal cost of that forty-first student is astronomical. But the forty-second is once again almost costless. And so on, and so on.
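That lumpiness amounts to a step function, and can be sketched in a few lines of code. The dollar figures and trigger points below are entirely hypothetical, chosen only to mirror the example above:

```python
def marginal_cost(n):
    """Cost of admitting the n-th student to a course that already
    has n - 1 enrolled (illustrative figures only)."""
    if n == 30:
        return 8_000    # hypothetical collective-agreement TA trigger
    if n == 41:
        return 90_000   # class overfills: new section, instructor, room
    return 0            # otherwise the extra seat is essentially free

for n in (26, 27, 30, 31, 41, 42):
    print(f"student #{n}: ${marginal_cost(n):,}")
```

The point of the sketch is that the cost of "one more student" is almost never the average cost per student: it's zero most of the time, and enormous at a few discrete thresholds.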
Now obviously, no one should measure marginal costs quite this way; in practice, it would make more sense to work out averages across a large number of classes, and work to a rule of thumb at the level of a department or a faculty. The problem is very few universities even do that (my impression is that some colleges have a somewhat better record here, but the situation varies widely). Partly, it's because of a legitimate difficulty in understanding direct and indirect costs: how should things like light, heat, and the costs of student services, admissions, etc., be apportioned? And then there is the incredible annoyance of working out how to deal with things like cross-listed courses. But mostly, I would argue, it's because no one wants to know these numbers. No one wants to make decisions based on the truth. Easier to make decisions in the dark, and when something goes wrong, blame it on the Dean (or the Provost, or whoever).
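The rule-of-thumb approach described above might look something like the following sketch: average delivery cost over seats across many sections within a department, with indirect costs (light, heat, student services) loaded on as a flat rate. Every number here, including the overhead rate, is invented for illustration:

```python
from collections import defaultdict

# Hypothetical section data: (department, enrolment, direct delivery cost)
sections = [
    ("History", 40, 18_000),
    ("History", 25, 15_000),
    ("Physics", 60, 30_000),
    ("Physics", 35, 22_000),
]

OVERHEAD_RATE = 0.30  # assumed flat loading for indirect costs

totals = defaultdict(lambda: [0, 0.0])  # dept -> [seats, loaded cost]
for dept, seats, cost in sections:
    totals[dept][0] += seats
    totals[dept][1] += cost * (1 + OVERHEAD_RATE)

for dept, (seats, cost) in sorted(totals.items()):
    print(f"{dept}: ~${cost / seats:,.0f} per seat")
```

A flat overhead rate is of course exactly the kind of crude apportionment the paragraph above complains about, but even a crude per-seat figure at the department level is a big improvement over making decisions in the dark.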
Institutions that do not understand their own production functions are unlikely to be making optimal decisions about either admissions or hiring. In an age of slow revenue growth, more institutions need to get a grip on these numbers, and use them in their planning.