A good example of the importance of incentives

Well, make this a bad example if you must. It is an example of bad incentives that produce bad outcomes. I am referring to the article by Bebchuk, Cohen, and Spamann on the Project Syndicate website, in which the authors discuss the compensation of the executives of Lehman Brothers.

The article discusses a report by a court-appointed examiner on the finances of Lehman Brothers prior to the company's bankruptcy. The report shows that while the shareholders of Lehman Brothers lost plenty of money in the bankruptcy, the executives managed to stay in the black, because of the structure of their performance-based pay contracts.

The authors point out that the ability of executives to make lots of money by pursuing highly risky strategies, combined with their ability to avoid losses when those strategies lead to financial disaster, is very bad news for the shareholders. This is the point I want to emphasize here. The shareholders, the owners of the company, gave bad incentives to their managers, and once the financial panic came about, the owners lost big.
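To see the asymmetry concretely, here is a toy calculation in Python. The numbers are entirely hypothetical, chosen only to exhibit the shape of the problem: a gamble that destroys shareholder value in expectation can still maximize the executive's expected pay when the contract shares the upside but floors the downside at zero.

```python
# Toy illustration with hypothetical numbers: an executive whose pay
# shares in gains but is floored at zero prefers a gamble that the
# shareholders, who bear the full downside, would reject.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Change in firm value under each strategy (hypothetical):
safe_firm  = [(1.0, 10)]                 # +10 for sure
risky_firm = [(0.5, 100), (0.5, -120)]   # big gain or bigger loss

def executive_pay(firm_outcomes, bonus_rate=0.10):
    """Bonus on gains, nothing on losses, no clawback."""
    return [(p, bonus_rate * max(x, 0)) for p, x in firm_outcomes]

print(expected_value(safe_firm))                   #  10.0
print(expected_value(risky_firm))                  # -10.0
print(expected_value(executive_pay(safe_firm)))    #   1.0
print(expected_value(executive_pay(risky_firm)))   #   5.0
```

Shareholders prefer the safe strategy (expected +10 versus -10); the executive prefers the risky one (expected pay 5 versus 1). The zero floor does all the work: the executive keeps a share of the upside while bearing none of the downside, so adding risk is free from his or her point of view.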

But they were not the only ones to lose. The risky actions of executives such as those at Lehman (and they were not the only ones to be overly aggressive in taking risks) magnified the financial panic once it had started, resulting in the major recession we are still enduring. All of us lost.

Bebchuk, Cohen, and Spamann conclude that executive compensation schemes should be changed to give executives better incentives. This is certainly correct, but it does not go far enough. Since the bad incentives created by such pay structures affect everybody, what is really indicated is a general reform of the mechanisms for performance pay throughout the economy. Mechanism design theorists have the tools to suggest possible solutions. These tools are not a panacea, but, applied carefully, they would certainly result in a safer financial system.

The biggest problem is probably political. As the Democratic administration moves to consider financial reform, after getting health reform passed, the Republicans are likely to resist meaningful reform. This is a basic issue with all mechanism design. Indeed, mechanism design theory starts from the assumption that society has already agreed on a plan for how the system should work, so that the remaining problem is only technical: how to give people the right incentives so that the plan is achieved. This assumption, and the theory's reliance on classical game theory with its hyper-rational model of players, are the two biggest hurdles in coming up with mechanisms that deliver optimal incentives.
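For concreteness, here is that technical problem in the standard notation of implementation theory; this is textbook material, not anything specific to the financial-reform debate:

```latex
% Society has agreed on a social choice function f mapping states of
% the world (profiles of agents' types) theta in Theta to outcomes in X.
% A mechanism consists of strategy spaces S_1, ..., S_n and an outcome
% function g. The mechanism implements f if, for every state theta,
% equilibrium play s*(theta) produces the agreed-upon outcome:
\[
  g\bigl(s^{*}(\theta)\bigr) \;=\; f(\theta)
  \qquad \text{for all } \theta \in \Theta.
\]
% Everything political, namely how society settles on f in the first
% place, is assumed away before the theory begins.
```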

What chess champion Garry Kasparov can teach us

Recently, Garry Kasparov wrote an essay about humans and computers playing chess, under the guise of a book review. Andrew McAfee today published an essay on Kasparov's ideas, focusing on one observation of Kasparov's in particular.

Kasparov noted that recent matches have shown that weak human chess players paired with computers can beat not only a chess supercomputer but also a grandmaster who has a computer yet organizes the human-computer collaboration poorly. In Kasparov's words,

Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

McAfee starts from this and suggests that Kasparov may have stumbled upon a better model of business processes. For my part, I see Kasparov's insight as one example of the great benefit to be had if only we can adapt mechanism design theory to better capture humans and computers acting in tandem, with all the fuzziness of the former and the precision of the latter. (I think there are many examples urging us to move mechanism design towards more human-compatible models of decision-making, on which I plan to blog more.)
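A deliberately crude numerical toy can show how Kasparov's ranking is arithmetically possible. The functional form below (team strength equals combined skill scaled by process quality) is entirely my own assumption, made up for illustration; it is not a model proposed by Kasparov or McAfee.

```python
# Toy model (my own assumption): team strength = (human skill +
# machine skill) * process quality, with process quality in [0, 1].

def team_strength(human, machine, process):
    return (human + machine) * process

strong_computer_alone    = 8.0
weak_human_good_process  = team_strength(human=2.0, machine=8.0, process=0.9)
grandmaster_poor_process = team_strength(human=9.0, machine=8.0, process=0.5)

print(weak_human_good_process)    # 9.0: beats both of the others
print(strong_computer_alone)      # 8.0
print(grandmaster_poor_process)   # 8.5
```

With these made-up numbers, the weak human plus machine plus better process comes out on top, exactly Kasparov's ordering: process quality multiplies whatever raw strength is available, so a large process advantage can outweigh a large skill advantage.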

I am making no grand claim that I know how we can approach this goal. I am simply noting that it seems a very worthy goal, one that I would much rather see research in mechanism design aim for. Instead, the current thrust of mainstream mechanism design research seems to be to obtain ever more refined mathematical results based on the assumption that the actors in the mechanisms studied, whether human or computer agents, behave with the precision of computers. I am aware of some work that attempts to introduce errors into the decision-making of agents in mechanism design theory, such as work by Kfir Eliaz, but I would certainly love it if more of the very clever mechanism theorists attacked the fuzziness problem head on.

Let us not leave the topic of a better business process to Harvard Business Review articles only. Some Econometrica articles on it, please.

The pundits’ dilemma

Mark Liberman says this on Language Log today, among other good points:

Overall, the promotion of interesting stories in preference to accurate ones is always in the immediate economic self-interest of the promoter. It’s interesting stories, not accurate ones, that pump up ratings for Beck and Limbaugh.  But it’s also interesting stories that bring readers to The Huffington Post and to Maureen Dowd’s column, and it’s interesting stories that sell copies of Freakonomics and Super Freakonomics.  In this respect, Levitt and Dubner are exactly like Beck and Limbaugh.

We might call this the Pundit’s Dilemma — a game, like the Prisoner’s Dilemma, in which the player’s best move always seems to be to take the low road, and in which the aggregate welfare of the community always seems fated to fall. And this isn’t just a game for pundits. Scientists face similar choices every day, in deciding whether to over-sell their results, or for that matter to manufacture results for optimal appeal.

In the end, scientists usually over-interpret only a little, and rarely cheat, because the penalties for being caught are extreme.  As a result, in an iterated version of the game, it’s generally better to play it fairly straight.  Pundits (and regular journalists) also play an iterated version of this game — but empirical observation suggests that the penalties for many forms of bad behavior are too small and uncertain to have much effect. Certainly, the reputational effects of mere sensationalism and exaggeration seem to be negligible.
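Liberman's penalty logic is easy to make concrete with a few lines of Python. All payoff numbers below are hypothetical; the only point is that sensationalism stops paying once the expected penalty (probability of being caught times reputational cost) exceeds the per-round bonus it earns.

```python
# A minimal sketch of the Pundit's Dilemma payoffs (hypothetical
# numbers). Each round a pundit runs an accurate story or a
# sensational one; sensationalism earns a bonus but risks a penalty.

def expected_round_payoff(sensational, base=10.0, bonus=3.0,
                          catch_prob=0.1, penalty=5.0):
    if not sensational:
        return base
    return base + bonus - catch_prob * penalty

for penalty in (5.0, 20.0, 50.0):
    straight = expected_round_payoff(False, penalty=penalty)
    hype     = expected_round_payoff(True,  penalty=penalty)
    choice = "sensationalize" if hype > straight else "play it straight"
    print(f"penalty={penalty:4.0f}: straight={straight:.1f}, "
          f"sensational={hype:.1f} -> {choice}")
```

On Liberman's reading, scientists face a large penalty and a non-trivial catch probability, so the honest branch dominates for them; for pundits both numbers appear to be too small to bite, so the sensational branch dominates round after round.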

Mark Thoma, in the post that brought Liberman's piece to my attention, says this, among other things:

I’m not sure I know the answer to that, but I suspect it has something to do with increased competition among media companies for eyeballs and ears combined with an agency problem that causes information organizations to maximize something other than the output of credible information (maximizing profit may not be the same as maximizing the output of factual, useful information).

Though this type of behavior was always present in the media, it seems to have gotten much worse with the proliferation of cable channels and other media as information technology developed beyond the old fashioned antennas on roofs receiving analog signals. I don’t want to go back to the days where we had an oligopolistic structure for the provision of news (especially on network TV), competitive markets are much better, but there seems to be a divergence between what is optimal for the firm and what is socially optimal due to the agency problem.

Some people have argued that there are big externalities to good and bad reporting, and therefore that “some kind of tax credit scheme for non-entertainment news reporting might enhance societal efficiency and welfare.” That might help to change incentives, but I’m not sure it solves the fundamental agency problem. There must be reputation effects that matter to the firm, some way of making the firms pay a cost for bad pundit behavior. But that is up to the public at large, people must reward good behavior and penalize bad, it is not something the government can control. I suppose we could try something like British libel laws to partially address this, but looking at the UK press does not convince me that this solves the problem.

So I don’t know what the answer is.

I would not want to jump in and say that I know what the answer is. However, it is clear that there is a mechanism design question here. The economist's knee-jerk reaction to this would be: "if the consumers of information are more interested in being entertained than informed, then it is efficient to provide them entertainment, as long as the marginal cost of entertaining each one of them meets her or his marginal willingness to pay". As Thoma notes, however, reporting has external effects. These would seem to push us in the direction of amending the rule for social optimality and looking for ways to align pundits' incentives with what efficiency requires.
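Schematically, the knee-jerk rule and its externality-amended version differ by a single term. This is textbook Pigouvian bookkeeping, not a derivation from any particular model of the news market:

```latex
% q: quantity of entertainment-style news; MC: marginal cost;
% MWTP: consumers' marginal willingness to pay; MEB: marginal external
% benefit of the marginal unit to third parties (negative here, if
% entertainment crowds out accurate reporting).
\[
  \text{private optimum:}\quad MC(q) = MWTP(q)
  \qquad
  \text{social optimum:}\quad MC(q) = MWTP(q) + MEB(q)
\]
% With MEB(q) < 0, the social optimum involves less entertainment-style
% news than the market supplies on its own.
```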

But if the majority of the audience want to be entertained and not informed, shouldn’t we economists, as children of the Enlightenment, bow to the consumers’, our multitudinous Kings’, desires? To take the idea that bad reporting carries negative externalities seriously, one has to take seriously the possibility that people express preferences for the wrong things, things that will in the long term, collectively conspire to harm them. Is this only because of the word “collectively” and so only a question of externalities, one step removed? I think that there is more “irrationality” to consumers than that. We need to come to grips, as we consider mechanism design, with “irrational consumers”. The misnamed “behavioral economics” (all economics is behavioral) field has some valuable ideas here. It seems to me economic theorists of the mechanism-design bent, should adopt these ideas and do their formalizing magic with them to reach some results. After all, no lesser theorist than Leonid Hurwicz made a foray into “irrational” agents all the way back in the 1980s.
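To give one concrete example of the kind of behavioral idea I mean, here is the standard quasi-hyperbolic (beta-delta) model of present bias; the gloss about news consumption is my own illustration, not a claim from any of the posts quoted above.

```latex
% Quasi-hyperbolic (beta-delta) discounting: at time t, a consumer
% values the utility stream (u_t, u_{t+1}, u_{t+2}, ...) as
\[
  U_t \;=\; u_t \;+\; \beta \sum_{k=1}^{\infty} \delta^{k} u_{t+k},
  \qquad 0 < \beta \le 1,\quad 0 < \delta < 1.
\]
% With beta < 1, all future periods are discounted by the extra factor
% beta relative to the present. The self deciding today prefers the
% informative program for tomorrow, but when tomorrow arrives the then
% current self overweights immediate entertainment and reverses the
% plan: time inconsistency within a single person.
```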

Remark: I always place "irrational" and "rational" within quotation marks. Given what I know of game theory, including Binmore's work on applying Gödel's Theorem to games played by automata, and games such as the Prisoners' Dilemma and the Centipede, I feel I have no way of even pretending that I know what "rational behavior" really ought to mean for individuals interacting in a game. Worse, in the context of consumers not knowing "what's good for them", we have an additional layer of "irrationality", which seems to resolve to time inconsistency in the behavior of a single person. This post being long enough already, I have to leave further development of my thoughts on these points to another post.

Paul Romer on Elinor Ostrom’s Nobel prize

My old teacher Paul Romer has a fantastic post on Ostrom’s prize award. A long quote follows, but I do strongly recommend the whole thing:

Most economists think that they are building cranes that suspend important theoretical structures from a base that is firmly grounded in first principles. In fact, they almost always invoke a skyhook, some unexplained result without which the entire structure collapses. Elinor Ostrom won the Nobel Prize in Economics because she works from the ground up, building a crane that can support the full range of economic behavior.

When I started studying economics in graduate school, the standard operating procedure was to introduce both technology and rules as skyhooks. If we assumed a particular set of rules and technologies, as though they descended from the sky, then we economists could describe what people would do. Sometimes we compared different sets of rules that a “social planner” might impose but we never said anything about how actual rules were adopted. Crucially, we never even bothered to check that people would actually follow the rules we imposed.

Economists who have become addicted to skyhooks, who think that they are doing deep theory but are really just assuming their conclusions, find it hard to even understand what it would mean to make the rules that humans follow the object of scientific inquiry. If we fail to explore rules in greater depth, economists will have little to say about the most pressing issues facing humans today – how to improve the quality of bad rules that cause needless waste, harm, and suffering.