What a little honesty can do for implementation theory

Moving beyond consequentialism

The vast majority of the implementation literature follows mainstream economic theory in assuming that agents care only about outcomes (consequentialism). Of course this assumption is not literally true; most people also care in some measure about how outcomes are achieved. They care about the process that brings the outcomes about. They are not proud of themselves if they lie in pursuit of wealth or power. (At least I hope this is still true of most people, even if they are graduates of economics programs.)

It seems worthwhile, then, to inject a little caring-about-process into implementation theory to capture such considerations. A few authors have attempted this recently and found that a truly minimal injection of caring about process has far larger consequences than one would expect.

An aside that promises a future post

A great example of a paper that goes well beyond a minimal injection of process-caring is Pride and Prejudice: The Human Side of Incentive Theory, by Tore Ellingsen and Magnus Johannesson, American Economic Review 2008, 98:3, 990–1008. Since that study confines itself to the principal-agent model and modifies the outcome-orientation of standard theory drastically, I will discuss it in a separate post. Here I want to discuss a group of related papers that take the smallest possible departure from outcome-orientation in the context of implementation theory.

Four papers on honesty in implementation

Matsushima’s papers

Hitoshi Matsushima comes first, with two papers on this topic. In Role of honesty in full implementation, Journal of Economic Theory 2008, 139, 353–359, he introduces a model of an agent who chooses strategies to achieve the best payoff, as in standard theory, except when faced with a choice between an honest strategy and a dishonest one that yield outcomes with the same payoff. In that situation the agent plays the honest strategy, defined as the strategy that goes along with the intention of the mechanism designer (the planner).

In detail, an agent has standard expected-utility preferences over the alternatives and furthermore suffers a psychological cost, a disutility, from dishonesty. The agent starts life endowed with a signal and is required to make many announcements regarding her signal in the mechanism. Her cost of dishonesty is an increasing function of the proportion of dishonest announcements she makes over the course of the mechanism game. Apart from this, the model is standard.
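
In symbols (with notation of my own, not Matsushima's exact formulation), the partially honest agent's objective looks roughly like

$$U_i(m, \theta_i) \;=\; \mathbb{E}\big[u_i(g(m), \theta_i)\big] \;-\; C_i\big(\rho_i(m_i, \theta_i)\big),$$

where $g(m)$ is the outcome the mechanism assigns to the message profile $m$, $\rho_i(m_i, \theta_i)$ is the proportion of agent $i$'s announcements in $m_i$ that are dishonest given her signal $\theta_i$, and $C_i$ is increasing with $C_i(0) = 0$. The last term is the only departure from the standard model, and it can be arbitrarily small.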

Matsushima proves that if a social choice function is Bayesian incentive-compatible (with the dishonesty cost removed from the agents' utility functions), then it is implementable in iteratively undominated strategies. Further, the mechanism that achieves this implementation is detail-free, meaning that the planner does not need to know the details of the agents' utility functions or prior beliefs to design it. In addition, while the mechanism does involve fines to be imposed when certain strategies are played, these fines are small. All in all, this is a very impressive result. However, it is not clear to me how the planner would know that the social choice function to be implemented is incentive-compatible without knowing the very details of utility functions and priors that the design of the implementing mechanism does not require.
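
For reference, Bayesian incentive compatibility is the standard interim condition that truth-telling be optimal for every agent: for all $i$ and all signals $\theta_i, \theta_i'$,

$$\mathbb{E}_{\theta_{-i}}\Big[u_i\big(f(\theta_i, \theta_{-i}), \theta_i\big) \,\Big|\, \theta_i\Big] \;\ge\; \mathbb{E}_{\theta_{-i}}\Big[u_i\big(f(\theta_i', \theta_{-i}), \theta_i\big) \,\Big|\, \theta_i\Big],$$

where $f$ is the social choice function and, per Matsushima's hypothesis, the dishonesty cost is left out of $u_i$.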

An answer to this concern is given in the second Matsushima paper I want to discuss, Behavioral aspects of implementation theory, Economics Letters 2008, 100, 161–164. In this paper any social choice function is implementable in iteratively undominated strategies as long as the agents are averse to telling lies. Remarkably, if this aversion is absent, the mechanism of this paper has a large multiplicity of Nash equilibria; that multiplicity disappears the moment even a slight white-lie aversion comes in and we turn to iteratively undominated strategies. The mechanism is entirely detail-free, not even depending on the form of the social choice function.

Both papers use mechanisms whose outcomes are lotteries among the alternatives. While this allows the designer to sidestep any issues raised by the discreteness of the set of alternatives, it also exposes the mechanisms to the standard criticism of lottery-based designs: they depend heavily on the assumption that agents are expected-utility maximizers, and they presume a very high level of trust in the mechanism's operator by the agents.
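
To see what the first part of the criticism is pointing at: an expected-utility maximizer values a lottery $L$ that assigns probability $p_k$ to alternative $a_k$ at

$$U_i(L) = \sum_k p_k\, u_i(a_k),$$

that is, linearly in the probabilities. It is this linearity that the criticism has in mind: the lottery constructions lean on it, and under non-expected-utility preferences the same comparisons need not go through.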

One last remark on the Matsushima papers: they study social choice functions. Let us now consider two papers that look at social choice correspondences instead, with Nash equilibrium as the implementing equilibrium notion rather than iterated elimination of dominated strategies.

Papers on Nash implementation

Dutta and Sen 2009

Bhaskar Dutta and Arunava Sen released a working paper (PDF) entitled Nash Implementation with Partially Honest Individuals on November 9, 2009. They found that even the minimal dishonesty aversion already described in this post dramatically expands the class of social choice correspondences that are implementable in Nash equilibrium when there are three or more agents: any social choice correspondence that satisfies No Veto Power is Nash implementable when there exists at least one partially honest individual! This result is spectacular, and it stands in stark contrast to Maskin's classic results that Maskin Monotonicity is a necessary condition for a social choice correspondence to be Nash implementable and that Maskin Monotonicity together with No Veto Power is sufficient. The planner here only needs to know that there is one partially honest agent, not who that agent is.
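
For readers who want the two conditions spelled out, in standard notation: a social choice correspondence $F$ is Maskin monotonic if an alternative chosen in one state remains chosen in any state where it falls in no agent's ranking relative to any other alternative,

$$\Big[a \in F(\theta) \ \text{ and } \ \forall i\,\forall b:\ u_i(a, \theta) \ge u_i(b, \theta) \Rightarrow u_i(a, \theta') \ge u_i(b, \theta')\Big] \;\Longrightarrow\; a \in F(\theta'),$$

and $F$ satisfies No Veto Power if, whenever at least $n-1$ of the $n$ agents rank an alternative at the top in some state, that alternative belongs to $F$ at that state. Dutta and Sen keep the second condition and dispense with the first entirely.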

The paper has additional results for the case of two agents when it is known that one of them is partially honest, and for the case where there is only a positive probability that a given agent is partially honest. But I think the clearest advance is the extension of Nash implementability so far beyond the walls of the Monotonicity jail.

Lombardi and Yoshihara 2011

The last paper I discuss here is the one that brought Dutta and Sen to my attention (I knew about the Matsushima papers already). It is Partially-honest Nash implementation: Characterization results, by Michele Lombardi and Naoki Yoshihara, February 13, 2011, available in the Munich Personal RePEc Archive. This paper gives necessary conditions for Nash implementation with partially honest agents. It also studies the implications of the presence of partially honest individuals for strategy space reduction. The surprise here is that the equivalence between Nash implementation via Maskin's "canonical" mechanism (which is very demanding in terms of information transmission) and Nash implementation with more information-economical mechanisms (such as Saijo's strategy-reduction mechanism) breaks down in the presence of partially honest individuals.

This paper is the longest of the four and contains lengthy proofs. It gives the impression of authors coming into the area and sweeping away the cobwebs left in the corners by the seminal works of the nascent partial-honesty implementation literature (the previous three papers), which were more concerned with laying foundations.

Musings

I find this trend in implementation theory very refreshing. While the work is still at a highly abstract level, it has already set secure foundations for further study. I am eager to see what else can be done with the general idea of adding a degree of honesty to the economic agents involved in a mechanism. There are such people around still, even among those super-cynical types with economics degrees! Yet I can see that applied fields like auction theory might resist the trend. Especially for auctions involving corporations, the assumption that the agent, now a representative of a corporation, is partially honest is less appealing than when the agent is an individual playing for herself. But the trend should be fruitful in applications to externalities and public goods, two applied fields that come readily to mind as well-suited for it.

A good example of the importance of incentives

Well, make this a bad example if you must: it is an example of bad incentives producing bad outcomes. I am referring to the article by Bebchuk, Cohen, and Spamann on the Project Syndicate website, in which the authors discuss the compensation of the executives of Lehman Brothers.

The article discusses a report by a court-appointed examiner on the finances of Lehman Brothers prior to the company's bankruptcy. The report shows that while the shareholders of Lehman Brothers lost plenty of money in the bankruptcy, the executives managed to stay in the black, thanks to the structure of their performance-pay contracts.

The authors point out that the executives' ability to make a lot of money by pursuing highly risky strategies, combined with their ability to avoid losses when those strategies led to financial disaster, is very bad news for the shareholders. This is the point I want to emphasize here. The shareholders, the owners of the company, gave bad incentives to their managers, and once the financial panic came about, the owners lost big.
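
A toy calculation makes the mechanics plain. The numbers below are mine, invented purely for illustration, and are not taken from the examiner's report or the article; only the shape of the payoffs matters. An executive who keeps a share of gains but bears none of the losses holds an option-like claim on the firm, and so prefers the gamble that the shareholders would reject:

    # Toy model of option-like executive pay (illustrative numbers only).
    # Each strategy is a list of (probability, firm gain in $ billions).
    safe = [(1.0, 5.0)]
    risky = [(0.5, 30.0), (0.5, -40.0)]

    BONUS_SHARE = 0.01  # executive keeps 1% of gains, bears no losses

    def expected(outcomes, payoff):
        """Expected value of payoff(x) over (probability, x) pairs."""
        return sum(p * payoff(x) for p, x in outcomes)

    def executive_pay(x):
        return BONUS_SHARE * max(x, 0.0)  # convex: upside only

    def shareholder_value(x):
        return x  # shareholders bear the full gain or loss

    for name, outcomes in [("safe", safe), ("risky", risky)]:
        print(name,
              "executive:", expected(outcomes, executive_pay),
              "shareholders:", expected(outcomes, shareholder_value))
    # safe executive: 0.05 shareholders: 5.0
    # risky executive: 0.15 shareholders: -5.0

The executive's payoff is convex in the firm's outcome, so adding risk raises her expected pay even as it destroys expected shareholder value; this is exactly the misalignment the authors describe.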

But they were not the only ones to lose. The risky actions of executives such as those at Lehman (and they were hardly alone in their aggressive risk-taking) magnified the financial panic once it had started, which resulted in the major recession we are still enduring. All of us lost.

Bebchuk, Cohen, and Spamann conclude that executive compensation schemes should be changed to give executives better incentives. This is certainly correct, but it does not go far enough. Since the bad incentives created by such pay structures affect everybody, what is really indicated is a general reform of the mechanisms for performance pay throughout the economy. Mechanism design theorists have the tools to suggest possible solutions. These tools are not a panacea, but, applied carefully, they would certainly result in a safer financial system.

The biggest problem is probably political. As the Democratic administration moves to consider financial reform, after getting health reform passed, the Republicans are likely to resist meaningful reform. This is a basic issue with all mechanism design. Indeed, mechanism design theory starts from the assumption that society has already agreed on a plan for how the system should work, so that the remaining problem is purely technical: how to give people the right incentives so that the plan is achieved. This assumption, and the reliance of mechanism design theory on classical game theory with its hyper-rational model of players, are the two biggest hurdles to coming up with mechanisms that deliver optimal incentives.

What chess champion Garry Kasparov can teach us

Recently, Garry Kasparov wrote an essay about humans and computers playing chess, under the guise of a book review. Andrew McAfee published an essay today on Kasparov's ideas, focusing on one observation of Kasparov's in particular.

Kasparov noted that recent matches have shown that weak human chess players working with computers can beat not only a chess supercomputer but also, more strikingly, a grandmaster working with a computer under a weaker organization of the human-computer collaboration. In Kasparov's words,

Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

McAfee starts from this and suggests that Kasparov may have stumbled upon a better model of business processes. For my part, I see Kasparov's insight as one example of the great benefit to be had if we can adapt mechanism design theory to better capture the fuzziness of humans and the precision of computers acting in tandem. (I think there are many examples urging us to move mechanism design toward more human-compatible models of decision-making, on which I plan to blog more.)

I am making no grand claim that I know how to approach this goal. I am simply noting that it seems a very worthy goal, one that I would rather see mechanism design research aim for. Instead, the current thrust of mainstream mechanism design research seems to be to derive ever more refined mathematical results based on the assumption that the actors in the mechanisms studied, whether human or computer agents, behave with the precision of computers. I am aware of some work that attempts to introduce errors into agents' decision-making in mechanism design theory, such as work by Kfir Eliaz, but I would certainly love to see more of the very clever mechanism theorists attack the fuzziness problem head-on.

Let us not leave the topic of a better business process to Harvard Business Review articles only. Some Econometrica articles on it, please.

The pundits’ dilemma

Mark Liberman says this on Language Log today, among other good points:

Overall, the promotion of interesting stories in preference to accurate ones is always in the immediate economic self-interest of the promoter. It’s interesting stories, not accurate ones, that pump up ratings for Beck and Limbaugh.  But it’s also interesting stories that bring readers to The Huffington Post and to Maureen Dowd’s column, and it’s interesting stories that sell copies of Freakonomics and Super Freakonomics.  In this respect, Levitt and Dubner are exactly like Beck and Limbaugh.

We might call this the Pundit’s Dilemma — a game, like the Prisoner’s Dilemma, in which the player’s best move always seems to be to take the low road, and in which the aggregate welfare of the community always seems fated to fall. And this isn’t just a game for pundits. Scientists face similar choices every day, in deciding whether to over-sell their results, or for that matter to manufacture results for optimal appeal.

In the end, scientists usually over-interpret only a little, and rarely cheat, because the penalties for being caught are extreme.  As a result, in an iterated version of the game, it’s generally better to play it fairly straight.  Pundits (and regular journalists) also play an iterated version of this game — but empirical observation suggests that the penalties for many forms of bad behavior are too small and uncertain to have much effect. Certainly, the reputational effects of mere sensationalism and exaggeration seem to be negligible.

Mark Thoma, whose post brought Liberman's piece to my attention, says this, among other things:

I’m not sure I know the answer to that, but I suspect it has something to do with increased competition among media companies for eyeballs and ears combined with an agency problem that causes information organizations to maximize something other than the output of credible information (maximizing profit may not be the same as maximizing the output of factual, useful information).

Though this type of behavior was always present in the media, it seems to have gotten much worse with the proliferation of cable channels and other media as information technology developed beyond the old fashioned antennas on roofs receiving analog signals. I don’t want to go back to the days where we had an oligopolistic structure for the provision of news (especially on network TV), competitive markets are much better, but there seems to be a divergence between what is optimal for the firm and what is socially optimal due to the agency problem.

Some people have argued that there are big externalities to good and bad reporting, and therefore that “some kind of tax credit scheme for non-entertainment news reporting might enhance societal efficiency and welfare.” That might help to change incentives, but I’m not sure it solves the fundamental agency problem. There must be reputation effects that matter to the firm, some way of making the firms pay a cost for bad pundit behavior. But that is up to the public at large, people must reward good behavior and penalize bad, it is not something the government can control. I suppose we could try something like British libel laws to partially address this, but looking at the UK press does not convince me that this solves the problem.

So I don’t know what the answer is.

I would not want to jump in and say that I know what the answer is. However, it is clear that there is a mechanism design question here. The economist's knee-jerk reaction would be: "if the consumers of information are more interested in being entertained than informed, then it is efficient to provide them entertainment as long as the marginal cost of entertaining each one of them meets her/his marginal willingness to pay". But, as Thoma notes, reporting has external effects. These would seem to push us toward amending the rule for social optimality and looking for ways to align pundits' incentives with what efficiency requires.
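
To make the amendment concrete: the knee-jerk rule equates marginal cost with marginal willingness to pay, while the standard Pigouvian correction for a negative externality requires

$$MWTP = MC + MED,$$

where $MED$ is the marginal external damage done by the content. When $MED > 0$, the socially optimal quantity of sensationalist punditry lies below what the market delivers. That much is textbook; designing a mechanism that gets us there, when $MED$ is hard to measure and the players are the ones Liberman describes, is the open question.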

But if the majority of the audience wants to be entertained and not informed, shouldn't we economists, as children of the Enlightenment, bow to the desires of the consumers, our multitudinous Kings? To take seriously the idea that bad reporting carries negative externalities, one has to take seriously the possibility that people express preferences for the wrong things, things that will, in the long term, collectively conspire to harm them. Is this only because of the word "collectively", and so only a question of externalities, one step removed? I think there is more "irrationality" to consumers than that. As we consider mechanism design, we need to come to grips with "irrational" consumers. The misnamed field of "behavioral economics" (all economics is behavioral) has some valuable ideas here. It seems to me that economic theorists of the mechanism-design bent should adopt these ideas and work their formalizing magic on them to reach some results. After all, no lesser theorist than Leonid Hurwicz made a foray into "irrational" agents all the way back in the 1980s.

Remark: I always place "irrational" and "rational" within quotation marks. Given what I know of game theory, including Binmore's work applying Gödel's Theorem to games played by automata, and games such as the Prisoners' Dilemma and the Centipede, I feel I have no way of even pretending to know what "rational behavior" really ought to mean for individuals interacting in a game. Worse, in the context of consumers not knowing "what's good for them", we have an additional level of "irrationality", one that seems to resolve to time inconsistency in the behavior of a single person. This post being long enough already, I have to leave further development of my thoughts on these points to another post.

Paul Romer on Elinor Ostrom’s Nobel prize

My old teacher Paul Romer has a fantastic post on Ostrom’s prize award. A long quote follows, but I do strongly recommend the whole thing:

Most economists think that they are building cranes that suspend important theoretical structures from a base that is firmly grounded in first principles. In fact, they almost always invoke a skyhook, some unexplained result without which the entire structure collapses. Elinor Ostrom won the Nobel Prize in Economics because she works from the ground up, building a crane that can support the full range of economic behavior.

When I started studying economics in graduate school, the standard operating procedure was to introduce both technology and rules as skyhooks. If we assumed a particular set of rules and technologies, as though they descended from the sky, then we economists could describe what people would do. Sometimes we compared different sets of rules that a “social planner” might impose but we never said anything about how actual rules were adopted. Crucially, we never even bothered to check that people would actually follow the rules we imposed.

Economists who have become addicted to skyhooks, who think that they are doing deep theory but are really just assuming their conclusions, find it hard to even understand what it would mean to make the rules that humans follow the object of scientific inquiry. If we fail to explore rules in greater depth, economists will have little to say about the most pressing issues facing humans today – how to improve the quality of bad rules that cause needless waste, harm, and suffering.