Fukuyama on Hayek

Francis Fukuyama has a review of Hayek’s The Constitution of Liberty in the New York Times. He concludes with this insightful observation:

In the end, there is a deep contradiction in Hayek’s thought. His great insight is that individual human beings muddle along, making progress by planning, experimenting, trying, failing and trying again. They never have as much clarity about the future as they think they do. But Hayek somehow knows with great certainty that when governments, as opposed to individuals, engage in a similar process of innovation and discovery, they will fail. He insists that the dividing line between state and society must be drawn according to a strict abstract principle rather than through empirical adaptation. In so doing, he proves himself to be far more of a hubristic Cartesian than a true Hayekian.

UPDATE (2011-05-09): Although the commenter on this post did not respond to my request to show me who has contradicted Fukuyama online, I did find a whole lot of comments via Easterly’s blog. This one, from beyond the grave, is most entertaining. Do read the comments on Easterly’s blog, too. At the end of the day, Fukuyama’s review, at least the part I quote above, is certainly true of people like Beck and may not be true of Hayek. I stand by my observation in the comment below that Hayek should have looked for the mathematical tools needed to formalize his seminal ideas. (I have almost the same complaint about Keynes’s General Theory, although Keynes was reasonably capable in mathematical modeling. Why did he not try it?)

What a little honesty can do for implementation theory

Moving beyond consequentialism

The vast majority of the implementation literature follows mainstream economic theory in assuming that agents care only about outcomes (consequentialism). Of course this is not true; most people care in some measure about how outcomes are achieved, too. They care about the process that brings the outcomes about. They are not proud of themselves if they lie in pursuit of wealth or power. (At least I hope this is still true of most people, even if they are graduates of economics programs.)

It seems worthwhile to inject a little caring-about-process into implementation theory to capture such considerations. A few authors have attempted this recently and found that a truly minimal injection of caring about process has much larger effects than one would expect.

An aside that promises a future post

A great example of a paper that goes well beyond a minimal injection of process-caring is Pride and Prejudice: The Human Side of Incentive Theory, by Tore Ellingsen and Magnus Johannesson, American Economic Review 2008, 98:3, 990–1008. Since that study limits itself to the principal-agent model and modifies the outcome-orientation of standard theory drastically, I will discuss it in a separate post. Here I want to discuss a group of related papers that take the smallest possible departure from outcome-orientation in the context of implementation theory.

Four papers on honesty in implementation

Matsushima’s papers

Hitoshi Matsushima comes first with two papers on this topic. In Role of honesty in full implementation, Journal of Economic Theory 2008, 139, 353–359, he introduces a model of an agent who chooses strategies to achieve the best payoff, as in standard theory, except when faced with a choice between an honest strategy and a dishonest one that yield outcomes with the same payoff. In that situation, the agent plays the honest strategy, defined as the strategy that goes along with the intention of the mechanism designer (the planner).

In detail, an agent has standard expected-utility preferences over the alternatives and furthermore suffers a psychological cost, a disutility, from dishonesty. The agent starts life endowed with a signal and is required to make many announcements regarding her signal in the mechanism. Her cost of dishonesty is an increasing function of the proportion of dishonest announcements she makes over the course of the mechanism game. Apart from this, the model is standard.
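In my own schematic notation (not Matsushima’s exact formulation, so treat this only as a sketch of the idea), the agent’s overall payoff looks roughly like this:

```latex
% Schematic payoff with a dishonesty cost (my notation, not Matsushima's):
% u_i    : agent i's standard expected utility from the chosen alternative a,
% m_i    : agent i's vector of announcements in the mechanism,
% d(m_i) : the proportion of those announcements that misreport her signal,
% c_i    : an increasing cost-of-dishonesty function with c_i(0) = 0.
U_i(a, m_i) = u_i(a) - c_i\big(d(m_i)\big), \qquad c_i \text{ increasing}, \; c_i(0) = 0.
```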

Matsushima proves that if a social choice function is Bayesian incentive compatible (with the dishonesty cost removed from the agents’ utility functions), then it is implementable in iteratively undominated strategies. Further, the mechanism that achieves this implementation is detail-free, meaning that the planner does not need to know the details of the agents’ utility functions or prior beliefs to design the mechanism. In addition, while the mechanism does involve fines to be imposed when certain strategies are played, these fines are small. All in all, this is a very impressive result. However, it is not clear to me how the planner would know that the social choice function to be implemented is incentive compatible without exactly the knowledge of utility functions and priors that the design of the implementing mechanism dispenses with.
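For readers who want the condition spelled out, Bayesian incentive compatibility of a social choice function f, in textbook notation (mine, not necessarily the paper’s), says that truthful reporting is optimal in the direct revelation game:

```latex
% Bayesian incentive compatibility in textbook notation (dishonesty cost removed):
% \theta_i    : agent i's signal (type),   \theta_{-i} : the other agents' signals,
% f           : the social choice function,
% u_i         : agent i's utility,
% the expectation is over \theta_{-i} given agent i's own signal \theta_i.
\mathbb{E}_{\theta_{-i}}\!\left[ u_i\big(f(\theta_i,\theta_{-i}),\theta_i\big) \,\middle|\, \theta_i \right]
\;\ge\;
\mathbb{E}_{\theta_{-i}}\!\left[ u_i\big(f(\hat\theta_i,\theta_{-i}),\theta_i\big) \,\middle|\, \theta_i \right]
\quad \text{for all } \theta_i,\ \hat\theta_i .
```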

An answer to this concern is given in the second Matsushima paper I want to discuss, Behavioral aspects of implementation theory, Economics Letters, 100, 2008, 161–164. In this paper any social choice function is implementable in iteratively undominated strategies as long as the agents have some aversion to telling lies. Remarkably, if this aversion is absent, the mechanism of this paper has a large multiplicity of Nash equilibria, a multiplicity that disappears the moment even a slight white-lie aversion comes in and we turn to iteratively undominated strategies. The mechanism is entirely detail-free, not even depending on the form of the social choice function.

Both papers use mechanisms whose outcomes are lotteries over alternatives. While this allows the designer to sidestep any issues raised by the discreteness of the set of alternatives, it also exposes the mechanisms to the standard criticism of mechanisms that use lotteries: they depend heavily on the assumption that agents are expected-utility maximizers, and they require a very high level of trust in the mechanism operator by the agents.

One last remark on the Matsushima papers: they study social choice functions. Let us now consider two papers that look at social choice correspondences, not functions, and that use Nash equilibrium as the implementing equilibrium notion rather than iterated undomination.

Papers on Nash implementation

Dutta and Sen 2009

Bhaskar Dutta and Arunava Sen released a working paper (PDF) entitled Nash Implementation with Partially Honest Individuals on November 9, 2009. They found that even minimal dishonesty aversion, of the kind already described in this post, dramatically expands the class of social choice correspondences that are implementable in Nash equilibrium when there are three or more agents: any social choice correspondence that satisfies No Veto Power is Nash implementable when there exists at least one partially honest individual! This result is spectacular, and it stands in stark contrast to Maskin’s classic results that Maskin Monotonicity is a necessary condition for a social choice correspondence to be Nash implementable and that Maskin Monotonicity together with No Veto Power is sufficient. The planner here only needs to know that there is one partially honest agent, not who this agent is.
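To make the two conditions concrete, here is a small toy encoding of my own (nothing of the sort appears in Dutta and Sen) that checks Maskin Monotonicity and No Veto Power for a finite environment with strict preference rankings:

```python
# A toy, purely illustrative encoding of my own: preferences are strict rankings
# listed best-first, a profile is a tuple of rankings (one per agent), and a social
# choice correspondence (SCC) maps each profile to a set of alternatives.
from itertools import permutations

def lower_contour(ranking, a):
    """Alternatives ranked weakly below a in a strict ranking (best first)."""
    return set(ranking[ranking.index(a):])

def maskin_monotonic(scc, profiles):
    """If a is chosen at R and a falls relative to no alternative in anyone's
    ranking when moving to R', then a must also be chosen at R'."""
    for R in profiles:
        for a in scc[R]:
            for Rp in profiles:
                if all(lower_contour(Ri, a) <= lower_contour(Rpi, a)
                       for Ri, Rpi in zip(R, Rp)) and a not in scc[Rp]:
                    return False
    return True

def no_veto_power(scc, profiles):
    """If at least n - 1 of the n agents rank a at the top, then a must be chosen."""
    for R in profiles:
        n = len(R)
        for a in R[0]:
            if sum(Ri[0] == a for Ri in R) >= n - 1 and a not in scc[R]:
                return False
    return True

# Tiny example: three agents, two alternatives, and the SCC that selects every
# alternative top-ranked by at least one agent. It passes both checks.
rankings = list(permutations(('x', 'y')))
profiles = [(r1, r2, r3) for r1 in rankings for r2 in rankings for r3 in rankings]
scc = {R: {Ri[0] for Ri in R} for R in profiles}
print(maskin_monotonic(scc, profiles), no_veto_power(scc, profiles))  # True True
```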

The paper has additional results for the case of two agents, when it is known that one of them is partially honest, and for the case where there is a positive probability that a certain agent is partially honest. But I think the clearest advance is the extension of Nash implementability so far beyond the walls of the Monotonicity jail.

Lombardi and Yoshihara 2011

The last paper I discuss here is the one that brought Dutta and Sen to my attention (I knew about the Matsushima papers already). It is Partially-honest Nash implementation: Characterization results, by Michele Lombardi and Naoki Yoshihara, February 13, 2011, available in the Munich Personal RePEc Archive. This paper gives necessary conditions for Nash implementation with partially honest agents. It also studies the implications of the presence of partially honest individuals for strategy space reduction. The surprise here is that the equivalence between Nash implementation via Maskin’s “canonical” mechanism (which is very demanding in terms of information transmission) and Nash implementation with more information-economical mechanisms (such as Saijo’s strategy-reduction mechanism) breaks down in the presence of partially honest individuals.

This paper is the longest of the four and contains lengthy proofs. It gives the impression of authors coming into the area and sweeping away the cobwebs from corners that the seminal works in the nascent partial-honesty implementation literature (the previous three papers) left untouched, concerned as they were with laying foundations.

Musings

I find this trend in implementation theory very refreshing. While it is still at a highly abstract level, it has already set secure foundations for further study. I am eager to see what else can be done with the general idea of adding a degree of honesty to economic agents involved in a mechanism. There are such people around still, even among those super-cynical types with economics degrees! Yet I can see that applied fields like auction theory might resist the trend. Especially for auctions involving corporations, the assumption that an agent who represents a corporation is partially honest is less appealing than when the agent is an individual playing for herself. But the trend should be fruitful in applications to externalities and public goods, two applied fields that come readily to mind as well-suited for it.

Duncan Foley criticizes Walrasian equilibrium theory

Duncan Foley is an astute critic of Walrasian general equilibrium theory in economics. He knows the theory deeply, having made seminal contributions as early as 1970, extending it to incorporate public goods and exploring the limits of using Walrasian equilibrium prices, in the presence of transaction costs, to reflect the present value of future commodities. In a recent paper in the Journal of Economic Behavior and Organization (What’s wrong with the fundamental existence and welfare theorems?, volume 75, 2010, 115–131), he mounts a strong attack on the reverence in which economists hold Walrasian theory.

I agree with Foley’s criticisms but they do not imply that we should abandon Walrasian equilibrium altogether. Let me discuss the criticisms and my reaction.

At the heart of the matter is the assumption that all trades happen at equilibrium prices. For this to happen, we have to imagine that every trader somehow anticipates which equilibrium will occur (it is not too hard to write down a model economy with many Walrasian equilibria) and treats all transactions that are predicated on different prices as tentative.
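For concreteness, the textbook definition being attacked (standard notation, nothing specific to Foley’s paper) requires that each trader optimize at the going prices and that markets clear:

```latex
% Walrasian equilibrium of a pure exchange economy, textbook version:
% p        : the equilibrium price vector,
% \omega_i : agent i's endowment,  u_i : her utility,  x_i^* : her equilibrium bundle.
x_i^* \in \arg\max_{x_i} \, u_i(x_i) \ \text{ subject to } \ p \cdot x_i \le p \cdot \omega_i,
\qquad
\sum_i x_i^* = \sum_i \omega_i .
```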

This is similar to the logic of Nash equilibrium; Foley does not bring up this connection, but I want to do so. Here is how the story goes for Nash equilibrium. In a normal form game, each player is supposed to guess which of the (again, potentially many, depending on the game) Nash equilibria will occur and then choose a strategy that is a best response to the strategies the other players would play at that Nash equilibrium. Relaxing this rather extreme coordination assumption on players’ beliefs about the strategies of the other players is possible; it has been done in developing the theory of rationalizable strategies. The trade-off is that rationalizability gains a more reasonable assumption on beliefs but loses predictive acuity; it is easy to find games in which all strategies are rationalizable, so the theory makes no useful prediction.
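A minimal illustration of the lost acuity, using a toy of my own rather than anything from Foley: in two-player games the rationalizable strategies are those surviving iterated elimination of strictly dominated strategies (allowing mixed dominators), and in matching pennies nothing is ever eliminated, so the concept rules nothing out. The sketch below checks only pure-strategy dominators, which is enough here since no strategy dominates another at all.

```python
# Toy illustration (mine, not Foley's): iterated elimination of strictly dominated
# strategies in a two-player game, checking only pure-strategy dominators.
# In matching pennies nothing is eliminated, so every strategy is rationalizable.
import numpy as np

def iterated_strict_dominance(A, B):
    """A[i, j], B[i, j]: payoffs of the row and column player. Returns the row and
    column strategies surviving iterated elimination of strategies strictly
    dominated by another pure strategy."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(A[r2, c] > A[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            if any(all(B[r, c2] > B[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Matching pennies: the row player wins on a match, the column player on a mismatch.
A = np.array([[ 1, -1],
              [-1,  1]])
B = -A
print(iterated_strict_dominance(A, B))  # ([0, 1], [0, 1]): nothing is eliminated
```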

Foley proposes an equilibrium concept he calls exchange equilibrium. This notion envisions people making mutually advantageous trades at disequilibrium prices; these prices then converge to a set that is small in size but is still a continuum. Which exact equilibrium is reached in this set depends on the time path of transactions. Foley suggests that this is the correct equilibrium notion for a competitive marketplace, but it implies that preferences, endowments, and technology are not, taken together, enough information to pin down the equilibrium. Furthermore, in an exchange equilibrium agents with the same preferences need not be treated equally, unlike the situation in Walrasian equilibrium. Foley shows that, under general assumptions on an exchange economy, the set of exchange equilibria is non-empty, and also that it has a stability property. His main conclusion is that Walrasian equilibrium has blinded economists to the realization that the triumvirate of “preferences, endowments, technology” is just not enough to pin down an equilibrium. This is worth repeating, as economists, especially theorists, too often take the view that the triumvirate explains well enough what needs explaining in terms of economic exchange.
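To see how the time path of transactions can matter, here is a crude agent-based sketch, entirely my own construction and not Foley’s formal model: agents with Cobb-Douglas utilities in a two-good exchange economy meet in random pairs and execute a small trade, at a price between their marginal rates of substitution, whenever it makes both strictly better off; different random orderings of meetings end at different final allocations.

```python
# A crude sketch of disequilibrium trading (my construction, not Foley's formal
# model): random pairwise meetings in a two-good, pure exchange economy, with a
# small mutually improving trade of good 1 for good 2 whenever one exists.
# The final allocation depends on the random order of meetings.
import numpy as np

def cobb_douglas(x, alpha):
    return x[0] ** alpha * x[1] ** (1 - alpha)

def random_bilateral_trading(endowments, alphas, seed, step=0.01, meetings=50_000):
    rng = np.random.default_rng(seed)
    x = endowments.astype(float).copy()
    for _ in range(meetings):
        i, j = rng.choice(len(alphas), size=2, replace=False)
        # Marginal rates of substitution (good 1 in terms of good 2) of the pair.
        mrs_i = alphas[i] * x[i, 1] / ((1 - alphas[i]) * x[i, 0])
        mrs_j = alphas[j] * x[j, 1] / ((1 - alphas[j]) * x[j, 0])
        if abs(mrs_i - mrs_j) < 1e-6:
            continue
        # Trade at a price drawn between the two MRSs; the low-MRS agent sells good 1.
        price = rng.uniform(min(mrs_i, mrs_j), max(mrs_i, mrs_j))
        seller, buyer = (i, j) if mrs_i < mrs_j else (j, i)
        new_seller = x[seller] + np.array([-step, step * price])
        new_buyer = x[buyer] + np.array([step, -step * price])
        # Execute only if the trade keeps bundles positive and benefits both agents.
        if (new_seller > 0).all() and (new_buyer > 0).all() \
           and cobb_douglas(new_seller, alphas[seller]) > cobb_douglas(x[seller], alphas[seller]) \
           and cobb_douglas(new_buyer, alphas[buyer]) > cobb_douglas(x[buyer], alphas[buyer]):
            x[seller], x[buyer] = new_seller, new_buyer
    return x

endowments = np.array([[8.0, 2.0], [2.0, 8.0], [5.0, 5.0]])
alphas = np.array([0.3, 0.6, 0.5])
# Two different random orderings of meetings end at different final allocations.
print(random_bilateral_trading(endowments, alphas, seed=1))
print(random_bilateral_trading(endowments, alphas, seed=2))
```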

I suspect that Walrasian equilibrium can still serve as an approximation of the exchange equilibrium set. To the extent that Walrasian equilibria are easier to compute, this is useful. But I find Foley’s critique compelling, and I wish theorists would examine the properties of exchange equilibrium more intensively, extend the concept to production economies, and think about ways of incorporating uncertainty without ending up with an equilibrium notion that yields a huge set of predictions or relies too heavily on the time path of transactions. My hunch is that the time path of transactions is not powerful enough to completely obliterate the equilibrating tendency in markets; in Foley’s exchange equilibrium it does no such thing, but I can imagine very weak predictive performance in domains with production and uncertainty. Still, exchange equilibrium is worth taking very seriously, and we should be teaching this paper of Foley’s to our graduate students.

I would love to see Foley move on to the multiplicity of equilibria problem in game theory, but I will not hold my breath for it to happen.

Should economic theorists shun simulations?

The blog A Fine Theorem is an excellent source of thoughtful discussions of interesting, mostly very recent, papers in economic theory and related fields. A link to it appears topmost in Sites I Like on the right side of this site. After an altogether too-long hiatus, I returned today to A Fine Theorem to find several items of interest that are new to me.

One of these items is the discussion of why economists (economic theorists, mostly) want to see proofs, not simulation results, to be convinced. As I understand the post, economists recognize that it is impossible to nail economic theories down to a few parameters that could conceivably be measured better and better over time. We economic theorists have such a much harder time dealing with the complexity of a social system than a physicist who studies gravity that, as a profession, we have given up on coming up with a true model. Instead we build models upon a small, unified set of principles, a Foundation for all economics, that can give us ways of thinking consistently across many domains of application within economics. A paper that presents nice theorems and their proofs contributes to this enterprise, especially if the assumptions of the theorems do not depart much from the commonly agreed Foundation. A paper that reports the results of a few thousand simulations hides the reasoning within a black box. Hence papers with proofs are more usable in building the economic theory pyramid and therefore more popular, and theorists stay away from the opaque (to them) reasoning that drives simulation results.

(I tried very hard not to make the obvious analogy of economic theorizing to a pyramid scheme, and as you can see, I failed. If economic theory only aspires to indoctrinate researchers in THE ONE WAY to build models (and we know that econometric testing of the predictions of such models has not been a shining beacon when it comes to avoiding narrow-minded, even blind, adherence to the Foundation) then we may be building beautifully constructed but totally useless mathematical models, in fact harmful ones, as they teach policy makers the wrong things. See below for another stab at this subject, and my total capitulation: I will have to devote a new post to it.)

I am not convinced by the black-box statement. (Confession: I still need to read seriously the paper that the post refers to, which can be found here (PDF).) If the authors of simulations publish their code, the only reason to say that the logic of the result is hidden is the reader’s inexperience with algorithms. Without going so far as to side with Wolfram, who claimed in his massive A New Kind of Science that all of nature (all of it, including the living bits and their behavior) is a bunch of algorithms, I would propose that we economic theorists need a better education in understanding algorithms.

As Leigh Tesfatsion says in one of her surveys of agent-based simulation (unfortunately I have no exact reference at hand), every single run of a simulation is a mathematical proof, just with assumptions that appear narrower than we are used to (for instance, because they specify parameters numerically and exactly). A few thousand simulation runs can cover a decently sized portion of the parameter space, and then they are arguably as general as, if not more general than, a formal theorem on the same conceptual domain.
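In that spirit, covering a region of the parameter space is mechanical once the simulation exists. Here is a skeletal sweep over a toy model of my own invention (hypothetical parameter names and all), where each run is one exact-parameter “proof” and the grid of runs covers a chunk of the parameter space:

```python
# A skeletal parameter sweep over a toy model of my own (hypothetical names, purely
# illustrative): each run is one "proof" for one exact parameter point, and the grid
# of runs covers a region of the parameter space.
import itertools
import numpy as np

def run_model(n_agents, adjustment_rate, noise, periods=500, seed=0):
    """A toy cobweb-style market: the realized price falls when agents forecast high
    prices (they over-produce), and agents nudge their forecasts toward the realized
    price, with idiosyncratic noise."""
    rng = np.random.default_rng(seed)
    forecasts = rng.uniform(0.5, 1.5, n_agents)
    price = 1.0
    for _ in range(periods):
        price = 2.0 - forecasts.mean() + rng.normal(0.0, noise)
        forecasts += adjustment_rate * (price - forecasts) + rng.normal(0.0, noise, n_agents)
    return abs(price - 1.0)  # final deviation from the model's fixed-point price of 1

grid = {
    "n_agents": [10, 100],
    "adjustment_rate": [0.05, 0.25, 0.5],
    "noise": [0.0, 0.01, 0.05],
}

results = []
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    results.append({**params, "deviation": run_model(**params)})

for row in results:
    print(row)
```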

And things can be even worse for formal theorizing. It feels like we have been working with assumptions and techniques we are familiar with and which deliver consistently publishable papers based on assumptions from the Foundation. We are uncomfortable thinking about the possibility that this Foundation is quicksand. Computer simulations have a clear advantage here: flexibility. You can experiment with a simulation without feeling confined by the straitjacket of mainstream economic theorizing. Simulations are more light-footed than Foundation-based theorizing, so they have a better chance of avoiding the quicksand. (This is veering too far from the topic of the post, however, so I will have to write more in this vein in a subsequent post.)

We could also imagine a source of unease about simulations in general, based on the complexity of the computer’s innards and even of the computer’s output. When the four-color theorem was proved by computer (not by simulation, but by exhaustive enumeration of the possibilities, as I understand it), a debate raged for a while among mathematicians: how could we accept a proof so gigantic that we cannot inspect it ourselves? I may be wrong, but it seems mathematicians as a whole have resolved this debate by accepting that computers can give us acceptable proofs. We simply must accept the black-box problem; it comes with too many demonstrably solid and useful contributions of electronic computation to our lives. This is true even in cases where the box is kept black for commercial reasons, such as the exact Google search algorithm. A Google search these days may give us a few too many useless content-farm results, but it is still an amazing tool whose operation is secret to most people.

As someone who has dabbled in agent-based computer simulations and intends to continue doing so, I appreciate the clarity this discussion gave me about the unpopularity of this kind of work among my colleagues, as well as the stimulation to think more about the issue. The unpopularity should not be overemphasized, however, as simulations are studied and published enough in economics to have their own volumes in the Handbook of Economics series (the Handbook of Computational Economics, in two volumes as of now, the second one on agent-based computational economics, the kind of simulation I consider most useful for economists).