Reaction to Katharina Pistor’s book “The Code of Capital”

I recently read this book and decided to include it in the syllabus of my Economic Inequality course. A few days ago, when I mentioned on Twitter that I intended to write about the book on this blog, I had a review in mind. However, I found good reviews online to which my own would add little: a post on the Law and Political Economy blog by Sam Moyn, and this piece by Rex Nutting on MarketWatch. I have little of value to add to the perspective of a legal scholar such as Moyn or a commentator on political economy such as Nutting. Instead, I will quote from the publisher’s online blurb, so you can get a quick idea of what the book is about, before proceeding with my comments.

Capital is the defining feature of modern economies, yet most people have no idea where it actually comes from. What is it, exactly, that transforms mere wealth into an asset that automatically creates more wealth? The Code of Capital explains how capital is created behind closed doors in the offices of private attorneys, and why this little-known fact is one of the biggest reasons for the widening wealth gap between the holders of capital and everybody else.

In this revealing book, Katharina Pistor argues that the law selectively “codes” certain assets, endowing them with the capacity to protect and produce private wealth. With the right legal coding, any object, claim, or idea can be turned into capital—and lawyers are the keepers of the code. Pistor describes how they pick and choose among different legal systems and legal devices for the ones that best serve their clients’ needs, and how techniques that were first perfected centuries ago to code landholdings as capital are being used today to code stocks, bonds, ideas, and even expectations—assets that exist only in law.

I am intrigued by this book, in my capacity as an economist, for two main reasons.

  1. The book gives a new and insightful perspective on the nature of capital, not long after Thomas Piketty’s Capital in the Twenty-First Century, a book most certainly discussed in my course on economic inequality. One big criticism of Piketty’s concept of capital, leveled by other economists, is that it diverges from the standard use of “capital” in macroeconomic and growth theory, even though Piketty appeals to some results from that theory in his analysis. Pistor offers an intriguing definition of capital as the aggregation of myriad strategies of highly paid lawyers, who shop around existing legal systems to encode assets as concepts that can be defended as legal in some court of a recognized state; these encodings conjure assets out of “thin air” and make them long-lived, able to accumulate over time, and convertible to money whenever their owners desire. I am not a macroeconomist, but I am eager to see what my colleagues in that field will come up with by engaging with this definition. After all, Paul Romer’s 2018 Nobel prize was for incorporating ideas into growth theory, as a factor boosting the productivity of all other inputs to production (yes, I am simplifying; see the schematic sketch right after this list). Intellectual property protection regimes matter here for obvious reasons. Pistor essentially says that the ideas of lawyers are part of this process. She explicitly discusses how these lawyerly inventions have expanded the scope of intellectual property protection (simultaneously shrinking the public domain in the realm of ideas), but she says so much more about them that there ought to be plenty of material here for some new macroeconomic theory.
  2. The second reason this book intrigues me is that it suggests a diagnosis for the disease of ever-increasing inequality in incomes and wealth, with the attendant problems of social polarization, the undermining of democratic systems and norms, and the entrenchment of economic and political oligarchy. It is not the job of a law professor like Pistor to suggest to economists interested in political economy and mechanism design how to model a way forward and formulate effective social and policy responses to these trends. But she has done all such economists (and I count myself in this group) a favor with her diagnosis. I hope the policy designs and suggestions from economists are not long in coming.
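
To make the Romer reference in the first point a bit more concrete, here is the textbook-style schematic I have in mind (my own simplification, not anything taken from Pistor’s book or from Romer’s papers): output depends on rival inputs and on a nonrival stock of ideas that raises the productivity of everything else,

$$ Y = A \cdot F(K, L), $$

where K is physical capital, L is labor, and A is the stock of usable ideas. Romer’s contribution was to model the growth of A as the outcome of purposeful, profit-motivated activity. Pistor’s point, as I read it, is that the legal “coding” supplied by lawyers helps determine both what ends up counted in K and how much of A gets fenced off from the public domain by intellectual property protection.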

Network structure and fat tails in macroeconomic shocks

I just came across a working paper by Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi, entitled “Microeconomic Origins of Macroeconomic Tail Risks”. Clearly I am a little late to the game, as the paper appeared in an earlier form in June 2013, but it reminded me of “Network Formation and Systemic Risk, Second Version” by Selman Erol and Rakesh Vohra (who cite what appears to be an earlier version of the Acemoglu et al. paper), which I looked at recently. While I have only skimmed the Erol and Vohra paper and only read the abstract of the Acemoglu et al. paper, it seems that both are concerned with how the structure of the network connecting firms or financial institutions to one another can make the economy fragile when random shocks hit parts of it. Surely both are worth reading in more detail by folks interested in network economics, like myself, not just by macroeconomists.
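
To get a feel for the basic mechanism, here is a toy illustration of my own (emphatically not the model in either paper): compare an economy in which every sector contributes equally to aggregate fluctuations with one in which a single hub sector carries half the weight. Feeding both the same sectoral shocks, the hub economy’s aggregate output is far more fat-tailed, because shocks to the hub do not get averaged away:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 200_000  # number of sectors, number of simulated periods

# Sectoral shocks: iid across sectors and periods, rescaled to unit variance.
# Student-t with 5 degrees of freedom gives moderately heavy-tailed micro shocks.
df = 5
shocks = rng.standard_t(df, size=(T, n)) / np.sqrt(df / (df - 2))

# Hypothetical "influence" weights: how much each sector's shock moves aggregate
# output. Both weight vectors sum to one; only their concentration differs.
w_balanced = np.full(n, 1.0 / n)        # every sector equally important
w_hub = np.full(n, 0.5 / (n - 1))
w_hub[0] = 0.5                          # a single hub sector carries half the weight

for name, w in [("balanced", w_balanced), ("hub", w_hub)]:
    agg = shocks @ w                    # aggregate fluctuation in each period
    z = agg / agg.std()
    excess_kurtosis = np.mean(z**4) - 3.0   # zero for a normal distribution
    big_drop_freq = np.mean(z < -4.0)       # how often 4-sigma downturns happen
    print(f"{name:8s}  excess kurtosis: {excess_kurtosis:5.2f}   "
          f"P(drop below -4 sd): {big_drop_freq:.1e}")
```

The papers, of course, go far beyond this back-of-the-envelope averaging intuition, but it is the place to start when thinking about why network structure matters for tail risk at all.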

Acemoglu, Robinson, and Verdier ask: Can’t we all be more like Scandinavians?

Daron Acemoglu, James Robinson, and Thierry Verdier have a new paper out that is getting a lot of attention in the economics blogosphere, and rightly so. The paper can be found here. I want to add some comments of my own, as well as drum up more attention for the paper.

Acemoglu, Robinson, and Verdier present some evidence based on patents to argue that the US has been more innovative than the Scandinavian countries, even though there are well-known problems with using patents to measure innovation. To their credit, after presenting their theoretical model, they state that they hope it will spur more detailed empirical research. The model itself rests on a standard endogenous growth foundation, with some moral hazard thrown in to capture a very mainstream-economics view of innovators: because of the moral hazard problem, innovators must be given the proper incentives or innovation will suffer.

The other crucial ingredient in their model is a spillover effect from more innovative economies to less innovative ones. Their main point can be summarized as saying that economies with big social safety nets can be as innovative and wealthy as they are these days because they enjoy innovations developed in economies, such as the US, with much less of a social safety net (and more income inequality). The development of the model is as masterful as one can expect from a group of authors that includes the author of the most advanced recent graduate-level textbook on economic growth (and Clark Medal winner in 2005, to boot).
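
To fix ideas, here is a deliberately crude caricature of the spillover mechanism, in my own shorthand rather than the paper’s actual equations. Write the growth of a follower country’s technology as

$$ \dot{A}_{\text{cuddly}} = \gamma \, e_{\text{cuddly}} + \sigma \left( A_{\text{frontier}} - A_{\text{cuddly}} \right), $$

where e is domestic innovative effort (which, on the paper’s moral hazard logic, a generous safety net blunts) and the second term is the flow of ideas in from the world technology frontier. If some cutthroat economy keeps pushing the frontier outward, a cuddly economy can grow at roughly the frontier rate with modest effort of its own; the paper’s provocation, as I read it, is that somebody still has to be supplying the frontier-pushing first term.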

I turn now to some of the discussion on economics blogs that I have seen in my RSS feeds, as some of the commentary is very good. Let me start with the homo economicus assumption made in the paper (as in 99% of papers that use math-heavy economic theory, it seems to me). It makes the protagonists, the would-be innovators, a bit too one-dimensional for believability. OK, you say, so what, if the model captures the aspect of economic arrangements it wants to capture, and does it well? But it does not, precisely because of this one-dimensionality. As a commenter (going by the name “ralmont”) on this post by Mark Thoma correctly pointed out, one of the most important innovations of the last two decades or so has been the development of Linux, which runs most of the web servers in the world, as well as the many, many phones and other devices that run Android. But Linux came out of the “cuddly” capitalism of Scandinavia (and indeed, from a then 21-year-old student who opened it up to the world not in order to get rich but to learn, and because he loved to tinker with operating system software). The standard one-dimensional model of economic man (the term was coined in backward times, and I am keeping its gender slant on purpose, as a criticism) simply does not apply here. Also, by the nature of Linux (and all free/open source software), there is no point in looking for a patent by which to measure the momentous impact of this development on economic growth.

In the body of the same post just discussed, Mark Thoma also says this:

An enhanced safety net — a backup if things go wrong — can give people the security they need to take a chance on pursuing an innovative idea that might die otherwise, or opening a small business. So it may be that an expanded social safety net encourages innovation.

I think I can let this stand with no further comment from me; it is so well-stated and I find it convincing.

Another blog post well worth mentioning here is one by Lane Kenworthy, which prompted the Mark Thoma post I just discussed. Kenworthy makes many good points. One is that by a measure other than patents (the World Economic Forum’s Global Competitiveness Index), Scandinavian countries are almost as innovative as the US. Another is that in the 1960s and 1970s there was plenty of innovation in the US, even though that period offered less in the way of incentives to overcome the innovators’ moral hazard. A third is that other countries with a lot of income inequality are not as innovative as the US, so inequality does not always provide an impetus for innovation. (I can already anticipate that Acemoglu would construct a defense on this point by discussing extractive institutions, in the sense expounded in his book with Robinson, Why Nations Fail, and indeed I will be looking forward to rebuttals to this criticism on the blog of the same name. I have a lot more to learn from that book and blog, which I hold in high regard; I am not writing here to bash Acemoglu and friends.) Finally, Kenworthy makes a direct comparison between the US and Sweden based on data and finds little reason to buy the story of the Acemoglu, Robinson, and Verdier paper.

Commenter “cfaman” on the Kenworthy post disputes the paper’s view of the US as non-cuddly capitalism:

But we are cuddly! I have a hard time thinking about innovations that have made a big difference since WWII that have not come directly our of government investment or substantially encouraged by government investment.

I have similar trouble, but then I am no economic historian.

Then there is Dylan Matthews writing that the paper’s

[…] finding relies on a model that doesn’t so much compare the American economy to social democracy as a stylized version of the American economy to actual socialism.

Well, of course, they are doing economic theory, and you have to give theorists license to simplify. But Matthews has more to say. In particular, he notes that the definition of “cuddly capitalism” in the paper

requires that there be literally no difference between successful and unsuccessful entrepreneurs’ income levels.

Here the theorist’s license to simplify may have gone too far. Scandinavian countries are not like this; in all of them, successful entrepreneurs can make money, if not as much as in the US, then enough to be very rich.

Matthews also offers evidence that Scandinavia is doing well compared to the US on innovation, and points out, as the others I already mentioned did, that patent trolls and the general malaise of the US patent system make using patent data highly suspect. Finally, he notes that even by the patent measure, the US did not become more innovative relative to Sweden as the US level of inequality soared in recent decades. Matthews concludes that the paper’s model does not hold up because of this.

All in all, reading the debate has made me think more about innovation, and for this I am thankful to Acemoglu and company. Graduate students inspired by the paper will doubtless learn a lot about how to theorize on economic growth; Acemoglu is a world-class master there. But I hope that the Acemoglu, Robinson, and Verdier plea for more empirical work takes precedence over theoretical refinements of the model for now. Young economists looking for a good research area: this is about as good as it gets. Crank up your statistical software, gather appropriate data, and go! I can imagine good ways to improve the theory in the paper, but only after it has been bashed around a bit by the econometricians.

The repugnant conclusion and economic theory denialism

Many macroeconomic theorists appear utterly reluctant to accept the abject disaster that macroeconomic theory has become, as made evident by the crisis that started in 2007.

It just occurred to me today that, since most economists, and even more so most macroeconomists, are unquestioningly utilitarian (why else are they always looking to maximize criteria such as a representative agent’s discounted present value of lifetime utility?), they may simply be unaware of the Repugnant Conclusion from population ethics, even as their work pushes the world towards it. Here is a succinct presentation from the page just linked:

In Derek Parfit’s original formulation the Repugnant Conclusion is characterized as follows: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit 1984).
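
A toy arithmetic version of why an unqualified total-utilitarian criterion delivers this conclusion (the numbers are mine, purely for illustration): rank populations by total well-being, the number of people times average quality of life. Then

$$ W_1 = 10^{10} \times 100 = 10^{12} \qquad \text{versus} \qquad W_2 = (2 \times 10^{12}) \times 1 = 2 \times 10^{12}, $$

so the criterion prefers the second world, two trillion people with lives barely worth living, over the first, ten billion people with very good lives. That is exactly Parfit’s point.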

This seems to me consonant with the approach of the austerians, who rely on mainstream DSGE models in macroeconomics to recommend measures that make a great number of presently living people more and more miserable, in the name of safeguarding the well-being of future generations, something the austerians think only they know how to do. Never mind the obvious fact that austerity measures don’t even advance the cause of a market economy and economic growth; all they do is hand political power to neo-Nazis and leftist extremists.

Can you tell I am first and foremost an incorrigible economic theorist? Here the world may well be entering a second Great Depression, to be capped off with widespread war, and I am talking about arcane philosophical topics. Except that there may, just may, be a glimmer of hope in getting austerians to realize just how badly their recommendations are working out, and to learn about social welfare criteria beyond utilitarianism. (I know, I will keep dreaming.)

The unmourned death of the double coincidence

Yet more on Graeber’s book, this time from John Quiggin.

The unmourned death of the double coincidence:

Graeber shows, convincingly enough for me, that the story conventionally told by economists, in which money emerges as a replacement for barter systems, is nonsense. In fact, as he notes this point has been made by anthropologists many times and ignored just as often. Thanks to the marvels of auto-googling, I’ve been aware for some time that my namesake, Alison Hingston Quiggin gave the definitive demonstration long ago in her ‘Survey of Primitive Money’. Graeber sharpens the point by arguing that the real source of money is as a way of specifying debts.

(Via Crooked Timber)

The world economy is not a tribute system

Having just posted about DeLong and Rossman on Graeber’s book, I will point to another essay on the topic, from Crooked Timber. A choice excerpt follows; I recommend reading the whole thing.

The world economy is not a tribute system:

In short – if there is evidence to support Graeber’s rather sweeping claims about the nature of the global economic system, he doesn’t provide it to the reader. Perhaps this evidence is buried in his sources somewhere. Perhaps not. But when one self-consciously makes grand claims that everyone else is wrong, one should have good evidence, and be prepared to produce it up front. Graeber, unless he’s keeping it very close to his chest indeed, has no evidence at all. This doesn’t seem to me to live up to the (admittedly high) standard that Graeber sets for himself.

(Via Crooked Timber)

How the poor debtors still sell their daughters, How in the drought men still grow fat « Code and Culture

Brad DeLong points to this lengthy review of David Graeber’s “Debt: The First 5,000 Years”. I am reading Graeber’s book in bits and pieces, in between my currently rather pressing stretches of work, and the review comes at the right time for me. Of particular interest is the example about Apple Inc., which shows that Graeber’s factual claims cannot always be trusted. DeLong has a long blog post on this. The original essay by Gabriel Rossman is here:

How the poor debtors still sell their daughters, How in the drought men still grow fat « Code and Culture:

(Via codeandculture.wordpress.com)

John Quiggin makes an excellent point

I don’t often discuss macroeconomics here, but this blog post by John Quiggin is well worth my attention and yours. He totally demolishes a review of his book by Stephen Williamson, who is quoted as saying, “Market efficiency is simply an assumption of rationality. As such it has no implications. If it has no implications, it can’t be wrong.” Williamson is also quoted as saying, “Like the ‘efficient markets hypothesis,’ DSGE has no implications, and therefore can’t be wrong.”

If you wonder why I have not gone over to my university’s online library site to read the review itself, it’s because I am utterly disgusted by the Williamson quotations and do not want to waste my time on it. Macroeconomics is supposed to be a science, not a branch of analytic philosophy or logic; “economists” wearing blinders as enormous as Williamson’s have too much sway in the discipline and have corrupted its very core.

As Tjalling Koopmans said in response to Milton Friedman’s methodological emanations, every assumption you make in building a theory is automatically, and rather obviously, also a prediction of the theory. Saying “I like the assumption of rationality, I will always make it, and it’s my model, so you can’t tell me not to play with it to my heart’s content” is odious nonsense. When the efficient markets hypothesis or DSGE is part of a model that produces predictions that keep being smacked down by the data, insisting that these assumptions (and since when is DSGE an assumption, rather than a whole modeling technique?) can’t be wrong is tantamount to saying that your theory can’t be useful and is in fact eminently ignorable. If you peddle it to the world and to your students as science, you are at a minimum corrupting the notion of science itself.