This site has moved to
The posts below are backup copies from the new site.

October 5, 2009

Economist's View - 5 new articles

"Capital Market Theory after the Efficient Market Hypothesis"

Dimitri Vayanos and Paul Woolley argue that we do not need to abandon the assumption of rational expectations in order to better understand how asset markets function, but we do need to incorporate principal agent problems into asset market models.

Setting aside whether agents are rational, I am fully sold on the idea that we need to do a better job of incorporating principal agent problems into these models. I have pointed to them myself and believe they pervade just about every step in the mortgage process (real estate agents, banks, appraisers, mortgage brokers, financial managers, etc.). However, in the past I have thought of these mechanisms as allowing bubbles to inflate rather than being the cause of them. That is, I have argued that two things must happen for bubbles to occur. First, there must be a source of liquidity that can blow the bubble up, e.g. something like a low interest rate policy from central banks or high savings rates in some countries of the world. Second, the protections that markets and regulations provide must fail so that all of the liquidity can pass through the regulatory "baffles and checkpoints" that would otherwise prevent it from over-inflating some sector of the economy, such as housing or stocks. Thus, within this framework, I see the ideas below as explaining how excess liquidity might cause a bubble to develop, but they do not tell us why excess liquidity might be present.

However, I now wonder if that separation is valid. Is it possible for a bubble to occur simply through reallocation of existing investments, i.e. without an external source of liquidity to drive it? The story below is one where asset price "momentum" is generated from principal agent issues driven by incomplete information on the quality of investment fund managers. People see high returns in a particular sector, and they cannot tell whether the lower returns they are receiving are due to their fund manager's proper avoidance of risk, or incompetent management. As they increasingly conclude that incompetence is to blame, funds shift to the new sector and this creates a self-reinforcing process where prices are driven above their fundamental values, i.e. a bubble occurs. It seems like such reallocation of investment funds could, if driven by a strong enough incentive, be enough on its own to drive a bubble even without an external source of liquidity.

What does seem to be key is that there must be some reason to believe that the higher returns are due to something other than taking more risk. In the present crisis, it was financial innovation coupled with the idea that policymakers and the market could maintain the Great Moderation that led to the belief that higher returns could be generated without a corresponding increase in risk. In the dot-com case, it was the promised higher productivity from the internet that provided the necessary story, and before the Great Depression it was electricity and the internal combustion engine (among other technological advances) that convinced everyone that we had entered a new, unprecedented era of higher productivity.

So I would now amend the list a bit and say that (at least) three things are needed to generate a bubble. First, an idea that makes people believe that higher returns are available without assuming more risk needs to be present. Second, there must be a source of liquidity to inflate the bubble. This can come from external sources such as high saving or low interest rate policy, or it can come from reallocation of existing investments (e.g. when people in the U.S. stopped loaning to foreign governments prior to the Great Depression so that they could chase the higher returns at home). And third, there must be regulatory and/or market failures that allow the bubble to inflate with little or no resistance.

In any case, here's the argument on how "momentum" might be created:

Capital market theory after the efficient market hypothesis, by Dimitri Vayanos and Paul Woolley, Vox EU: Forty years have passed since the principles of classical economics were first applied formally to finance through the contributions of Eugene Fama (1970) and his now-renowned fellow academics. Over the intervening years, capital market theory and the efficient market hypothesis have been developed and modified to form an elegant and comprehensive framework for understanding asset pricing and risk.

But events have dealt a cruel blow to these theories, as John Authers argued in his recent FT column. Capital market booms and crashes, culminating in the latest sorry and socially costly crisis, have discredited the idea that markets are efficient and that prices reflect fair value.

Some economists still insist these events are simply the lively interplay of broadly efficient markets and see no cause to abandon the prevailing wisdom. Other commentators, including a number of leading economists, have proclaimed the death of mainstream finance theory and all that goes with it, especially the efficient market hypothesis, rational expectations, and mathematical modelling. The way forward, they argue, is to understand finance based on behavioural models on the grounds that psychological biases and irrational urges better explain the erratic performance of asset prices and capital markets. Presented this way, the choice seems stark and unsettling, and there is no doubt that the academic interpretation of finance is at a critical juncture.

The need for a science-based, unified theory of finance

At stake is the need for a scientifically based, unified theory of finance that is rigorous and tractable; one that retains as much as possible of the existing analytical framework and simultaneously produces credible explanations and predictions. This is no storm in an academic teacup. On the contrary, the implications for growth, wealth and society cannot be overstated. The efficient market hypothesis has beguiled policymakers into believing that market prices could be trusted and that bubbles either did not exist, were positively beneficial for growth, or could not be spotted. Intervention was therefore unnecessary, and regulation could be light-touch. By contrast, a theory of asset pricing that did a good job of explaining mispricing would provide policymakers with a stronger rationale for intervention and more scepticism about mark-to-market, index-tracking, and derivative pricing, to name but a few examples.

Principal-agent investment problems: Mispricing with rationality

We believe that a first step in the search for a new paradigm is to avoid the mistake of jumping from observing that prices are inefficient to believing that investors must be irrational, or that it is impossible to construct a valid theory of asset pricing based on rational behaviour. Finance theory has combined rationality with other assumptions, and it is one of these other assumptions that has proved unfit for purpose. The crucial flaw has been to assume that prices are set by the army of private investors, the "representative household" as the jargon has it. Households are assumed to invest directly in equities and bonds and across the spectrum of the derivatives markets. Theory has ignored the real world complication that investors delegate virtually all their involvement in financial matters to professional intermediaries – banks, fund managers, brokers – who dominate the pricing process.

Delegation creates an agency problem. Agents have more and better information than the investors who appoint them, and the interests of the two are rarely aligned. For their part, principals cannot be certain of the competence or diligence of their appointed agents. The agency problem has been acknowledged in corporate finance and banking but hardly at all in asset pricing. Introducing agents brings greater realism to asset-pricing models and can be shown to transform the analysis and output. Importantly, this is achieved whilst maintaining the assumption of fully rational behaviour on the part of all concerned. Such models have more working parts and therefore a higher level of complexity, but the effort is richly rewarded by the scope and relevance of the predictions.

By doing this in our recent paper (Vayanos and Woolley, 2008), we have been able to explain momentum, the commonly observed propensity for trending in prices, which in extreme form causes bubbles and crashes. Momentum is incompatible with an efficient market and has proved difficult to explain in the traditional framework. Indeed, it has been described by Fama and French (1993) as the "premier unexplained anomaly" in asset pricing. Central to the analysis is that investors have imperfect knowledge of the ability of the fund managers they invest with. They are uncertain whether underperformance against the benchmark arises from the manager's prudent avoidance of over-priced stocks or is a sign of incompetence. As shortfalls grow, investors conclude incompetence and react by transferring funds to the outperforming managers, thereby amplifying the price changes that led to the initial underperformance and generating momentum.1
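The feedback loop described here can be sketched as a toy simulation. This is not the Vayanos-Woolley model itself; it is a stylized illustration with invented parameters in which investors chase recent relative performance, and the resulting fund flows push prices further in the direction they just moved, producing positively autocorrelated returns, i.e. momentum.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200                   # periods
fair_value = 100.0
price = fair_value
flow_sensitivity = 0.5    # how strongly investors chase past performance (illustrative)
reversion = 0.05          # weak pull back toward fair value (illustrative)

prices = []
past_return = 0.0
for t in range(T):
    shock = rng.normal(0, 0.5)             # fundamental news
    # Investors reallocate toward whatever just outperformed; the flow
    # pushes the price further in the same direction as the last move.
    flow_pressure = flow_sensitivity * past_return
    new_price = price + shock + flow_pressure + reversion * (fair_value - price)
    past_return = new_price - price
    price = new_price
    prices.append(price)

returns = np.diff(prices)
# Positive first-order autocorrelation of returns is the signature of momentum;
# with flow_sensitivity = 0 it would be near zero.
autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"return autocorrelation: {autocorr:.2f}")
```

Setting `flow_sensitivity` to zero removes the agency-driven flows and the autocorrelation collapses toward zero, which is the sense in which the flows, not the fundamental shocks, generate the trending.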

The dot-com boom

The technology bubble ten years ago illustrates this well. Technology stocks received an initial boost from fanciful expectations of future profits from scientific advance. Meanwhile, funds invested in the unglamorous, value sectors languished, prompting investors to lose confidence in the ability of their underperforming value managers and switch funds to the newly successful growth managers, a response which gave a further boost to growth stocks. The same thing happened as value managers themselves began switching from value to growth stocks to avoid being fired.

Through this conceptually simple mechanism, the model explains asset pricing in terms of a battle between fair value and momentum. It shows how rational profit seeking by agents and the investors who appoint them gives rise to mispricing and volatility. Once momentum becomes embedded in markets, agents then logically respond by adopting strategies that are likely to reinforce the trends. Explaining the formation of asset pricing in this way seems to provide a clearer understanding of how and why investors and prices behave as they do. For example, it throws fresh light on why value stocks generally outperform growth stocks despite offering seemingly poorer earnings prospects. The new approach offers a more convincing interpretation of the way stock prices react to earnings announcements or other news. It also shows how short-term incentives, such as annual performance fees, cause fund managers to concentrate on high-turnover, trend-following strategies that add to the distortions in markets, which are then profitably exploited by long-horizon investors. At the level of national markets and entire asset classes, it will no longer be acceptable to say that competition delivers the right price or that the market exerts self-discipline.

More micro modelling of the financial sector

It seems self-evident that the way forward must be to stop treating the finance sector as a pass-through that has no impact on asset pricing and risk. Incorporating delegation and agency into financial models is bound to lead to a better understanding of phenomena that have so far been poorly understood or unaddressed. Because the new approach maintains the rationality assumption, it makes it possible to retain much of the economist's existing toolbox, such as mathematical modelling, utility maximisation and general equilibrium reasoning. The insights, elegance, and tractability that these tools provide will be used to study more complex phenomena with very different economic assumptions. The new general theory of asset pricing that eventually emerges should relegate the efficient market hypothesis to the status of special and limiting case.

Concluding remarks

Of course, investors may not always behave in a perfectly rational way. But that is beside the point. If this new approach meets the criteria of relevance, validity, and universality required of any new theory, then it provides a valuable starting point in understanding markets. Models based on irrational behaviour can always be helpful in offering supplementary or more detailed insights.

The impact of the new general theory will extend well beyond explaining asset prices and investors' actions.

  • Corporate finance and banking theory have both been developed under the pro-forma assumption of price efficiency and will now need to accommodate systematic mispricing.
  • Macroeconomics has also treated finance as a pass-through and would benefit from changing the economic emphasis and focusing more on the impact of agency and incentives in the savings and investment process.
  • In the context of the recent crisis, governments and regulators can only rebuild and re-regulate banking and finance successfully if they have a better idea of how crises form.
  • Finally, economists may start to ask questions about the social value of the finance sector, its size, and complexity – questions that could be conveniently brushed under the carpet given the prevailing paradigm of efficiency.


Fama, Eugene F. (1970), "Efficient Capital Markets: A Review of Theory and Empirical Work", The Journal of Finance, Vol. 25, No. 2, Papers and Proceedings of the Twenty-Eighth Annual Meeting of the American Finance Association

Fama, Eugene F. and Kenneth R. French (1993), "Common Risk Factors in the Returns on Stocks and Bonds", Journal of Financial Economics.

Vayanos, Dimitri and Paul Woolley (2008), "An Institutional Theory of Momentum and Reversal", The Paul Woolley Centre for the Study of Capital Market Dysfunctionality, Working Paper Series No. 1.


1. We show that as long as fund flows are gradual, as in the real world, price changes are also gradual. Intuitively, rational long-term investors are eager to buy an undervalued stock even when the stock is expected to become more undervalued in the future because of the risk that undervaluation might instead disappear. We term this the "bird in the hand" effect.
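The "bird in the hand" logic in the footnote can be illustrated with a hypothetical numerical example (the prices and probabilities below are invented for illustration, not taken from the paper): even when the undervaluation is expected to deepen, the chance that it instead disappears makes buying now attractive.

```python
price_now, fair = 80.0, 100.0   # stock trades 20% below fair value (hypothetical)
deeper = 70.0                   # price if the undervaluation worsens next period
p_correct = 0.5                 # chance the mispricing disappears next period

# Buy today: when the price eventually converges to fair value,
# the 25% gain from today's price is locked in.
ret_buy_now = fair / price_now - 1

# Wait one period: with probability p_correct the bargain is gone
# (buy at fair value, return 0); otherwise buy at the even lower price.
ret_wait = p_correct * 0.0 + (1 - p_correct) * (fair / deeper - 1)

print(ret_buy_now, ret_wait)   # 0.25 vs ~0.21: buying now wins in expectation
```

With these numbers the expected return from waiting (about 21%) falls short of the sure 25% from buying now, so a rational long-horizon investor steps in early, which is why fund flows, when gradual, translate into gradual rather than discontinuous price changes.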


Paul Krugman: The Politics of Spite

Why have Republicans positioned themselves as defenders of Medicare?:

The Politics of Spite, by Paul Krugman, Commentary, NY Times: There was what President Obama likes to call a teachable moment last week, when the International Olympic Committee rejected Chicago's bid to be host of the 2016 Summer Games.
"Cheers erupted" at the headquarters of the conservative Weekly Standard, according to a blog post by a member of the magazine's staff, with the headline "Obama loses! Obama loses!" Rush Limbaugh declared himself "gleeful." "World Rejects Obama," gloated the Drudge Report. And so on.
So what did we learn from this moment? For one thing, we learned that the modern conservative movement ... has the emotional maturity of a bratty 13-year-old.
But more important, the episode illustrated an essential truth...: the guiding principle of one of our nation's two great political parties is spite pure and simple. If Republicans think something might be good for the president, they're against it — whether or not it's good for America.
To be sure, while celebrating America's rebuff by the Olympic Committee was puerile, it didn't do any real harm. But the same principle of spite has determined Republican positions on more serious matters... — in particular, in the debate over health care reform. ...
The Republican ... line of attack [against health care reform] is the claim — based mainly on lies about death panels and so on — that reform will undermine Medicare. And this line of attack is utterly at odds both with the party's traditions and with what conservatives claim to believe.
Think about just how bizarre it is for Republicans to position themselves as the defenders of unrestricted Medicare spending. First of all, the modern G.O.P. considers itself the party of Ronald Reagan — and Reagan was a fierce opponent of Medicare's creation, warning that it would destroy American freedom. (Honest.) In the 1990s, Newt Gingrich tried to force drastic cuts in Medicare financing. And in recent years, Republicans have repeatedly decried the growth in entitlement spending — growth that is largely driven by rising health care costs.
But the Obama administration's plan to expand coverage relies in part on savings from Medicare. And since the G.O.P. opposes anything that might be good for Mr. Obama, it has become the passionate defender of ineffective medical procedures and overpayments to insurance companies. ...
The key point is that ever since the Reagan years, the Republican Party has been dominated by radicals — ideologues and/or apparatchiks who, at a fundamental level, do not accept anyone else's right to govern.
Anyone surprised by the venomous, over-the-top opposition to Mr. Obama must have forgotten the Clinton years. Remember when Rush Limbaugh suggested that Hillary Clinton was a party to murder? When Newt Gingrich shut down the federal government in an attempt to bully Bill Clinton into accepting those Medicare cuts? And let's not even talk about the impeachment saga.
The only difference now is that the G.O.P. is in a weaker position, having lost control not just of Congress but, to a large extent, of the terms of debate. The public no longer buys conservative ideology the way it used to; the old attacks on Big Government and paeans to the magic of the marketplace have lost their resonance. Yet conservatives retain their belief that they, and only they, should govern.
The result has been a cynical, ends-justify-the-means approach. Hastening the day when the rightful governing party returns to power is all that matters, so the G.O.P. will seize any club at hand with which to beat the current administration.
It's an ugly picture. But it's the truth. And it's a truth anyone trying to find solutions to America's real problems has to understand.

"Kuhn's Paradigm Shift"

Daniel Little discusses Thomas Kuhn's contributions to the philosophy of science:

Kuhn's paradigm shift, by Daniel Little: Thomas Kuhn's The Structure of Scientific Revolutions (1962) brought about a paradigm shift of its own, in the way that philosophers thought about science. The book was published in the Vienna Circle's International Encyclopedia of Unified Science in 1962. (See earlier posts on the Vienna Circle; post, post.) And almost immediately it stimulated a profound change in the fundamental questions that defined the philosophy of science. For one thing, it shifted the focus from the context of justification to the context of discovery. It legitimated the introduction of the study of the history of science into the philosophy of science -- and thereby also legitimated the perspective of sociological study of the actual practices of science. And it cast into doubt the most fundamental assumptions of positivism as a theory of how the science enterprise actually works.

And yet it also preserved an epistemological perspective. Kuhn forced us to ask questions about truth, justification, and conceptual discovery -- even as he provided a basis for being skeptical about the stronger claims for scientific rationality by positivists like Reichenbach and Carnap. And the framework threatened to lead to a kind of cognitive relativism: "truth" is relative to a set of extra-rational conventions of conceptual scheme and interpretation of data.

The main threads of Kuhn's approach to science are well known. Science really gets underway when a scientific tradition has succeeded in formulating a paradigm. A paradigm includes a diverse set of elements -- conceptual schemes, research techniques, bodies of accepted data and theory, and embedded criteria and processes for the validation of results. Paradigms are not subject to testing or justification; in fact, empirical procedures are embedded within paradigms. Paradigms are in some ways incommensurable -- Kuhn alluded to gestalt psychology to capture the idea that a paradigm structures our perceptions of the world. There are no crucial experiments -- instead, anomalies accumulate and eventually the advocates of an old paradigm die out and leave the field to practitioners of a new paradigm. Like Polanyi, Kuhn emphasizes the concrete practical knowledge that is a fundamental component of scientific education (post). By learning to use the instruments and perform the experiments, the budding scientist learns to see the world in a paradigm-specific way. (Alexander Bird provides a good essay on Kuhn in the Stanford Encyclopedia of Philosophy.)

A couple of questions are particularly interesting today, approaching fifty years after the writing of the book. One is the question of origins: where did Kuhn's basic intuitions come from? Was the idea of a paradigm a bolt from the blue, or was there a comprehensible line of intellectual development that led to it? There certainly was a strong tradition of study of the history of science from the late nineteenth century into the twentieth; but Kuhn was the first to bring this tradition into explicit dialogue with the philosophy of science.
Henri Poincaré (The Foundations of Science: Science and Hypothesis, The Value of Science, Science and Methods) and Pierre Duhem (The Aim and Structure of Physical Theory) are examples of thinkers who brought a knowledge of the history of science into their thinking about the logic of science. And Alexandre Koyré's studies of Galileo are relevant too (From the Closed World to the Infinite Universe); Koyré made plain the "revolutionary" character of Galileo's thought within the history of science. However, it appears that Kuhn's understanding of the history of science took shape through his own efforts to make sense of important episodes in the history of science while teaching in the General Education in Science curriculum at Harvard, rather than building on prior traditions.

Another question arises from the fact of its surprising publication in the Encyclopedia. The Encyclopedia project was a fundamental and deliberate expression of logical positivism. Structure of Scientific Revolutions, on the other hand, became one of the founding texts of anti-positivism. And this was apparent in the book from the start. So how did it come to be published here? (Michael Friedman takes up this subject in detail in "Kuhn and Logical Positivism" in Thomas Nickles, Thomas Kuhn (link).) George Reisch and Brazilian philosopher J. C. P. Oliveira address exactly this question. Oliveira offers an interesting discussion of the relationship between Kuhn and Carnap in an online article. He quotes crucial letters from Carnap to Kuhn in 1960 and 1962 about the publication of SSR in the Encyclopedia series. Carnap writes,

I believe that the planned monograph will be a valuable contri­bution to the Encyclopedia. I am myself very much interested in the problems which you intend to deal with, even though my knowledge of the history of science is rather fragmentary. Among many other items I liked your emphasis on the new conceptual frameworks which are proposed in revolutions in science, and, on their basis, the posing of new questions, not only answers to old problems. (REISCH 1991, p. 266)

I am convinced that your ideas will be very stimulating for all those who are interested in the nature of scientific theories and especially the causes and forms of their changes. I found very illuminating the parallel you draw with Darwinian evolution: just as Darwin gave up the earlier idea that the evolution was directed towards a predetermined goal, men as the perfect organism, and saw it as a process of improvement by natural selection, you emphasize that the development of theories is not directed toward the perfect true theory, but is a process of improvement of an instrument. In my own work on inductive logic in recent years I have come to a similar idea: that my work and that of a few friends in the step for step solution of problems should not be regarded as leading to "the ideal system", but rather as a step for step improvement of an instrument. Before I read your manuscript I would not have put it in just those words. But your formulations and clarifications by examples and also your analogy with Darwin's theory helped me to see clearer what I had in mind. From September on I shall be for a year at the Stanford Center. I hope that we shall have an opportunity to get together and talk about problems of common interest. (REISCH 1991, pp.266-267)
Against what he calls "revisionist" historians of the philosophy of science, Oliveira does not believe that SSR was accepted for publication by Carnap because Carnap or other late Vienna School philosophers believed there was a significant degree of agreement between Kuhn and Carnap. Instead, he argues that the Encyclopedia group believed that the history of science was an entirely separate subject from the philosophy of science. It was a valid subject of investigation, but had nothing to do with the logic of science. Oliveira writes,
Thus, the publication of Structure in Encyclopedia could be justified merely by the fact that the Encyclopedia project had already reserved space for it. Indeed, it is worth pointing out that the editors commissioned Kuhn's book as a work in history of science especially for publication in the Encyclopedia.
It is also interesting to consider where Kuhn's ideas went from here. How much influence did the theory have within philosophy? Certainly Kuhn had vast influence within the next generation of anti-positivist or post-positivist philosophy of science. And he had influence in fields very remote from philosophy as well. Paul Feyerabend was directly exposed to Kuhn at UCLA and picks up the anti-positivist thread in Against Method. Imre Lakatos introduces important alternatives to the concept of paradigm with his concept of a scientific research programme. Lakatos makes an effort to reintroduce rational standards into the task of paradigm choice through his idea of progressive problem shifts (The Methodology of Scientific Research Programmes: Volume 1: Philosophical Papers). An important volume involving Kuhn, Feyerabend, and Lakatos came directly out of a conference focused on Kuhn's work (Criticism and the Growth of Knowledge: Volume 4: Proceedings of the International Colloquium in the Philosophy of Science, London, 1965). Kuhn's ideas have had a very wide exposure within the philosophy of science; but as Alexander Bird notes in his essay in the Stanford Encyclopedia of Philosophy, there has not emerged a "school" of Kuhnian philosophy of science.

From the perspective of a half century, some of the most enduring questions raised by Kuhn are these:
  • What does the detailed study of the history of science tell us about scientific rationality?
  • To what extent is it true that scientific training inculcates adherence to a conceptual scheme and approach to the world that the scientist simply can't critically evaluate?
  • Does the concept of a scientific paradigm apply to other fields of knowledge? Do sociologists or art historians have paradigms in Kuhn's strong sense?
  • Is there a meta-theory of scientific rationality that permits scientists and philosophers to critically examine alternative paradigms?
  • And for the social sciences -- are Marxism, verstehen theory, or Parsonian sociology paradigms in the strong Kuhnian sense?
Perhaps the strongest legacy is this: Kuhn's work provides a compelling basis for thinking that we can do the philosophy of science best when we consider the real epistemic practices of working scientists carefully and critically. The history and sociology of science is indeed relevant to the epistemic concerns of the philosophy of science. And this is especially true in the case of the social sciences.

Reference

Reisch, George (1991), "Did Kuhn Kill Logical Empiricism?", Philosophy of Science, 58.

"The Medium-Term Outlook for the Recovery"

What happens to innovation, technology and growth during recessions? Are recessions temporary, or do they have a permanent impact on the trend rate of output? Antonio Fatás says "one cannot reject the hypothesis that all output fluctuations leave a permanent scar in the economy," but these questions deserve "more attention in terms of academic research":

More on the medium-term outlook for the recovery, by Antonio Fatás: The magazine The Economist has an article this week on the persistence of the current recession and whether output will return to its trend. The arguments that the article presents are similar to those made in Chapter 4 of the recent World Economic Outlook by the IMF (see our previous post on this matter): it is likely that the current recovery will not be strong enough to bring output back to trend. In a recent NBER working paper, Cecchetti, Kohler and Upper also provide empirical evidence suggesting that financial crises leave long-lasting (negative) effects on output.
The question of the connection between recessions (or business cycles in general) and potential output ("the trend") is one that has not been studied much in economics. Most of the models we use treat the trend as independent of business cycles, so recoveries always bring output back to the pre-crisis trend. Policymakers tend to use the concept of the output gap, the deviation of output from its potential, to think about the strength of the recovery, under the assumption that in a "normal" year the output gap should be back to zero.
The strongest evidence one can find in favor of this hypothesis (that recessions are temporary) comes from the US economy. The US economy has displayed a surprising tendency to return to trend even after major events such as the Great Depression, World War II, or the recessions of the 1970s. Below you can see a chart that shows the evolution of GDP per capita in the US during the period 1870-2008. The red line represents a (log-)linear trend using data up to 1928. It is remarkable how close the blue line is to the red line and how the economy recovers to return to trend.
In fact, using 1870-1928 data, a prediction based on that (log-)linear trend yields an error of only 1% for the level of GDP per capita in 2008. Of course, the picture is misleading in the sense that in some cases it took a long time for the economy to come back to this trend, but it is still interesting that it returned to the same trend. The economy could have returned to the same growth rate at a different level, but that is not what we see; the output loss is always recovered after a number of years. This suggests that the supply side of the economy (innovation, technology) is unaffected by output fluctuations.
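The exercise can be sketched in a few lines. The series below is simulated, not the actual 1870-2008 US data (the growth rate, starting level, and cycle parameters are all invented for illustration): generate exponential growth with trend-stationary cycles, fit a log-linear trend on the pre-1929 sample only, and extrapolate to 2008.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1870, 2009)
g = 0.019   # assumed trend growth rate of GDP per capita (illustrative)

# Trend-stationary cycle: deviations from trend decay over time
cycle = np.zeros(len(years))
for t in range(1, len(years)):
    cycle[t] = 0.8 * cycle[t - 1] + rng.normal(0, 0.015)

log_gdp = np.log(3000) + g * (years - 1870) + cycle   # hypothetical series

# Fit a (log-)linear trend on the pre-1929 sample only
pre = years <= 1928
slope, intercept = np.polyfit(years[pre], log_gdp[pre], 1)

# Extrapolate the pre-1929 trend out to 2008 and compare with the "actual" value
pred_2008 = intercept + slope * 2008
error_pct = 100 * (np.exp(log_gdp[-1] - pred_2008) - 1)
print(f"estimated trend growth: {slope:.3f}, 2008 prediction error: {error_pct:.1f}%")
```

Because the simulated cycles are trend-stationary, the pre-1929 trend extrapolated eighty years ahead lands close to the series, which is the pattern Fatás describes for the real data (where the reported error is about 1%).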
If one looks more carefully at the data, the evidence becomes much weaker. In contradiction to what we see in the picture above, empirical economists know that output fluctuations are very persistent. In fact, one cannot reject the hypothesis that all output fluctuations leave a permanent scar in the economy. If we suffer a recession, output never goes back to trend, it remains at a lower level forever (this is what is known in the academic literature as the existence of a "unit root" in output).
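The distinction between a trend-stationary economy and one with a unit root can be made concrete with a deterministic toy example (parameters invented for illustration): hit two otherwise identical economies with the same one-time negative shock and compare where they end up relative to trend.

```python
import numpy as np

T = 400
shock_at, shock_size = 100, -5.0   # one-time negative shock (illustrative units)

def simulate(rho):
    """Log output = linear trend + deviation; rho < 1 means deviations
    decay (trend-stationary), rho = 1 means they persist (unit root)."""
    dev = 0.0
    path = []
    for t in range(T):
        eps = shock_size if t == shock_at else 0.0
        dev = rho * dev + eps          # no other noise, to isolate the shock
        path.append(0.02 * t + dev)    # 2% trend plus deviation
    return np.array(path)

stationary = simulate(rho=0.9)
unit_root = simulate(rho=1.0)

trend = 0.02 * np.arange(T)
gap_stationary = stationary[-1] - trend[-1]   # ~0: output returns to trend
gap_unit_root = unit_root[-1] - trend[-1]     # -5: the scar is permanent
print(gap_stationary, gap_unit_root)
```

In the unit-root case the economy resumes its normal growth rate but never recovers the lost level, which is exactly the "permanent scar" that the empirical evidence cannot rule out.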
From a theoretical point of view, there are two ways to justify the fact that recessions always leave permanent effects:
1. Technological changes are the cause of business cycles. Recessions are periods in which we are not good at innovating, and this causes both a recession and a permanent loss in output. This is what we know as "real business cycle theory".
2. Innovation is affected by recessions. During recessions firms invest less, and this leads to a temporary slowdown of technological progress, so the economy never returns to the same trend. It will go back to its normal growth rate, but the temporary effects on growth will leave a permanent scar on the economy. This is the argument we hear these days to support the fear that the current recovery will not be strong enough. A few years ago I wrote a couple of academic papers that presented this theory and some international evidence in favor of this hypothesis (the papers can be found here and here). This is an area of macroeconomics that I believe deserves more attention in terms of academic research (but I am biased, given that I have written on the subject). And it is not just about financial crises but, more generally, about what happens to innovation, technology and growth during recessions.

links for 2009-10-04
