This site has moved to
The posts below are backup copies from the new site.

August 7, 2012

Latest Posts from Economist's View


Posted: 12 Jul 2012 12:06 AM PDT
Posted: 11 Jul 2012 03:24 PM PDT
Tim Duy:
FOMC Minutes Not a Smoking Gun, by Tim Duy: The minutes of the June FOMC meeting are out, and they did not deliver the much-anticipated smoking gun that would indicate QE3 was on its way. In fact, I think the minutes raise questions about whether another round of quantitative easing is coming at all. The minutes hold many hints that policymakers are struggling to find a new direction for policy, one not necessarily dependent on balance sheet expansion.
There will be considerable focus on this section:
A few members expressed the view that further policy stimulus likely would be necessary to promote satisfactory growth in employment and to ensure that the inflation rate would be at the Committee's goal. Several others noted that additional policy action could be warranted if the economic recovery were to lose momentum, if the downside risks to the forecast became sufficiently pronounced, or if inflation seemed likely to run persistently below the Committee's longer-run objective.
We already knew that there exists a contingent seeking additional stimulus. And we suspected that there was a sizable middle ground that could be turned to the cause of additional stimulus. But the conditions that they place on further action - lost momentum, downside risks, or low inflation - all seemed to have been met at the last meeting. Indeed, the inflation clause is especially curious because the minutes earlier noted:
Looking beyond the temporary effects on inflation of this year's fluctuations in oil and other commodity prices, almost all participants continued to anticipate that inflation over the medium-term would run at or below the 2 percent rate that the Committee judges to be most consistent with its statutory mandate.
If everyone agrees they are going to miss their target, and everyone agrees that missing the target should be cause for action, then why was action limited to simply maintaining the status quo? (See also Matthew Yglesias). I can find two reasons in the minutes to withhold more aggressive action at this time. First is uncertainty about the inflation forecast:
Most participants viewed the risks to their inflation outlook as being roughly balanced.
Maybe they need most participants to view the risks as skewed toward even lower inflation. Another possibility is that officials simply are wary about the impact of further quantitative easing at this juncture and do not want to take action that does more harm than good. And how might it do such harm?
Some members noted the risk that continued purchases of longer-term Treasury securities could, at some point, lead to deterioration in the functioning of the Treasury securities market that could undermine the intended effects of the policy. However, members generally agreed that such risks seemed low at present, and were outweighed by the expected benefits of the action.
And this:
A few members observed that it would be helpful to have a better understanding of how large the Federal Reserve's asset purchases would have to be to cause a meaningful deterioration in securities market functioning, and of the potential costs of such deterioration for the economy as a whole.
And finally, perhaps most importantly, this:
Several participants commented that it would be desirable to explore the possibility of developing new tools to promote more-accommodative financial conditions and thereby support a stronger economic recovery.
I noted after the last meeting that the "further action" clause of the statement did not specify balance sheet operations as had previous statements. This possibly signaled that the next round of easing could come in a different form. What form I do not know; I would think MBS, but that would also act via the balance sheet. Similarly, we saw comments from Fed officials that further action could come in the form of communications, presumably with respect to the forward guidance.
Bottom Line: The minutes leave me with the sense that it isn't so much the outlook that is holding back the Fed from further stimulus, but a lack of faith in the beneficial effects of further quantitative easing. That lack of faith may be why the bar to QE3 seems so high. So high that Fed officials are searching for other tools as the next step. Until I see more specific suggestions of other tools, I would continue to expect QE as the tool of choice. Given concerns about the functioning of the Treasuries market, MBS would seem a suitable alternative. But a building desire to explore new tools could mean a delay in any additional action. Hopefully Fed officials will give us more guidance on specific alternatives in the weeks ahead.
Posted: 11 Jul 2012 11:43 AM PDT
Stephen Ziliak, via email:
Dear Mark,
Does graphing improve prediction and increase understanding of uncertainty? When making economic forecasts, are scatter plots better than t-statistics, p-values, and other commonly required regression output?
A recent paper by Emre Soyer and Robin Hogarth suggests the answers are yes, that in fact we are far better forecasters when staring at plots of data than we are when dishing out – as academic journals normally do – tables of statistical significance. [Here is a downloadable version of the Soyer-Hogarth article.]
"The Illusion of Predictability: How Regression Statistics Mislead Experts" was published by Soyer and Hogarth in a symposium of the International Journal of Forecasting (vol. 28, no. 3, July 2012). The symposium includes published comments by J. Scott Armstrong, Daniel Goldstein, Keith Ord, Nassim Nicholas Taleb, and me, together with a reply from Soyer and Hogarth.
Soyer and Hogarth performed an experiment on the forecasting ability of more than 200 well-published econometricians worldwide to test their ability to predict economic outcomes using conventional outputs of linear regression analysis: standard errors, t-statistics, and R-squared.
The chief finding of the Soyer-Hogarth experiment is that the expert econometricians themselves—our best number crunchers—make better predictions when only graphical information—such as a scatter plot and theoretical linear regression line—is provided to them. Give them t-statistics and fits of R-squared for the same data and regression model and their forecasting ability declines. Give them only t-statistics and fits of R-squared and predictions fall from bad to worse.
It's a finding that hits you between the eyes, or should. R-squared, the primary indicator of model fit, and the t-statistic, the primary indicator of coefficient fit, are evidently doing more harm than good in the leading journals of economics - such as the AER, QJE, JPE, and RES.
Soyer and Hogarth find that conventional presentation mode actually damages inferences from models. This harms decision-making by reducing the econometrician's (and profit seeker's) understanding of the total error of the experiment—or of what might be called the real standard error of the regression, where "real" is defined as the sum (in percentage terms, say) of both systematic and random sources of uncertainty in the whole model. If Soyer and Hogarth are correct, academic journals should allocate more space to visual plots of data and less to tables of statistical significance.
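The gap Soyer and Hogarth highlight can be sketched in a few lines of code. The simulation below is my own, not data from their experiment: it shows how a slope can look "highly significant" by its t-statistic while the regression predicts very little, because the residual standard deviation — the uncertainty that matters for forecasting — dwarfs the estimated effect.

```python
# My own simulation (not Soyer-Hogarth data): a "significant" slope
# coexisting with weak predictive power.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=2.0, size=n)  # true slope 0.5, lots of noise

# OLS with an intercept, computed by hand
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma = np.sqrt(resid @ resid / (n - 2))               # residual std. dev.
se_slope = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))
t_stat = beta[1] / se_slope
r2 = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)

print(f"t-statistic of slope: {t_stat:.1f}")  # comfortably "significant"
print(f"R-squared:            {r2:.3f}")      # yet the fit is poor
print(f"residual std. dev.:   {sigma:.2f}")   # prediction noise ~4x the slope
```

A scatter plot of x against y makes the predictive weakness obvious at a glance; the table of t-statistics alone invites exactly the overconfidence the experiment documents.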
In the blogosphere the statistician Andrew Gelman, INET's Robert Johnson, and journalists Justin Fox (Harvard Business Review) and Felix Salmon (Reuters) have commented favorably on Soyer and Hogarth's striking results.
But historians of economics and statistics, joined by scientists in other fields – engineering and physics, for example – will not be surprised by the power of visualizing uncertainty. As I explain in my published comment, Karl Pearson himself—a founding father of English-language statistics—tried beginning in the 1890s to make "graphing" the foundation of statistical method. Leading economists of the day such as Francis Edgeworth and Alfred Marshall sympathized strongly with the visual approach.
And as Keynes (1937, QJE) observed, in economics "there is often no scientific basis on which to form any calculable probability whatever. We simply do not know." Examples of variables we do not know well enough to forecast include, he said, "the obsolescence of a new invention", "the price of copper" and "the rate of interest twenty years hence" (Keynes, p. 214).
That sounds about right - despite currently fashionable claims about the role of statistical significance in finding a Higgs boson. Unfortunately, Soyer and Hogarth did not include time series forecasting in their novel experiment though in future work I suspect they and others will.
But with extremely powerful, dynamic, and high-dimensional visualization software such as "GGobi" – which works with R and is currently available for free on-line – economists can join engineers and rocket scientists and do a lot more gazing at data than we currently do.
At least, that is, if our goal is to improve decisions and to identify relationships that hit us between the eyes.
Kind regards,
Stephen T. Ziliak
Professor of Economics
Roosevelt University
Posted: 11 Jul 2012 09:18 AM PDT
Simon Wren-Lewis:
Crisis, what crisis? Arrogance and self-satisfaction among macroeconomists, by Simon Wren-Lewis: My recent post on economics teaching has clearly upset a number of bloggers. There I argued that the recent crisis has not led to a fundamental rethink of macroeconomics. Mainstream macroeconomics has not decided that the Great Recession implies that some chunk of what we used to teach is clearly wrong and should be jettisoned as a result. To some that seems self-satisfied, arrogant and profoundly wrong. ...
Let me be absolutely clear that I am not saying that macroeconomics has nothing to learn from the financial crisis. What I am suggesting is that when those lessons have been learnt, the basics of the macroeconomics we teach will still be there. For example, it may be that we need to endogenise the difference between the interest rate set by monetary policy and the interest rate actually paid by firms and consumers, relating it to asset prices that move with the cycle. But if that is the case, this will build on our current theories of the business cycle. Concepts like aggregate demand, and within the mainstream, the natural rate, will not disappear. We clearly need to take default risk more seriously, and this may lead to more use of models with multiple equilibria (as suggested by Chatelain and Ralf, for example). However, this must surely use the intertemporal optimising framework that is the heart of modern macro.
Why do I want to say this? Because what we already have in macro remains important, valid and useful. What I see happening today is a struggle between those who want to use what we have, and those that want to deny its applicability to the current crisis. What we already have was used (imperfectly, of course) when the financial crisis hit, and analysis clearly suggests this helped mitigate the recession. Since 2010 these positive responses have been reversed, with policymakers around the world using ideas that contradict basic macro theory, like expansionary austerity. In addition, monetary policy makers appear to be misunderstanding ideas that are part of that theory, like credibility. In this context, saying that macro is all wrong and we need to start again is not helpful.
I also think there is a danger in the idea that the financial crisis might have been avoided if only we had better technical tools at our disposal. (I should add that this is not a mistake most heterodox economists would make.) ... The financial crisis itself is not a deeply mysterious event. Look now at the data on leverage that we had at the time, but too few people looked at before the crisis, and the immediate reaction has to be that this cannot go on. So the interesting question for me is how those that did look at this data managed to convince themselves that, to use the title from Reinhart and Rogoff's book, this time was different.
One answer was that they were convinced by economic theory that turned out to be wrong. But it was not traditional macro theory – it was theories from financial economics. And I'm sure many financial economists would argue that those theories were misapplied. Like confusing new techniques for handling idiosyncratic risk with the problem of systemic risk, for example. Believing that evidence of arbitrage also meant that fundamentals were correctly perceived. In retrospect, we can see why those ideas were wrong using the economics toolkit we already have. So why was that not recognised at the time? I think the key to answering this does not lie in any exciting new technique from physics or elsewhere, but in political science.
To understand why regulators and others missed the crisis, I think we need to recognise the political environment at the time, which includes the influence of the financial sector itself. And I fear that the academic sector was not exactly innocent in this either. A simplistic take on economic theory (mostly micro theory rather than macro) became an excuse for rent seeking. The really big question of the day is not what is wrong with macro, but why has the financial sector grown so rapidly over the last decade or so. Did innovation and deregulation in that sector add to social welfare, or make it easier for that sector to extract surplus from the rest of the economy? And why are there so few economists trying to answer that question?
I have so many posts on the state of modern macro that it's hard to know where to begin, but here's a pretty good summary of my views on this particular topic:
I agree that the current macroeconomic models are unsatisfactory. The question is whether they can be fixed, or if it will be necessary to abandon them altogether. I am okay with seeing if they can be fixed before moving on. It's a step that's necessary in any case. People will resist moving on until they know this framework is a dead end, so the sooner we come to a conclusion about that, the better.
As just one example, modern macroeconomic models do not generally connect the real and the financial sectors. That is, in standard versions of the modern model linkages between the disintegration of financial intermediation and the real economy are missing. Since these linkages provide an important transmission mechanism whereby shocks in the financial sector can affect the real economy, and these are absent from models such as Eggertsson and Woodford, how much credence should I give the results? Even the financial accelerator models (which were largely abandoned because they did not appear to be empirically powerful, and hence were not part of the standard model) do not fully link these sectors in a satisfactory way, yet these connections are crucial in understanding why the crash caused such large economic effects, and how policy can be used to offset them. [e.g. see Woodford's comments, "recent events have made it clear that financial issues need to be integrated much more thoroughly into the basic framework for macroeconomic analysis with which students are provided."]
There are many technical difficulties with connecting the real and the financial sectors. Again, to highlight just one aspect of a much, much larger list of issues that will need to be addressed, modern models assume a representative agent. This assumption overcomes difficult problems associated with aggregating individual agents into macroeconomic aggregates. When this assumption is dropped it becomes very difficult to maintain adequate microeconomic foundations for macroeconomic models (setting aside the debate over the importance of doing this). But representative (single) agent models don't work very well as models of financial markets. Identical agents with identical information and identical outlooks have no motivation to trade financial assets (I sell because I think the price is going down, you buy because you think it's going up; with identical forecasts, the motivation to trade disappears). There needs to be some type of heterogeneity in the model, even if just over information sets, and that causes the technical difficulties associated with aggregation. However, with that said, there have already been important inroads into constructing these models (e.g. see Rajiv Sethi's discussion of John Geanakoplos' Leverage Cycles). So while I'm pessimistic, it's possible this and other problems will be overcome.
But there's no reason to wait until we know for sure if the current framework can be salvaged before starting the attempt to build a better model within an entirely different framework. Both can go on at the same time. What I hope will happen is that some macroeconomists will show more humility than they've shown to date. That they will finally accept that the present model has large shortcomings that will need to be overcome before it will be as useful as we'd like. I hope that they will admit that it's not at all clear that we can fix the model's problems, and realize that some people have good reason to investigate alternatives to the standard model. The advancement of economics is best served when alternatives are developed and issued as challenges to the dominant theoretical framework, and there's no reason to deride those who choose to do this important work.
So, in answer to those who objected to my defending modern macro, you are partly right. I do think the tools and techniques macroeconomists use have value, and that the standard macro model in use today represents progress. But I also think the standard macro model used for policy analysis, the New Keynesian model, is unsatisfactory in many ways and I'm not sure it can be fixed. Maybe it can, but that's not at all clear to me. In any case, in my opinion the people who have strong, knee-jerk reactions whenever someone challenges the standard model in use today are the ones standing in the way of progress. It's fine to respond academically, a contest between the old and the new is exactly what we need to have, but the debate needs to be over ideas rather than an attack on the people issuing the challenges.
This post of an email from Mark Gertler in July 2009 argues that modern macro has been mis-characterized:
The current crisis has naturally led to scrutiny of the economics profession. The intensity of this scrutiny ratcheted up a notch with the Economist's interesting cover story this week on the state of academic economics.
I think some of the criticism has been fair. The Great Moderation gave many in the profession the false sense that we had handled the problem of the business cycle as well as we could. Traditional applied macroeconomic research on booms and busts and macroeconomic policy fell into something of a second class status within the field in favor of more exotic topics.
At the same time, from the discussion thus far, I don't think the public is getting the full picture of what has been going on in the profession. From my vantage, there has been lots of high quality "middle ground" modern macroeconomic research that has been relevant to understanding and addressing the current crisis.
Here I think, though, that both the mainstream media and the blogosphere have been confusing a failure to anticipate the crisis with a failure to have the research available to comprehend it. Predicting the crisis would have required foreseeing the risks posed by the shadow banking system, which were missed not only by academic economists, but by just about everyone else on the planet (including the ratings agencies!).
But once the crisis hit, broadly speaking, policy-makers at the Federal Reserve made use of academic research on financial crises to help diagnose the situation and design the policy response. Research on monetary and fiscal policy when the nominal interest is at the zero lower bound has also been relevant. Quantitative macro models that incorporate financial factors, which existed well before the crisis, are rapidly being updated in light of new insights from the unfolding of recent events. Work on fiscal policy, which admittedly had been somewhat dormant, is now proceeding at a rapid pace.
Bottom line: As happened in both the wake of the Great Depression and the Great Stagflation, economic research is responding. In this case, the time lag will be much shorter given the existing base of work to build on. Revealed preference confirms that we still have something useful to offer: Demand for our services by the ultimate consumers of modern applied macro research – policy makers and staff at central banks – seems to be higher than ever.
Mark Gertler,
Henry and Lucy Moses Professor of Economics
New York University
[I ... also posted a link to his Mini-Course, "Incorporating Financial Factors Within Macroeconomic Modelling and Policy Analysis"... This course looks at recent work on integrating financial factors into macro modeling, and is a partial rebuttal to the assertion above that New Keynesian models do not have mechanisms built into them that can explain the financial crisis. ...]
Again, it wasn't the tools and techniques we use; we were asking the wrong questions. As I've argued many times, we were trying to explain normal times, the Great Moderation. Many (e.g. Lucas) thought the problem of depressions due to, say, a breakdown in the financial sector had been solved, so why waste time on those questions? Stabilization policy was passé, and we should focus on growth instead. So, I would agree with Simon Wren-Lewis that "we need to recognise the political environment at the time." But as I argued in The Economist, we also have to think about the sociology within the profession that worked against the pursuit of these ideas.
Perhaps Ricardo Caballero says it better, so let me turn it over to him. From a post in late 2010:
Caballero says we should be in "broad-exploration" mode. I can hardly disagree since that's what I meant when I said "While I think we should see if the current models and tools can be amended appropriately to capture financial crises such as the one we just had, I am not as sure as [Bernanke] is that this will be successful and I'd like to see [more] openness within the profession to a simultaneous investigation of alternatives."
Here's a bit more from the introduction to the paper:
The recent financial crisis has damaged the reputation of macroeconomics, largely for its inability to predict the impending financial and economic crisis. To be honest, this inability to predict does not concern me much. It is almost tautological that severe crises are essentially unpredictable, for otherwise they would not cause such a high degree of distress... What does concern me of my discipline, however, is that its current core—by which I mainly mean the so-called dynamic stochastic general equilibrium approach—has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one. ...
To be fair to our field, an enormous amount of work at the intersection of macroeconomics and corporate finance has been chasing many of the issues that played a central role during the current crisis, including liquidity evaporation, collateral shortages, bubbles, crises, panics, fire sales, risk-shifting, contagion, and the like.1 However, much of this literature belongs to the periphery of macroeconomics rather than to its core. Is the solution then to replace the current core for the periphery? I am tempted—but I think this would address only some of our problems. The dynamic stochastic general equilibrium strategy is so attractive, and even plain addictive, because it allows one to generate impulse responses that can be fully described in terms of seemingly scientific statements. The model is an irresistible snake-charmer. In contrast, the periphery is not nearly as ambitious, and it provides mostly qualitative insights. So we are left with the tension between a type of answer to which we aspire but that has limited connection with reality (the core) and more sensible but incomplete answers (the periphery).
This distinction between core and periphery is not a matter of freshwater versus saltwater economics. Both the real business cycle approach and its New Keynesian counterpart belong to the core. ...
I cannot be sure that shifting resources from the current core to the periphery and focusing on the effects of (very) limited knowledge on our modeling strategy and on the actions of the economic agents we are supposed to model is the best next step. However, I am almost certain that if the goal of macroeconomics is to provide formal frameworks to address real economic problems rather than purely literature-driven ones, we better start trying something new rather soon. The alternative of segmenting, with academic macroeconomics playing its internal games and leaving the real world problems mostly to informal commentators and "policy" discussions, is not very attractive either, for the latter often suffer from an even deeper pretense-of-knowledge syndrome than do academic macroeconomists. ...
My main message is that yes, we need to push the DSGE structure as far as we can and see if it can be satisfactorily amended. Ask the right questions, and use the tools and techniques associated with modern macro to build the right models. But it's not at all clear that the DSGE methodology is up to the task, so let's not close our eyes to -- or, worse, actively block -- the search for alternative theoretical structures.
Posted: 11 Jul 2012 07:29 AM PDT
Peter Dorman:
Pay for Oppression: Do Workers in Fairer or Safer Jobs Make Less Money?, econospeak: The debate over worker rights on the job has taken an interesting turn with Tyler Cowen's defense of the compensating wage differential argument. This, for those who have not run into it before, says that workers, whether through bargaining or the labor market, get a certain amount of utility. They can take that utility in money or in better working conditions, but the sum has to add up the same. This means that workers in safe jobs, all other things being equal, will make less than workers in dangerous ones; workers subject to bosses' sexual advances will make more than those in more respectful organizations; and so on. There are four conclusions you would draw from this if it were true:
1. Dangerous or demeaning work is not a problem per se. The workers in those jobs are just as well off as they would be in a better setting. Don't worry about them.
2. Employers have a financial incentive to make jobs better—safer, fairer, etc. That way they have to shell out less cash. In fact, they have just the right incentive, the amount of money workers are willing to give up in order to get better treatment.
3. Regulation can't make anything better, but it can make things worse by taking away an option that would otherwise be available to workers. Some workers would rather have the money and put up with the dangers and indignities of a crummy workplace.
4. You can measure the monetary value of such intangibles as the value of life, the value of not being harassed, of being able to pee when you want, etc. It's simply the difference in wages between jobs that offer more versus less, scaled by how much change in risk, harassment or whatever the worker is being compensated for.
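The arithmetic behind point 4 can be made concrete. The figures below are hypothetical, chosen only to illustrate how a "value of a statistical life" (VSL) is backed out of a wage-risk differential:

```python
# Hypothetical numbers, chosen only to illustrate the arithmetic.
risky_wage = 52_000              # annual wage in the riskier job
safe_wage = 50_000               # annual wage in an otherwise-identical safe job
extra_fatality_risk = 1 / 4000   # added annual death risk in the risky job

# Each worker accepts $2,000 for a 1-in-4,000 extra risk, so 4,000 such
# workers collectively absorb one expected death for $8,000,000 -- the
# implied "value of a statistical life."
vsl = (risky_wage - safe_wage) / extra_fatality_risk
print(f"implied VSL: ${vsl:,.0f}")  # → $8,000,000
```

The whole construction stands or falls with the premise that the $2,000 gap really is compensation for the risk and nothing else, which is exactly what the rest of the post disputes.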
Now it happens that this is a topic I know something about. In fact, I wrote the book on it. (OK, a book.) For the full story, read the book. Here I will make a few brief comments about the evidence.
I agree that fringe benefits that are essentially monetary in character, such as pensions and insurance, are subject to this sort of process. Workers really do trade money in one form for money in another, and unions bargain explicitly over this. It is also true that public insurance, like workers comp, is largely financed out of wages too, no matter how the laws are written.
It is not true that nonmonetary costs and benefits of work are compensated—not fully at any rate, and sometimes not at all. For an empirical demonstration, see this old study I wrote with Paul Hagstrom that has, to my knowledge, never been rebutted. Tyler links to a slightly less old Viscusi/Aldy lit review that cherry-picks shamelessly, not only in its selection of what counts as a valid study, but (especially) in which results of the authors they choose to report.
For those who don't want to go to the sources, here are the two fundamental empirical problems with studies that claim to show wage compensation for things like occupational safety: (1) They use industry-based measures of which jobs are risky, but they ignore all the other industry-level determinants of wages, like concentration, capital intensity and percent unionized (usually). (2) They show signs of potential publication bias, where specifications are selected that yield the "right" result. What signs? They don't provide summary data on the full range of specifications tested ("taking the con out of econometrics"), so that the reader can determine whether the reported specifications are outliers or not, and the results are hardly ever tested on subsamples. Think for a moment about this last point: if there are compensating wage differentials for women exposed to sexual harassment, this should apply to black women and white women, unionized women and nonunion women, women in blue collar jobs, white collar and pink collar, and other plausible breakouts. If not, you would need to have a story to explain why some get it and not others. (Or why some subsamples even have exacerbating differentials, which I found with occupational safety—the "sweatshop effect".) There is a small literature on subsamples in wage-risk regressions, and they are all over the map.
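Point (1), the omitted industry-level determinants, can be illustrated with a small simulation of my own (the numbers are invented, not taken from any study). When riskier industries also differ in an omitted wage determinant such as unionization, the naive wage-on-risk regression is badly biased and can even flip sign, echoing the "sweatshop effect":

```python
# Invented simulation of omitted-variable bias in a wage-risk regression.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
union = rng.binomial(1, 0.5, size=n)                 # omitted industry trait
# riskier industries here are also less unionized
risk = rng.normal(loc=1.0 - 0.5 * union, scale=0.3)
true_premium = 0.05                                  # true compensating differential
log_wage = (10.0 + true_premium * risk + 0.2 * union
            + rng.normal(scale=0.1, size=n))

def ols(cols, y):
    """OLS coefficients, with an intercept prepended as column 0."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols([risk], log_wage)[1]               # omits unionization
controlled = ols([risk, union], log_wage)[1]   # includes it

print(f"true premium:       {true_premium:.3f}")
print(f"naive estimate:     {naive:.3f}")      # biased -- here it goes negative
print(f"with union control: {controlled:.3f}") # close to the truth
```

The naive regression attributes the union wage premium to (low) risk and gets the compensating differential wrong in both size and sign; controlling for the industry trait recovers it.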
Note that the Joni Hersch study Tyler links to is vulnerable on all these counts. It doesn't consider interindustry differentials. There is only one specification reported. No subsamples. Not convincing.
The bottom line is that skeptics have every reason to remain skeptical. Actually, a world of compensating wage differentials would be better than the one we live in. Some jobs are unavoidably difficult, dangerous or unpleasant, and they should pay more. But there are also human rights, like freedom from abuse and freedom of expression, that shouldn't be for sale, even when you're on the clock.
Posted: 11 Jul 2012 07:11 AM PDT
Back to the troubles in Europe:
Europe's Divided Visionaries, by Barry Eichengreen, Commentary, Project Syndicate: ...Europe's leaders now agree on a vision of what the EU should become: an economic and monetary union complemented by a banking union, a fiscal union, and a political union. ...
Vision aplenty. The problem is that there are two diametrically opposed approaches to implementing it. One strategy assumes that Europe ... cannot wait to inject capital into the banks. It must take immediate steps toward debt mutualization. It needs either the ECB or an expanded European Stability Mechanism to purchase distressed governments' bonds today.   
Over time, according to this view, Europe could build the institutions needed to complement these policies. ... The other view is that to proceed with the new policies before the new institutions are in place would be reckless. ...
Europe has been here before – in the 1990's, when the decision was taken to establish the euro. At that time, there were two schools of thought. One camp argued that it would be reckless to create a monetary union before economic policies had converged and institutional reforms were complete.
The other school, by contrast, worried that the existing monetary system was rigid, brittle, and prone to crisis. ... At the slight risk of overgeneralization, one can say that the first camp was made up mainly of northern Europeans, while the second was dominated by the south.
The 1992 exchange-rate crisis then tipped the balance. Once Europe's exchange-rate system blew up, the southerners' argument that Europe could not afford to postpone creating the euro carried the day.
The consequences have not been happy. Monetary union without banking, fiscal, and political union has been a disaster. But not proceeding would also have been a disaster. The 1992 crisis proved that the existing system was unstable. ...
Not proceeding now with bank recapitalization and government bond purchases would similarly lead to disaster. Europe thus finds itself in a familiar bind. The only way out is to accelerate the institution-building process significantly. Doing so will not be easy. But disaster does not wait.
Posted: 11 Jul 2012 06:39 AM PDT
Floor time in Congress is scarce and valuable. So why are House Republicans wasting time with a bill they know will be vetoed when there are much bigger problems to address, the unemployment crisis, for example? You'd think that politics is more important than the struggles of the unemployed:
House GOP set for health care law repeal vote, but offering no alternatives, CBS News: House Republicans generally avoided talk of replacement measures on Tuesday as they mobilized for an election-season vote to repeal the health care law that stands as President Barack Obama's signature domestic accomplishment. ... [T]he repeal vote ... will lead to nothing as the Democratic Senate won't consider it, and even if the House and Senate somehow agreed to repeal the law, Mr. Obama has the ultimate say with his veto pen. ...
I understand what's going on, but it's still disappointing to see the unemployed fall by the wayside.
Posted: 11 Jul 2012 05:04 AM PDT
A puzzle from the NY Fed. Why do stock returns begin to rise prior to FOMC announcements? That is, "equity valuations tend to rise in the afternoon of the day before FOMC announcements and rise even more sharply on the morning of FOMC announcement days." The rise is substantial, large enough to account for more than 80% of the equity premium. Thus, just before FOMC meetings investors seem, on average, to expect that the Fed will raise equity values. Is this related to (expectations of) the "Greenspan/Bernanke put"? Or is something else at work?
The Puzzling Pre-FOMC Announcement "Drift", by David Lucca and Emanuel Moench, Liberty Street Economics: For many years, economists have struggled to explain the "equity premium puzzle"—the fact that the average return on stocks is larger than what would be expected to compensate for their riskiness. In this post, which draws on our recent New York Fed staff report, we deepen the puzzle further. We show that since 1994, more than 80 percent of the equity premium on U.S. stocks has been earned over the twenty-four hours preceding scheduled Federal Open Market Committee (FOMC) announcements (which occur only eight times a year)—a phenomenon we call the pre-FOMC announcement "drift."
The equity premium is usually measured as the difference between the average return on the stock market and the yield on short-term government bonds. Previous research on the size of the premium finds that it is too large for plausible levels of risk aversion (see Mehra [2008] for a review).
The Drift: A First Take
The pre-FOMC announcement drift is best summarized in the chart below, which provides two main takeaways:
  1. Since 1994, there has been a large and statistically significant excess return on equities on days of scheduled FOMC announcements.
  2. This return is earned ahead of the announcement, so it is not related to the immediate realization of monetary policy actions.
The chart shows average cumulative returns on the S&P 500 stock market index over different three-day windows. The solid black line displays the average cumulative return starting at the market's opening on the day before each scheduled FOMC announcement to the market's close on the day after each announcement. Our sample period starts in 1994, when the Federal Reserve began announcing its target for the federal funds rate regularly at around 2:15 p.m., and ends in 2011. (For a list of announcement dates, see the FOMC calendars.) The shaded blue area displays the 95 percent confidence intervals around the average cumulative returns—a measure of statistical uncertainty around the average return. We see from the chart that equity valuations tend to rise in the afternoon of the day before FOMC announcements and rise even more sharply on the morning of FOMC announcement days. The vertical red line indicates 2:15 p.m. Eastern time (ET), which is when the FOMC statement is typically released. Following the announcement, equity prices may fluctuate widely, but on balance, they end the day at about their 2 p.m. level, 50 basis points higher than when the market opened on the day before the FOMC announcement.
How do these returns compare with returns on all other days over the sample period? The dashed black line, which represents the average cumulative return over all other three-day windows, shows that returns hover around zero. This implies that since 1994, returns are essentially flat if the three-day windows around scheduled FOMC announcement days are excluded.
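The window construction behind the chart can be sketched with made-up data. Everything below — the return series, the stand-in FOMC calendar, and the injected pre-announcement drift — is an illustrative assumption, not the authors' actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily close-to-close returns (decimal), roughly 1994-2011
n_days = 4500
returns = rng.normal(0.0, 0.01, n_days)

# Hypothetical scheduled FOMC announcement days (stand-in for the real calendar)
fomc_days = np.arange(100, n_days - 2, 31)

# Inject a small pre-announcement drift, ~50 bps total, so the pattern is visible
returns[fomc_days - 1] += 0.003  # afternoon of the day before
returns[fomc_days] += 0.002      # morning of the announcement day

def avg_cumulative_window(returns, centers, half=1):
    """Average cumulative return over [center-half, center+half] day windows."""
    windows = np.array([returns[c - half:c + half + 1] for c in centers])
    return windows.cumsum(axis=1).mean(axis=0)

# Solid line analogue: three-day windows centered on FOMC announcement days
fomc_path = avg_cumulative_window(returns, fomc_days)

# Dashed line analogue: all other three-day windows
other_days = np.setdiff1d(np.arange(2, n_days - 2), fomc_days)
other_path = avg_cumulative_window(returns, other_days)

print(fomc_path[-1], other_path[-1])
```

As in the chart, the cumulative return around FOMC days ends well above zero while the average over all other windows hovers near zero.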
A Deeper Look through Regression Analysis
The previous chart showed stock returns without accounting for dividends or the return on riskless alternative investments. In the table below, we account for these factors in a regression analysis by considering the return, including dividends (in percent), on the S&P 500 index in excess of the daily yield on a one-month Treasury bill rate, which is a measure of a risk-free rate. We regress this "excess return" on a constant and on a "dummy" variable, equal to 1 on days of FOMC announcements.
The coefficient on the constant (second row) measures the average return on non-FOMC days, while the coefficient on the FOMC dummy (top row) is the differential mean return on FOMC days. In the first column, we regress close-to-close stock returns and see that excess returns on FOMC days average about 33 basis points, compared with an average excess return of about 1 basis point on all other days. As seen in the previous chart, this return is essentially earned ahead of the announcement—hence our label of a pre-FOMC announcement drift. Indeed, in the third column we see a return of about 49 basis points during a 2 p.m.-to-2 p.m. window, while the FOMC releases its statement at 2:15 p.m. ET. In the second column, we repeat the regression using the close-to-close returns from 1970 to 1993, before the Fed began releasing its policy decisions right after each meeting, and see that essentially no such premium exists. The bottom rows of the table decompose the annual excess return of the S&P 500 index over Treasury bills into the return earned on FOMC days and the return earned on all other days. As shown in the third column, the return on the twenty-four-hour period ahead of the FOMC announcement cumulated to about 3.9 percent per year, compared with only about 90 basis points on all other days. In other words, more than 80 percent of the annual equity premium has been earned over the twenty-four hours preceding scheduled FOMC announcements, which occur only eight times per year.
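A minimal sketch of that dummy-variable regression, using simulated excess returns — the FOMC calendar and the return process here are hypothetical stand-ins for the S&P 500 and T-bill data:

```python
import numpy as np

rng = np.random.default_rng(1)

n_days = 4500
is_fomc = np.zeros(n_days, dtype=bool)
is_fomc[np.arange(100, n_days, 31)] = True  # hypothetical FOMC calendar

# Hypothetical daily excess returns, in percent: ~1 bp on ordinary days,
# with an extra ~33 bp built in on FOMC announcement days
excess_ret = rng.normal(0.01, 1.0, n_days)
excess_ret[is_fomc] += 0.33

# Regress excess returns on a constant and an FOMC-day dummy:
#   r_t = alpha + beta * 1{FOMC day} + eps_t
X = np.column_stack([np.ones(n_days), is_fomc.astype(float)])
(alpha, beta), *_ = np.linalg.lstsq(X, excess_ret, rcond=None)

print(f"non-FOMC mean (constant): {alpha:.3f}%")
print(f"FOMC-day differential (dummy): {beta:.3f}%")
```

With only a constant and a dummy, the constant recovers the mean return on non-FOMC days and the dummy coefficient recovers the FOMC-day differential, which is exactly how the table's two rows are read.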

The chart below visualizes this return decomposition. It shows the S&P 500 index level along with an S&P 500 index that one would have obtained when excluding from the sample returns on all 2 p.m.-to-2 p.m. windows ahead of scheduled FOMC announcements. In a nutshell, the figure shows that in the sample period the bulk of the rise in U.S. stock prices has been earned in the twenty-four hours preceding scheduled U.S. monetary policy announcements.
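The "more than 80 percent" figure is just the ratio of the two annualized components quoted above:

```python
# Annualized excess-return components quoted in the post (percent per year)
pre_fomc_annual = 3.9   # earned in the 24 hours before FOMC announcements
other_annual = 0.9      # earned on all other days ("about 90 basis points")

share = pre_fomc_annual / (pre_fomc_annual + other_annual)
print(f"pre-FOMC share of the equity premium: {share:.0%}")
```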
An International Perspective
Does this striking result apply only to U.S. stocks? While we do not find similar responses of major international stock indexes ahead of their respective central bank monetary policy announcements, we observe that several indexes do display a pre-FOMC announcement drift, as the chart below shows. Cumulative returns rise for the British FTSE 100, German DAX, French CAC 40, Swiss SMI, Spanish IBEX, and Canadian TSE index when each exchange is open for trading over windows of time around each FOMC announcement in our sample.
Potential Explanations
One might expect similar patterns to be evident also in other major asset classes, such as short- and long-term fixed-income instruments and exchange rates. Surprisingly, though, we don't find any differential returns for these assets on FOMC days compared with other days. In other words, the pre-FOMC drift is restricted to equities. Further, we don't find analogous drifts ahead of other macroeconomic news releases, such as the employment report, GDP, and initial claims, among many others. The effect is therefore restricted to FOMC, rather than other macroeconomic, announcements. In the Staff Report, we attempt to account for standard measures considered in the economic literature that proxy for different sources of risk, such as volatility and liquidity, but they also fail to explain the return. Finally, we consider alternative theories that feature political risk, investors with capacity constraints in processing information, as well as models where stock market participation varies over time. Although these theories can help qualitatively explain the existence of a price drift ahead of FOMC announcements, they are counterfactual in some dimension of the empirical evidence.
Our findings suggest that the pre-FOMC announcement drift may be key to understanding the equity premium puzzle since 1994. However, at this point, the drift remains a puzzle.
