Redirect


This site has moved to http://economistsview.typepad.com/
The posts below are backup copies from the new site.

October 12, 2011

Latest Posts from Economist's View

"Benford's Law and the Decreasing Reliability of Accounting Data"

Posted: 12 Oct 2011 12:33 AM PDT

This is from Jialan Wang:

Benford's Law and the Decreasing Reliability of Accounting Data for US Firms, by Jialan Wang: ...[T]here are more numbers in the universe that begin with the digit 1 than 2, or 3, or 4, or 5, or 6, or 7, or 8, or 9. And more numbers that begin with 2 than 3, or 4, and so on. This relationship holds for the lengths of rivers, the populations of cities, molecular weights of chemicals, and any number of other categories. ...
This numerical regularity is known as Benford's Law. Specifically, it says that the probability that the first digit of a number drawn from such a set is d is given by

P(d) = log10(1 + 1/d),   for d = 1, 2, ..., 9

In fact, Benford's law has been used in legal cases to detect corporate fraud, because deviations from the law can indicate that a company's books have been manipulated. Naturally, I was keen to see whether it applies to the large public firms that we commonly study in finance.
I downloaded quarterly accounting data for all firms in Compustat,... over 20,000 firms from SEC filings... (revenues, expenses, assets, liabilities, etc.).
And lo, it works! Here is the distribution of first digits vs. Benford's law's prediction for total assets...
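
A minimal sketch of this kind of check, using simulated values in place of the Compustat accounting figures (the lognormal data and digit-extraction approach below are my own illustration, not Wang's code):

    import numpy as np

    # Benford's prediction: P(d) = log10(1 + 1/d) for d = 1, ..., 9
    digits = np.arange(1, 10)
    benford = np.log10(1 + 1 / digits)

    # Placeholder data standing in for an accounting variable such as total
    # assets; lognormal values spread over several orders of magnitude
    # roughly follow Benford's law.
    rng = np.random.default_rng(0)
    values = rng.lognormal(mean=10.0, sigma=2.5, size=100_000)

    # First digit of each value and the empirical digit frequencies
    first_digit = np.array([int(f"{v:e}"[0]) for v in values])
    empirical = np.array([(first_digit == d).mean() for d in digits])

    for d, b, e in zip(digits, benford, empirical):
        print(f"digit {d}: Benford {b:.3f}  empirical {e:.3f}")
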

Next, I looked at how adherence to Benford's law changed over time, using a measure of the sum of squared deviations of the empirical density from the Benford's prediction...
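
One plausible reading of that deviation measure, as a small self-contained helper (Wang's exact construction may differ):

    import numpy as np

    def benford_deviation(first_digit_shares):
        """Sum of squared deviations of an empirical first-digit
        distribution (shares for digits 1 through 9) from Benford's law."""
        digits = np.arange(1, 10)
        benford = np.log10(1 + 1 / digits)
        return float(np.sum((np.asarray(first_digit_shares) - benford) ** 2))
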
Deviations from Benford's law have increased substantially over time, such that today the empirical distribution of each digit is about 3 percentage points off from what Benford's law would predict. The deviation increased sharply between 1982-1986 before leveling off, then zoomed up again from 1998 to 2002. Notably, the deviation from Benford dropped off very slightly in 2003-2004 after the enactment of the Sarbanes-Oxley accounting reform act in 2002, but this was very tiny and the deviation resumed its increase up to an all-time peak in 2009.

So according to Benford's law, accounting statements are getting less and less representative of what's really going on inside of companies. The major reform that was passed after Enron and other major accounting scandals barely made a dent.
Next, I looked at Benford's law for three industries: finance, information technology, and manufacturing. ... [shows graphs] ... While these time series don't prove anything decisively, deviations from Benford's law are compellingly correlated with known financial crises, bubbles, and fraud waves. And overall, the picture looks grim. Accounting data seem to be less and less related to the natural data-generating process that governs everything from rivers to molecules to cities. Since these data form the basis of most of our research in finance, Benford's law casts serious doubt on the reliability of our results. And it's just one more reason for investors to beware....

Rodrik: Milton Friedman’s Magical Thinking

Posted: 12 Oct 2011 12:24 AM PDT

Dani Rodrik argues that Milton Friedman leaves "an ambiguous and puzzling legacy":

Milton Friedman's Magical Thinking, by Dani Rodrik, Commentary, Project Syndicate: Next year will mark the 100th anniversary of Milton Friedman's birth. Friedman ... will be remembered primarily as the visionary who provided the intellectual firepower for free-market enthusiasts..., and as the éminence grise behind the dramatic shift in the economic policies that took place after 1980.
At a time when skepticism about markets ran rampant, Friedman explained in clear, accessible language that private enterprise is the foundation of economic prosperity. ... He railed against government regulations that encumber entrepreneurship and restrict markets. ...
Inspired by Friedman's ideas, Ronald Reagan, Margaret Thatcher, and many other government leaders began to dismantle the government restrictions and regulations that had been built up over the preceding decades. China moved away from central planning and allowed markets to flourish... Latin America sharply reduced its trade barriers and privatized its state-owned firms. When the Berlin Wall fell in 1990, there was no doubt as to which direction the former command economies would take: towards free markets.
But Friedman also produced a less felicitous legacy. ... In effect, he presented government as the enemy of the market. He therefore blinded us to the evident reality that all successful economies are, in fact, mixed...
The Friedmanite perspective greatly underestimates the institutional prerequisites of markets. Let the government simply enforce property rights and contracts, and – presto! – markets can work their magic. In fact,... markets ... are not self-creating, self-regulating, self-stabilizing, or self-legitimizing. Governments must invest in transport and communication networks; counteract asymmetric information, externalities, and unequal bargaining power; moderate financial panics and recessions; and respond to popular demands for safety nets and social insurance. ... [And] Given China's economic success, it is hard to deny the contribution made by the government's industrialization policies.
Free-market enthusiasts' place in the history of economic thought will remain secure. But thinkers like Friedman leave an ambiguous and puzzling legacy, because it is the interventionists who have succeeded in economic history, where it really matters.

links for 2011-10-12

Posted: 12 Oct 2011 12:06 AM PDT

Christopher Sims and Tests for Causality

Posted: 11 Oct 2011 01:08 PM PDT

To tell the full story of Christopher Sims' contributions to causality, we need to go back to the state of the art in policy evaluation in the 1960s, in particular, to something known as the St. Louis equation:

Y_t = c + a_0 M_t + a_1 M_{t-1} + a_2 M_{t-2} + b_0 G_t + b_1 G_{t-1} + b_2 G_{t-2} + e_t

In this equation, output (Y) is regressed on current and lagged values of money (M) and government spending (G). The idea was to see how output responded historically to changes in money and government spending, and then use these estimates to guide policy. If we know how Y responds to M, then we can use that knowledge to set monetary policy optimally.
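
A minimal sketch of estimating such an equation with ordinary least squares, assuming quarterly series for Y, M, and G are available in a pandas DataFrame (the random data below are placeholders, not the historical series):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Placeholder quarterly data; in practice Y, M, and G would be output,
    # money, and government spending.
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["Y", "M", "G"])

    # Regressors: current and two lagged values of M and G
    X = pd.DataFrame({
        "M0": df["M"], "M1": df["M"].shift(1), "M2": df["M"].shift(2),
        "G0": df["G"], "G1": df["G"].shift(1), "G2": df["G"].shift(2),
    }).dropna()
    X = sm.add_constant(X)
    y = df["Y"].loc[X.index]

    st_louis = sm.OLS(y, X).fit()
    print(st_louis.params)  # estimates of c, a0-a2, b0-b2
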

Now, there is a fundamental problem with this approach, highlighted by the Lucas critique (the negative reaction to the other common approach to policy evaluation, using large-scale structural models, was discussed yesterday): if you change monetary policy, you also change the values of the a and b coefficients, so the estimates are no longer reliable and hence no longer a guide to policy. But that criticism came later. At the time there was another worry.

The worry was something known as simultaneity bias. Consider the M_t term in the equation above. If M_t is "econometrically exogenous," i.e. if it does not depend upon Y_t, then the estimated value of a_0 will be unbiased. But if M_t depends upon Y_t, perhaps through an equation such as M_t = h_0 + h_1 Y_t + u_t, then the estimate of a_0 will be biased and hence a poor guide to policy decisions.
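
A small simulation (my own illustration, not from the original discussion) makes the bias visible: generate data in which M feeds back on Y and watch the OLS estimate of a_0 drift away from its true value.

    import numpy as np

    # True structure: Y_t = a0*M_t + e_t and M_t = h1*Y_t + u_t, with h1 != 0.
    a0_true, h1 = 0.5, 0.8
    rng = np.random.default_rng(0)
    e = rng.normal(size=100_000)
    u = rng.normal(size=100_000)

    # Reduced form obtained by solving the two equations simultaneously
    Y = (e + a0_true * u) / (1 - a0_true * h1)
    M = (h1 * e + u) / (1 - a0_true * h1)

    a0_ols = np.polyfit(M, Y, 1)[0]  # OLS slope from regressing Y on M
    print(a0_true, a0_ols)  # the OLS estimate lands well above 0.5
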

The first use of causality tests was to see whether h_1 in the "policy equation" was equal to zero, and Sims was a key player in the development of these tests. Thus, Sims starts his 1972 AER paper with:

This study has two purposes. One is to examine the substantive question: Is there statistical evidence that money is "exogenous" in some sense in the money-income relationship? The other is to display in a simple example some time-series methodology not now in wide use. The main methodological novelty is the use of a direct test for the existence of unidirectional causality.

If there was unidirectional causality from M to Y, then the estimate of a_0 would be unbiased. But if there was two-way causality, i.e. if Y also causes M (h_1 is not zero), then the estimate would be problematic.
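
In practice this kind of bivariate test amounts to an F-test on the lags of M in a regression of Y on its own lags and lags of M. A sketch using statsmodels' Granger-causality routine, on simulated data in which lagged M really does help predict Y (the data and lag length are illustrative assumptions):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    # Simulated data: Y depends on its own lag and on lagged M
    rng = np.random.default_rng(0)
    n = 400
    M = rng.normal(size=n)
    Y = np.zeros(n)
    for t in range(1, n):
        Y[t] = 0.5 * Y[t - 1] + 0.4 * M[t - 1] + rng.normal()

    # Null hypothesis: M does not Granger-cause Y (the test asks whether the
    # second column helps predict the first); small p-values reject the null.
    data = pd.DataFrame({"Y": Y, "M": M})
    results = grangercausalitytests(data[["Y", "M"]], maxlag=2)
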

Sims contributed greatly to this literature, and once this work was largely complete, it quickly became clear that these tests could be used to assess causality more generally; the method was not limited to checking for econometric exogeneity.

But there was also a problem. The basic technique for testing causality (an F-test on a set of coefficients) worked well in two-variable systems, but it didn't work reliably for systems with three or more equations (the problem is that X can cause Y, and Y can then cause Z, so that there is a causal path from X to Z that the F-test approach will miss).

Sims' second major paper on causality addresses this problem by providing two new tools for assessing causality: impulse response functions and variance decompositions (along the way, it was also shown that Sims causality and Granger causality are equivalent). Impulse response functions, which have since become a key analytical device in macroeconomics, trace out the response of each variable in the model to a shock to another variable in the system (identification restrictions are needed to ensure that the shock is actually a policy shock, see here). If a variable, say output, responds robustly to a shock to, say, the federal funds rate, then we say that the federal funds rate causes output. But if we shock the federal funds rate and output essentially flat-lines in response, then causality is absent.

However, even when the impulse responses indicate causality, they do not tell us how important one variable is in explaining the variation in another (the impulse response function could look impressive, yet explain only 1% of the total variation in the other variable, so the response we are seeing is not very important in explaining why that variable fluctuates over time). Variance decompositions solve this problem. They don't tell you the sign or pattern of the response the way impulse response functions do, but they do indicate how important one variable is in explaining the variation in another (e.g., if M explains 75% of the variance in output, that's impressive and notable, but if it's only 1%, then money isn't very important in explaining why output changes over time).
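
A sketch of how both tools are read off an estimated VAR using statsmodels (the data, variable names, and lag length below are placeholders, not the series from Sims' papers):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Placeholder stationary data standing in for output, money, and an
    # interest rate
    rng = np.random.default_rng(0)
    data = pd.DataFrame(rng.normal(size=(200, 3)), columns=["Y", "M", "i"])

    results = VAR(data).fit(maxlags=4)

    # Impulse responses: the path of each variable after an orthogonalized
    # shock to another variable (the ordering Y, M, i matters)
    irf = results.irf(10)
    irf.plot(orth=True)

    # Forecast error variance decomposition: the share of each variable's
    # forecast error variance attributable to shocks in each variable
    fevd = results.fevd(10)
    fevd.summary()
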

Sims' second paper also made another important point. In his first paper, he found that money causes output (so it could not be treated as econometrically exogenous, as in the St. Louis equation). But that was in a two-variable system including only M and Y. In his second paper he adds interest rates (i) and prices (P) to get a four-variable system, and he finds that this overturns the result of his first paper: once i is added to the model, M no longer causes Y. Thus, the lesson is that leaving important variables out of a VAR system can produce misleading results.

But Sims' main contributions were, initially, the F-tests for causality in bivariate systems, and then the addition of IRFs and VDCs to assess causality in higher-order systems. In addition, he identified many of the now-common "pitfalls of causality testing" -- ways in which causality tests can be misleading. One, noted above, is leaving a variable out of the system: if A causes B to change tomorrow and C to change the next day, a system containing only B and C will look as though B causes C, when in fact there is no causality at all -- a third variable causes both. Other pitfalls can occur, for example, when there is optimal control or when expectations of future variables are in the model. Identifying the pitfalls of the methods he (and others) developed was also an important contribution to the literature.

Sims' work on causality was highlighted in the Nobel announcement, and I hope this provided some background on the topic. But there is a lot more to be said about Sims' work beyond the causality testing discussed above and the structural VARs I discussed yesterday, e.g. his recent papers on rational inattention, and I hope to write more about both Sims and Sargent when I can find the time.

Raise Taxes on the Wealthy: It’s the Fair Thing to Do

Posted: 11 Oct 2011 09:09 AM PDT
