Paul Collier says we should hold financial institutions responsible for reckless behavior if we want to temper their inclination to take excessive risk:
A law to tame wild bankers, by Paul Collier, Commentary, CIF: Deregulation of the banks was built on two intellectual pillars. One was that regulation was not necessary because banks would self-regulate in order to protect their reputation. Please stop laughing. The other was that regulation would not work because regulators would always be one step behind the bankers. And unfortunately we cannot laugh this one off. Indeed, the technical problems facing regulation are now compounded by political impediments. Green shoots, lobbying by the banks, and turf wars among the regulators have eroded the momentum for action. So if banks cannot effectively be regulated by the authorities, what can be done?
The Turner review came up with two solutions. One is radically to raise the capital requirements of banks so that shareholders have something to lose if management goes wrong. The other is to change incentive payments for managers so bonuses depend on the past three years of performance. The increase in capital requirements makes sense. But the three-year rule is weak. The inherent problem facing shareholders is that incentive payments cannot go negative. However much damage a manager inflicts, wiping out both shareholders and depositors, the consequences cannot be remotely commensurate. As a result, even bonuses with a three-year lag bias the system towards risk-taking. If you thought big bonuses were history you have missed BAB, the new banking mnemonic: yes, Bonuses Are Back.
So how can we avoid another Northern Rock? While shareholders cannot impose genuine penalties, governments can. Fear of jail would discourage excessive risk. Before bankers huff about blunting incentives, yes, I realise that without carrots, bankers will just sit and gaze at the office ceiling. Bankers, set your minds at rest: the introduction of penalties would permit BABEL: that is, the carrots for genuinely smart behaviour could be Even Larger.
The key problem with using the law against bankers has been the difficulty of getting a conviction: surely, the managers of Northern Rock did not intend to profit at our expense. We do not need to set the burden of proof that high. Intention misses the point. Faced with a corpse and a killer, police do not need to prove ill intent: manslaughter sets the hurdle lower than murder. It is enough to show the killer was irresponsible. That is the standard we need; we need a crime of managing a bank irresponsibly: in other words, bankslaughter.
On Turner's proposal a manager can still benefit from recklessness – as long as the bank does not blow up within three years. After that, if the bank crashes he can be off playing golf. With bankslaughter, when the bank blows up – even if it is a decade later – a criminal investigation traces back to determine whether crucial decisions were reckless. If a reasonable banker faced with the information available at the time would not have taken those risks, the person responsible is dragged off the golf course and jailed.
Once bankslaughter was on the books, bonuses would be less dangerous. Managers would have to weigh the balance between risk and return and take defensible decisions. I doubt hyper-caution would be a problem: the overly cautious would not get bonuses. Surely we can rely on our bankers to exhibit the necessary degree of greed.
Bankslaughter would target the wild fringe rather than the average banker. The wild fringe matters: sometimes it generates a crisis that becomes systemic. We now know that as early as 2004, the Bank of England anticipated that Northern Rock would implode. Its business model was so risky that other banks had not adopted it. But in the short term, reckless behaviour looks smart, and so wiser management teams were coming under pressure to emulate it. By the time of its demise, the Rock was doing a fifth of British mortgages.
By curtailing the wild fringe, bankslaughter would complement Turner's approach, which is to make the average bank behave better. Both are needed. Turner's concern about performance is manifestly necessary. But the crisis has revealed that some banks are more rotten than others. In Britain, the two Scottish banks and Northern Rock were pioneers of imprudence. In Ireland two banks run by an alliance of construction firms and politicians swept the country to ruin. Even if shareholder capital is at risk, some banks are likely to suffer because of poor corporate governance.
Had bankslaughter been on the books, the management of Northern Rock would now perhaps be in the dock. But, vengeful as we feel, the point of criminal sanction would not be to punish reckless behaviour but to discourage it. If this law had existed, would our financial knights have been so errant?
Are tipping points "mythological"?:
The Tipping Point: Fascinating but Mythological?, by William Easterly: The "tipping point" is a popular concept covering a whole range of phenomena (and a best-selling book by Malcolm Gladwell) where individual behavior depends on the behavior of the herd.
Its original application was to racial segregation. Nobel Laureate Thomas Schelling developed a beautifully simple model for this. Suppose that whites have different degrees of racism – some would "tolerate" higher shares of nonwhites than others. Schelling showed that the less racist whites would still wind up exiting during tipping because of a chain reaction. At first only the most extreme racist whites exit. But their departure causes the white share to go down, making the second most extreme racist whites uncomfortable, so they also exit. The white share goes down some more, and so now even less racist whites will be uncomfortable being a white minority, and they will wind up exiting too. So the remarkable prediction of the tipping point model is that just a little bit of integration that directly bothered only the most racist whites wound up causing ALL of the whites to exit. So even if the typical white was perfectly happy with integrated neighborhoods, these neighborhoods would be so unstable that the final outcome would be extreme racial segregation. ...
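Schelling's chain reaction can be sketched in a few lines. The sketch below is a hypothetical toy version, not Schelling's own specification: tolerances are assumed to be evenly spaced, and each departing white household is assumed to be replaced by a nonwhite one, which is what keeps the cascade going.

```python
def schelling_tipping(n_homes=100, initial_nonwhite=5):
    """Toy version of Schelling's tipping cascade (illustrative numbers).

    White resident i tolerates a nonwhite share of up to i/n_homes.
    Each departing white household is replaced by a nonwhite one, so
    every exit raises the nonwhite share and can push the next-least-
    tolerant resident past their threshold. Returns how many whites
    remain once the cascade settles."""
    whites = [i / n_homes for i in range(n_homes - initial_nonwhite)]
    nonwhite = initial_nonwhite
    while True:
        share = nonwhite / n_homes
        stayers = [t for t in whites if t >= share]
        exits = len(whites) - len(stayers)
        if exits == 0:                 # no one else is uncomfortable
            return len(stayers)
        nonwhite += exits              # vacancies filled by nonwhite entrants
        whites = stayers

# A 5% nonwhite share directly bothers only the five least tolerant whites,
# yet the chain reaction empties the neighborhood of whites entirely.
print(schelling_tipping())  # → 0
```

With these evenly spaced tolerances the cascade never stalls; under other tolerance distributions it can stop at a stable mixed equilibrium instead — which is exactly the empirical question Easterly raises below.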
It's easy to imagine development applications for the tipping point idea. Suppose that people decide to get highly educated based on what is the share of highly educated people in the population. After all, it's only worthwhile being educated if you can talk to and work with a lot of other highly educated people. If the share of educated people falls below a tipping point, a lot of people will stop getting highly educated, which decreases even further the incentive to get highly educated, and we get the same kind of chain reaction... Assuming that low education causes poverty, this is a "poverty trap" story of low education and underdevelopment.
The tipping point stories are fascinating, but do we observe them in the real world? I got intrigued with this question a while ago, and eventually published a paper testing the predictions of the tipping point story (ungated version here) for its original application: racial segregation of US neighborhoods (reminder to self: my job is not only to blog, also to be a full time academic researcher that must "publish or perish"). The basic prediction is that mixed neighborhoods are unstable but segregated neighborhoods are stable. Data on American neighborhoods from 1970 to 2000 rejected these predictions – it was the segregated neighborhoods that were unstable. There was as much "white flight" out of all-white neighborhoods as there was out of mixed neighborhoods, and there was a white influx into segregated nonwhite neighborhoods. Neighborhoods are still very segregated in the year 2000, but not because of tipping. ...
Of course, this is only one test of the tipping point for racial segregation over one time period. Maybe the tipping point is real in other contexts. But think twice and check for evidence before you accept popular stories like the Tipping Point.
David Beckworth says Brad DeLong can quit wondering; Greenspan's "low interest rate policy in the early-to-mid 2000s was truly a mistake" [Update: see Brad DeLong for more]:
Yes Brad, the Fed's Low Interest Rate Policy Was a Mistake, by David Beckworth: Brad DeLong is wondering whether the Federal Reserve's low interest rate policy in the early-to-mid 2000s was truly a mistake:
There is, however, active debate over whether there was a fourth mistake: whether Alan Greenspan's decision in 2001-2004 to push and keep nominal interest rates on Treasury securities very very low in order to try to keep the economy near full employment was a fourth mistake...I am genuinely not sure which side I come down on in this debate.
Brad's uncertainty is understandable given that he invokes the entire 2001-2004 time frame. During this period there was a time when the U.S. economic recovery was sputtering along (2001-2002) and a time when the recovery began to take hold (2003-2004). It was during this latter period that the Fed's low interest rates were a big mistake. But even for that period I think Brad is misreading the data:
People claim that the Greenspan Federal Reserve "aggressively pushed the interest rate below its natural level."... [T]he market interest rate[, however,] was if anything above the natural interest rate in the early 2000s... You ... cannot argue that he aggressively pushed the interest rate below its natural level. The low interest rate was at its natural level.
I think the evidence shows the opposite. The natural interest rate is a function of individuals' time preferences, productivity, and the population growth rate. Of these three components, the one that changed the most in 2003-2004 was productivity, as can be seen in the figure...
Here we see productivity growth soaring just as the real federal funds rate is being pushed into negative territory. Normally, a rise in productivity growth should lead to a rise in the natural interest rate and, ultimately, a rise in the federal funds rate if monetary policy is to stay neutral. However, this latter development did not happen. It seems, then, the Fed did push its policy rate below the natural rate and in the process created a huge Wicksellian-type disequilibrium. This interpretation of events has been borne out more rigorously in this ECB paper. On a more practical level, this disequilibrium comes through in the Taylor rule, which similarly shows the federal funds rate was below the neutral rate during this time.
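For reference, the Taylor-rule comparison invoked above can be written down in a few lines. The coefficients are Taylor's classic (1993) ones; the inputs in the example are purely illustrative round numbers, not the actual 2003-04 data:

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Classic Taylor (1993) rule: prescribed nominal funds rate, in percent.

    r_star is the assumed equilibrium real rate and pi_target the
    inflation target; both default to Taylor's original 2% values."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Illustrative only: with inflation near 2% and a closed output gap, the
# rule prescribes a 4% funds rate -- well above the 1% rate of mid-2003.
print(taylor_rule(inflation=2.0, output_gap=0.0))  # → 4.0
```

The gap between the rule's prescription and the actual funds rate is the sense in which policy is said to have been "below neutral" in this period.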
It is also worth noting that these same rapid productivity gains were the source of the deflationary pressures in 2003 that Brad mentions. Thus, these deflationary pressures did not indicate a weakening economy. In fact, aggregate demand (AD) was growing at a rapid rate in 2003-2004, which, if anything, indicated an overheating economy. The figure below shows a measure of AD, final sales to domestic purchasers, relative to the federal funds rate and has the period 2003-2004 marked off by the dotted lines...
The productivity gains, apparently, were offsetting the upward pressure on prices being created by the robust growth in AD at this time. There simply was no real deflationary threat in 2003. By way of contrast, this figure shows for 2008-2009 what a real AD-induced deflationary threat looks like. ...
The final data issue is the weak employment growth coming out of the 2001 recession. Given the above discussion, the best interpretation of this development is that there was less demand for labor in the recovery given the productivity gains. In fact, this was a common explanation given at the time. One could also argue that the Fed's low interest rate policy may have pushed some firms to substitute capital for labor to an inordinate degree.
Here is the bottom line: there is enough evidence for Brad DeLong to conclude that the Federal Reserve's low interest rate policy was a mistake.
This may help Brad DeLong settle his inner conflict over whether Greenspan made an error by not moving interest rates to limit the housing boom. Guillermo Calvo and Rudy Loo-Kung argue that the benefits of bubbles almost always outweigh their costs (and thus there's no need for regulation to prevent them).
I think the authors are correct to point out that distributional issues are omitted from the analysis. Also, the assumption that social welfare depends only upon consumption is important, as it rules out any utility costs associated with losing a home, a job, changing schools, etc., over and above the loss of consumption. In addition, using the aggregate consumption level of a composite commodity to index social welfare doesn't capture the costs associated with producing a suboptimal mix of goods (e.g., too much housing, not enough of other goods); all that matters is the total quantity that is produced and consumed. Finally, I was surprised that the downturn and upturn phases of the cycle were assumed to be of equal length, as I thought a slower return to normal growth (as compared to the downturn) - something that would increase the costs of the collapse - was the normal scenario:
'Tis better to have loved and lost, Than never to have loved at all. Tennyson, 1850.
In times of systemic financial distress, hunting for culprits becomes a popular sport. The Madoffs of this world are easy targets because crisis makes crookery harder to conceal. While there is no question that crooks should be sent to jail, increasing financial regulation is a different issue and requires careful analysis. Rushing to impose tighter regulations may hamper recovery and growth. Empirical evidence strongly supports the view that growth and financial development go hand in hand (Demirgüç-Kunt and Levine 2008). Although it is much harder to establish that financial development causes growth, few would doubt that, at least temporarily, financial deregulation could promote higher growth. A genuine concern, however, is that the financial sector is prone to crises, which are typically associated with serious effects on output and employment.
We cannot reach definite conclusions about the desirability of risky financial arrangements in a short column. Our objective is much more modest. We examine the welfare implications of financial deregulation that results in higher growth but ends in tears, performing the exercise in the context of a benchmark case in which consumption is the ultimate source of welfare and ignoring possibly relevant behavioural finance and political economy considerations. We base our analysis on estimates of the costs of financial crises in emerging market economies since the 1980s, a cauldron of financial crises over the last thirty years. Our results support deregulation even under those dire circumstances.1
A model of growth, collapse, and welfare
More specifically, suppose that financial deregulation is implemented at time 0 and that, as a result, consumption grows at rate gH (where H stands for "high"); after T periods, there is a crisis that produces a (symmetric) collapse-recovery recession phase in consumption, resembling those observed in the 1990s emerging economy crises (see Figure 1). That is, we assume that, starting at time T, consumption decreases for a while and then begins to recover. The recession phase takes DT periods. During the first half of this phase, i.e., for DT/2 periods after time T, consumption declines at the rate g*; then, for the next DT/2 periods, consumption resumes growth at the same rate g*. By construction, at time T + DT (the end of the recession phase) consumption reaches its pre-crisis level (i.e., the level prevailing at time T). Afterwards, we assume that consumption grows at a lower rate gL (where L stands for "low"). We assume that gL is also the growth rate that would prevail if no financial deregulation had been implemented. Thus, this corresponds to a financial deregulation experiment in which, when the crisis hits, authorities get cold feet and meekly go back to the old, low-growth financial system forever. This extremely pessimistic scenario allows us to make a stronger case for deregulation.
Figure 1. Consumption paths under alternative regimes for the average emerging economy
Note: The consumption path associated with financial innovation shows the calibrated collapse-recovery phase for the average emerging economy and the calculated break-even T using a degree of risk aversion (σ) equal to 4.
To calibrate DT and g*, we focus on average output collapse and recovery patterns (the recession phase) observed in emerging markets during times of systemic financial turmoil throughout the period 1980-2004, discussed in Calvo, Izquierdo and Talvi (2006).2 More specifically, we set DT equal to the time that it took for average output to recover its pre-crisis level. The growth rate g* is calibrated to match accumulated output loss, which is defined as the sum of the differences between the pre-crisis peak GDP and observed GDP within the recession phase. This procedure suggests setting g* = 3.11% per year and DT = 3.43 years.
Moreover, we set gH equal to the average GDP growth rate observed in emerging markets during 1992-97, a period in which many countries opened up to capital inflows. The low growth rate gL is set equal to the average growth rate observed in the previous ten years (1982-91). This leads us to set gH = 4.7% and gL = 2.7% per year.3
We focus on the following question: How long should the bonanza or high-growth period T last for financial deregulation to be socially desirable? To answer that question, we examine the benchmark case in which welfare can be expressed as the present discounted value of a utility index which depends on aggregate consumption.4
We define the break-even T as the number of bonanza years that would make deregulation welfare equivalent to not deregulating at all and generating low growth, gL, at all times. If the bonanza period exceeds break-even T, then financial deregulation is preferable to doing nothing, even though it results in a painful crisis. Table 1 and Figure 1 summarise the results (parameter σ is the coefficient of relative risk aversion).5
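Using the calibrated values reported above (gH = 4.7%, gL = 2.7%, g* = 3.11%, DT = 3.43 years, a 3% discount rate, σ = 4), the break-even T can be approximated numerically. The sketch below is a back-of-the-envelope reconstruction of the benchmark exercise, not the authors' code, and the number it prints depends on the discretisation:

```python
import math

# Calibrated parameters from the text; sigma = 4, discount rate 3%.
G_H, G_L, G_STAR, DT = 0.047, 0.027, 0.0311, 3.43
RHO, SIGMA = 0.03, 4.0

def consumption(t, T):
    """Consumption at time t given a bonanza of length T followed by a
    symmetric collapse-recovery phase of length DT, then growth at gL."""
    if t < T:
        return math.exp(G_H * t)                         # high-growth bonanza
    c_peak = math.exp(G_H * T)                           # pre-crisis level
    if t < T + DT / 2:
        return c_peak * math.exp(-G_STAR * (t - T))      # collapse at g*
    if t < T + DT:
        return c_peak * math.exp(G_STAR * (t - T - DT))  # recovery at g*
    return c_peak * math.exp(G_L * (t - T - DT))         # low growth after

def welfare(c_of_t, horizon=300.0, dt=0.02):
    """Present discounted CRRA utility, integrated numerically."""
    return sum(math.exp(-RHO * k * dt)
               * c_of_t(k * dt) ** (1 - SIGMA) / (1 - SIGMA) * dt
               for k in range(int(horizon / dt)))

# Welfare of never deregulating: growth at gL throughout.
w_no_dereg = welfare(lambda t: math.exp(G_L * t))

# Welfare under deregulation rises with T (a longer bonanza dominates
# pointwise), so bisection finds the break-even bonanza length.
lo, hi = 0.0, 30.0
for _ in range(30):
    mid = (lo + hi) / 2
    if welfare(lambda t: consumption(t, mid)) >= w_no_dereg:
        hi = mid
    else:
        lo = mid
print(f"break-even bonanza length: about {hi:.1f} years")
```

Any bonanza longer than this break-even T makes deregulation welfare-improving under the benchmark, despite the crisis at the end of it.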
Emerging market episodes lasted 5 to 6 years on average, implying that the experiments were socially beneficial despite ending in large recessions. Admittedly, the boom-bust episodes are not identical across economies. To test for robustness, we perform the same exercise for two polar episodes in Latin America, namely Argentina's and Chile's, for which the bonanza period was 4 and 13 years, respectively.6 In both cases, results point in favour of financial liberalisation for σ = 4. However, in the case of Argentina (and σ = 1), the methodology yields borderline results (Chile passes the test with flying colours).7
Two points are worth making: (1) support for deregulation is stronger if the coefficient of relative risk aversion is more realistically set at 4, and (2) break-even T is the same if one assumes that the cycle is repeated as many times as desired (high growth-bust-high growth), and only after the last cycle does the economy resume low growth.8 This is more realistic because emerging markets returned to exhibiting high growth during 2003-2007.
The analysis abstracts from the important issues of poverty and income distribution, which might alter our assessment of past deregulation episodes, but that does not make our analysis less relevant looking forward. For example, for the type of social welfare function considered here, if income distribution remains fairly constant, one would reach the same pro-deregulation conclusions even if one entirely focused on the welfare of the poor, à la Rawls. This shows that financial deregulation would be desirable under the Rawlsian criterion if one can find suitable social protection mechanisms, and that the effectiveness of those mechanisms should be explored as part of the grand design of new financial regulations – especially before enacting new regulations that would stifle the dynamism of the financial sector.
Our analysis in this column may help explain why policymakers are hesitant to prick the bubble when it starts – they may simply be trying to maximise social welfare and realise that a potential crisis is not a strong enough reason to prevent the bubble from developing (Tennyson's verses ringing in their ears?). Of course, no policymaker likes crises. When crises strike, much of the discussion focuses on how to avoid them or lessen their impact in the future. This is quite understandable. However, this does not ensure that "they are not going to fall in love again." Therefore, the policy debate should give equal time to discussing what to do when crises happen and to developing institutions that help to assuage their blow.
In closing, we would like to point out that even though this note gives some support to financial deregulation, it does not rule out the existence of financial arrangements that are far superior to the ones currently available. A case in point would be the creation of a global lender of last resort. Central banks have successfully filled that role at the local level and likely prevented many serious self-fulfilling banking crises in the last seventy years. However, there is no equivalent to a lender of last resort at the global level. Its absence was clearly felt in emerging markets in the aftermath of the Russian August 1998 crisis. Even the subprime crisis suffered from the absence of a fully effective lender of last resort. To be sure, central banks stepped up to the plate early on in the current episode, but their coverage was and still is quite limited. Many central financial institutions were left without a safety net, or the net was stretched out after they hit the ground. We feel that the issue of a global lender of last resort should be given more weight in the current debate (see Calvo 2009).
References

Baldacci, Emanuele, Luiz de Mello, and Gabriela Inchauste (2002), "Financial Crises, Poverty, and Income Distribution," IMF Working Paper 02/4.

Barro, Robert (2006), "Rare Disasters and Asset Markets in the Twentieth Century," Quarterly Journal of Economics, 121(3).

Calvo, Guillermo (2009), "Lender of Last Resort: Put it on the agenda!," VoxEU column, 23 March.

Calvo, Guillermo, Alejandro Izquierdo and Ernesto Talvi (2006), "Phoenix Miracles in Emerging Markets: Recovering Without Credit from Systemic Financial Crises," NBER Working Paper 1201, March.

Demirgüç-Kunt, Asli and Ross Levine (2008), "Finance, Financial Sector Policies, and Long-Run Growth," Commission on Growth and Development Working Paper No. 11, World Bank, Washington, DC.

Rancière, Romain, Aaron Tornell and Frank Westermann (2008), "Systemic Crises and Growth," Quarterly Journal of Economics, pp. 359-406.
Footnotes

1. Our results thus give further support to the line of research advanced by Aaron Tornell and Frank Westermann since 2002, which is inspired by the conjecture that financial liberalisation may be socially desirable despite the booms and busts it may generate. See Rancière, Tornell and Westermann (2008) and their recent VoxEU column.

2. The paper focuses on episodes in which the GDP peak-to-trough contraction is greater than the median fall in the sample. Note that including only the most severe collapses in the calibration constitutes a more difficult test for the case of financial deregulation.

3. Countries included are those tracked by J.P. Morgan's EMBI Global Index: Argentina, Belize, Brazil, Bulgaria, Chile, China, Colombia, Côte d'Ivoire, Dominican Republic, Ecuador, Egypt, El Salvador, Gabon, Ghana, Hungary, Indonesia, Iraq, Jamaica, Lebanon, Malaysia, Mexico, Morocco, Pakistan, Panama, Peru, Philippines, Poland, Romania, South Africa, Sri Lanka, Thailand, Trinidad and Tobago, Tunisia, Turkey, Uruguay, Venezuela, and Vietnam.

4. More concretely, we assume that the utility index exhibits constant relative risk aversion, σ, and that the instantaneous rate of discount equals 3% per year.

5. Calibrating parameters on the basis of GDP per capita (instead of its level) yields similar results, owing to the high correlation between the two series.

6. In both cases, we set gL to average GDP growth rates during 1951-1970. The parameter gH is set to the average GDP growth rate during 1991-94 for Argentina and 1984-97 in the case of Chile. The values of DT and g* are calibrated to match the characteristics of the Argentine crisis of 2002 and the Chilean crisis of 1998.

7. In an exercise in which the collapse in growth is modeled as a stochastic event with constant probability, following Barro (2006), we also find support for financial deregulation. In both cases, the break-even expected frequency of these events is lower than the ones observed in the data.

8. It follows that T will be the same if the cycle is repeated an infinite number of times.

9. The empirical work of Baldacci, de Mello, and Inchauste (2002) suggests that the financial crises that struck developing countries between 1960 and 1998 had severe effects on poverty and, in some cases, income inequality.
No disagreement with this. The failure to have dissolution plans for systemically important institutions on the shelf and ready to go turned out to be costly, so credible dissolution plans are certainly needed. However, the argument seems to assume that too big and too interconnected firms cannot be avoided, something I'm not ready to concede:
A sound funeral plan can prolong a bank's life, by Anil Kashyap, Commentary, Financial Times: Buried within the 88-page Obama administration proposal to overhaul financial regulation is an overlooked option called a "rapid resolution plan". It mandates that systemically important financial companies be required regularly to file a "funeral plan": a set of instructions for how the institution could be quickly dismantled should the need to do so arise. ... It could be implemented now, without the need for legislative action. Regulators should do so immediately.
The first benefit is that regulators would gain a stronger negotiating position with a dying institution. Throughout this crisis the authorities have had to intervene without knowing exactly what hidden traps might emerge if a bank were to be closed down. The bankers know this and can exploit the fear of the unknown to press for bail-outs.
It is remarkable that such rules do not already exist. ... The crisis has shown us that the sudden unwinding of a large, complex financial institution is terrifying for the financial system. ...
A second immediate benefit would be to force bank managers to think much more carefully about the complex financial structures they have created. If bankers had to explain every single step needed (and the associated consequences) to shut down their subsidiaries in all the various jurisdictions in which they operate, they would have a big incentive to simplify their organisations. ...
Over the medium term, there would be additional benefits. The headline component of the plan would be the requirement for banks to estimate the number of days it would take to shut down. Banks that require longer to close would have to hold more capital. This would place management under serious pressure to improve their plans...
Senior members of the management team and the board would have to understand the funeral plan. Crucially, they would be forced to sign off on its accuracy. This might also lead to closer scrutiny of new products or lines of business if they jeopardised an orderly unwinding. ...
This proposal is far from a cure-all. One big problem is that resolution rules themselves, especially when multiple legal systems are involved, are quite complicated. But the plan has an extremely high benefit-to-cost ratio and could be put in place right away. ...
- More Signs of the Demise of Behavioral Economics? - Cheap Talk
- Young Japanese Raise Their Voices Over Economy - NYTimes.com