
February 27, 2007

Japan's Food Security

Is Japan's protection of its agricultural industry justified by the fact that it is an island nation, or should Japan drop its worries about food security and end the subsidies it gives to domestic farmers? First, here's Malcolm Cook of the Lowy Institute for International Policy in Australia writing for Project Syndicate. He's hopeful that Japan's protectionist tendencies in agriculture are subsiding and that Japan will lead the way for others to relax their agricultural protections. An editorial from the Japan Times follows and gives additional perspective:

Japan is showing the way forward for agricultural free trade, by Malcolm Cook, Project Syndicate: Last year was a bad one for free trade. The Doha Round was supposed to make agriculture the centerpiece of negotiations... But instead of breathing life into free trade in food, rural protectionism in rich countries seems to have killed the Doha Round... Most galling, agriculture is a small and declining part of these "rich club" economies...

The practical challenge comes from agriculture's two advantages that insulate the rural sector... First, farming is geographically concentrated and farmers vote on agricultural policy above everything else, greatly enhancing the power of their votes...

Second, protectionists have developed populist but logically questionable arguments that agricultural staples cannot be treated as tradable commodities subject to competition. Domestic ... farming is presented as analogous to the military. Just as no government should outsource national security to untrustworthy foreigners, nor should any government permit the national food supply to rely on the supposed vagaries of foreign production. ...

Japan has long been the paragon of rich-country agricultural protectionism. Its electoral system heavily favors rural voters. Farmers are well organized politically, and the Ministry of Agriculture, Forestry, and Fisheries (MAFF) has been a fierce defender of agricultural protectionism. Food security arguments resonate well in Japan, owing to memories of shortages during World War II and its aftermath.

Ironically, Japan now offers a seed of hope for agricultural liberalization. The country's declining number of voters are lining up in favor of cheaper, imported food. Japan's demographic crisis is particularly acute in rural areas, where the average age of farmers is surpassing the retirement age...

Despite decades of government support, the rural sector cannot aspire to feed its declining population. ... The rapid aging and decline of Japan's rural population is fundamentally shifting Japan's approach to food. The MAFF is wistfully abandoning the cherished goal of food autarky. Its latest strategic plan calls for a self-sufficiency ratio of 45 percent by 2015 and focuses instead on "securing the stability of food imports"...

Even more galling for the MAFF's traditionalists, their food security argument is being turned on its head. Leveraging Japan's inability to feed itself, trade negotiators now argue that Japan needs to open up to imports or face being shut out of global food markets by fast-growing giants like China. ... While the logic of this argument is shaky, it taps into deep Japanese concerns about China's rise.

The rich countries face a similar demographic challenge... Japan, due to its advanced demographic decline, is the bellwether, yet other traditional rural protectionists like France and South Korea are not far behind. France now has half the number of farmers it had 20 years ago.

That is good news for farmers and consumers around the world. Rich and aging countries may finally become promoters, rather than opponents, of free trade in food.

Next, from a domestic viewpoint, here's a recent editorial from the Japan Times on this topic:

A viable farming sector, Editorial, Japan Times: This year will be important for Japan in developing policy for creating a viable agricultural sector without inviting criticism of protectionism from abroad. Among the reasons for tackling this issue is that, although the Doha Round of World Trade Organization negotiations has stalled, the liberalization of agricultural trade is inevitable.

Japan and Australia have agreed to start negotiations to work out a bilateral economic partnership agreement (EPA). Since Australia is a major agricultural producer, abolition of tariffs on farm products under the EPA would put Japanese farmers at a competitive disadvantage in terms of prices.

The Japanese people worry about the nation's low food self-sufficiency rate... Agriculture is likely to become a big issue during the campaign for the Upper House election... This is because agricultural villages exist in many of the 29 single-seat constituencies with election races...

The EPA between Japan and Australia would help Japan secure supplies of natural resources from Australia, such as coal and iron ore, and facilitate export of Japanese manufactured goods such as cars and machinery. But it could have a serious effect on Japan's farmers. ... The ministry says that in order to cushion the damage to Japanese farmers, the government would have to spend an annual 430 billion yen in offsets for price differences between Japanese and Australian products. ...

While Japan faces the possibility that large quantities of foreign agricultural products will penetrate its markets in the future, there is a strong public outcry for raising the nation's food self-sufficiency rate. In fiscal 1965, Japan's calorie-based self-sufficiency rate was 73 percent. By 1998, however, it dropped to 40 percent and remained at the level ... through fiscal 2005. The government seeks to raise the rate to 45 percent by the end of fiscal 2015.

In a December government poll, about 80 percent of those surveyed expressed worries about Japan's food supply in the future because of possible changes in the world situation.... The largest segment of those surveyed, 49 percent, put the desirable self-sufficiency rate at 60 to 80 percent.

To help increase the size of agricultural production units in Japan and thus strengthen overall production efficiency, the government will introduce a new subsidy policy in fiscal 2007. ...

The opposition Democratic Party of Japan calls for attainment of full self-sufficiency in the supply of staple foods. It proposes giving subsidies to every farming household... The Liberal Democratic Party characterizes the DPJ policy as "scattering money," while the DPJ criticizes the government for forsaking smaller farming households and their distinctive traditions. ...

Is the Wide, Wide World of Economics Too Wide?

Is there anything economists won't study? Should there be?:

Is an Economist Qualified To Solve Puzzle of Autism?, by Mark Whitehouse, WSJ: In the spring of 2005, Cornell University economist Michael Waldman noticed a strange correlation in Washington, Oregon and California. The more it rained or snowed, the more likely children were to be diagnosed with autism. ...

[This] soon led Prof. Waldman to conclude that something children do more during rain or snow -- perhaps watching television -- must influence autism. Last October, Cornell announced the resulting paper in a news release headlined, "Early childhood TV viewing may trigger autism, data analysis suggests."

Prof. Waldman's willingness to hazard an opinion on a delicate matter of science reflects the growing ambition of economists -- and also their growing hubris, in the view of critics. Academic economists are increasingly venturing beyond their traditional stomping ground, a wanderlust that has produced some powerful results but also has raised concerns about whether they're sometimes going too far. ...

Such debates are likely to grow as economists delve into issues in education, politics, history and even epidemiology. Prof. Waldman's use of precipitation illustrates one of the tools that has emboldened them: the instrumental variable, a statistical method that, by introducing some random or natural influence, helps economists sort out questions of cause and effect. Using the technique, they can create "natural experiments" that seek to approximate the rigor of randomized trials -- the traditional gold standard of ... research. ...

But as enthusiasm for the approach has grown, so too have questions. One concern: When economists use one variable as a proxy for another -- rainfall patterns instead of TV viewing, for example -- it's not always clear what the results actually measure. Also, the experiments on their own offer little insight into why one thing affects another.

"There's a saying that ignorance is bliss," says James Heckman ... at the University of Chicago who won a Nobel Prize in 2000... "I think that characterizes a lot of the enthusiasm for these instruments." Says MIT economist Jerry Hausman, "If your instruments aren't perfect, you could go seriously wrong." ...

In principle, the best way to figure out whether television triggers autism would be to do what medical researchers do: randomly select a group of susceptible babies at birth to refrain from television, then compare their autism rate to a similar control group that watched normal amounts of TV. If the abstaining group proved less likely to develop autism, that would point to TV as a culprit.

Economists usually ...[cannot] perform that kind of experiment. ... Instead, economists look for instruments -- natural forces or government policies that do the random selection for them. First developed in the 1920s, the technique helps them separate cause and effect. Establishing whether A causes B can be difficult, because often it could go either way. If television watching were shown to be unusually prevalent among autistic children, it could mean either that television makes them autistic or that something about being autistic makes them more interested in TV. ...
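[An aside, not from the article: here is a minimal sketch of the two-stage least squares version of this technique, run on simulated data. The variable names and magnitudes are purely illustrative, chosen to mirror the precipitation/TV example:]

```python
# Minimal two-stage least squares (2SLS) sketch on simulated data.
# "rain" instruments for "tv"; "confounder" is unobserved and biases OLS.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

confounder = rng.normal(size=n)                 # unobserved factor
rain = rng.normal(size=n)                       # instrument: random precipitation
tv = 0.5 * rain + confounder + rng.normal(size=n)            # endogenous regressor
outcome = 1.0 * tv + 2.0 * confounder + rng.normal(size=n)   # true effect of tv = 1.0

def slope(y, x):
    """OLS slope of y on x (with a constant)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Naive OLS is biased upward because the confounder moves tv and outcome together.
print("OLS estimate: ", round(slope(outcome, tv), 2))

# Stage 1: project the endogenous regressor onto the instrument.
Z = np.column_stack([np.ones(n), rain])
gamma, *_ = np.linalg.lstsq(Z, tv, rcond=None)
tv_hat = Z @ gamma

# Stage 2: regress the outcome on the fitted values. Because rain is
# independent of the confounder by construction, this recovers the true effect.
print("2SLS estimate:", round(slope(outcome, tv_hat), 2))
```

The point of the exercise is the gap between the two estimates: OLS is contaminated by the confounder, while the instrument isolates only the variation in TV-watching that precipitation induces.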

Prof. Waldman and his colleagues had such [techniques]... in mind when they approached autism and TV. By putting together weather data and government time-use studies, they found that children tended to spend more time in front of the television when it rained or snowed. Precipitation became the group's instrumental variable, because it randomly selected some children to watch more TV than others.

The researchers looked at detailed precipitation and autism data from Washington, Oregon and California -- states where rain and snowfall tend to vary a lot. They found that children who grew up during periods of unusually high precipitation proved more likely to be diagnosed with autism. A second instrument for TV-watching, the percentage of households that subscribe to cable, produced a similar result. Prof. Waldman's group concluded that TV-watching could be a cause of autism.

Criticism quickly arose, illustrating some of the perils of the economists' approach. For one, instruments are often too blunt. As Prof. Waldman concedes, precipitation could be linked to a lot of factors other than TV-watching -- such as household mold -- that could be imagined to trigger autism. ... "It is just too much of a stretch to tie this to television-watching," says Joseph Piven, director of the Neurodevelopmental Disorders Research Center at the University of North Carolina. "Why not tie it to carrying umbrellas?"

Also, Prof. Waldman's findings do nothing to explain the mechanism by which television would influence autism, a gap that instrumental variables are inherently unable to fill. That's one reason many autism researchers think he shouldn't have publicized his results or made recommendations to parents. "I think this is irresponsible," says Dr. Klin of Yale. "We should not provide clinical advice unless there is scientific evidence to substantiate it." ...

David Card, a professor at the University of California, Berkeley, who has done influential work on the minimum wage, fears that the fascination with the instrumental-variables technique "leads to interest in topics that economists are not particularly well-trained to study."

Those who favor the method say it's just one tool among many -- all of which have flaws -- and is intended to help fill in the picture. ... Prof. Waldman welcomes the scrutiny, saying he hopes his work will also provoke autism researchers to conduct clinical trials. "Obviously this is an unusual thing for an economist to be looking at," says Prof. Waldman. "Maybe I was overconfident. We'll see."

John Taylor: The Iraq Currency Plan Was a Big Success

John Taylor says the plan to ship U.S. currency into Iraq after the fall of Saddam Hussein was a big success:

Billions Over Baghdad, by John B. Taylor, Commentary, NY Times: Earlier this month, the House Committee on Oversight and Government Reform held a hearing that criticized the decision to ship American currency into Iraq just after Saddam Hussein’s government fell. As the committee’s chairman, Henry Waxman of California, put it ..., “Who in their right mind would send 360 tons of cash into a war zone?” His criticism attracted wide attention, feeding antiwar sentiment and even providing material for comedians. But a careful investigation ... paints a far different picture.

The currency that was shipped into Iraq in the days after the fall of Saddam Hussein’s government was part of a successful financial operation that had been carefully planned months before the invasion. Its aims were to prevent a financial collapse in Iraq, put the financial system on a firm footing and pave the way for a new Iraqi currency. ...

The plan ... had two stages ... designed to work for Iraq’s cash economy, in which checks or electronic funds transfers were virtually unknown and shipments of tons of cash were commonplace.

In the first stage, the United States would pay Iraqi government employees and pensioners in American dollars. These were obtained from Saddam Hussein’s accounts in American banks, which ... amounted to about $1.7 billion. Since the dollar is a strong and reliable currency, paying in dollars would create financial stability until a new Iraqi governing body was established... The second stage of the plan was to print a new Iraqi currency for which Iraqis could exchange their old dinars.

The final details of the plan were reviewed in the White House Situation Room by President Bush and the National Security Council on March 12, 2003. I attended that meeting. Treasury Secretary John Snow opened the presentation with a series of slides. ... [A] slide indicated that we could ship $100 million in small denominations to Baghdad on one week’s notice. President Bush approved the plan...

To carry out the first stage of the plan, President Bush issued an executive order on March 20, 2003, instructing United States banks to relinquish Mr. Hussein’s frozen dollars. From that money, 237.3 tons in $1, $5, $10 and $20 bills were sent to Iraq. During April, United States Treasury officials in Baghdad worked with the military and the Iraqi Finance Ministry officials ... to make sure the right people were paid. The Iraqis supplied extensive documentation of each recipient of a pension or paycheck. ...

On April 29, Jay Garner ... reported to Washington that the payments had lifted the mood of people in Baghdad during those first few confusing days. Even more important, a collapse of the financial system was avoided.

This success paved the way for the second stage of the plan. In only a few months, 27 planeloads (in 747 jumbo jets) of new Iraqi currency were flown into Iraq from seven printing plants around the world. Armed convoys delivered the currency to 240 sites around the country. From there, it was distributed to 25 million Iraqis in exchange for their old dinars, which were then dyed, collected into trucks, shipped to incinerators and burned or simply buried.

The new currency proved to be very popular. It provided a sound underpinning for the financial system and remains strong, appreciating against the dollar even in the past few months. Hence, the second part of the currency plan was also a success. ...

One of the most successful and carefully planned operations of the war has been held up in this hearing for criticism and even ridicule. As these facts show, praise rather than ridicule is appropriate: praise for the brave experts in the United States Treasury who went to Iraq in April 2003 and established a working Finance Ministry and central bank, praise for the Iraqis in the Finance Ministry who carefully preserved payment records in the face of looting..., and yes, even praise for planning and follow-through back in the United States.

Inflation and Unemployment II

This post from earlier today noted that there is evidence that the relationship between inflation and measures of real activity such as the unemployment rate has weakened in the last twenty-five years. I want to return to this topic because I think two different questions are being mixed up in some of the discussion surrounding this result.

There are two types of causality we can examine with respect to the relationship between unemployment and inflation (there are others as well). The first is how a monetary shock that impacts inflation affects unemployment and other measures of real activity. It's possible that unemployment reacts less to a change in monetary policy than it did in the past. The second type of causality is how a shock to unemployment or output, perhaps from a supply shock such as a change in productivity or a real cost shock, affects inflation in subsequent periods. The first is a demand-driven change in the economy while the second originates on the supply side.

Causality from Inflation to Unemployment

The causality here runs from monetary policy shocks to inflation to unemployment. It's much easier to tell a story about increased stability, I think, with respect to causality in this direction, so let's start here. I am going to use an expectations-augmented Phillips curve framework to illustrate the ideas. It's not the most up-to-date framework, but the ideas carry through to more modern theoretical structures, and it has the advantage of being simple and familiar.

The following figure shows two short-run Phillips curves, one for an expected inflation rate of p0 and one for an expected inflation rate of p1. Consider the PC labeled PC(pe = p0). This curve shows that if the inflation rate is equal to its expected value of p0, then the unemployment rate will be at the natural rate, U*. However, if the inflation rate differs from what is expected, the economy will slide along the short-run Phillips curve and unemployment will deviate from its natural rate.
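In equation form, a standard textbook statement of the same relationship (the slope parameter a is illustrative notation, not from the post):

```latex
% Expectations-augmented Phillips curve: unemployment U_t equals the
% natural rate U* whenever actual inflation pi_t matches expected
% inflation pi^e_t; unexpected disinflation (pi_t < pi^e_t) raises U_t.
U_t = U^{*} - a\,(\pi_t - \pi^{e}_t), \qquad a > 0
```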

[Figure: two short-run Phillips curves, PC(pe = p0) and PC(pe = p1), with points A, B, and C marking the adjustment path]

For example, suppose that agents expect an inflation rate of p0, but the actual inflation rate turns out to be, unexpectedly, the lower rate of p1. Then instead of ending up at the natural rate, i.e. at point A, the economy will move to point B and the unemployment rate will increase. Thus, any unexpected inflation causes unemployment to deviate from its natural rate.

If the lower inflation is permanent, over time expectations will adjust to reflect the lower actual inflation rate. That is, over time, the expected inflation rate will drop from pe = p0 to pe = p1 and the Phillips curve will shift downward as shown in the diagram. As the Phillips curve shifts down, the economy will move from point B to point C as the unemployment rate returns to the natural rate.

Notice that the path of unemployment in response to an unexpected shock is from point A to B to C as unemployment first rises, then falls back to its initial level. But what if the change in policy was expected rather than unexpected? In this case, the disinflation is less costly.

If the change in inflation is anticipated, then the PC will shift as the inflation rate changes and the economy can move from point A to point C without as much (or, with perfect flexibility, without any) increase in unemployment. That is, with some degree of price and wage rigidity present, an unexpected change in inflation will move the economy from A to B to C, while an expected change will move more directly from A to C as shown, for example, by the dashed line in the diagram. Thus, importantly, anticipated changes lead to much less variation in unemployment than unanticipated changes.
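Here is a minimal simulation of the two paths, assuming adaptive expectations and the linear Phillips curve written above; all parameter values are purely illustrative:

```python
# Simulated unemployment paths after a disinflation from 4% to 2%,
# under the Phillips curve U = U* - a*(pi - pe) and adaptive expectations.
import numpy as np

U_STAR, a, lam, T = 5.0, 0.5, 0.4, 12   # natural rate, PC slope, adjustment speed, periods
pi = np.full(T, 2.0)                    # actual inflation is 2% in every period shown;
                                        # the old, pre-disinflation rate was 4%

def path(pe0):
    """Unemployment path given the starting level of expected inflation."""
    pe, U = pe0, []
    for t in range(T):
        U.append(U_STAR - a * (pi[t] - pe))   # short-run Phillips curve
        pe += lam * (pi[t] - pe)              # adaptive expectations
    return np.round(U, 2)

print("unanticipated:", path(pe0=4.0))  # A -> B -> C: U jumps above U*, then decays back
print("anticipated:  ", path(pe0=2.0))  # A -> C: expectations already adjusted, U stays at U*
```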

This means that one explanation for increased stability (when causality is from policy to real activity) is that policy is easier to anticipate than in the past. Has this happened in the last twenty-five years, i.e. is this a realistic assumption? It seems so, for at least three reasons: the use of interest rate rules that have stabilized both actual and expected inflation, making them more predictable; transparency about how the interest rate rule is implemented; and increased credibility.

Since the 1980s, the Fed has followed an interest rate rule and it has, at the same time, attempted to be much more transparent and communicative about its policy procedures. To the extent that this has allowed policy to be predicted more accurately, the economy should be more stable.
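To see why a rule helps, here is a minimal sketch of the best-known benchmark, Taylor's (1993) rule with its conventional coefficients (nothing above commits the Fed to this particular rule):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor's (1993) rule: the nominal funds rate rises more than
    one-for-one with inflation and leans against the output gap."""
    return inflation + r_star + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Anyone who knows the rule and the current data can compute the implied
# rate, which is what makes policy under a rule predictable.
print(taylor_rule(inflation=3.0, output_gap=-1.0))  # -> 5.0
```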

Increased credibility since the 1970s is also important. If the monetary authority says it is committed to a policy of lowering inflation and announces such intentions, but then allows inflation to continue to creep upward so that it loses credibility (as the Fed did in the 1970s), then there will be much more uncertainty about Fed policy and therefore much less predictability. All of these factors - committing to a rule, increasing transparency, and enhancing credibility - allow policy to be predicted with less error and, according to the model above as well as more modern versions of it, the economy ought to stabilize. With a more stable economy, a given change in inflation will be associated with smaller changes in real activity.

Causality from Unemployment to Inflation

I think I'll save this for another time. For now I'll just note that the answer is hard to find in New Keynesian models. In these models, at least in their most basic form, an increase in the unemployment rate should lead to an increase in the future inflation rate, a result that is not supported in U.S. data (e.g. see Estrella and Fuhrer 2002).
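For readers who want to see the logic, here is the basic New Keynesian Phillips curve in standard textbook notation (a generic statement, not the specific model Estrella and Fuhrer test):

```latex
% pi_t = beta * E_t[pi_{t+1}] + kappa * x_t, with x_t the output gap
% (inversely related to unemployment), 0 < beta < 1, and kappa > 0:
\pi_t = \beta\,\mathbb{E}_t[\pi_{t+1}] + \kappa\,x_t
% Rearranged: E_t[pi_{t+1}] = (pi_t - kappa * x_t) / beta. A rise in
% unemployment lowers x_t, so holding current inflation fixed the model
% implies higher expected future inflation -- the prediction the
% U.S. data fail to support.
```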

How Should Changes in Inequality Be Measured and Assessed?

My latest at Cato Unbound:

How Should Changes in Inequality Be Measured and Assessed?, by Mark Thoma, Cato Unbound: I am pleased to see that Alan Reynolds is finally taking a closer look at some of the evidence that works against his claim that inequality has been stagnant in recent decades, though he predictably dismisses it. I will not convince him the evidence is valid, and he most certainly has not convinced me that it isn't, so I encourage anyone who is still puzzled about the evidence that profits have been mismeasured and that it matters for assessing changes in inequality in recent years to look at the research and draw their own conclusions. I have no doubt that a fair reading of the evidence will lead to the conclusion that inequality may in fact be worse than we thought, which runs opposite to Reynolds' claims. The essence is fairly simple: if we're mismeasuring real investment, then we are also mismeasuring profits. Given the concentration of corporate ownership at higher incomes, and the extent of the mismeasurement, this correction matters.

One additional note on Reynolds' responses since my last post, then I'd like to move on to other issues, particularly those raised in the contributions others have made to this debate. Reynolds made a wholly unsubstantiated and uncalled-for attack on the ethics of Piketty and Saez in a commentary on the opinion pages of the Wall Street Journal, where he accused them of fabricating results in academic journals to support an agenda. Given that, and given other things he has said at other times, it made me chuckle to see him say in his latest response that "all we have seen so far" are "comments about my temperament or assumed policy agenda," as though no evidence rebutting his stance has been presented, just personal attacks on his character or charges that he's pursuing an ideological agenda. Reynolds says there's not a strand of evidence that he's wrong ("no evidence has yet been presented to show any significant and sustained increase in inequality"). Not a strand he'll acknowledge anyway, but as has been pointed out in previous posts here, and has been documented elsewhere in many different ways, there's overwhelming evidence against his claims.

I want to follow up on the post from Dirk Krueger and Fabrizio Perri ("Inequality in What?") because I think they bring something important to the discussion: the academic underpinnings of how we approach the measurement and assessment of inequality changes. In their introduction they say:

Inequality is a fascinating subject, one that provokes discussion and makes it hard to settle the apparently simple question of whether income inequality in the US has increased since 1988. .... Our main point ... is to argue that to focus only on the evolution of current income inequality is insufficient if one is interested in the evolution of the distribution of living standards in the U.S.

They then go on to explain, correctly, that the academic literature does not support looking only at current income to measure inequality; a broader lifetime measure of consumption and leisure opportunities must be considered, i.e. some concept of the present value of expected lifetime utility is needed (the current consumption data Reynolds uses in some of his arguments are one proxy for lifetime resources under permanent-income stories of consumption, but an imperfect one, with acknowledged problems that make any results from these data difficult to interpret reliably). Their conclusion is worth repeating:

One conclusion we would ... like the readers to take home is ... that understanding the welfare effects of changes in measured inequality, and possibly the appropriate policy measures to deal with it, is a complex task that involves more than reporting the distribution of current resources. Ideally one should understand and measure the distribution of lifetime resources. In order to understand how lifetime resources translate into observable indicators, and what these indicators are, it is crucial to have a thorough understanding of how and to what extent households can transfer resources through time and across states of the world using financial markets. Our own previous work has highlighted the importance of using consumption as an indicator, but recent exciting work is being done by leading researchers in the economics community stressing the role of inequality and dispersion in other variables, too, such as labor effort or wealth, and assessing their impact on incentives, the allocation of resources, and the distribution of welfare.

This is the point I'd like to follow up on because it gets at the essence of Reynolds' point: measurement issues. What does the academic literature tell us about measuring inequality, and has the debate as presented here conformed to those standards?

There is a considerable body of work on measuring inequality, so I will only scratch the surface and point to a couple of surveys on the topic and highlight some of the key results that relate to the discussion here. Perhaps as others respond they can add links to additional resources people can use to learn more about what the academic literature says about measuring changes in inequality.

One such resource is Some New Methods for Measuring and Describing Economic Inequality, by R. L. Basmann, K. J. Hayes, and D. J. Slottje, which was reviewed by John A. Bishop in the Journal of Economic Literature in June 1995. Since much of the evidence presented by Reynolds and by Richard Burkhauser in this debate has been in the form of Gini coefficients (e.g. Burkhauser's argument that I am wrong about the persuasiveness of the overall evidence relies on Gini coefficients), let me quote one passage from the review:

First, the authors stress the shortcomings of cardinal evaluations of inequality that rely upon a single index such as the Gini coefficient. They argue that there is no one best inequality index -- each contains an implicit set of distributional weights, and the precise weights underlying many familiar indices, if clearly understood, would probably not be widely accepted by policy makers.
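To make the point concrete, here is a minimal sketch of two such indices computed on made-up income data. The Atkinson index makes its distributional weights explicit through an inequality-aversion parameter epsilon; the Gini's weights are implicit in its rank-based construction:

```python
# Two inequality indices on illustrative data (not real survey incomes).
import numpy as np

def gini(x):
    """Gini coefficient computed from the Lorenz curve of sorted incomes."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    lorenz = np.cumsum(x) / x.sum()          # cumulative income shares
    return (n + 1 - 2 * lorenz.sum()) / n

def atkinson(x, eps):
    """Atkinson index; larger eps puts more weight on the bottom of the distribution."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    if eps == 1:
        return 1 - np.exp(np.log(x / mu).mean())
    return 1 - np.mean((x / mu) ** (1 - eps)) ** (1 / (1 - eps))

incomes = np.array([12_000, 25_000, 40_000, 60_000, 250_000])
print("Gini:               ", round(gini(incomes), 3))
print("Atkinson, eps = 0.5:", round(atkinson(incomes, 0.5), 3))
print("Atkinson, eps = 2.0:", round(atkinson(incomes, 2.0), 3))
```

Because different values of eps weight parts of the distribution differently, two distributions can be ranked differently depending on eps - which is exactly the reviewers' point that every index embeds distributional judgments.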

Another summary of techniques for measuring inequality can be found in Frank Cowell's Measuring Income Inequality, 1995. Two more collections of the work on measuring inequality are given in The Handbook of Income Inequality Measurement, by Jacques Silber, 2000, which is reviewed by Charles Beach in The Journal of Economic Literature, and in The Handbook of Income Distribution, edited by A.B. Atkinson and F. Bourguignon, 2000. Let me also be sure to recommend my colleague Peter Lambert's The Distribution and Redistribution of Income, 2002. These surveys and texts discuss topics such as using stochastic dominance techniques along with Lorenz curves to assess inequality, equivalence scaling, parametric and non-parametric approaches to measurement, welfare comparisons, allowing for different family sizes and compositions, horizontal versus vertical inequality measurement, intertemporal measurement of inequality, and all sorts of other important theoretical and measurement issues you won't generally find in popular discussions of the topic and that have not, for the most part, been a part of the evidence and discussion Reynolds has presented.

There has been quite a bit more work since these overviews were published, but they constitute a good introduction to many of the important theoretical and empirical issues in the debate over inequality. As an example of more recent work, another good source of information on this topic is the Journal of Economic Inequality which, coincidentally, has a paper in its latest issue on a problem Reynolds is worried about - robust estimation in the presence of contaminated data in the tails of distributions (note that Reynolds is critical of Krugman's efforts to look at data contamination at the top of the income distribution, efforts that follow roughly along the lines suggested in approach (2) of this paper):

Robust stochastic dominance: A semi-parametric approach, by Frank A. Cowell & Maria-Pia Victoria-Feser, Journal of Economic Inequality, April 2007: Abstract: Lorenz curves and second-order dominance criteria, the fundamental tools for stochastic dominance, are known to be sensitive to data contamination in the tails of the distribution. We propose two ways of dealing with the problem: (1) Estimate Lorenz curves using parametric models and (2) combine empirical estimation with a parametric (robust) estimation of the upper tail of the distribution using the Pareto model. Approach (2) is preferred because of its flexibility. Using simulations we show the dramatic effect of a few contaminated data on the Lorenz ranking and the performance of the robust semi-parametric approach (2). ...
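To give a flavor of what approach (2) involves, here is a rough sketch on simulated data. This is not the authors' code: it omits the robust estimation step the paper develops and uses the simple Hill estimator for the Pareto exponent instead:

```python
# Semi-parametric tail fit: empirical distribution for the body,
# Pareto model (via the Hill estimator) for the upper tail.
import numpy as np

rng = np.random.default_rng(1)
incomes = np.sort(rng.lognormal(mean=10.0, sigma=0.8, size=5_000))

k = 250                        # top observations treated as the Pareto tail
threshold = incomes[-k - 1]    # tail begins above the (k+1)-th largest income
tail = incomes[-k:]

# Hill estimator of the Pareto exponent alpha.
alpha = k / np.sum(np.log(tail / threshold))

# Replace the noisy observed tail with its Pareto-implied mean (valid for
# alpha > 1), then compute the implied top income share.
pareto_mean = threshold * alpha / (alpha - 1)
top_share = k * pareto_mean / (incomes[:-k].sum() + k * pareto_mean)
print("alpha:", round(alpha, 2), " top share:", round(top_share, 3))
```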

And that's just one paper in the most recent issue of a single journal; there is a considerable volume of work on these issues. The important thing to realize from all of this is that no single measure of inequality is perfect. Thus, it is important to look at a variety of measurements using a variety of data sets and state-of-the-art techniques, and to be fully aware of and acknowledge the shortcomings of the process at every step along the way so that results can be interpreted properly, both in establishing how inequality has changed through time and in presenting a balanced overview of the results. When researchers go through these exercises carefully and weigh the evidence objectively they conclude, with few exceptions, that inequality has been rising in recent years.

Feel free to comment as you wish; I certainly expect disagreement with some of the positions I've taken. But I do have a question: does anyone know of additional resources on the measurement issues surrounding the inequality question?

Here's one response to my post:

Additional Reflections, by Richard Burkhauser, Cato Unbound: It appears from Mark Thoma's last posting that he is inching his way into the consensus of Reynolds, Burtless, and Burkhauser, which says that it is hard to find much of an increase in household size adjusted income among the bottom 98 or 99 percent of the United States population since the 1980s using standard Gini measures and consistently top coded data from both the public use and internal restricted access Current Population Survey. And, though he doesn't say so, it also appears he is inching his way toward the view that the gains from economic growth were more equally distributed over the last major business cycle (1989-2000) than the previous one (1979-1989) within this population.

Thoma has posted some very valuable references to the more technical economics literature, which uses alternative measures of income inequality and struggles to extract the most information about the entire distribution from imperfect data by making certain assumptions about the shape of the entire distribution given limited data at the top. He especially notes the problem that outliers at the top of the distribution cause in such efforts. Hence it appears he now recognizes that you sometimes need to be a weatherman (or at least need to carefully listen to several of them) to know which way the wind is blowing.

But instead of then agreeing that the literature on how exactly to capture this very high end of the distribution (the top 1 or 2 percent) is still developing — as is the literature on how exactly to use such measures to tell us about what has happened to income in this very high income population over the last thirty years — he concludes with a most amazing non-sequitur: "When researchers go through these exercises carefully and weigh the evidence objectively they conclude, with few exceptions, that inequality has been rising in recent years."

A careful reading of the most recent article Thoma provides us makes no such claim:
"Robust stochastic dominance: A semi-parametric approach," by Frank A. Cowell & Maria-Pia Victoria-Feser, Journal of Economic Inequality, April 2007. Rather, the article is a cautionary tale of the difficulties of making such judgments, and the sensitivity of such finding to outliers at the top of the distribution. While the paper uses British data, similar problems are likely to be found using the CPS and, I suspect, the other data sets we have discussed over the course of our conversation. That is why much of the income inequality literature using the CPS has used "trimming" or consistent top coding to avoid the problem of outliers and instead talks about the bottom 98-99 percent of the income distribution. Pinning down what has happened to the top 1 or 2 percent of the income distribution is the hard work that remains to be done before we can state definitively what has been happening there and how it impacts overall income distribution.

Again, relying on any single paper or measure (e.g. "standard Gini measures") is a mistake; it's the overall picture that is important. There are also Gini coefficient results that come to opposite conclusions, so I don't think I am inching anywhere other than to point out that much of the evidence that has been used to counter the idea that inequality is increasing has relied on very narrow measures and samples, or has not conformed to state-of-the-art practices. A broader view of the inequality evidence says that inequality has been increasing. [In addition, I'll note yet again that Krugman showed that the top-coding issue mattered - and supported increasing inequality, the opposite of Reynolds' claim - when he adjusted for the top-coding problem using a Pareto distribution for top incomes.]

I realize that those with a point to make, or evidence to the contrary they are heavily invested in, object to my calling this the "broader view" or the "consensus." But my reading of the evidence and the discussion surrounding it by people working in the area is that, in fact, the vast majority conclude that inequality has been increasing in recent years once they have waded through the large amount of accumulated evidence on this topic. So I have no reason, and haven't come across one in this debate, to alter that view.

February 26, 2007

Paul Krugman: Substance Over Image

Paul Krugman explains the similarity between presidential elections and high school popularity contests and the trouble that similarity has caused us. In an attempt to avoid such problems this time around, he poses some questions for Democratic candidates that are intended to cut through their superficial, well-crafted images, test their knowledge of the issues, and gauge their "judgment, seriousness and courage":

Substance Over Image, by Paul Krugman, Commentary, NY Times: Six years ago a man unsuited both by intellect and by temperament for high office somehow ended up running the country.

How did that happen? First, he got the Republican nomination by locking up the big money early.

Then, he got within chad-and-butterfly range of the White House because the public, enthusiastically encouraged by many in the news media, treated the presidential election like a high school popularity contest. The successful candidate received kid-gloves treatment — and a free pass on the fuzzy math of his policy proposals — because he seemed like a fun guy to hang out with, while the unsuccessful candidate was subjected to sniggering mockery over his clothing and his mannerisms.

Today, with thousands of Americans and tens of thousands of Iraqis dead thanks to presidential folly, with Al Qaeda resurgent and Afghanistan on the brink, you’d think we would have learned a lesson. But the early signs aren’t encouraging. ...

Enough already. Let’s make this election about the issues. Let’s demand that presidential candidates explain what they propose doing about the real problems facing the nation, and judge them by how they respond. ... So here are some questions for the Democratic hopefuls. (I’ll talk about the Republicans another time.)

First, what do they propose doing about the health care crisis? All the leading Democratic candidates say they’re for universal care, but only John Edwards has come out with a specific proposal. The others have offered only vague generalities...

Second, what do they propose doing about the budget deficit? There’s a serious debate within the Democratic Party between deficit hawks, who point out how well the economy did in the Clinton years, and those who, having watched Republicans squander Bill Clinton’s hard-won surplus on tax cuts for the wealthy and a feckless war, would give other things — such as universal health care — higher priority than deficit reduction.

Mr. Edwards has come down on the anti-hawk side. But which side are Mrs. Clinton and Mr. Obama on? I have no idea.

Third, what will candidates do about taxes? Many of the Bush tax cuts are scheduled to expire at the end of 2010. Should they be extended, in whole or in part? And what ... about the alternative minimum tax...?

Fourth, how do the candidates propose getting America’s position in the world out of the hole the Bush administration has dug? All the Democrats seem to be more or less in favor of withdrawing from Iraq. But what do they think we should do about Al Qaeda’s sanctuary in Pakistan? And what will they do if the lame-duck administration starts bombing Iran?

The point of these questions isn’t to pose an ideological litmus test. The point is, instead, to gauge candidates’ judgment, seriousness and courage. How they answer is as important as what they answer.

I should also say that although today’s column focuses on the Democrats, Republican candidates shouldn’t be let off the hook. In particular, someone needs to make Rudy Giuliani, who seems to have become the Republican front-runner, stop running exclusively on what he did on 9/11.

Over the last six years we’ve witnessed the damage done by a president nominated because he had the big bucks behind him, and elected (sort of) because he came across well on camera. We need to pick the next president on the basis of substance, not image.

_________________________
Previous (2/23) column: Paul Krugman: Colorless Green Ideas

The Weakened Link Between Unemployment and Inflation

Greg Ip reports that the relationship between inflation and unemployment seems to have shifted over time. Variations in output and unemployment in the current time period appear to have less influence over future inflation than they did twenty-five years ago:

Policy Makers At Fed Rethink Inflation's Roots, by Greg Ip, Wall Street Journal: For decades, a simple rule has governed how the Federal Reserve views the nation's economy: When unemployment falls too low, inflation goes up, and vice versa.

But Fed officials have rethought that notion. They believe it takes a far bigger change in unemployment to affect inflation today than it did 25 years ago. Now, when inflation fluctuates, they are far more likely to blame temporary factors, such as changes in oil prices or rents...

One explanation for why inflation is influenced less by changes in unemployment is that the American public has come to expect inflation to remain stable. When inflation moves up or down, it is less likely to get stuck at the new level because companies and workers don't factor the change into their expectations... Another explanation is that the Fed is better at adjusting interest rates in anticipation of swings in unemployment before those swings can affect inflation.

This ... doesn't mean unemployment can be ignored. But it does mean that in the short run, a period of high or low joblessness would be less likely to alter the Fed's view of inflation and trigger an immediate change in interest rates. A shift in public expectations of inflation, however, would carry more weight in the Fed's calculations. ...

Though the trend has been under way for 25 years, only recently has intensive research by Fed economists and others incorporated it into mainstream thinking. ...

"Among the feet-on-the-ground Fed inflation forecasters, who do this for a living, there's been a lot of concern for the last 10 years about whether there's...less of a relationship between output and future inflation," says Harvard University economist James Stock. "The accumulation of evidence occurs at a snail's pace. The evidence now is a lot stronger."

Mr. Stock and fellow economist Mark Watson at Princeton University presented evidence ... in late 2005 that inflation's long-term trend has varied little since 1984, and that most fluctuations were the result of temporary disturbances, such as a change in energy prices. ...

While a given drop in unemployment is less likely to spark inflation, the potential is still there. The Fed's staff estimates it takes up to twice as much additional unemployment to achieve a percentage drop in inflation as it did before 1984. ...

What Stock and Watson have shown (e.g. see “Has Inflation Become Harder to Forecast?”) is that inflation is both harder and easier to forecast at the same time. It's easier because it is more stable. Thus, the root mean square error for even fairly naive forecasts has fallen over time as inflation has stabilized. However, inflation has become harder to forecast because changes in real activity such as unemployment or output have less predictive power for future inflation than in the past.
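Here is a minimal illustration of the first claim on simulated data (not Stock and Watson's): a naive random-walk forecast, where next quarter's inflation is forecast to equal this quarter's, has a smaller RMSE when inflation hovers around a stable level:

```python
# RMSE of a naive "no-change" inflation forecast in two simulated regimes.
import numpy as np

rng = np.random.default_rng(2)

def rmse_naive(series):
    errors = series[1:] - series[:-1]    # forecast = last observed value
    return np.sqrt(np.mean(errors ** 2))

volatile = 4 + np.cumsum(rng.normal(0, 0.6, 100))   # drifting, pre-1984-style inflation
stable = 2 + rng.normal(0, 0.3, 100)                # anchored around a fixed level

print("naive RMSE, volatile regime:", round(rmse_naive(volatile), 2))
print("naive RMSE, stable regime:  ", round(rmse_naive(stable), 2))
```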

What is the source of the change in the relationship? At the end of the paper Stock and Watson say:

One thing this paper has not done is to attempt to link these changes in time series properties to more fundamental changes in the economy. The obvious explanation is that these changes stem from changes in the conduct of monetary policy in the post-1984 era, moving from a reactive to a forward-looking stance... But obvious explanations are not always the right ones, and there are other possibilities. To a considerable extent, these other possibilities are similar to the ones raised in the context of the discussion of the great moderation, including changes in the structure of the real economy, the deepening of financial markets, and possible changes in the nature of the structural shocks hitting the economy. We do not attempt to sort through these explanations here, but simply raise them to point out that the question of deeper causes for these changes merits further discussion.

I think it's a combination of technological change (in particular, computers and digital technology that improved business processes, e.g. inventory management) and better monetary policy. Monetary policy has improved by reducing unexpected policy changes through transparency and enhanced credibility, and the implementation of inflation targeting rules that have stabilized both expected and actual inflation.

Larry Summers: History Lessons for China

Larry Summers looks to history for lessons China can use to limit the chances that it follows Japan into a "lost decade of deflation and considerable deterioration in its international relations":

History holds lessons for China and its partners, by Lawrence Summers, Financial Times (free): A rising Asian power has emerged as an export powerhouse and enjoys rapid, export-led growth fuelled by extraordinarily high savings and investment rates. Its technological capacity is upgraded at prodigious rates and its businesses threaten an ever greater swathe of industry in Europe and the US. Its high level of central bank reserves and burgeoning current account surplus lead to claims that its exchange rate is being unfairly manipulated.... Its financial system is ... heavily regulated in ways that favour domestic institutions and has close ties to government and industry. Rapid productivity growth holds down product prices but asset price inflation is rampant.

US congressional leaders demand radical action to contain the economic threat. Delegations of senior US economic officials engage in “dialogue” ..., warning of the congressional demons who stand ready to act if “results” are not achieved quickly.

All of this describes what is happening in and with China today. It also describes the Japanese economy in the late 1980s and early 1990s before its lost decade of deflation and considerable deterioration in its international relations. While there are obvious differences, notably China’s much lower level of development, the similarities are striking enough to invite an effort to draw some lessons for China and its partners from the earlier Japanese experience. [...read more...]

After discussing the lessons to be learned from the causes of Japan's difficulties, including policy errors made by Japanese officials, he notes that pressure on other countries to reform their economic systems and policies can be counter-productive:

These lessons contrast sharply with those drawn by some observers in and out of China, who attribute Japan’s deflation and consequent poor performance to its willingness to accede to US pressure for exchange rate appreciation. ...

[There is a] need for modesty regarding economic policy dialogues that seek to create pressure for change. Events and national and political decisions, not international communiqués, shape economic outcomes. The impact of events beyond the control of governments – the collapse of Japan’s asset markets, information technology’s spur to US productivity growth, the Asian financial crisis – dwarfed the issues debated in economic dialogues.

Even where government policies might have significant impact, there is no evidence that Japan in the 1980s and 1990s made any changes in domestically sensitive structural policy areas such as housing finance, social security or retail regulation in response to the US Structural Impediments Initiative or its successors. Policy in areas of this kind is shaped by domestic politics; if heavy-handed pressure makes it easier for special interests to invoke nationalism as they resist change, high-profile dialogues can be counter-productive. In a world where goodwill is scarce, heavy-handed dialogues engender resentments that spill over into other spheres. ...

Republicans Could Have Defused the "AMT Bomb," But Didn't

Linda Beale of ataxingmatter takes on a recent Wall Street Journal editorial on "Bill Clinton's AMT Bomb":

Wall Street Journal AMT Editorial, by Linda Beale: The Wall Street Journal is an important source of financial news, but people should not expect to read its editorial page without their spin antennae turned on. Today's editorial on the AMT is a good example of the way the Journal does partisan (and misleading) spin. It's titled "Bill Clinton's AMT Bomb," Wall Street Journal, Feb. 23, 2007...

What's wrong with it?

First, it lays the continuing downward creep of the AMT at Clinton's feet, in spite of the fact that the AMT downward creep is directly related to two things--the lack of indexation (which no Congress has yet passed and no president has pushed) and the nature of the Bush tax cuts (they lowered top rates for the regular tax so much that it made many more taxpayers subject to the AMT, and they intentionally did not lower the AMT rates).

The Journal blames Clinton for the AMT because the Clinton administration did the sensible thing--when top rates were raised, the AMT rates were raised as well so that the AMT could continue to function parallel to the regular system the way it was intended to. (Clinton also increased the AMT exemption--permanently, unlike the Bush Congress.) If the Bush administration had applied the same logic that the Clinton administration applied, it would have lowered the AMT rates (and again increased the exemption permanently, because of inflation) when it lowered the regular tax rates, so that the AMT would have continued to function parallel to the regular system in the way it was intended to.

That would have prevented any problem of the AMT slipping down into the middle class other than from the lack of indexation (which nobody has yet really dealt with). But that would have also forced the Bush administration to acknowledge that the Bush tax cuts were far deeper revenue reductions (and far more beneficial to wealthy Americans) than it apparently wanted to admit. So it didn't do the aboveboard thing and instead decided to argue that it could take care of the AMT later...

Second, the Journal blames Clinton for not indexing the AMT exemption to inflation. That's like the pot calling the kettle black. At some point, someone should decide just how far down the AMT is targeted, set the exemption appropriately, and then index it for inflation. But the Bush Congress didn't do anything but a year-by-year "fix" to the exemption amount, and even then only when it was pushed to do so. Why does the Bush-supporting and Clinton-bashing Journal pick out the failure of the 1993 changes to index the amount as the time it didn't get indexed, instead of the 2001, 2003, and other changes during the Bush administration?

Third, it suggests that "average middle-class families" got about a $2,000-per-family reduction from the Bush tax cuts. That's a misuse of averages...

Fourth, the Journal calls the AMT a "liberal monster that was created in the name of soaking the rich but has now come back to swallow the middle class." Again, another liberal-bashing spin. The AMT was never intended to "soak the rich"--unless (like the Wall Street Journal, apparently) you happen to think that having wealthy people pay at least some small percentage of their income in tax every year is "soaking" them. The AMT was originally meant to make sure that very rich people couldn't arrange the type and timing of income in such a way as to pay no tax at all, and over time Congress saw that it also provided a way to ensure that people with considerable income couldn't over-use various incentives built into the tax system (certain kinds of tax exempt income, ability to defer wage income received in the form of stock option grants, etc.) that, taken in the aggregate, left them paying almost no tax. In other words, the AMT does fairly well the purpose for which it has been intended for decades, ever since Congress specifically enlarged its scope beyond just getting the rich who weren't paying any regular tax at all. (It would do even better in achieving its purpose if the capital gains preferential rate were again made an AMT preference.) ...

Just a note. The Journal sarcastically refers to proposals to raise taxes on CEOs in order to pay for adjusting the AMT to protect the (true) middle class. (Those officers ..., remember, average about 400 times the annual income of workers in their companies.) But the same editorial board thought it was completely reasonable for businesses (and a lot of them not so small) to receive even more tax cuts ... in return for their cost to implement a tiny, long-delayed, and much needed increase in the minimum wage (which in too many cases was zero, since state wage laws or local markets already require wages over the proposed minimum). It illustrates a point I've made several times... When one talks about taxes and money, changes will almost always be redistributive, but the directionality is what matters. Redistribution upwards, in fact, is the norm, as in the home mortgage interest deduction and many other tax expenditures in the Code that favor those in the higher income brackets, though most who support those kinds of changes talk a lot about the "free" market. Redistribution downwards, in favor of those who don't have much, is hard to do in an economy that is so dominated by huge multinational enterprises with enormous power and in which populist and progressive sentiments are often treated as naive and sentimental.

For a good review of the AMT issues, you can also turn to the February 22, 2007 entry "White House May Be Negotiating With Itself on Alternative Minimum Tax," on Talking Taxes, the tax blog connected with Citizens for Tax Justice. (Regrettably, these entries are all set to the general URL for the blog, so after today you'll have to go to the right hand column and click on the title to pull up this entry directly.) ...

February 24, 2007

Maximizing Flexicurity: Finding the Optimal Mix of Flex and Security

Louis Uchitelle of the NY Times says there is reason to doubt the claim that labor market protections in Europe and Japan explain most of the differences in output growth and other measures of economic performance between Europe, Japan, and the U.S.:

Job Security, Too, May Have a Happy Medium, by Louis Uchitelle, NY Times: For more than a decade, many American economists have pointed to Europe and Japan as prima facie evidence that layoffs in the United States are a good thing. The economies in those countries were not nearly as robust as this country’s. And the reason? Too much job security...

American employers, in sharp contrast, have operated with much more “flexibility.” Hiring and firing at will, they shift labor from where it is not needed to where it is needed. ...

This shuffling out of one job and into another shows up in the statistics as nearly full employment. Never mind that the shuffling does not work as efficiently as the description implies or that many of the laid-off workers find themselves earning less in their next jobs, an income roller coaster that is absent in Europe and Japan. A dynamic economy leaves no alternative, or so the reasoning goes among mainstream economists.

“Trying to prevent this creative destruction from happening is a recipe for less economic growth and less productivity,” said Barry Eichengreen ... at the University of California, Berkeley.

Starting in the mid-1990s, Europe and Japan did wallow in recession or weak growth while the American economy expanded at a spectacular clip. But no longer. Growth is slowing in the United States just as it speeds up in the 25-nation European Union and in Japan. Unemployment rates in those countries are also beginning to come down...

As the gaps close, does that mean that job security, in the European and Japanese style, is the right way to go after all? The question would be easier to answer if the European Union countries and Japan had stuck to their orthodox job security. They have not. On their way to revival, they adopted some of America’s practices.

“A number of countries have found ways to make their labor markets more flexible, without sacrificing their greater commitment to a government role in equalizing incomes,” said Paul Swaim, a senior economist at the Organization for Economic Cooperation and Development in Paris.

So the old dichotomy — insecurity versus security — is gradually giving way to a new debate. “It is obviously the right mix of security and insecurity that has to be achieved,” said Richard B. Freeman ... at Harvard...

The guideposts in this search for the right mix should not be just economic growth rates and unemployment levels. These are too often affected by business cycles. Many American economists, bent on demonstrating the payoff from layoffs, paid relatively little attention to the cyclical reasons for the underperformance of Japan and Europe. “Sometimes we forget these cyclical forces,” said Sanford M. Jacoby ... at the University of California, Los Angeles. ...

Cycles count. But so do labor policies.

In some European countries, employers are using temporary and part-time workers much more than they did in the past. That gives them leeway to expand and contract their work forces without having to add full-timers who are protected against layoffs. ...Japan ... also relies for “flexibility” on part-timers and temps.

If cost-cutting is necessary in Japan, there is a pecking order, says Yoshi Tsurumi, an economist at Baruch College in Manhattan... Dividends are cut first, then salaries — starting at the top. Finally, there are layoffs — if attrition is not enough to shrink staff. “The matter of flexibility is important,” Mr. Tsurumi said, “but the Japanese notion is to retrain and transfer people within an organization.”

Elsewhere, France and Germany have eased job protection for employees of small businesses. ... And the Danish model is getting a lot of attention. Employers in Denmark are relatively free to lay off workers, but the state then steps in with benefits that replace 70 percent of the lost income for four years. Government also finances retraining and education, pressuring the unemployed to participate and then insisting that they accept reasonable job offers or risk cuts in their benefits.

The Danish government devotes 3 percent of the nation’s gross domestic product to retraining, compared with less than 1 percent in the United States. And, of course, everywhere in Europe, the state pays for health insurance and for pensions that often encourage early retirement by replacing big percentages of preretirement income.

“What the Europeans and the Japanese understand is that modern economies can sustain social protections without killing the golden goose,” said Jared Bernstein, a senior economist at the Economic Policy Institute in Washington.

That is an understanding that perhaps will take root among American economists and policy makers, deprived as they now are of their long-running contention that job security resulted in weak economic growth in Europe and Japan.

Here's a pretty lukewarm endorsement of the flexicurity model in a working paper by Jianping Zhou at the IMF. The paper makes a point similar to one made above: part of Denmark's success has come from reforms since the 1980s that give people a stronger incentive to train for and take new jobs. In particular, eligibility for programs has been tightened, and the time people are allowed to spend on some programs has been shortened. Still, relative to many countries, the programs offer substantial income protection:

Danish for All? Balancing Flexibility with Security: The Flexicurity Model, by Jianping Zhou, February 2007: ...V. Concluding Remarks The Danish flexicurity model has been widely praised for its association with a low unemployment rate and a high standard of social security for the unemployed. The model combines a high degree of labor market flexibility with a high level of social protection. While most European countries are facing chronically high unemployment rates and the needed labor market reforms often face strong political opposition, the flexicurity model looks increasingly attractive to policymakers in Europe.

However, whether the Danish model should and can be adopted by other European countries to reduce unemployment is not obvious. First, Denmark has traditionally had a combination of a flexible labor market and a high level of income protection. Economic performance under this system has varied, as demonstrated by the economic crisis during the early 1980s and the remarkable labor market performance in recent years. Second, other countries have been able to reduce their high unemployment rates to low levels with rather different social models (e.g., Ireland, Sweden, and the United Kingdom). Finally, generous unemployment benefits often raise moral hazard issues that might hinder effective implementation of the Danish model. In this regard, a strict job search requirement and tight eligibility criteria for unemployment benefits are key.

The Danish model is costly. The tax burden in Denmark is heavy because of the need to finance the country’s high spending on labor market programs and unemployment benefits. Since most countries tempted to adopt the Danish model will start from a high unemployment level, a move toward the Danish model will, in the short run, trigger a sharp increase in the cost of unemployment benefits and active labor market policies, thereby widening the tax wedge, with an adverse impact on labor demand and supply. This implies that the Danish model may not be suitable for countries facing high unemployment and budgetary difficulties. Using a calibrated model for France, the paper finds that implementing the flexicurity model could be costly, and the reduction in structural unemployment during the first few years might be limited.

Nonetheless, certain key aspects of the Danish model could usefully be studied and considered by other countries. Among others, they include the various relationships between the population’s willingness to accept labor market flexibility, its confidence in a well functioning social safety net, and the accompanying need to develop effective labor market policies in order to avoid high costs and perverse incentives. The Danish government’s constant awareness and analysis of the challenges facing the flexicurity model and its ability to respond to them with policy actions are noteworthy in this regard. For instance, since the economic crisis in the early 1980s, reforms have been implemented to shorten the maximum period for participation in active labor market programs and tighten the eligibility criteria for unemployment benefits.

Bill Gates: Open the Doors to More High-Skill Immigration

Bill Gates continues his crusade to allow more high-skilled immigrants into the U.S.:

How to Keep the U.S. Competitive, by Bill Gates, Commentary, Washington Post: ...Innovation is the source of U.S. economic leadership and the foundation for our competitiveness in the global economy. Government investment in research, strong intellectual property laws and efficient capital markets are among the reasons that America has for decades been best at transforming new ideas into successful businesses.

The most important factor is our workforce. Scientists and engineers trained in U.S. universities -- the world's best -- have pioneered key technologies such as the microprocessor, creating industries and generating millions of high-paying jobs.

But our status as the world's center for new ideas cannot be taken for granted. Other governments are waking up to the vital role innovation plays in competitiveness. ...

Two steps are critical. First, we must demand strong schools so that young Americans enter the workforce with the math, science and problem-solving skills they need to succeed in the knowledge economy. We must also make it easier for foreign-born scientists and engineers to work for U.S. companies. ...

Our schools can do better. Last year, I visited High Tech High in San Diego; it's an amazing school where educators have augmented traditional teaching methods with a rigorous, project-centered curriculum. Students there know they're expected to go on to college. This combination is working: 100 percent of High Tech High graduates are accepted into college, and 29 percent major in math or science, compared with the national average of 17 percent.

To remain competitive in the global economy, we must build on the success of such schools...

American competitiveness also requires immigration reforms that reflect the importance of highly skilled foreign-born employees. Demand for specialized technical skills has long exceeded the supply of native-born workers with advanced degrees, and scientists and engineers from other countries fill this gap.

This issue has reached a crisis point. Computer science employment is growing by nearly 100,000 jobs annually. But at the same time, studies show a dramatic decline in the number of students graduating with computer science degrees.

The United States provides 65,000 temporary H-1B visas each year to make up this shortfall -- not nearly enough to fill open technical positions.

Permanent residency regulations compound this problem. Temporary employees wait five years or longer for a green card. During that time they can't change jobs, which limits their opportunities to contribute to their employer's success and overall economic growth.

Last year, reform on this issue stalled as Congress struggled to address border security and undocumented immigration. As lawmakers grapple with those important issues once again, I urge them to support changes to the H-1B visa program that allow American businesses to hire foreign-born scientists and engineers when they can't find the homegrown talent they need. This program has strong wage protections for U.S. workers: Like other companies, Microsoft pays H-1B and U.S. employees the same high levels...

Reforming the green card program to make it easier to retain highly skilled professionals is also necessary. These employees are vital to U.S. competitiveness, and we should welcome their contribution to U.S. economic growth.

We should also encourage foreign students to stay here after they graduate. Half of this country's doctoral candidates in computer science come from abroad. It's not in our national interest to educate them here but send them home...

During the past 30 years, U.S. innovation has been the catalyst for the digital information revolution. If the United States is to remain a global economic leader, we must foster an environment that enables a new generation to dream up innovations, regardless of where they were born. Talent in this country is not the problem -- the issue is political will.

On High Tech High, the fact that more of its graduates major in math and science in college than students from other schools (29% versus 17%) is not, in and of itself, evidence that these schools work, since a high degree of selection bias is likely present: students who like math and science are more likely to enroll in a "High Tech High" than other students, and the web site says the school gets 3,000 applications for 300 slots. I agree completely with the message on education, but worry that instead of building upon what works, we are too ready to tear it all down and start over. We have a Gates Foundation small-schools initiative here in Eugene that broke an existing high school into three smaller specialty schools (an International High School; a school specializing in Invention, Design, Engineering, Arts, & Science; and the North Eugene Academy of Arts). If it works, great, but these are kids' lives we are playing with, and if it doesn't work and outcomes deteriorate, the price of innovation, the risk, becomes very localized and very steep for the students who participate in the failed experiments (and participation is not always voluntary). I wish there were a better way to spread the risk of these experiments across the population rather than concentrating it in schools that are already, for the most part, having trouble.
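
To see how little selection it takes to produce that kind of gap, here is a minimal Python simulation. It assumes, purely for illustration, that the school has no effect at all on majors, that 17% of all students lean toward math and science (the national average), and that those students are merely twice as likely to apply; every rate in it is invented:

    import random

    random.seed(0)

    # Invented illustration: the school has NO effect; 17% of students
    # lean toward math/science, and they apply at a 10% rate vs. 5%
    # for everyone else.
    N = 100_000
    stem_inclined = [random.random() < 0.17 for _ in range(N)]

    applicants = [s for s in stem_inclined
                  if random.random() < (0.10 if s else 0.05)]

    print(f"STEM share among applicants: {sum(applicants) / len(applicants):.0%}")
    # Expected: 0.17*0.10 / (0.17*0.10 + 0.83*0.05), roughly 29%,
    # the High Tech High figure, from selection alone.

A two-to-one difference in application rates is enough to generate the observed gap; nothing about the school's quality enters the calculation.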

As for immigration, I am generally supportive of open-door policies. However, I do want to point out that there is another solution for Gates and others. They believe there is plenty of talent in the U.S.; talent is not the problem, it's that workers lack the training they need. Microsoft could provide that training itself instead of free-riding on the educational system. It takes longer and costs more, of course, but, consistent with the arguments of advocates of privatization and efficient markets, it would force Microsoft to internalize the costs of training its workers, particularly specialized training. Still, I can't blame Microsoft for wanting to avoid these costs if it can, and for wanting to increase the supply of labor as much as possible by opening the borders to more high-skill immigration.

The shortage of U.S. graduates in this area may reflect students' uncertainty that specialized skills will retain their value in the future, a consequence of technological change that erodes existing skills over time, of digital technology that allows collaborative work to be performed outside the U.S., and of the prospect of more temporary visas being issued in the future.

My observation is that there is a large set of talented students who respond strongly to expected employment prospects when choosing a major, though there is, of course, a time delay between the appearance of shortages and surpluses in particular fields and the resulting changes in the number of majors. But the effect is there. If U.S. students perceive that an investment in computer science training will have a larger long-run payoff than investing their time elsewhere, any shortage will take care of itself. [And, as noted in comments, access to education may not be equal, so another way to increase supply is to increase educational opportunities within the U.S.]
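
That lag between market signals and graduating cohorts is the textbook cobweb setup. Here is a minimal sketch with invented linear supply and demand parameters; nothing is estimated, it only illustrates how the overshooting dampens:

    # Invented parameters: inverse demand is wage = a - b * graduates;
    # students choose majors at last period's wage, so supply lags.
    a, b = 100.0, 1.0
    c, d = 20.0, 0.5   # lagged supply: graduates = c + d * last_wage

    wage = 70.0        # start above the equilibrium wage of about 53.3
    for cohort in range(8):
        graduates = c + d * wage     # majors chosen at the old wage
        wage = a - b * graduates     # wage once the cohort hits the market
        print(f"cohort {cohort}: {graduates:5.1f} graduates, wage {wage:5.1f}")
    # Because d/b < 1, each swing is smaller than the last and the
    # market converges: the shortage takes care of itself, with a delay.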

In the long run, given technology, globalization, and comparative advantage, trying to close the doors to high-skilled workers is, for the most part, a losing battle. We can create artificial barriers to foreign competition and steer our students in particular directions, but there is a danger that in doing so we set them up for a bigger fall later. If the walls keeping out foreign competition cannot be maintained in a digital age, and if we artificially direct students into particular occupations, then once the walls do come down, people employed in these areas will be very exposed and in danger of a large fall in income and employment prospects due to the increased competition. For that reason, I think we are better off letting the walls come down now, within reason of course, and allowing prices to direct our students to the places where, so far as markets can predict, they will be most highly valued in the future.

Permalink: http://economistsview.typepad.com/economistsview/2007/02/bill_gates_open.html

Manufacturing Employment and Extended Mass Layoffs

Here are two Economic Trends articles from the Cleveland Fed. The first looks at parallels between declines in manufacturing and agricultural employment, and the second characterizes extended mass layoffs since 2000:

Is Manufacturing Going the Way of Agriculture?, by Ed Nosal and Michael Shenk, Economic Trends, Cleveland Fed: On average, employment increased by 186,917 workers each month in 2006. The vast majority of this employment growth came from the service sector; manufacturing registered a small monthly decline, while the remainder of the goods-producing sector experienced a small increase. The total employment numbers for 2006 seem to dwarf the average monthly job growth seen since the start of this century. The average numbers for the 2000-2006 period are “small” owing to the March 2001-November 2001 recession and the so-called jobless recovery that followed, during which employment growth actually remained negative for nine of the first ten months after the recession's official end. Since the beginning of 2000, the loss in manufacturing jobs has been significant, averaging 37,524 per month.

The sluggish growth in manufacturing employment, however, is not a recent phenomenon.  The level of employment in manufacturing today is about the same as it was in 1947, while the U.S. population has more than doubled over the same period. Employment in manufacturing did experience growth during the 1960s; after that, employment growth was essentially zero until 2000, after which it became negative.

Because the population and, hence, the labor force have grown, the share of manufacturing employment (in total employment) has been falling steadily since the Korean War. Approximately one in every three workers was employed in manufacturing after the Second World War; today, that number is about one in ten. Although the share of manufacturing employment has steadily fallen over time, the share of manufacturing output (in total output) has been remarkably stable over the same period. Labor productivity growth in manufacturing over this period can explain the falling employment share and the constant output share.
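
It is worth making the arithmetic behind that last sentence explicit: if a sector's share of output is constant while its output per worker grows faster than the rest of the economy's, its employment share has to fall. A minimal Python illustration with invented growth rates, not the article's data:

    # Hold manufacturing's output share fixed; let its productivity grow
    # faster than elsewhere; the implied employment share falls.
    output_share = 0.20               # manufacturing share of output, held fixed
    mfg_prod, other_prod = 1.0, 1.0   # output per worker, normalized

    for year in range(0, 51, 10):
        mfg_workers = output_share / mfg_prod            # labor = output / productivity
        other_workers = (1 - output_share) / other_prod
        emp_share = mfg_workers / (mfg_workers + other_workers)
        print(f"year {year:2d}: manufacturing employment share = {emp_share:.1%}")
        mfg_prod *= 1.03 ** 10        # assumed 3% annual productivity growth
        other_prod *= 1.015 ** 10     # assumed 1.5% elsewhere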

Changes in manufacturing employment during the last half of the twentieth century are remarkably similar to those in agriculture during the first half of the twentieth century. About a third of U.S. workers were employed in agriculture at the beginning of the century, but by 1950 that number was only a tenth. As with manufacturing, agriculture's share of employment consistently fell from 1947 into the 1980s, at which point it leveled off, but its share of output remained relatively constant.

Here's the second article:

Extended Mass Layoffs, by Murat Tasci and Cara Stepanczuk, Economic Trends, Cleveland Fed: When 50 or more new claims for unemployment benefits are received from one establishment in a given month, government statisticians call it a mass layoff. If the layoff lasts more than 31 days, it is designated an extended mass layoff. There were 1,444 such layoffs in the fourth quarter of 2006, according to preliminary estimates from the Department of Labor’s Bureau of Labor Statistics, and they caused the separation of 255,886 workers from their jobs. These numbers indicate a slight increase over the fourth quarter of 2005. Among those employers who reported extended layoffs, 57 percent indicated that they were expecting to recall some of the workers. This was the lowest proportion for any fourth quarter since 2002.
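
The two thresholds in that definition are simple enough to write down as code. Here is a minimal Python sketch of the classification rule as quoted above; the record layout is hypothetical, not the BLS's actual data format:

    from dataclasses import dataclass

    @dataclass
    class LayoffEvent:
        initial_claims: int   # new unemployment claims from one establishment in a month
        duration_days: int    # how long the layoff lasts

    def classify(event: LayoffEvent) -> str:
        # 50+ claims in a month: mass layoff; more than 31 days: extended.
        if event.initial_claims < 50:
            return "not a mass layoff"
        if event.duration_days > 31:
            return "extended mass layoff"
        return "mass layoff"

    print(classify(LayoffEvent(initial_claims=120, duration_days=45)))
    # -> extended mass layoff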

The distribution of extended layoffs by the size of the layoff presents an interesting picture as well. A mere 1.8 percent of the layoffs caused almost 20 percent of the separations that occurred in the fourth quarter of 2006; these layoffs were of course large, each involving more than 1,000 separations. On the other hand, many more mass layoffs (42.5 percent) involved fewer workers (50-99); however, these smaller mass layoffs accounted for only 16.8 percent of the total number of worker separations during the quarter.

Layoff Activity by Size of Layoff
(October-December 2006)

                       Layoffs            Separations
Size of layoff      Number  Percent     Number  Percent
50-99                  614     42.5     43,022     16.8
100-149                340     23.5     39,961     15.6
150-199                158     10.9     26,022     10.2
200-299                193     13.4     44,162     17.3
300-499                 80      5.5     28,872     11.3
500-999                 33      2.3     22,826      8.9
1,000 or more           26      1.8     51,021     19.9
Total                1,444    100.0    255,886    100.0
*Data for the third and fourth quarters of 2006 are preliminary.
Source: Department of Labor, Bureau of Labor Statistics.
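
As a quick sanity check, the percentage shares in the table follow directly from the raw counts. A short Python sketch using the numbers above:

    # Layoffs and separations by size class, from the table (2006:Q4).
    rows = {
        "50-99":         (614, 43_022),
        "100-149":       (340, 39_961),
        "150-199":       (158, 26_022),
        "200-299":       (193, 44_162),
        "300-499":       (80, 28_872),
        "500-999":       (33, 22_826),
        "1,000 or more": (26, 51_021),
    }

    total_layoffs = sum(l for l, _ in rows.values())   # 1,444
    total_seps = sum(s for _, s in rows.values())      # 255,886

    for size, (layoffs, seps) in rows.items():
        print(f"{size:>13}: {layoffs / total_layoffs:5.1%} of layoffs, "
              f"{seps / total_seps:5.1%} of separations")
    # The last row reproduces the article's point: 1.8 percent of the
    # layoffs account for 19.9 percent of the separations.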

Extended mass layoffs constitute a major source of job separations, especially during recessions, when the need for major employment adjustment is widespread. For instance, both extended mass layoffs and resulting separations peaked in 2001, in the midst of the most recent recession.

However, extended layoffs are not unique to downturns; they are also a typical feature of a healthy economy. The completion of seasonal work caused 42 percent of the extended layoffs in the fourth quarter of 2006, generating 45 percent of separations. Contract completion follows seasonal work as a major reason for extended mass layoffs. These two factors, on average, have accounted for 44 percent of extended mass layoffs and 43 percent of separations annually since 2000. Deviations from this pattern do occur, as in 2001, when poor economic conditions forced businesses to initiate mass layoffs and the fraction of extended mass layoffs accounted for by the completion of seasonal work and contracts declined to 27 percent.

The geographical distribution of extended mass layoffs shows that nearly a third of them (31.8 percent) have occurred in the Midwest since 2000, causing 31.6 percent of the separations that have resulted from such events. The Midwest was also responsible for 32 percent of all the U.S. unemployment claims initiated on account of extended mass layoffs. (Initiating a claim has a very specific meaning at the Bureau of Labor Statistics: a person who files any notice of unemployment, either to request a determination of entitlement to and eligibility for compensation or to begin a subsequent period of unemployment within a benefit year or period of eligibility, is defined as an initial claimant.)

Regional Shares of Extended Mass Layoffs
Annual average, 2000-2006 (percent)

Region       Total layoffs   Separations   Initial claims
Northeast             19.8          18.0             21.1
South                 24.3          23.1             23.6
Midwest               31.8          31.6             32.0
West                  24.1          27.2             23.3
*Data for third and fourth quarters of 2006 are preliminary.
Source: Department of Labor, Bureau of Labor Statistics.

Child Labor

This NBER paper by Eric Edmonds gives an overview of the recent empirical literature on child labor. Here's the conclusion:

Child Labor, by Eric V. Edmonds, NBER WP 12926, February 2007: ... 6. Conclusion The recent boom in empirical work on child labor has substantially improved our understanding of why children work and what the consequences of that work might be. This survey aims to assess what we currently know about child labor and to highlight what important questions still require attention.

Child labor research needs to carefully define exactly what measures of time allocation are being considered. Studies that consider too narrow a scope of activities are apt to generate misleading conclusions. Children are active in a wide variety of tasks and appear to substitute between them easily. Thus, if a child is observed working less at one task (like wage work), one cannot assume that she is working less overall. Moreover, though wage work appears less likely to be associated with simultaneous schooling, differences in schooling associated with variation in hours worked are much greater than those associated with the location of work. Work is typically classified as market work or domestic work. Domestic work (often labeled "chores") is too often ignored in child time allocation studies. For a given number of hours worked, domestic work appears as likely as work on the farm or in the family business to trade off with school. Hence, studies of child labor need to consider as wide a range of activities as the data permit. There is considerable scope for learning about total labor supply or schooling changes by looking at changes in participation in various disaggregated activities.

Policy interest in child labor in today's rich countries arose during the late 19th century because of what Zelizer (1994) terms the "sacralization" of children's lives. She writes: "The term sacralization is used in the sense of objects being invested with sentimental or religious meaning" (p. 11). This view is behind much of policy's and the public's interest in child labor in developing countries today. This issue arises within economics because of concern about whether child labor is driven by agency problems: do parents fully consider the tradeoffs and costs of work when sending their children to work? However, despite some suggestive evidence, the primacy of agency problems in determining child labor supply has yet to be established.

Instead, most contemporary research in economics on child labor is interested because of the impact of work on human capital accumulation. There are a finite number of hours in a day, so at some margin, there must be a tradeoff between work and schooling. However, work and schooling are simultaneous outcomes of a single decision-making process. Identifying a causal relationship between the two seems likely to be an uninformative exercise. Moreover, work is not the residual claimant on child time outside of school, and the incidence of children who neither work nor attend school appears highest where schooling is the lowest. Consequently, it is somewhat problematic to motivate interest in child labor out of a concern for schooling. Studies of schooling should consider child labor supply in attempts to understand schooling variation, but the existing evidence is insufficient to motivate studying of child labor alone without considering schooling if human capital is the researcher's only concern. Researchers have considered several other consequences of child labor that might go beyond the child's time constraint and agency problems such as whether there are health consequences, externalities, effects on attitudes and values, occupation choice, fertility, or local labor markets. Much of this work is in its infancy.

The interconnection of child labor and poverty seems intuitive, but evidence has been more difficult to establish. This is because the assertion that child labor stems from poverty is often taken to imply that the only reason children work is because of high marginal utility of income. The data are inconsistent with this extreme view in general.

In fact, a more general description of the child labor problem is that the child works when the utility from working today is greater than the utility associated with not working. This raises several issues that the literature has considered about why children work. Perhaps the most important issue is the least researched: who makes child labor decisions, that is, whose marginal utility matters?

There is some evidence that child time allocation is influenced by the net return to schooling. While estimating the return to schooling is a challenge, there is suggestive evidence that it influences child time allocation. Several studies document a correlation between the employment opportunities open to children inside and outside their household and child time allocation. Hence, there should be situations when work is the most efficient use of child time, and there is nothing in the literature which precludes this.

The fact that work can be optimal does not exclude the possibility that child labor's prevalence owes less to its efficiency than to the family's need for the child's contribution to the household. There appears to be a fairly broad consensus that credit constraints force families to make child labor decisions without fully considering future returns to education, and several studies document that declining poverty is associated with rapid declines in the fraction of children who are working, especially in market work. For this to be true, there need to be both credit constraints among the very poor and substantive changes in the marginal utility of the child's contribution as the family exits poverty. However, while transitioning out of poverty may be associated with declining economic activity levels, higher-income households are apt to have more employment opportunities both outside and inside the household. This creates a difficult econometric problem for researchers if labor supply and labor demand change in opposite ways with rising income. A failure to understand this has caused many to assert that there is little link between poverty and child labor. Fortunately, as research progresses, there has been increasing attention to all of the different factors that can influence child labor.

While the quantity and quality of research on child labor have been increasing dramatically in recent years, there are several omissions in the literature that need to be resolved (beyond the agency issues we have already mentioned). Policy appears to be operating largely in a vacuum, disconnected from research. Namely, rhetoric is increasingly directed against "worst forms of child labor," but I am not aware of any current empirical work on why children select into worst forms that has survived peer review in a contemporary mainstream economics journal. Moreover, outside of conditional cash transfer programs, policies targeted at these worst forms and at more common forms of child labor are not being evaluated in a scientific way as far as I can find. This is unfortunate. Not only could more effective policies be designed, but fundamental questions about why children work could be answered in the process. Hopefully, future work on child labor will combine rigorous research on these unanswered questions with formal evaluation of child labor policy. [Link to Edmonds' papers on child labor, link to this paper]

Paul Krugman: Colorless Green Ideas

Now that the scientific debate over global warming is all but over, Paul Krugman looks at what we can do to limit greenhouse gas emissions:

Colorless Green Ideas, by Paul Krugman, Commentary, NY Times: The factual debate about whether global warming is real is, or at least should be, over. The question now is what to do about it.

Aside from a few dead-enders on the political right, climate change skeptics seem to be making a seamless transition from denial to fatalism. In the past, they rejected the science. Now, with the scientific evidence pretty much irrefutable, they insist that it doesn’t matter because any serious attempt to curb greenhouse gas emissions is politically and economically impossible.

Behind this claim lies the assumption, ... that any substantial cut in energy use would require a drastic change in the way we live. To be fair, some people in the conservation movement seem to share that assumption.

But the assumption is false. Let me tell you about ... an advanced economy that has managed to combine rising living standards with a substantial decline in per capita energy consumption, and managed to keep total carbon dioxide emissions more or less flat for two decades, even as both its economy and its population grew rapidly. And it achieved all this without fundamentally changing a lifestyle centered on automobiles and single-family houses.

The name of the economy? California.

There’s nothing heroic about California’s energy policy... [T]he state has adopted ... conservation measures that are ... the kind of drab, colorless stuff that excites only real policy wonks. Yet the cumulative effect has been impressive...

The energy divergence between California and the rest of the United States dates from the 1970s. Both the nation and the state initially engaged in significant energy conservation after that decade’s energy crisis. But conservation in most of America soon stalled...

In California, by contrast, the state continued to push policies designed to encourage conservation, especially of electricity. And these policies worked.

People in California have always used a bit less energy ... because of the mild climate. But the difference has grown much larger since the 1970s. Today, the average Californian uses about a third less total energy than the average American, uses less than 60 percent as much electricity, and ... emit[s] only about 55 percent as much carbon dioxide.

How did the state do it? In some cases conservation was mandated directly, through energy efficiency standards for appliances and rules governing new construction. Also, regulated power companies were given new incentives to promote conservation...

And yes, a variety of state actions had the effect of raising energy prices. In the early 1970s, the price of electricity in California was close to the national average. Today, it’s about 50 percent higher. ... As the higher price of power indicates, conservation didn’t come free. Still, it’s striking how invisible California’s energy policy remains...

So is California a role model for climate policy? No and yes. Even if America as a whole had matched California..., we’d still be emitting about as much carbon dioxide now as we were in 1990. That’s too much.

But California’s experience shows that serious conservation is a lot less disruptive, imposes much less of a burden, than the skeptics would have it. And the fact that a state government, with far more limited powers than those at Washington’s disposal, has been able to achieve so much is a good omen for our ability to do a lot to limit climate change, if and when we find the political will.

_________________________
Previous (2/19) column: Paul Krugman: Wrong is Right