
December 9, 2007

Economist's View - 6 new articles

Menu Invariant Neurons and Transitivity

Interesting. Transitivity may be hard-wired:

Neurons in the frontal lobe may be responsible for rational decision-making, EurekAlert: You study the menu at a restaurant and decide to order the steak rather than the salmon. But when the waiter tells you about the lobster special, you decide lobster trumps steak. Without reconsidering the salmon, you place your order—all because of a trait called "transitivity."

"Transitivity is the hallmark of rational economic choice," says Camillo Padoa-Schioppa, a postdoctoral researcher in HMS Professor of Neurobiology John Assad's lab. According to transitivity, if you prefer A to B and B to C, then you ought to prefer A to C. Or, if you prefer lobster to steak, and steak to salmon, then you will prefer lobster to salmon.

Padoa-Schioppa is lead author on a paper that suggests this trait might be encoded at the level of individual neurons. The study, which appears online Dec. 9 in Nature Neuroscience, shows that some neurons in a part of the brain called the orbitofrontal cortex encode economic value in a "menu invariant" way. That is, the neurons respond the same to steak regardless of whether it's offered against salmon or lobster.

"People make choices by assigning values to different options. If the values are menu invariant preferences will be transitive. The activity of these neurons does not vary with the menu options, suggesting that these neurons could be responsible for transitivity," Padoa-Schioppa explains.

"This study provides a key insight into the biology of our frontal lobes and the neural circuits that underlie decision-making," Assad adds. "Despite the maxim, we in fact can compare apples to oranges, and we do it all the time. Camillo's research sheds light on how we make these types of choices." ...

The new study builds on an April 2006 Nature paper in which Padoa-Schioppa and Assad identified neurons that encode the value macaque monkeys assign to the juice they choose, independent of the juice's type, providing a common currency of comparison for the brain.

In that study, the scientists found that although monkeys generally prefer grape juice to apple juice, sometimes they choose the latter if it is offered in large amounts. When presented with 3 units of apple juice and 1 unit of grape juice, for example, a monkey might take the grape juice only 50 percent of the time. This indicates that the value of the grape juice is 3 times that of the apple juice. A particular group of neurons in the orbitofrontal cortex fires at roughly the same rate regardless of the monkey's decision, because the animal values both choices equally. These neurons also fire at the same rate whether the monkey chooses 6 units of apple juice or 2 units of grape juice. Thus, these neurons encode the value the monkey receives in each trial.
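The arithmetic behind that inference, as a quick sketch (the quantities are the ones mentioned above; the code is illustrative, not from the study): indifference between 3 units of apple and 1 unit of grape pins down the relative per-unit values, and 6 units of apple versus 2 units of grape then carry the same chosen value as well.

# Sketch of the common-currency arithmetic described above (not study code).
# Indifference between 3 units of apple and 1 unit of grape implies the
# per-unit value of grape is 3 times that of apple.

value_apple = 1.0                # normalize apple juice to 1 per unit
value_grape = 3.0 * value_apple  # implied by the 3-for-1 indifference point

def chosen_value(units, per_unit_value):
    """Total value the monkey receives on a trial."""
    return units * per_unit_value

# At the indifference point, both offers carry the same value...
assert chosen_value(3, value_apple) == chosen_value(1, value_grape)
# ...and so do 6 units of apple and 2 units of grape, which is why the
# value-encoding neurons fire at the same rate in either case.
assert chosen_value(6, value_apple) == chosen_value(2, value_grape)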

Now, by adding a third juice to the mix, the team has tested whether these neurons reflect transitivity. The three juices were offered to a monkey in pairs dozens of times over the course of a session, the quantity of each juice varying from trial to trial.

In general, monkeys preferred 1 unit of juice A to 1 unit of juice B, 1B to 1C, and 1A to 1C. During each session, Padoa-Schioppa recorded the activity of a handful of neurons in the orbitofrontal cortex, and he discovered their firing rate did not depend on whether B was offered against A or against C, indicating that these neurons respond in a menu invariant way.

"The stability of these neurons could help to explain why we make decisions that are consistent over the short term," Padoa-Schioppa says. "In our study, the neural circuit was not influenced by the short-term behavioral context."

Padoa-Schioppa is now examining the possibility that value-encoding neurons may adapt to different value scales over longer periods of time.

"The Trouble with Paulson's Plan"

Clive Crook argues that while the Paulson plan for subprime borrowers may look like a political winner for the administration, that may not prove to be the case if problems continue to mount:

The trouble with the Paulson plan, by Clive Crook, Commentary, Financial Times:

"This is a private sector effort, involving no government money," Hank Paulson, US Treasury secretary, said ... announcing the deal he had just brokered ... to freeze interest-rate resets on some loans. He emphasised that the compact was voluntary. ... In short, he said, it is a "market-based approach".

Give the man some credit for using that term without laughing. ... Alphonso Jackson, secretary of housing and urban development, was less circumspect than Mr Paulson. "Today's announcement is of national and global import," he began. "As we move into midwinter, when the chill of the season often silences the hope in our hearts, these actions today will warm the hearts of Americans caught in the swirling subprime crisis." That is how you sell a policy. ...

What does this heart-warming, globally significant, limited and strictly voluntary agreement to serve the interests of investors in mortgage-backed securities actually do? ... The complex deal proposes to freeze the resets on some ... loans, and offers help for some borrowers in switching to more affordable (often FHA guaranteed) loans.

This could have been done, and to some degree would have been, without Treasury involvement... The real point of the agreement is to lay out a standard approach to modifications that would have happened piecemeal – a template that can be widely applied, easing some of the administrative burden that case-by-case renegotiation... Crucially, the consensual aspect of the plan is intended to minimise the litigation risk...

An evidently reluctant Mr Paulson took fright at the gathering storm and decided that he had to act. The unavoidable consequence is that the administration now owns the problem in a way it did not before. As the housing slump worsens, as it seems bound to, and a chill once more silences the hope in voters' hearts, the measures announced so far will be deemed (even more than they have been already) unfair and inadequate. ...

From now on, every mortgage foreclosure will be seen as proof of the policy's failure – and partly the administration's fault. ...

Expectation Validation, Stability, and Commitment to a Monetary Policy Rule

I've been thinking about the way the Fed has been validating, or at least appearing to validate, the expectations of financial markets in its recent rate decisions, and wondering about the features of such a policy (whether or not expectation validation actually characterizes the Fed's behavior).

I've been asking around, and I don't know of a model that actually shows this, but it seems like continuous validation of financial market expectations could lead to unstable paths for the economy, at least over some time frame. If that is in fact a worry, and I think it is, then the question becomes how to avoid it.

The argument is that the Fed cannot do something unexpected - particularly if the unexpected move is to tighten - without risking severe disruption in financial markets and the overall economy. Even if the risk is small, a risk management strategy forces the Fed to move in the direction that avoids the dire outcome that might occur if it does something different from what financial markets expect it to do.

This may be where commitment to a policy rule, such as some version of a Taylor rule, could be important. With commitment to a rule, and the credibility to back it up, when the rule says to move in a direction different from what markets expect, say to hold or tighten, the Fed can convincingly point to the rule in its communication with the public and alter expectations to coincide with its intentions (thus avoiding the dilemma).
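For concreteness, here is a minimal sketch of the kind of rule I have in mind, using the coefficients from Taylor's original 1993 formulation (the numbers are the standard textbook ones, not a claim about what the Fed actually follows):

# A minimal sketch of the original Taylor (1993) rule, for illustration only.
# i = r* + pi + 0.5*(pi - pi*) + 0.5*output_gap,
# with an equilibrium real rate r* and inflation target pi* of 2 percent each.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Suggested nominal federal funds rate, in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Example: 2.5 percent inflation and output 1 percent below potential
# gives a suggested funds rate of 4.25 percent.
print(taylor_rule(inflation=2.5, output_gap=-1.0))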

But I haven't fully worked this out (and really should get to the grading I have to do), so I'd be curious to hear (a) whether you think the Fed is trapped in this box of being forced to validate market expectations to avoid the possibility of sending the economy into a tailspin, and (b) whether commitment to a rule is a means of avoiding the potentially unstable paths that might result from such an expectation validation policy.

Update: Angus at Kids Prefer Cheese has more.

Speed of Convergence

I found myself doing this for no good reason other than curiosity and procrastination -- not sure it will be of much interest as it is about how quickly estimated probability distributions converge to the true distribution as the sample size increases, but guess I'll post it anyway. The curiosity began, in part, with Brad DeLong's recent (re)post showing the rate of convergence of estimated probabilities from a coin flip to 50-50 as the number of flips increases (in response to a dumb statement from Don Luskin about how well samples can represent the underlying population), and the procrastination is driven by the fact that I have exams to grade.

This is a very simple exercise. To construct one of the lines in the first graph shown below, first draw 200 observations from a standard normal distribution (mean of zero, variance of one), then compute a frequency distribution for the draws. That is, find the fraction of the observations that are less than -4.0 (simply count the number of draws in this range and divide by 200), the fraction in the range -4.0 to -3.8, the fraction in the range -3.8 to -3.6, and so on, through the fraction in the range 3.6 to 3.8, the fraction in the range 3.8 to 4.0, and the fraction greater than 4.0. Including the two open-ended tails, there are 42 ranges or "nodes" in increments of .2; the graph plots each frequency at the midpoint of its range, i.e. at -3.9, -3.7, ..., 3.7, and 3.9, with the two tails plotted at -4.1 and 4.1. This gives one line on the graph (consisting of 42 points estimated from the draw of 200 observations). Repeat this four more times (for a total of five lines) to complete the graph.

Thus, the first graph shows five estimates of a Normal(0,1), each based upon 200 observations and 42 nodes. Note how low the ratio of observations to nodes is in this case - with a smaller number of nodes each frequency would be estimated more precisely, but with fewer nodes the distribution is not resolved as finely. I didn't make any attempt to try different levels of resolution, i.e. a smaller or larger number of nodes, or to try a non-uniform spacing of nodes rather than having them equally spaced .2 apart. The graph also shows a standard normal for reference.

To construct the second graph, do exactly the same thing, but draw five samples of 400 observations instead of 200. Similarly, the remaining graphs show the outcome for five samples each of sizes 600, 800, 1000, and 2000.
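Here is a rough sketch of the exercise in code - my reconstruction from the description above, assuming numpy, not the code actually used for the graphs:

# A sketch of the exercise described above (a reconstruction, not the
# original code): estimate a Normal(0,1) by binned frequencies and see how
# the estimates tighten as the sample size grows.
import numpy as np

edges = np.linspace(-4.0, 4.0, 41)   # bin boundaries in increments of .2
# 40 interior bins plus two open-ended tails = 42 "nodes"
full_edges = np.concatenate(([-np.inf], edges, [np.inf]))
midpoints = np.concatenate(([-4.1], (edges[:-1] + edges[1:]) / 2, [4.1]))

rng = np.random.default_rng(0)

def estimated_frequencies(n):
    """Draw n observations from N(0,1) and return the 42 bin frequencies."""
    draws = rng.standard_normal(n)
    counts, _ = np.histogram(draws, bins=full_edges)
    return counts / n

estimates = {}
for n in (200, 400, 600, 800, 1000, 2000):
    # five estimated curves per sample size, one per line in the graphs below
    estimates[n] = [estimated_frequencies(n) for _ in range(5)]
# Plotting each estimates[n] against 'midpoints', along with the true
# standard normal frequencies for reference, reproduces a graph.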

Even with 42 points to estimate, the underlying distribution is revealed fairly quickly as the number of observations increases. With 800 observations, for example - roughly 20 per node on average, though the observations aren't, of course, distributed equally across the nodes - the distribution is resolved pretty clearly.

[Graphs: estimated Normal(0,1) frequency distributions for sample sizes 100, 200, 400, 600, 800, 1000, and 2000, each compared to the true standard normal.]

Nobel Prize Lectures in Economics

These are links to videos of the Nobel lectures for 2007:

But Who Will Guard the Guardians? by Leonid Hurwicz
Mechanism Design: How to Implement Social Goals by Eric S. Maskin
Perspectives on Mechanism Design in Economic Theory by Roger B. Myerson

links for 2007-12-09
