31 May 2012

Hooters Girls are not Hicks Neutral

This post is a continuation of what will eventually be a long series of discussions of innovation in the economy, with a longer-term objective of gaining some clarity on how innovation is understood, directed and regulated in the economy. This post is also part of an argument that the discipline of economics has failed to provide an account of innovation in the economy that helps to guide 21st century policy making, and more fundamentally, that economics is simply incapable of providing such an account. A big claim, sure to raise the hackles of card-carrying economists. So fasten your seatbelt.

A first step is to define what I mean by “innovation.” Economics in the Schumpeterian tradition defines innovation as one component of the so-called “linear model of innovation” (see Godin, here in PDF) which lays out technological change as a process of
Invention ---> Innovation ---> Diffusion
Much has been written about this model (PDF), which I will review at another time. Whatever its use in the economic literature over the past half-century, Schumpeter himself defined innovation very precisely (and did not follow the so-called linear model):
. . . any “doing things differently” in the realm of economic life - all these are instances of what we shall refer to by the term Innovation. It should be noticed at once that that concept is not synonymous with “invention”. . . It is entirely immaterial whether an innovation implies scientific novelty or not. Although most innovations can be traced to some conquest in the realm of either theoretical or practical knowledge, there are many which cannot. Innovation is possible without anything we should identify as invention and invention does not necessarily induce innovation, but produces of itself no economically relevant effect at all. . .
In a nutshell:
We will now define innovation more rigorously by means of the production function previously introduced. This function describes the way in which quantity of product [outputs] varies if quantities of factors [inputs] vary. If, instead of quantities of factors, we vary the form of the function, we have an innovation. ... We will simply define innovation as the setting up of a new production function. This covers the case of a new commodity, as well as those of a new form of organization. ... Innovation combines factors in a new way.
I will return to the technical aspects of the so-called “production function” (illustrated below) in a later post. For now, it is only important to understand that an innovation changes the relationship between inputs and outputs in the economy. For instance, a restaurant takes employees (labor), food, appliances (and other capital) and combines them to make meals. A change in the relationship between the various inputs and meals served would be an innovation, which would be measured by a change in the rate of productivity.
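As a toy illustration (my own sketch, with made-up numbers, not Schumpeter's): a Cobb-Douglas production function for the restaurant, where growth from simply adding inputs is distinct from an innovation that changes the function itself.

```python
# Toy Cobb-Douglas production function for a restaurant:
# meals = A * labor^alpha * capital^(1 - alpha)
def meals_served(labor, capital, A=10.0, alpha=0.7):
    return A * labor**alpha * capital**(1 - alpha)

# More of the same: identical function, larger input quantities.
before = meals_served(labor=4, capital=2)
more_inputs = meals_served(labor=8, capital=4)

# Schumpeterian innovation: the function itself shifts (here, the
# productivity term A rises) while input quantities stay fixed.
after_innovation = meals_served(labor=4, capital=2, A=12.0)

print(before, more_inputs, after_innovation)
```

With constant returns to scale, doubling both inputs merely doubles output; only the change in A alters the relationship between inputs and outputs, which is what would show up as productivity growth.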
What Schumpeter called “innovation” seems to have been characterized as “technology” by many economists in the years since. For instance, Daron Acemoglu in his widely read textbook on economic growth explains:
Economists normally use the shorthand expression “technology” to capture factors other than physical and human capital that affect economic growth and performance. It is therefore important to remember that variations in technology across countries include not only differences in production techniques and in the quality of machines used in production but also disparities in productive efficiency.
Which of course brings us to Hooters girls, the hot young women in skimpy clothes who serve chicken wings and other delicacies at the US restaurant chain called Hooters. The company explains its innovative approach to the restaurant business as follows:
The first Hooters opened October 4, 1983, in Clearwater, Florida. During its history, the Hooters concept has undergone very little change. The current logo, uniform, menu and ambiance are all very similar to what existed in the original store. This lack of change is understandable given the tremendous success the Hooters concept has enjoyed. Hooters has continued to rank high amongst the industry's growth leaders. Hooters has proven successful in small-town America, major metropolitan areas and internationally. . .

The element of female sex appeal is prevalent in the restaurants, and the company believes the Hooters Girl is as socially acceptable as a Dallas Cowboy cheerleader, Sports Illustrated swimsuit model, or a Radio City Rockette. The Hooters system employs over 25,000 people - over 17,000 of which are Hooters Girls. The "nearly world famous" Hooters Girls are the cornerstone of the Hooters concept, and as part of their job, these all-American cheerleaders make promotional and charitable appearances in their respective communities. Hooters hires women who best fit the image of a Hooters Girl to work in this capacity. The chain hires both males and females to work in management and host, staff, service bar, and kitchen positions. The Hooters Girl uniform consists of orange shorts and a white tank top. Pantyhose and bras are required.
The company explains its name and innovative approach:
The chain acknowledges that many consider "Hooters" a slang term for a portion of the female anatomy. Hooters does have an owl inside its logo and uses an owl theme sufficiently to allow debate to occur over the meaning's intent. The chain enjoys and benefits from this debate. In the end, we hope Hooters means a great place to eat. . .

Sex appeal is legal and it sells. Newspapers, magazines, daytime talk shows, and local television affiliates consistently emphasize a variety of sexual topics to boost sales. Hooters marketing, emphasizing the Hooters Girl and her sex appeal, along with its commitment to quality operations continues to build and contributes to the chain's success. Hooters' business motto sums it up, "You can sell the sizzle, but you have to deliver the steak.”
So what Hooters has done, in Schumpeterian innovation terms, is to combine input factors in a new way. In this case the company has carefully selected its labor in a precise manner intended to increase the demand for its product. Presumably, the underlying assumption is that a different labor pool would result in a lower demand. So while all of the other inputs (food, appliances, etc.) could have remained the same as any other restaurant chain, the Hooters innovation led to a restaurant chain that (they claim) “has continued to rank high amongst the industry's growth leaders.”

The discipline of economics does have a terminology for the type of innovation represented by the Hooters Girl: "technical change that is Hicks biased." "Hicks biased" technical change means that, unlike (say) a new chicken wing fryer that can produce more wings per employee while otherwise leaving labor and capital unchanged, some changes in the input-output relationship that result from "technology" are not independent of labor or capital (an example of the latter would be the substitution of low-sulfur Wyoming coal for West Virginia coal to reduce air pollution, but I digress).
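In standard textbook notation (my gloss, not from the post itself), the distinction looks like this: Hicks-neutral technical change scales the whole production function, while biased change augments a particular factor:

```latex
\underbrace{Y = A\,F(K,L)}_{\text{Hicks neutral}}
\qquad
\underbrace{Y = F(K,\,A L)}_{\text{labor-augmenting}}
\qquad
\underbrace{Y = F(A K,\,L)}_{\text{capital-augmenting}}
```

A neutral shift leaves the marginal rate of substitution between capital and labor unchanged at any given $K/L$ ratio; a biased shift does not. The Hooters Girl, on this reading, is a change bound up with the labor input itself, which is exactly what the neutral shift term $A$ cannot capture.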

It is of course hard to think of the Hooters girls as any sort of "biased technical change to the production function," and the tortured language begins to shed some light on the limits of economics in the context of Schumpeterian innovation.

Economics does well in providing a framework for understanding the effects on productivity of innovations that result from new chicken-wing fryers, but runs into trouble in providing a framework for understanding Hooters Girls as innovations. This, I think, helps to explain why economics has focused on “technology” rather than “innovation” (in the original Schumpeterian sense). Make no mistake, economics is critically important, but to understand innovation -- how it happens, and crucially, how it is directed and regulated -- requires more than what economics can offer.

The Taxation Distraction

Martin Wolf, the consistently excellent FT economics columnist, has a blog post up that explains why the focus on taxation in economic policy debates is misguided. He writes:
The focus of US economic policy discussion at present is almost entirely on fiscal deficits and the level of taxes. My view is that these are second or even third order issues. What matters far more is the capacity of the economy to offer satisfactory lives for the citizenry. This depends on far more fundamental forces than deficits and taxes, such as innovation, jobs and incomes. Evidently, I am arguing that taxes and deficits do not determine these outcomes. I am suggesting this because they do not.

So I want to address two widely held, but mistaken, views. The first is that lower taxes are the principal route to better economic performance. The second is that the financial crisis is a crisis of western welfare states.
He demonstrates this argument using the figure shown at the top of this post. Wolf lays out the full argument in his blog post, which deserves a full read.

In the US the debate over the role of government takes many forms (and it seems that blog posts on any policy subject eventually arrive there no matter what the starting point or initial direction of travel). The form of the debate at present is framed in terms of a Hamiltonian vs. Jeffersonian approach to government. Snore. It is old wine in new bottles (in this case, the bottles too are old).

From a policy perspective, the important questions are not simply how high taxes are and how much government spends, but how the money is spent and what that spending means for growth in GDP per capita. Moving discussion from the former to the latter is, to put it mildly, a challenge.

30 May 2012

Two Summer Book Recommendations

Two of my long-time colleagues have intriguing new books out, just in time for the summer reading season.
Mike Smith has written a short analysis of the Joplin tornado from just over one year ago -- When the Sirens Were Silent: How the Warning System Failed a Community. Anyone who enjoyed Mike's first book, Warnings: The True Story of How Science Tamed the Weather, will also enjoy his eye-opening and insightful look at what happened in Joplin last year.

This is from the book description:
What if the warning system failed to provide a clear, timely notice of a major storm? Tragically, that scenario played out in Joplin, Missouri, on May 22, 2011. As a wedding, a high school graduation, and shopping trips were in progress, an invisible monster storm was developing west of the city. When it arrived, many were caught unaware. One hundred sixty-one perished and one thousand were injured. "When the Sirens Were Silent" is the gripping story of the Joplin tornado. It recounts that horrible day with a goal of insuring this does not happen again.
Have a look and while you are at it, check out Warnings as well.
John McAneney has written a crime thriller -- Shifting Sands -- set against the backdrop of New Zealand science policy of the late 1990s. The story is a great read but even more fun for a wonk like me because of the backdrop of science politics, big monied commercial interests and petty academic squabbles. I read it on the plane last week and it was a great escape.

This is from the book description:
New Zealand 1997. As a serial killer traumatises the country, Caspian is re-evaluating his career. Science has been his life, but an unorthodox approach to problem solving is out of favour with the new corporate ethos that sees science as a business. There are other pressures too as Caspian’s beautiful French wife, Marie-Claire, is becoming increasingly disenchanted with life at a remote beach on Maori tribal lands, an environment and lifestyle that Caspian is reluctant to give up. But when the body of a colleague is washed up on the sand and another, Robert, is arrested on suspicion of murder, nothing can ever be quite the same again. As he seeks to help his Maori detective friend establish Robert’s innocence, Caspian stumbles across a scam involving illegal genetic engineering experiments with a money trail leading to an international pharmaceutical company. As the suspense grows, the dirty underbelly of science for profit is revealed as a culture of corruption where the truth no longer holds any currency.
 What fun!

Congrats Mike and John;-)

Hedersdoktor at LiU

Here I am last week at the Linköping University commencement ceremony where I received an honorary doctorate. I had a great time and was deeply honored to receive the award. The commencement ceremony was a special occasion.

I look forward to continued collaborations with all of my friends and colleagues at Linköpings universitet. I'd especially like to get a student exchange program started, as there is a lot of complementary work going on between CU and LiU.

Thanks LiU!!

29 May 2012

Hot Hands and Guaranteed Winners

In a 2009 paper I laid out an argument that explored what happens when "the guaranteed winner scam meets the hot hand fallacy" (PDF). It went as follows, drawing upon two dynamics:
The first of these dynamics is what might be called the ‘guaranteed winner scam’. It works like this: select 65,536 people and tell them that you have developed a methodology that allows for 100 per cent accurate prediction of the winner of next weekend’s big football game. You split the group of 65,536 into equal halves and send one half a guaranteed prediction of victory for one team, and the other half a guaranteed win on the other team. You have ensured that your prediction will be viewed as correct by 32,768 people. Each week you can proceed in this fashion. By the time eight weeks have gone by there will be 256 people anxiously waiting for your next week’s selection because you have demonstrated remarkable predictive capabilities, having provided them with eight perfect picks. Presumably they will now be ready to pay a handsome price for the predictions you offer in week nine.
The second,
. . . is the ‘hot hand fallacy’ which was coined to describe how people misinterpret random sequences, based on how they view the tendency of basketball players to be ‘streak shooters’ or have the ‘hot hand’ (Gilovich et al., 1985). The ‘hot hand fallacy’ holds that the probability in a random process of a ‘hit’ (i.e. a made basket or a successful hurricane landfall forecast) is higher after a ‘hit’ than the baseline probability. In other words, people often see patterns in random signals that they then use, incorrectly, to ascribe information about the future.
In the paper I used the dynamics to explain why there is not likely to be convergence on the skill of hurricane landfall forecasts anytime soon. The existence of (essentially) an infinite number of models of hurricane landfalls coupled with the certainty that unfolding experience will closely approximate a subset of available models creates a context ripe for seeing spurious relationships and chasing randomness. However, the basic argument has much more general applicability.
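The arithmetic of the guaranteed winner scam is easy to verify (a quick sketch of my own):

```python
# Guaranteed winner scam: each week, send half of the remaining
# "perfect record" recipients one pick and the other half the
# opposite pick. The group that has seen only correct predictions
# halves every week.
pool = 65536
for week in range(1, 9):
    pool //= 2  # only half received a "correct" prediction this week
    print(f"after week {week}: {pool} people with a perfect record")
# After eight weeks, 256 people have seen eight straight correct picks.
```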

A new paper by Nattavudh Powdthavee and Yohanes E. Riyanto of the Institute for the Study of Labor in Bonn, Germany, provides some empirical support for this argument. The paper -- titled "Why Do People Pay for Useless Advice? Implications of Gambler's and Hot-Hand Fallacies in False-Expert Setting" -- looks "experimentally whether people can be induced to believe in a non-existent expert, and subsequently pay for what can only be described as transparently useless advice about future chance events."

In the study the authors operationalized the dynamics of "the guaranteed winner scam meets the hot hand fallacy" using coin flips, while going to great lengths to ensure that the participants were aware that the coin being flipped was fair (i.e., the flips were random), even going so far as to have the participants furnish the coin.

They found that upon receiving an accurate "prediction" of the subsequent coin flip, many participants were willing to abandon any assumption of randomness and pay for a prediction of the next toss:
On average, the probability of buying the prediction in Round 2 for people who received a correct prediction in Round 1 was 5 percentage points higher than those who previously received an incorrect prediction in Round 1 (P=0.046). The effect is monotonic and well-defined; probabilities of buying were 15 percentage points (P=0.000), 19 percentage points (P=0.000), and 28 percentage points (P=0.000) higher in Rounds 3, 4, and 5 . . .
 The authors identify two interesting results:
The first is that observations of a short streak of successful predictions of a truly random event are sufficient to generate a significant belief in the hot hand of an agent; the perception which also did not revert even in the final round of coin flip. By contrast, the equally unlikely streak of incorrect predictions also generated a relatively weak belief in the existent of an “unlucky” agent whose luck was perceived to be likely to revert before the game finishes; evidence which was reflected in an increase in the subject’s propensity to buy in the final round of coin flip.
The study also looked at whether characteristics of the participants might be related to their behavior, finding: "there is no statistical evidence that some people are systematically more (or less) susceptible to the measured effects."

What does this study mean for how we think about science in decision making?

While the authors focus on "false" experts, the findings have much broader relevance in the context of "true" experts. The simple reason for this is that the distribution of legitimate scientific findings about many complex subjects covers an enormous range of possible outcomes. Not all of these outcomes can simultaneously be correct -- whether they are looking at the past, at causality or offering projections of the future.

In the example I use from my paper cited above, I explain how a single scientific paper on hurricane landfalls provides 20 scientifically legitimate predictions of how many hurricanes would hit the US over the subsequent 5 years:
Consider, for example, Jewson et al. (2009) which presents a suite of 20 different models that lead to predictions of 2007–2012 landfall activity to be from more than 8 per cent below the 1900–2006 mean to 43 per cent above that mean, with 18 values falling in between. Over the next five years it is virtually certain that one or more of these models will have provided a prediction that will be more accurate than the long-term historical baseline (i.e. will be skilful). A broader review of the literature beyond this one paper would show an even wider range of predictions. The user of these predictions has no way of knowing whether the skill was the result of true predictive skill or just chance, given a very wide range of available predictions. And because the scientific community is constantly introducing new methods of prediction the ‘guaranteed winner scam’ can go on forever with little hope for certainty.
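A small Monte Carlo sketch (mine, not from Jewson et al.; the numbers are made up) illustrates the point: with a suite of 20 skill-free models forecasting a handful of random events, some model will usually beat the baseline by chance alone.

```python
import random

random.seed(1)

def best_model_beats_baseline(n_models=20, n_events=5):
    """One trial: n_models make random forecasts of n_events coin-flip
    outcomes, against a baseline that always forecasts 'no hit'.
    Returns True if at least one model scores more hits than baseline."""
    outcomes = [random.random() < 0.5 for _ in range(n_events)]
    baseline_score = sum(not o for o in outcomes)  # baseline forecasts False
    for _ in range(n_models):
        forecasts = [random.random() < 0.5 for _ in range(n_events)]
        score = sum(f == o for f, o in zip(forecasts, outcomes))
        if score > baseline_score:
            return True
    return False

trials = 2000
wins = sum(best_model_beats_baseline() for _ in range(trials))
print(f"share of trials where some skill-free model looked skilful: "
      f"{wins / trials:.2f}")
```

A user who sees only the winning model's track record has no way to distinguish this chance "skill" from real predictive ability.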
Such models are of far more than academic interest -- they guide hundreds of billions of dollars in investment and financial decisions related to insurance and reinsurance. What if such decisions rest on an intellectual house of cards? How would we know?

The general issue is that a bigger problem than discerning legitimate from illegitimate expertise is figuring out how to use all of the legitimate expertise at our disposal. The dynamics of the "guaranteed winner scam meets the hot hand fallacy" also present a challenge for experts themselves in interpreting the results of research in light of evolving experience. As experts are people too, they will be subject to the same incentives and obstacles in interpreting information as were found by Powdthavee and Riyanto.
The dominant strategies in political discourse used to deal with this situation of too much legitimate science are to argue that there is one true perspective (the argument from consensus) or that experts can be judged according to their non-expert characteristics (argument by association). My experiences over the past decade or so related to extreme events and climate change provide a great example of how such strategies play out in practice, among both experts and activists.

As we have learned, neither strategy is actually a good substitute for evaluating knowledge claims and understanding that uncertainty and ignorance are often irreducible, and decisions must be made accordingly.

26 May 2012

Some Items Not Blogged Last Week

I've been away this past week, so I had little time to write up a few blog-worthy items that crossed my desk. Below is a quick round-up of some of the most interesting ones.

Above is a music video featuring some local (to where I am now) talent. Normal service returns after the holiday, and we'll see how well I perform in the Bolder Boulder with jet lag;-)
Coming next week on this blog ... an extended series of posts on economics, innovation, technology. Stay tuned!

23 May 2012

UK GM Wheat War: Not Really About Science

In the UK there is a battle brewing over a scientific trial involving genetically modified wheat. Last weekend a protester attempted to vandalize the trial, and a larger civil action is expected on May 27.  The ongoing battle, and its close cousin in the climate wars, tell us something about what can happen to science when it becomes the central battleground over politics and technology. Unfortunately, the scientific community itself has contributed to such tactics.

Plant scientists at Rothamsted Research, a complex of buildings and fields in Hertfordshire, UK, that prides itself on being the longest-running agricultural research station in the world, have spent years preparing for their latest experiment — which will attempt to prove the usefulness of a genetically modified (GM) wheat that emits an aphid alarm pheromone, potentially reducing aphid infestation.

Yet instead of looking forward to watching their crop grow, the Rothamsted scientists are nervously counting the days until 27 May, when protesters against GM crops have promised to turn up in force and destroy the experimental plots.

The protest group, it must be acknowledged, has a great name — Take the Flour Back. And it no doubt believes that it has the sympathy of the public. The reputation of GM crops and food in Britain, and in much of mainland Europe, has yet to recover from the battering it took in the late 1990s. In Germany, the routine destruction of crops by protesters has meant that scientists there simply don't bother to conduct GM experiments any more.

The Rothamsted scientists have also attempted to win over the public, with a media campaign that explains what they are trying to do and why. After the protesters announced their plans to “decontaminate” the research site, the scientists tried to engage with their opponents, and pleaded with them to “reconsider before it is too late, and before years of work to which we have devoted our lives are destroyed forever”. The researchers say that in this case they are the true environmentalists. The modified crop, if it works, would lower the demand for environmentally damaging insecticides.
It would be a mistake to conclude that the protesters are in some way anti-science or fearful that the genetically modified crops might fail to work as advertised (though surely some protesters do have these views). Their main concern is that the crops will perform exactly as advertised, and lead to further gains in agricultural productivity.

It is not science that they fear, but the implications of scientific advances for economic and political outcomes. The organization leading the UK protests calls itself Take the Flour Back, and clearly explains its rationale as follows:
Our current political system chooses to deal with world hunger through the model of “food security”, arguing that there is not enough food to go around and that we need techno-fixes to solve this. This approach ignores the fact that there is a global food surplus – many people just can’t afford to buy food. This problem is being amplified by land grabs- communities that used to grow food for themselves are being forced out of their ancestral homes, often by corporations expanding cash crop production.

The industrial food system throws away (in the journey from farms to traders, food processors and supermarkets), between a third and a half of all the food that it produces – enough to feed the world’s hungry six times over. (2)

Free trade policies imposed by the International Monetary Fund make it much harder for governments to protect small and family farmers from big multinationals. With the expansion of free-market capitalism, agricultural systems in many countries in the global south have become focused on producing cash crops for export to rich western nations. At the same time, their markets have been opened to food imports, including imports from US and EU companies at less than the cost of production. US farmers benefit from billions of dollars in subsidies which make up as much as 40% of US net farm income. This means they can afford to export their crops at well below production cost. (3) This is ruining the livelihoods of small farmers in the global south.
This is not the statement of a group concerned primarily with the potential unanticipated risks of GM crops to the environment or people, but rather, it is the manifesto of a group concerned that GM crops will perform exactly as intended.

Like many issues where science and politics intersect, those opposed to the productivity gains made possible by agricultural innovation have sought to use science as a basis for realizing political ends. A primary strategy in such efforts is typically to argue that the science compels a particular political outcome.  In the case of GM crops, opponents of the technology (mainly in Europe) have argued that the techniques are unproven or risky. However, such tactics have not succeeded. So the next step beyond waging a political battle over science is now direct action against the technology of concern.

This situation is of course in many respects parallel to the climate debate. Efforts to compel emissions reductions through invocations that science compels certain political outcomes have borne little fruit, so some activists have taken it upon themselves to directly attack the technologies at the focus of their concern.

One difference between the climate wars and the GM wars is that some prominent scientists are participating in the direct action against technology (such as James Hansen and IPCC contributor Mark Jaccard). Another important difference is that in the case of GM crops, it is research itself being targeted, and the scientific community objects.

One argument invoked by scientists in support of GM technology is that the world needs more food. But the world needs more energy too. In condoning direct attacks on energy technologies, the scientific community may have opened the door to tactics that it does not much like when they are applied closer to home.

21 May 2012

Beyond Manna from Heaven

Writing at The Breakthrough Journal blog, Ted Nordhaus and Michael Shellenberger argue that conventional economics is not up to the task of offering sound policy advice for the 21st century. They write:
In the 70 years that have passed since Joseph Schumpeter coined the term "creative destruction," economists have struggled awkwardly with how to think about growth and innovation. Born of the low-growth agricultural economies of 18th Century Europe, the dismal science to this day remains focused on the question of how to most efficiently distribute scarce resources, not on how to create new ones -- this despite two centuries of rapid economic growth driven by disruptive technologies, from the steam engine to electricity to the Internet.

There are some important, if qualified, exceptions. Sixty years ago, Nobelist Robert Solow and colleagues calculated that more than 80 percent of long-term growth derives from technological change. But neither Solow nor most other economists offered much explanation beyond that. Technological change was, in the words of one of Solow's contemporaries, "manna from heaven."
Where does that "manna from heaven" originate? In pricing incentives, of course, derived from economic theory. But once you take a closer look at both practice and the theoretical origins, you find that economics explains far less than we've been led to believe.

Nordhaus and Shellenberger revisit the climate issue to illustrate how far conventional economics has led us astray. They provide an overview of a debate that they engaged in with an economist from the Environmental Defense Fund, Gernot Wagner, who argues against evidence and common sense that by creating the right pricing incentives, drastic emissions reductions goals can be met in the near term:
 "[W]e can achieve US emissions reduction goals for 2020 and possibly even 2030 through deployment of existing technologies. . . Price goes up, demand goes down. Economists typically call it the 'law of demand'--one of the very few laws we've got."
The theory is sound, its application is not -- a point that readers of this blog and The Climate Fix will well understand.

The good news is that many are beginning to move beyond the precepts of economic theory and take a look at the simple mathematics of the real world. For instance,  Ulrich Hoffmann, an economist at the UN Conference on Trade and Development, has done the math, which is illustrated in the figure below showing how much the world would need to decarbonize its economic activity in order to stabilize carbon dioxide at 450 ppm.

Based on these straightforward mathematics he concludes (Hoffmann has a more in-depth analysis here in PDF):
The arithmetic of economic and population growth, efficiency limits related to the rebound effect, as well as systemic issues call into question the hopes of decoupling economic growth from GHG growth. One should not deceive oneself into believing that such an evolutionary (and often reductionist) approach would be sufficient to cope with the complexities of climate change. “Green growth” proponents need to scrutinise the historical macro- (not micro-) economic evidence, in particular the arithmetic of economic and population growth, as well as the significant influence of the rebound effect.
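The arithmetic Hoffmann points to is essentially Kaya-style accounting. A minimal sketch (my own, with illustrative numbers, not Hoffmann's): since emissions equal GDP times emissions per unit of GDP, the economy's rate of decarbonization must outpace GDP growth by enough to deliver the targeted emissions cut.

```python
def required_decarb_rate(gdp_growth, emissions_cut, years):
    """Annual fractional decline in CO2 per unit of GDP needed so that
    total emissions fall by `emissions_cut` over `years`, while GDP
    grows at `gdp_growth` per year."""
    emissions_factor = (1 - emissions_cut) ** (1 / years)  # annual emissions multiplier
    gdp_factor = 1 + gdp_growth                            # annual GDP multiplier
    return 1 - emissions_factor / gdp_factor

# Example: halve global emissions over 38 years with 3% annual GDP growth.
rate = required_decarb_rate(gdp_growth=0.03, emissions_cut=0.5, years=38)
print(f"required decarbonization of GDP: {rate:.1%} per year")
```

Rates on the order of 4-5 percent per year, sustained for decades, are far beyond anything achieved historically, which is the nub of the "green growth" skepticism quoted above.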
Such messages are not well-received by conventional economists. In their exchange, Wagner explains to Nordhaus and Shellenberger that economic theory trumps real world evidence, and this means that debate over such issues is not necessary:
The main points from climate economics are no longer up for debate: carbon is a pollutant; we need to make polluters pay, either through a cap or a price. Virtually all economists agree--from Holtz-Eakin, Laffer and Mankiw on one side to Stiglitz, Sachs, and Krugman on the other.

Once again, this one is not up for debate. You can argue that politically we can't get there, so we need to do other things in the short term, but it's not up for debate whether this is the economically correct solution.
Like most debates on climate this one ends predictably, with Wagner appealing to the motives of Shellenberger and Nordhaus:
[Y]our entire operation seems to be geared toward propagating contrarian-sounding views that once in a while get you some attention and get picked up by an editor somewhere, but otherwise are just that: contrarian for the sake of wanting to be different from the pack
Snore. But the larger point here is that there are articles of faith in the discipline of economics which are viewed as taboo to challenge, even when they fail to play out in practice with the simplicity and elegance of theory.

However, a closer look at economic theory finds a much shakier foundation than is represented within the discipline. Writing at Slate, Konstantin Kakaes has a great piece that sums up how economics went astray when it comes to innovation:
Robert Solow, winner of the 1987 Nobel Memorial Prize in Economic Sciences, is famous for, in the recent words of a high-ranking State Department official, “showing that technological innovation was responsible for over 80 percent of economic growth in the United States between 1909 and 1949.” . . . Typically, technical or technological progress isn’t explicitly defined by those invoking Solow, but people take it to mean new gadgets.

However, Solow meant something much broader. On the first page of “Technical Change and the Aggregate Production Function,” the second of his two major papers, he wrote: “I am using the phrase ‘technical change’ as a shorthand expression for any kind of shift in the production function. Thus slowdowns, speedups, improvements in the education of the labor force, and all sorts of things will appear as ‘technical change.’ ” But his willfully inclusive definition tends to be forgotten.

Solow was constructing a simple mathematical model of how economic growth takes place. On one side was output. On the other side was capital and labor. Classical economists going back to Adam Smith and David Ricardo had defined the “production function”—how much stuff you got out of the economy—in terms of capital and labor (as well as land). Solow’s point was that other factors besides capital, labor, and land were important. But he knew his limitations: He wasn’t clear on what those factors were. This is why he defined “technical change” as any kind of shift (the italics are his) in the production function. He wasn’t proving that technology was important, as economists in recent years have taken to saying he did. All Solow was saying is that the sources of economic growth are poorly understood.
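Solow's "shift in the production function" is usually operationalized as a growth-accounting residual. The following is a minimal sketch of that calculation, assuming a Cobb-Douglas production function Y = A·K^α·L^(1−α); the growth figures and capital share below are illustrative assumptions, not Solow's actual data:

```python
# Growth-accounting sketch of Solow's point, assuming Cobb-Douglas
# Y = A * K**alpha * L**(1 - alpha). "Technical change" is simply the
# residual: output growth left over after accounting for measured
# growth in capital and labor. All figures are illustrative.
alpha = 0.3          # capital's share of income (assumed)
g_output = 0.029     # annual output growth
g_capital = 0.010    # annual capital growth
g_labor = 0.011      # annual labor growth

# The Solow residual: whatever shift in the function is needed to
# reconcile measured inputs with measured output.
g_residual = g_output - alpha * g_capital - (1 - alpha) * g_labor
share = g_residual / g_output
print(f"residual growth: {g_residual:.4f}")                 # 0.0183
print(f"share of growth unexplained by inputs: {share:.0%}")  # 63%
```

The point the sketch makes concrete: the residual is defined by subtraction, so it measures our ignorance of growth's sources rather than any identified technology.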
Instead of "technology," Solow was really talking about "innovation." The notion that innovation need not be understood because it is "manna from heaven" is characteristic of many arguments from conventional economists and is particularly endemic in the climate debate. From such a perspective, of course, anyone asking where innovation comes from -- other than from the magic of the invisible hand -- must be either ignorant or malign.

But as Hoffmann's essay explains, once you actually do the math of energy innovation in the context of real-world social and political forces, you see that understanding processes of innovation requires more than simply understanding the "law of demand."

Kakaes continues:
The cautionary tale of Solow is emblematic of how economists get science and technology wrong. One economist creates a highly idealized mathematical model. The model’s creator is, as Solow was, honest about its limitations. But it quickly gets passed through the mill and acquires authority by means of citation. A few years after Solow’s paper came out, Kenneth Arrow, another Nobel Prize winner, would write that Solow proved the “overwhelming importance [of technological change in economic growth] relative to capital formation.” It’s a sort of idea laundering: Solow said that we don’t know where growth in economic output comes from and, for want of a better alternative, termed that missing information “technical change.” But his admission of ignorance morphed into a supposed proof that new technologies drive economic growth.
Remarkably, in the 21st century, our policy debates reflect the fact that we do not have a good idea where innovation comes from, how it is directed, or how we prepare for its inevitable downsides. Too often conventional economics presents an obstacle to debating and discussing this topic.

As Nordhaus and Shellenberger conclude,
Over the next century, global energy demand will double, and perhaps triple. But even were energy consumption to stay flat, significantly reducing emissions from today's levels will require the creation of disruptive new technologies. It's a task for which a doctrine focused on the efficient allocation of scarce resources could hardly be more ill-suited.
Read the three essays discussed here in full at The Breakthrough Journal blog, at Bridges Trade BioRes review, and at Slate.

16 May 2012

Updated: The US Intense Hurricane Drought

In preparation for an upcoming talk, I have updated the figure above through the start of the 2012 hurricane season, which begins with a record-long stretch of no intense hurricane landfalls still running. (In most browsers you can click on the figure for a larger view.) The long stretch with no intense hurricane landfalls has surely shaped expectations, setting the stage for all sorts of animal spirits to be in play. Oh, to be a commodities trader this summer.

Below is a figure showing the intra-seasonal distribution of US intense hurricane landfalls. About 8% have occurred in June and July, and more than 75% in August and September. (UPDATE: This figure has been updated from the original posted to fix an error in the median date.)

14 May 2012

Reducing Unemployment: Manufacturing vs. Services

The graph above answers a question I've wondered about for a bit:
By how much would the services and manufacturing sectors each have to grow employment in order to reduce the current rate of unemployment by 1%?
The answer is 1.3% growth in employment in the services sector and 12.9% in the manufacturing sector. (Data are for April 2012, and can be found here for employment and unemployment, and here for employment in the services and manufacturing sectors.)
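The back-of-envelope arithmetic behind these figures can be sketched as follows. The employment levels below are approximate April 2012 values I am assuming for illustration, not the numbers pulled from the data links above:

```python
# Sketch of the calculation: how much must each sector grow to cut the
# unemployment rate by one percentage point? Figures are approximate
# April 2012 values, assumed for illustration.
labor_force = 154_365_000    # civilian labor force
services_jobs = 118_600_000  # service-providing employment
manufacturing_jobs = 11_900_000

# One point of the unemployment rate equals 1% of the labor force.
jobs_needed = 0.01 * labor_force

growth_services = 100 * jobs_needed / services_jobs
growth_manufacturing = 100 * jobs_needed / manufacturing_jobs
print(f"services: {growth_services:.1f}%")            # ~1.3%
print(f"manufacturing: {growth_manufacturing:.1f}%")  # ~13%
```

The size disparity does all the work: because manufacturing employs roughly a tenth as many people as services, it must grow roughly ten times as fast to absorb the same number of unemployed workers.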

This graph shows that while the current uptick in manufacturing employment is worth noting and welcoming, growth in manufacturing employment is not going to be the primary long-term solution to bringing down unemployment. That said, even though manufacturing and services are distinct categories of economic accounts, they are of course inter-related within the broader economy. However, claims that special treatment for manufacturing will reduce unemployment have a high hill to climb in terms of the simple math that follows from the small portion of the economy that manufacturing employment currently comprises. Productivity gains make that hill even steeper.

And of course, do not forget that from the perspective of employment rather than economic sectors, all jobs are service jobs.

11 May 2012

What I Learned this Week

My tiff with Joe Romm and the Center for American Progress this week taught me a few lessons and put a finer point on ones I already knew:
  • Political debate is rough business, politics ain't beanbag
  • There are people and institutions whose business is to try to tear people down, to savage their reputation in order to avoid a debate on policy substance
  • No appeal to reason, honor or dignity matters to such people
  • They will lie to your face and to everyone else without batting an eye if they think they can get away with it
  • When caught in an obvious fabrication they will pretend to make it go away and that it never happened, while doing everything possible to spread the lie far and wide
What can one do in such situations?
  • Recognize that your character is being attacked because they feel they cannot engage you on substance -- an implicit admission that they see your arguments as stronger or unimpeachable
  • Recognize that their megaphone is bigger than yours, always will be
  • Recognize that because they are more influential and connected than you are, non-combatants will generally (often silently, but sometimes overtly) side with the more powerful -- this is a fact of life and there is no use complaining about it
  • Recognize that appeals to reason, honor, dignity will be ignored or used as a means to further inflict damage on you
  • Recognize that you have choices -- to drop out of debate, ignore, joke, quietly correct or return fire-with-fire
I have tried a lot of these strategies over the past years in response to efforts at character assassination, of which I have had my fair share ;-) I don't really like any of the responses, as they all have positive and negative outcomes. But if one can't take the heat, one should get out of the kitchen. And if one stays in the kitchen, then that means learning to cook with gas.

This week I responded to Romm and CAP's buckets of mud aggressively and loudly. Sometimes bullies should be stood up to.

Was it the right response? I am not sure, but it certainly was effective. Romm's dozen or more updates and changes to his post full of lies were undignified and embarrassing, and certainly an admission of wrong-doing. CAP's outright lie to me that they stood by their reporting even as they were erasing evidence of their fabrications was laughable.

For anyone paying attention -- and I don't blame anyone for not -- this week's episode shows clearly the moral bankruptcy of the vicious element of US political discourse, which exists on both sides of political debate. One organization used a billboard campaign to associate their opponents with mass murderers -- character assassination to be sure. Another organization then used lies to try to associate me with the first organization -- more character assassination. Apparently, appreciation of irony is in short supply these days.

I do not expect to have to engage in open warfare with Romm or CAP in the future, given the degree to which they have embarrassed themselves this week. However, should they continue to try to savage my reputation and harm my career, I will not take it lightly.

For my part, I will continue to call things like I see them, and will welcome debate with (and especially with) those with whom I generally disagree, ideologically, politically or substantively -- and that means even with vile organizations like CAP and Heartland.

As I told Joe Romm and CAP, I am in the business of sharing ideas with people with whom I may have many disagreements, and interaction/exchange/discussion is a virtue in my world. Disagreement and conflict are to be expected. Malicious lying and bullying are not.

10 May 2012

Robert Reich on Civility in Politics

Sports Governance Interlude

Please visit my sports blog -- The Least Thing -- where I have a two part article (with original reporting) that tells the tragic story of Mario Goijman, and the personal consequences to one man of failures in international sport governance.

The Tragedy of Mario Goijman: Part I and Part II


09 May 2012

Joe Romm is a Liar

FINAL UPDATE 5/10: After what must be a dozen updates and corrections, Joe Romm's post is an incoherent mess (not that that distinguishes it from his normal fare;-). I wrote to CAP pointing out obvious untruths and the results were unannounced, quiet changes to Romm's post, plus a misleading response from CAP that they "stand by" their reporting. If anyone wants to know why American political discourse is so vitriolic, just look at both Heartland and CAP, two sides of the same coin.

UPDATE: Joe Romm obviously understands that he went way over the line on this one, as he has retitled his post and added some confusing words of update. He maintains his assertion that I have some sort of "official" relationship with Heartland, writing, "How anyone could guess this isn’t official is, well, Pielke-esque." Sorry, Joe, but that is still a lie. 

Sorry to say it so bluntly, but it is true. Joe Romm has falsely claimed that I am an "official expert for Heartland." This is a lie. I have absolutely no relationship with Heartland -- never have, never will. Period.

Romm's efforts to smear by association are ironic given the lashing that Heartland just got for doing exactly the same thing. But irony has never registered high on Joe's awareness-meter. There is no lower form of "debate" than trying to sully someone's character by outright lying. And it is not the first time Joe has lied about me.

Joe Romm and the Center for American Progress should be ashamed of themselves for engaging in such tactics. Can't they engage a policy debate on its merits?  Apparently not.

Sarewitz on Bias in Science

In the current issue of Nature, Dan Sarewitz has a column about the threat posed by bias to scientific research.  (The image above is a screenshot of a paper cited by Sarewitz, which is by J. Ioannidis, 2005, "Why Most Published Research Findings are False").

Sarewitz explains the systemic findings of bias in clinical trials as follows:
Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered. The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated — but the necessary cultural change is incredibly difficult to achieve.
Why are such findings of bias turning up in clinical trials?
A biased scientific result is no different from a useless one. Neither can be turned into a real-world application. So it is not surprising that the cracks in the edifice are showing up first in the biomedical realm, because research results are constantly put to the practical test of improving human health. Nor is it surprising, even if it is painfully ironic, that some of the most troubling research to document these problems has come from industry, precisely because industry’s profits depend on the results of basic biomedical science to help guide drug-development choices.
Is the problem of bias limited to clinical studies?
It would therefore be naive to believe that systematic error is a problem for biomedicine alone. It is likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on. The cracks will be there, they are just harder to spot because it is harder to test research results through direct technological applications (such as drugs) and straightforward indicators of desired outcomes (such as reduced morbidity and mortality).
Read the whole thing here.

07 May 2012

Ignore the Gloss at Some Risk

Writing in the NYT yesterday, Jack Hitt discusses what Jerry Ravetz would call the "extended peer community" in the evaluation of knowledge claims. In the process Hitt produces a concise description of why blogging and other forms of commentary are vitally important:
These days, the comments section of any engaging article is almost as necessary a read as the piece itself — if you want to know how insider experts received the article and how those outsiders processed the news (and maybe to enjoy some nasty snark from the trolls).

Should this part of every contemporary article be curated and edited, almost like the piece itself? Should it have a name? Should it be formally linked to the original article or summarized at the top? By now, readers understand that the definitive “copy” of any article is no longer the one on paper but the online copy, precisely because it’s the version that’s been read and mauled and annotated by readers. (If a book isn’t read until it’s written in — as I was always told — then maybe an article is not published until it’s been commented upon.) Writers know this already. The print edition of any article is little more than a trophy version, the equivalent of a diploma or certificate of merit — suitable for framing, not much else.

We call the fallout to any article the “comments,” but since they are often filled with solid arguments, smart corrections and new facts, the thing needs a nobler name. Maybe “gloss.” In the Middle Ages, students often wrote notes in the margins of well-regarded manuscripts. These glosses, along with other forms of marginalia, took on a life of their own, becoming their own form of knowledge, as important as, say, midrash is to Jewish scriptures. The best glosses were compiled into, of course, glossaries and later published — serving as some of the very first dictionaries in Europe.
Hitt uses the case of the ivory-billed woodpecker to explain that authoritative knowledge claims are often subject to scrutiny from those with knowledge and expertise outside the academy. The ivory-billed woodpecker was thought to have been seen in a video in 2004 -- the first sighting in more than 50 years. The sighting was subsequently written up and published in Science. It turns out that the sighting was false: it was not an ivory-billed woodpecker. The paper in Science, however, stands uncorrected.
This has resulted in a situation where authoritative claims are widely known to be incorrect but not formally acknowledged as such. Hitt writes:
Already, among scientists, there is pushback, fear that incorporating critiques outside of professional peer review will open the floodgates to cranks. Not necessarily. The popular rejection last year of the discovery of a microbe that can live on arsenic was mercifully swift precisely because it was executed by online outsiders. Not acknowledging that crowd-checking and amateur commentary have created a different world poses its own dangers.

Take the case of the ivory-bill. The article in Science has never been retracted. Cornell still stands by its video. The federal Fish and Wildlife Service acted as though the ivory-bill existed, and, in 2008, it asked for $27 million to support recovery efforts. Here’s the thing: The ivory-billed woodpecker is the Schrödinger’s cat of contemporary media — dead to those who’ve looked inside Tom Nelson’s blog but alive to the professionals who can’t bear to.

Some may fear that recognizing the commentary of every article will turn every subject into an endless postmodern discussion. But actually, the opposite is true. Recognizing the gloss allows us to pause in the seemingly unending back and forth of contemporary free speech and free inquiry to say, well, for now, this much is true — the ivory-bill still hasn’t been definitively seen since World War II, climate change is happening and caused by mankind, natural selection is the best description of nature’s creative force. Et cetera.

It seems that the greatest challenge associated with blogging and other forms of "gloss" is not one of knowledge claims but of the accountability of authoritative institutions.
For instance, various claims made by the IPCC and US government as related to disasters and climate change are widely known to be false and Steve McIntyre has thoroughly documented many instances of questionable scientific practices in the so-called hockey stick literature. In both cases the relevant institutions have so far decided that it is easier to ignore the gloss than to deal with its consequences. But anyone who is paying attention can see what is going on. There is no ivory billed woodpecker.

Hitt explains that such false knowledge serves instrumental purposes, again citing the case of the ivory billed woodpecker:
The weirdest part of the ivory-bill’s resurrection is that if you look back through the past four decades, it turns out the bird has come back to life many times before. The ivory-bill seems to rise like a phoenix at times of environmental anxiety. And each time the sighting has been debunked, and then afterward some great section of wilderness has been declared protected and everyone feels better for a while.

After a 1966 disputed sighting in Texas, 84,550 acres became the Big Thicket National Preserve. When the ivory-bill was sighted/not sighted in a South Carolina swamp in 1971, the outcome was the creation of Congaree National Park. Alex Sanders, who as a member of South Carolina’s House of Representatives fought to preserve the land, told me that when people ask him where the ivory-bill is, he says, “I don’t know where he is now, but I know where he was when we needed him.”

Nice line. But you have to wonder: if we’d cinched our sense of reality with just a bit more reason instead of mythology, maybe we’d still be seeing the ivory-bill for real.
Ignoring the gloss may serve the interests of short-term expediency, but the longer-term consequences may not be so welcome, especially within those bodies whose assertion of authority rests on knowledge claims.

05 May 2012

FA Cup Says Expect More Hurricane Damage this Year

UPDATE: The predictive power of the FA Cup is more extensive than I had thought -- British elections. H/T Steve Cook.

In a 2009 academic paper, I documented a little-known but remarkable relationship (here in PDF):
Upon seeing efforts to establish relationships between various climate variables and NATL hurricane activity one is tempted to quote John von Neumann who said of fitting relationships with various parameters, ‘with four parameters I can fit an elephant, and with five I can make him wiggle his trunk’ (as related in Dyson, 2004). Indeed, my own research shows a correlation of 0.33 between the total score in the UK Football Association’s (FA’s) annual Cup Championship game and the subsequent hurricane season’s damage, without even controlling for SSTs, ENSO or the Premier League tables. Years in which the FA Cup championship game has a total of three or more goals have an average of 1.8 landfalling hurricanes and USD11.7 billion in damage, whereas championships with a total of one or two goals have had an average of only 1.3 storms and USD6.7 billion in damage.
Based on this relationship and the results of today's FA Cup final, we can expect an above-average damage year for hurricanes in the United States. Scoff at this you may ... but it is as good a predictor as any other, if not better. Let's plan on looking back in December to see how the FA Cup Hurricane Damage Prediction actually performed.
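The paper's tongue-in-cheek calculation has two parts: a correlation between final-score totals and damage, and a comparison of mean damage in high-scoring versus low-scoring years. A minimal sketch of both, using a made-up placeholder dataset rather than the paper's actual data:

```python
# Sketch of the FA Cup / hurricane damage calculation. The data below
# are made-up placeholders for illustration, NOT the paper's data.
from math import sqrt

goals  = [2, 3, 1, 4, 2, 3, 1, 5]                      # hypothetical final totals
damage = [5.0, 12.0, 4.0, 15.0, 8.0, 10.0, 6.0, 14.0]  # hypothetical USD bn

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(goals, damage)
high = [d for g, d in zip(goals, damage) if g >= 3]  # 3+ goal finals
low  = [d for g, d in zip(goals, damage) if g <= 2]  # 1-2 goal finals
print(f"r = {r:.2f}")
print(f"mean damage, 3+ goals: {sum(high)/len(high):.1f}")
print(f"mean damage, <=2 goals: {sum(low)/len(low):.1f}")
```

Which is exactly the von Neumann point: with a small sample and freedom to pick predictors, a "significant" relationship of this kind is trivially easy to manufacture.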

Caveat emptor.

03 May 2012

Reuters on Tax Incentives for Manufacturing

A very well-done piece from Reuters on the perverse incentives created by special tax breaks for a poorly defined segment of the economy - manufacturing.

H/T @gregmankiwblog.

UPDATE:  On this very theme, at The Atlantic Derek Thompson has a neat piece on Big Macs and the global economy.  Here is an excerpt:
McDonald's is a restaurant, but it functions much like a factory. Labor is supported by a deep well of technological innovation, such as vacuum packing, exceptional preservatives, deep freezing, vibrant artificial flavors, and high-speed microwaves. Workers assemble specific parts at great speed to deliver dependable and replicable products. "[McDonald's doesn't] put something on the menu until it can be produced at the speed of McDonald's," CEO James Skinner said in 2010, sounding not unlike Henry Ford from a century earlier.
Thompson's essay draws upon research in this paper on wages in the global economy.

02 May 2012

Le Débat 2012

I have a soft spot for democracy in all of its flavors. Above is a report on the debate between Nicolas Sarkozy and François Hollande last night. Hollande leads in the polls and a victory later this week will shake up European politics significantly.

Here are the latest polling results, courtesy of the WSJ:

New Paper: Energiewende and Nichtwissen

I have a new paper coming out in the journal Nature and Culture titled "Post-Normal Science in a German Landscape." The paper is about energy transitions and ignorance, viewed through the lens of brown coal and new lakes created from open pit mines in eastern Germany, near Leipzig (see video above). The paper was motivated by a workshop held a year ago at the University of Hamburg on the work of Jerry Ravetz and the notion of post-normal science.

Here is the abstract:
Post-Normal Science in a German Landscape

This essay explores the management and creation of ignorance via an exploration of the landscape of eastern Germany, which has seen profound social, political, and technological changes over the past several decades. Like in many places around the world, decision makers in eastern Germany are seeking to reach a future state where seemingly conflicting outcomes related to the economy and the environment are simultaneously realized. The management of ignorance is an important but often overlooked consideration in decision making that the concept of “post-normal science” places into our focus of attention.
It is a bit more conceptual and philosophical than much of my normal fare, but still grounded in real-world policy. It was a fun paper to write, and I learned a lot.

It will be online soon, but if you'd like an advance copy, just send me an email rpielkejr@gmail.com

01 May 2012

The NYT Puts the Hit On

UPDATE 5/3: At Dot Earth Andy Revkin has this to say about the article discussed in this post:
The Lindzen focus is a distraction, to my mind. Anyone thinking that the erosion of Lindzen’s credibility will somehow build societal enthusiasm for cutting greenhouse gas emissions is probably overly optimistic.
Spot on.

The New York Times has an article today ostensibly about clouds but which is really an extended hit piece on Richard Lindzen, a professor at MIT, member of the US National Academy of Sciences and well known climate skeptic.

Below I have excerpted a laundry list of phrases in the article used to describe Lindzen:
  • Leading proponent of the view that clouds will save the day
  • Has drawn withering criticism
  • Errors in his papers
  • Proof is lacking
  • Obliged [politicians] by assuring them that they are running no risks by refusing to enact emissions limits
  • Contrarian scientist
  • Gone beyond any reasonable reading of the evidence to provide a dangerous alibi for inaction
  • Wrong science
  • [Not] intellectually honest at all
  • Contrarian scientist
  • Methods he had used to analyze data were flawed
  • His theory made assumptions that were inconsistent with known facts
  • Most mainstream researchers consider Dr. Lindzen’s theory discredited
  • He routinely misrepresents the work of other researchers
  • Dr. Lindzen offers little hint of how thin the published science supporting his position is
  • He makes what many colleagues see as an unwarranted leap of logic
  • Deeply unprofessional and irresponsible
This is "advocacy journalism" -- it is not reporting, as there is absolutely no news in the piece. Two years ago the Boston Globe did a very similar story on Lindzen for its Lifestyle section, which covered the same ground, but as a profile rather than as a hit job.

Whatever one thinks about the climate change debate or Richard Lindzen, is it a good idea for the New York Times to engage in an over-the-top attack on a member of the National Academy of Sciences? Journalists, what do you think?

A Primer on the French Election