14 October 2012

Gordon vs. Gordon on Growth

This post is a follow-up to the discussion on this blog a few weeks ago of economist Robert Gordon's provocative paper, which asked, "Is US economic growth over?" Gordon's paper has been much discussed in the pages and blogs of the New York Times, Washington Post, Financial Times and elsewhere. Remarkably, the wide-ranging discussion took place with no one figuring out that Gordon's claims are in error and based on a selective and non-representative parsing of the data. (The FT did publish a letter of mine, however.)

Let's review. Gordon claims that US per capita economic growth has "steadily" declined from 1950 to 2007 (prior to the recent economic crisis) and uses this history as the basis for projecting a continued decline to 2100. Gordon goes further into the past to suggest that the economic growth of the past century was a "one-time event."

As I am in the early stages of working on a book on innovation and growth, upon reading Gordon's analysis I sought to replicate it, because the claims being made ran counter to data that I have been working with, and I wanted to understand why. Gordon explained in his paper that he arrived at a stair-step decline in US per capita GDP growth as follows:
Each of the successive periods after 1950 exhibits a downward step in per-capita real GDP growth, with steps downward marked at 1964, 1972, and 1987.
Gordon confirmed these breakpoints in an email response to a query, explaining that they represented periods of "normal" business cycle utilization. However, when I sought to replicate this analysis, I came up with these average annual growth rates of per capita GDP:
1950 to 1964 - 2.1%
1964 to 1972 - 2.9%
1972 to 1987 - 2.1%
1987 to 2006 - 1.9%
Clearly, this is not a stair-step decline. In fact, aside from a brief increase in growth rates in the period 1964 to 1972, there has been exceedingly little change in per capita growth rates since 1950 (and, in fact, since 1870).
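
For readers who want to check the arithmetic, here is a minimal sketch of the calculation: the growth rate over each period is the compound annual growth rate of real per capita GDP between the breakpoint years. The GDP levels in the sketch are illustrative placeholders chosen to be consistent with the rates reported above, not the actual data series.

    # Minimal sketch: compound annual growth rate (CAGR) of real per capita GDP
    # between breakpoint years. The levels below are illustrative placeholders
    # consistent with the rates reported above, NOT the actual data series.

    def cagr_percent(start_level, end_level, n_years):
        """Average annual growth rate over n_years, in percent per year."""
        return ((end_level / start_level) ** (1.0 / n_years) - 1.0) * 100.0

    # Illustrative real per capita GDP levels at each breakpoint year.
    gdp_per_capita = {1950: 10000, 1964: 13380, 1972: 16820, 1987: 22970, 2006: 32840}

    breakpoints = sorted(gdp_per_capita)
    for start, end in zip(breakpoints[:-1], breakpoints[1:]):
        rate = cagr_percent(gdp_per_capita[start], gdp_per_capita[end], end - start)
        print(f"{start} to {end}: {rate:.1f}% per year")  # 2.1, 2.9, 2.1, 1.9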

In a further email exchange, which included Martin Wolf of the FT, Gordon admitted that his paper had an error: there was in fact no breakpoint in 1964. He had instead combined the first two periods in his figure, which, when appended to his pessimistic forecast for 2007-2027, gives the appearance of a stair-step decline, even though the data from 1950 to 2007 show exceedingly little decline. The error in the paper means that an attempted replication contradicts the claims being made -- unfortunate but true.

After waiting more than a week, I asked Gordon if he was going to correct the error in his paper, and he replied that it was not worth making a fuss about. Academics can be prickly about critique, especially from outside the home discipline, but it is standard practice to acknowledge and correct errors, especially in work that is widely discussed. In Gordon's case, correcting the error would likely require an explanation of why the decline appears only with certain choices of breakpoints and not others -- which takes Gordon's analysis down a difficult, perhaps unrecoverable path.

On this issue I will let a very prominent and widely respected economist have the last word. The graph and conclusions below are perfectly in sync with my own critique of Gordon (2012). They come from a 2004 paper titled, "Two Centuries of Economic Growth: Europe Chasing the American Frontier." There this respected economist wrote (emphases added):
Plotted on a log scale for selected years that are "neutral" to the business cycle, i.e., excluding the influence of recessions, depressions, and wars, the most striking fact displayed in Figure 1 is the steady and inexorable advance of U. S. real GDP per capita. While the growth rate is slower in the first 50 years, between 1820 and 1870, subsequent to 1870 there is surprisingly little variation around the 1870-2000 average growth rate of 1.81 percent per year, a rate sufficient to double the standard of living every 38 years. The major acceleration above the long-run trend comes, surprisingly, not in the early postwar years but in the decade 1963-73, followed by a retardation back to trend in 1987-94.
That prominent economist? Robert Gordon.

11 comments:

  1. Hi Roger,

    Good stuff. If you're interested in a world-wide analysis, I plotted per-capita GDP growth based on Brad DeLong's analysis of world GDP from "1 million BC" to 1998:

    http://holtz.org/Library/Social%20Science/Economics/Estimating%20World%20GDP%20by%20DeLong/Estimating%20World%20GDP.htm

    The values I got for each decade from 1900 to 2000 are as shown in this graph (the projections from 2000 to 2100 are from four different people):

    http://markbahner.typepad.com/photos/uncategorized/per_capita_gdp_growth_october_2004_predi_1.JPG

    What I found that's potentially more interesting is to look at 50-year periods, from 1600 to 2000. The growth rates look like:

    Time period....% Annual Per Capita GDP Growth
    1600-1650.......................0.12
    1650-1700.......................0.18
    1700-1750.......................0.16
    1750-1800.......................0.18
    1800-1850.......................0.87
    1850-1900.......................1.65
    1900-1950.......................1.76
    1950-2000.......................2.83

    You can see that from 1750 onward, the growth has been what Ray Kurzweil describes as "double exponential". That is, the rate of growth in percent per year is not simply staying the same (i.e., exponential growth)...the rate of change in percent per year is actually increasing ("double" exponential growth).

    http://www.kurzweilai.net/the-law-of-accelerating-returns

    If one simply took a linear extrapolation of the data from 1800 onward, one would arrive at a predicted world per-capita GDP growth rate for the period from 2000 to 2050 of 3.39 percent, and for the period from 2050 to 2100 of about 4 percent. (A rough sketch of this sort of extrapolation follows at the end of this comment.)

    But I maintain world per-capita GDP growth will be much, much faster than that, due to increases in artificial intelligence. I'm predicting world per-capita GDP growth of more than 10 percent per year within the next 1-2 decades, because the number of human brain equivalents added by computers each year will be approximately 1 billion by 2024, and 1 trillion by 2033.

    P.S. Standard disclaimer: Barring terminators taking over or global thermonuclear war.
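
    A rough sketch of the linear extrapolation mentioned above, using the 50-year growth rates listed in this comment and an ordinary least-squares fit (the exact projected values depend on which data points and weighting are used, so treat the output as illustrative):

        # Illustrative linear extrapolation of world per-capita GDP growth rates,
        # fit to the 50-year periods from 1800 onward (period midpoint vs. rate).
        import numpy as np

        midpoints = np.array([1825, 1875, 1925, 1975])       # period midpoint years
        growth = np.array([0.87, 1.65, 1.76, 2.83])          # % per year

        slope, intercept = np.polyfit(midpoints, growth, 1)  # least-squares line

        for period_mid in (2025, 2075):  # midpoints of 2000-2050 and 2050-2100
            projected = slope * period_mid + intercept
            print(f"Projected growth around {period_mid}: {projected:.2f}% per year")
        # ~3.3% and ~3.9% with these four points; using the underlying decade
        # data instead would shift the figures somewhat.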

  2. "Let's review. Gordon claims that US per capita economic growth has "steadily" declined from 1950 to 2007 ****(prior to the recent economic crisis)"****

    This is an interesting debate - but examining past trends may or may not be predictive of future developments. Mark is certainly optimistic, and perhaps he is right, but let us not overlook that since 2007 the rate of decrease in world hunger has slowed. And hunger in Africa has grown over the past two decades (I don't know whether it has grown faster or slower than the population). I'm not a doomsayer, and we can certainly hope that the recent trend change is short-term, but I do have some concern that much of the growth over the past decade or more was built on unsustainable practices (i.e., irresponsible financial engineering and bubbles).

    And this goes back to the importance of acknowledging the incompleteness of aggregated GDP as a measure of economic growth.

  3. debt and hedonic adjustments to GDP

  4. "Mark is certainly optimistic..."

    Don't ignore my caveats...especially the one regarding Terminators taking over. If total world computer intelligence doubles every year, that means in 10 years it will have increased by a factor of about 1,000 (2^10 = 1,024). Somewhere in the 2025-2030 time frame, I'm predicting world total computer intelligence equal to total hydrocarbon brain intelligence. Then, only a decade later, total computer intelligence would be 1,000 times hydrocarbon brain intelligence. If computers that have 1,000 times our intelligence want to do us harm, there probably won't be anything we can do.

    "...and perhaps he is right,..."

    There are (at least) two questions to ask regarding that possibility:

    1) What creates wealth? --> I think human minds create wealth.

    2) What will be the power of computers relative to human minds? Right now the power of computers is a small fraction of the power of human minds, but I'm predicting (based mainly on Ray Kurzweil's work) that total worldwide computer intelligence will equal and then vastly exceed total human intelligence in the next 1-2 decades.

  5. -4- Mark Bahner,

    I don't think that your estimates of computing power are bad, per se, but I'm skeptical about their equivalence to human intelligence, especially with respect to things like wealth creation.

    I guess we'll see when we get to the singularity...

  6. Mark -

    I'm a big believer in human capital. I think that it is a much-overlooked factor in debates from climate change to economic growth to fiscal policy.

    But that said, I assumed your "Terminator" caveat to be a joke, and now that you've clarified that it wasn't I have to admit I still have a hard time taking it seriously. As Matt alludes to, your calculus of equivalent or superior intelligence seems a bit simplistic. Human intelligence is not something easily measured or fully understood. I remain unconcerned that growth in computing power represents some kind of existential threat to humans.

  7. Hi Matt,

    "I don't think that your estimates of computing power are bad, per se, but I'm skeptical about their equivalence to human intelligence,..."

    Basically, I assume 20 petaflops (20 quadrillion operations per second) is equal to one human brain equivalent. That's Ray Kurzweil's estimate for human brain calculation capacity. Hans Moravec estimates the human brain at 500 teraflops (0.5 petaflops)...1/40th of Ray Kurzweil's estimate.

    I ignore software and sensory inputs. I think my simplifying assumption is justified, based on my (admittedly limited) knowledge of computers such as IBM's Watson and iPhone's Siri.

    IBM's Watson (per wonderful Wikipedia) is about 80 teraflops. So that's about one-sixth of Moravec's estimate of the power of a human brain, and roughly one two-hundred-fiftieth of Kurzweil's. And an iPhone 5...I think that's maybe 10-20 billion instructions per second.

    So it seems to me, given how capable they already are while still so far short of the power of a human mind, that computers capable of 20 petaflops should have no problem producing intelligence comparable to a human's. (A quick back-of-the-envelope comparison of these figures is sketched at the end of this comment.)

    "...especially with regard to things like wealth creation."

    Consider this...the asteroid Eros is roughly 17 km across, and has an estimated $20 TRILLION worth of metals (at today's market rates). It has 20 BILLION tons of aluminum alone. That's about 500-1000 years' worth of worldwide primary aluminum at today's rates.

    Now, we're probably never going to send humans to mine Eros. It would be way too dangerous, and we value human life highly. But suppose you send 1,000 humanoid robots at $1 million a pop to Eros. That would only be $1 billion worth of robots, for up to $20 trillion of metals. (At today's rates, of course. Mining Eros in any significant way would probably crash the worldwide prices of most metals.)

    Also consider that most humans in developed countries would prefer not to work more than 40 hours a week, with several weeks off for vacation/sick leave. So less than 2000 hours a year. In contrast, a computer doesn't mind working 24/7...and close to 365 days a year.

    Never gets sleepy. Never gets bored. Never plays solitaire.
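
    A back-of-the-envelope comparison of the flops figures quoted above (all of them rough estimates, and treating raw operations per second as a stand-in for a "human brain equivalent"):

        # Rough flops comparison, using the estimates quoted in this comment.
        PETAFLOP = 1e15

        kurzweil_brain = 20 * PETAFLOP   # Kurzweil: ~20 petaflops per human brain
        moravec_brain = 0.5 * PETAFLOP   # Moravec: ~500 teraflops per human brain
        watson = 0.08 * PETAFLOP         # Watson: ~80 teraflops (per Wikipedia, ca. 2012)

        print(f"Watson / Moravec brain:  {watson / moravec_brain:.3f}")   # ~0.16 (about 1/6)
        print(f"Watson / Kurzweil brain: {watson / kurzweil_brain:.4f}")  # ~0.004 (about 1/250)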

  8. Hi Joshua,

    You write, "But that said, I assumed your 'Terminator' caveat to be a joke, and now that you've clarified that it wasn't I have to admit I still have a hard time taking it seriously."

    Well you should take it seriously. Not as a certainty, or even a probability (more than 50 percent chance). You should consider it a potentially extremely dangerous possibility.

    If you've ever seen the Terminator movies, there are at least two instances that I think are applicable:

    1) In Terminator 2, kid Connor has gotten Ahnold Terminator to swear that he won't kill anyone. The Terminator then immediately shoots an innocent security guard in the leg. When kid Connor protests, the Terminator replies, "He'll live."

    2) In Terminator 3, when John Connor and the future Mrs. Connor are laughing, the Terminator says, "Your laughter is good. It eases tension and fear of death."

    The point is that the Terminator simply doesn't view things the same way humans view them. He has absolutely zero "people skills."

    As Kyle Reese says in Terminator 1: "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

    Assuming that there were androids or some other robots with human-level intelligence, we humans would be very vulnerable. For instance, robots would be totally immune to biological attack, so they could poison all the water supplies on earth with impunity. They don't need to breathe, so they could release poisonous gases (biological or chemical) with impunity.

    I read a really frightening article, I think it was in MIT's Technology Review, about the Soviets' biological weapons program. As I recall, they developed something that made the immune system attack the human brain, basically turning it into mush. Androids or robots could release such a thing with impunity.

    Basically, the reason we experiment on mice and monkeys and dogs is because we are more intelligent than they are. We only rule this planet by virtue of our intelligence.

    Computer intelligence is doubling on the order of every year or three. So that means a computer that's equal to us in intelligence one year is going to be twice our intelligence a year or three later. And within 1 to 3 decades, it will be 1,000 times as intelligent as we are.

    Realistically, when there are 7 billion computers with intelligence equivalent to human intelligence, we will no longer control our destiny.

    That shouldn't cause you to lose sleep or be unable to eat. But it's a massively bigger potential problem than global warming ever will be. The phrase "existential threat" is overused, but if terminators did come into being, it would be an existential threat. Potentially on the order of global thermonuclear war.


  9. "That's Ray Kurzweil's estimate for human brain calculation capacity."


    That is a very one-dimensional view of human intelligence. You could link every computer in the world and despite their calculating capacity they couldn't complete some tasks that are relatively simple for humans.

  10. 9-Joshua

    Yes, one could link every computer in the world, and despite their calculating capacity they couldn't complete some tasks that are relatively simple for humans. And there are tasks that computers do very easily that humans would find difficult or impossible. Like recalculating all the cells in a spreadsheet that's 1000 rows by 1000 columns. Or searching through a 300-page book and finding every instance of the word "use." Or searching the Internet for every use of the word "Joshua."

    So in a sense, my assumption that 20 quadrillion operations per second (20 petaflops) is one human brain equivalent (i.e., to ignore software and sensory inputs) is essentially like saying, "Two apples equals two oranges." If one loves the taste of apples and hates oranges (or vice versa) that's certainly not true. It's not true if one wants more vitamin C, where oranges have the clear advantage.

    But one thing I think we're going to see more and more of is that computers *will* be able to do the things that humans can do easily. Like drive a car. Or pick up things in one room, and return them to the room in which they belong. (That can't come soon enough for me!)

  11. I think Joshua has this right.
    To take an electrical analogy, it would depend on whether these computers were connected in parallel, or in series.
    There are 13,000 people in the small town where I live, but their brains are linked in parallel, so all of us together will never understand particle physics as well as Stephen Hawking.
    Unless, that is, one of our brains happens to have the same computational power as that of Professor Hawking.
