23 May 2015

Thanks for Reading

This blog is no longer being updated.

Thanks for reading, please enjoy the archives!

13 May 2015

Thoughts on the New Science Advisory Structure of the European Commission

Update 15 May: Science | Business has an interesting interview with Robert-Jan Smits, Director-General for Research and Innovation, on the new advisory mechanism. He explains that the new mechanism was designed in part to placate the fussy Brits, still smarting from the dismissal of Anne Glover as EC CSA. He also provides some more hints on how the structure might work.

Today, the European Commission released its new plans for science advice under Jean-Claude Juncker, the EC President. The new structure comes after Juncker terminated the office of the Chief Scientific Advisor, led by Anne Glover, which was put in place by his predecessor, José Manuel Barroso. The new structure is shown visually in the figure at the top of this post.

While there are many details of the plan still to be announced (here is the EC press release with a link to a presentation given earlier today), here are three quick reactions.

1. The 7-person super committee will likely be problematic

My first reaction is that this supra-committee -- called a "High Level Group of eminent scientists" and shown in a blue bubble above -- is a recipe for future problems.  This is for several reasons.

One is just math. The EU has 28 member states. You can do the political math. Another problem is one that perennially dogs the EC: democracy. Who will the "eminent scientists" actually represent? They will not, as explained today, work for the EC. It is unlikely too that they will work for member states (such as a chief scientific advisor or other EU or national government employee), as these individuals work for member governments with interests that may or may not coincide with those of the EU. That leaves industry, NGOs and academics. Appointing members from industry or NGOs seems impractical, leaving academics.

If this is correct, then the "eminent scientists" will probably look a bit like the group that Juncker met with today, which included Nobel and other grand prize winners: Sir Paul Nurse, Jules Hoffmann, Serge Haroche, László Lovász, Jean Tirole and Edvard Ingjald Moser. I love Nobel Prize winners as much as anyone, but until they have a Nobel Prize in science policy, it is safe to say that this group probably does not have great knowledge of science advisory processes or of the byzantine politics of the European Union, although some have relevant experience. A group of the "great and the good" is politically defensible, but may be practically problematic.

Thus, I question whether the "High Level" group is either necessary or desirable.

2. The model appears to confuse "science for policy" with "policy for science."

As I have written before, the EC already has a significant expert body within its ranks that stands ready to enhance science advice to the commission. There are undoubtedly considerable internal politics (Gambling? I am shocked, shocked!) within the EC between its Directorate-General for Research and Innovation and the Joint Research Centre, that I am utterly and blissfully unaware of. That said, it is important to recognize that the support of science (which we might call "policy for science") is a different sort of beast than support of policy (which we might call "science for policy").

In fact, putting the same body in charge of both functions could create some unhealthy potential for conflicts across roles, for the simple reason that DG RTD is in the business of advocating for science budgets. If advocating for the use of certain science is added to its portfolio, then these different missions could come into conflict. It is a little bit like putting an agricultural ministry in charge of advocating both for farmers and for healthy diets. You'd like to think that these two interests always go hand in hand, but experience shows that they sometimes don't. Experience also shows that separating institutions that support science from those that support policy helps to avoid unnecessary conflicts.

In my opinion, the JRC is much better placed to negotiate the connections between the "demand" for science advice and the "supply" of science advice. Along those lines, see this paper by Dan Sarewitz and me from a few years back, which proposed a model for reconciling the supply of and demand for science (here in PDF). It is a notable positive that the EC proposal adopts the framing of "supply and demand," as this will highlight the significance of the interface between the two, and the need for expertise at that interface.

3. Roles and responsibilities have yet to be described

This is a point made by Corporate Europe Observatory (one of the groups which campaigned to have the EC CSA office terminated), specifically that "independence" and "transparency" have yet to be defined. I agree. Much better than an "independence" framing might be a "conflict of interest" framing. This is another reason why coordinating the reconciliation of supply of and demand for science might be better done from inside the EC rather than by a body of supposedly heroic, independent super humans.

More broadly, the role of such a "science advisory mechanism" needs to be made clear. Is it a science arbiter? An honest broker? Or something else? How is demand to be identified? What products are to be produced? Other such questions might be raised as well.

To be fair, there is time to provide answers to these questions. But they should be answered. One of the problems with the EC CSA office was that its creation was seen by some as "mission accomplished," and it started with few resources or mechanisms (see the discussion in Wilsdon and Doubleday, here in PDF). It would be very easy for the EC to move on from today's announcement, to let the issue fade into the background, only to reemerge sometime down the road in problematic fashion, as occurred with the CSA.

For anyone interested in improving connections of expertise and decision making, the EC stands at a significant fork in the road. Which route it takes will be significant. We should all keep paying attention.

08 May 2015

Evaluating UK Election Predictions

UPDATE: I have a piece up at the Guardian which draws upon and extends this analysis. See it here. Comments welcomed!

Back in March, I posted a survey of thirteen forecasts for the outcome of the UK election and promised to perform an evaluation when the results were in. Well, the results are in. Let's jump right into the evaluation. After that, I'll offer some more general comments about predicting elections, data journalism and democracy.

Here are the forecasts that I am evaluating:
Many of these forecasts were dynamic, meaning that they changed over time based on updated information. The evaluation below is based on this snapshot of the forecasts from the end of March. But the results won't be particularly sensitive to the date of the forecast, given how badly wrong they were.

The methodology is a very simple one that I have used frequently (e.g., to evaluate predictions of Olympic medals, the World Cup, the NCAA tournament, etc.) and is the basis of a chapter in my new book.

First, I identified a "naive baseline" forecast. In this case I chose to use the composition of the UK Parliament in March 2015. My expectation was that these numbers were (for the most part) determined in 2010, so any forecasters claiming to have skill really ought to improve upon those numbers. Let me emphasize that the March 2015 composition of the UK Parliament is an extremely low threshold for calculating skill.

Second, I calculate the improvement upon or degradation from the naive baseline. I do this by performing a very simple calculation. I (a) take the difference between the forecasted number of seats for a particular political party and the actual results, (b) square this number, (c) sum these squares across the political parties for each forecast, and then (d) take the square root of the resulting sum. The result is a single score summarizing how many seats the forecast missed across the parties.
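The four-step calculation can be sketched in a few lines of Python. The seat numbers below are invented purely for illustration and are not drawn from the forecasts evaluated here:

```python
import math

def seat_error(forecast, actual):
    """(a) Difference per party, (b) squared, (c) summed across
    the parties, (d) square root of the resulting sum."""
    return math.sqrt(sum((forecast[party] - actual[party]) ** 2
                         for party in actual))

# Invented two-party numbers, for illustration only.
forecast = {"Conservative": 280, "Labour": 270}
actual = {"Conservative": 330, "Labour": 230}
print(round(seat_error(forecast, actual), 1))  # 64.0
```

A forecast beats the naive baseline when its score is lower than the baseline's score computed the same way.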

Let's start with a look at the forecasts for the two biggest parties, the Tories and Labour, the only parties with a realistic chance of forming a government. Here are those results, with RED indicating a worse performance than the naive baseline, and BLACK indicating an improvement (no, there is no black on this graph).
It is difficult to describe this graph as anything other than mass carnage for the forecasters. The predictions were off, and not by a small amount. Nate Silver, who visited the UK before the election to opine on the race, explained to the British public, "What we know is that it’s highly likely you won’t have a majority.” Um, no.

Let's bring in the Liberal Democrats and see how that affected the results. (Note: Only 12 of the 13 forecasts included 3 parties.)
Here we have 2 of 12 forecasts outperforming the naive baseline. Stephen Fisher, who ran a great blog during the election at Elections Etc., did the best as compared to the naive baseline, but this result is tempered a bit by the fact that his forecast degraded after the March prediction was made, with his election-day forecast performing worse. Even the best performers did poorly in absolute terms: the naive forecast, Fisher and Murr all missed between 46 and 60 seats across the three parties.

The other forecast to outperform the naive baseline was produced by Andreas Murr at LSE and used a "wisdom of the crowds" approach. This method was based on asking people who they thought would win their constituency, not who they would vote for. The fact that this method outperformed every other approach, save one, is worth noting.

Overall, the track record of the forecasters for the three-party vote was also pretty dismal.

Let's bring in the SNP and UKIP. (Note: Only 8 of the 13 forecasts included SNP.)
With the SNP revolution occurring in Scotland, we would expect that this would improve the forecasts, since the naive baseline had only 6 SNP members in Parliament. (UKIP turns out to be mathematically irrelevant in this exercise.) Even so, adding in the SNP only raises two other forecasters above the naive baseline. It is worth noting that the worst performing forecast method (Stegmaier & Williams) had the very best prediction for the number of SNP seats.

Even with advance knowledge that the SNP would gain a large number of seats, only half of the forecasters who predicted SNP seats improved upon the naive baseline.

Overall, if we take the set of forecasts as an ensemble and ask how they did collectively (simply by summing their seat errors and dividing by the number of parties predicted), the picture remains pretty sorry:
  • Two-Party Forecasts (13): degraded from Naive Baseline by ~38 seats per party
  • Three-Party Forecasts (12): degraded from Naive Baseline by ~17 seats per party
  • Five-Party Forecasts (8): degraded from Naive Baseline by ~0.3 seats per party
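One plausible reading of that ensemble calculation can be sketched in Python; the function name and the exact averaging order are my assumptions rather than anything specified above:

```python
def ensemble_degradation(forecast_errors, naive_error, n_parties):
    """Average the per-forecast seat-error scores, compare to the naive
    baseline's score, and express the gap on a per-party basis."""
    mean_error = sum(forecast_errors) / len(forecast_errors)
    return (mean_error - naive_error) / n_parties

# Invented scores: two forecasts averaging 110 seats of error against a
# naive baseline of 80, over two parties -> 15 seats per party worse.
print(ensemble_degradation([100.0, 120.0], 80.0, 2))  # 15.0
```

A positive result means the ensemble did worse than the naive baseline, matching the "degraded by ~N seats per party" framing above.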
So what lessons should we take from this exercise?

One lesson is that while predicting elections is interesting and fun from an academic perspective, it may not add much to our democratic practices. Nate Silver at FiveThirtyEight, for better or worse, has become the face of poll-driven "horse-race journalism," in which politics and policy choices are stripped out and numbers are pretty much all that matters. This is of course ironic, because Silver used to complain about punditry and horse-race journalism. Yet during his recent PR tour of the United Kingdom he was the ultimate pundit weighing in on the horse race. Not discussed by Silver were questions about subjects such as the future of the NHS, recharging UK productivity, or the desirability of Scottish independence or a possible EU referendum.

My criticism of election forecasts goes back a long way. Back in 2004 I wrote:
Rather than trying to see the future, political science might serve us better by helping citizens to create that future by clarifying the choices we face and their possible consequences for policy.
By simply predicting seats and treating politics like a sporting event, we diminish the partisanship, the choices, and the fundamental values that lie at the core of politics. Politics is about people and our collective future. I fear that data journalists have diminished our politics.

A second lesson is that we often forget our ignorance. Back in 2012 Nate Silver wrote very smartly:
Can political scientists “predict winners and losers with amazing accuracy long before the campaigns start”?

The answer to this question, at least since 1992, has been emphatically not. Some of their forecasts have been better than others, but their track record as a whole is very poor.
The 2015 UK General Election reminds us of this fact. Sure, it does seem possible to anticipate US elections, but this may say something about American exceptionalism (e.g., highly partisan with well-gerrymandered districts, a relatively simple electoral system that is overwhelmingly well-surveyed) rather than anything about the predictability of politics more generally.

I don't mean to pick on Nate Silver (disclaimer: I worked for him briefly in 2014, and admit to sometimes being seduced by horse-race journalism!) but at the same time, his overwhelming presence in the UK elections (and that of other forecasters) was influential enough to warrant critique. I have long had a lot of respect for Nate, not least because in the US at least, he figured out how to systematically integrate and evaluate polls, something that academic political scientists utterly failed to do.

At the same time, here is one example of the overwhelming influence of a dominant "narrative" in popular discourse. One pollster, Survation, conducted a survey before the election that proved remarkably accurate. But they chose not to publish. Why not?
We had flagged that we were conducting this poll to the Daily Mirror as something we might share as an interesting check on our online vs our telephone methodology, but the results seemed so “out of line” with all the polling conducted by ourselves and our peers – what poll commentators would term an “outlier” – that I “chickened out” of publishing the figures – something I’m sure I’ll always regret.
While Survation has to live with the decision not to release their poll, I can understand the pressures that exist not to contradict popular narratives expressed by loud and powerful media bodies. These pressures can mean narrow perspectives that exclude other, inconvenient expertise. Sometimes, the popular narrative is wrong.

The role of data journalists (and their close cousins, the explainer journalists) should not be to limit public discourse, either intentionally or unintentionally by weight of influence, but rather to open it. This means going beyond the numbers and into all the messiness of policy and politics. Data journalism, like our democracies, remains a work in progress.

05 May 2015

Handicapping the UK General Election

I am trained as a political scientist but I am in no way an expert in UK politics. With that out of the way, I thought it might be fun and educational (for me, at least) to have a go at making sense of the upcoming UK election. So please read on and comment if you'd like.

The UK parliament has 650 seats, which means that a majority is 326 seats. Presently, the UK is governed by a coalition of the Conservative party (holding 302 seats) and the Liberal Democrats (holding 56 seats). The upcoming election is particularly interesting because there is a significant chance that the Conservatives and LibDems together will lose enough seats that their combined total falls below what is needed to form a government. The number needed to secure a majority is actually less than 326, because Sinn Féin does not take its seats, due to a longstanding boycott of Westminster. A working majority is generally thought to be 323.
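The working-majority arithmetic can be checked in a couple of lines; the assumption that Sinn Féin's five members (their 2010 total) are the only abstainers is mine:

```python
def working_majority(total_seats, abstaining):
    """Smallest bloc that outnumbers every other member who actually votes."""
    return (total_seats - abstaining) // 2 + 1

print(working_majority(650, 0))  # 326 -- the textbook majority
print(working_majority(650, 5))  # 323 -- with Sinn Fein's (assumed) 5 abstainers
```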

So here are the numbers to watch as the election results come in.

1. The total seats won by the current coalition.

The current (May 5) forecast by electionforecast.co.uk suggests that this total will fall in between 272 and 343 seats, with a forecast of 307 seats. If the coalition secures 323 seats or more, then odds would appear to strongly favor a continuation of the current coalition and David Cameron continuing on as Prime Minister. This scenario is the least complicated but also seems fairly unlikely.

Things get more complicated if the current coalition does not win a working parliamentary majority. It is of course also possible that, if the Conservative-LibDem coalition gets close enough to a working majority, several smaller parties could be added to the coalition to secure one.

But what if such a majority is not possible? Let's break down some additional key numbers.

2. The total seats won by the Conservatives.

The current forecast by electionforecast.co.uk has the Conservatives on 281 seats and Labour on 267. Another group of academics, Polling Observatory, has it at 274 Conservative and 272 Labour. May2015 has it at 274 Conservative and 269 Labour. Ladbrokes (betting market) has it at 286.5 Conservative and 266.5 Labour.

What these various predictions suggest is an expectation that the Conservatives will win more seats than Labour, but not by much and with substantial uncertainty. So, let's consider each possibility in turn.

2a. Conservatives win more seats than Labour

If this occurs, we can expect David Cameron to quickly assert victory and claim a mandate for a second term. This mandate might be implemented via a Conservative-led minority government or a continuation of the Conservative-LibDem coalition as a minority government.

A big wild card here is what Ed Miliband and the Labour party decide to do. If the combined total of Labour and Scottish National Party seats (perhaps plus some other minority parties) totals 323 or more, then together they will have the votes to pass a motion of no confidence in the government, which would force Cameron to step down.

Such a vote would almost certainly lead to a constitutional crisis as the UK would be in uncharted territory under the provisions of the 2011 Fixed-term Parliaments Act (FTPA). Catherine Haddon at the Institute for Government explains that "the Act substantially changes the rules of politics; and that nobody can yet tell exactly how these new rules will change the game." She gets into more detail:
If a motion of no confidence is passed or there is a failed vote of confidence, there is a 14-day period in which to pass an act of confidence in a new government. If no such vote is passed, a new election must be held, probably a mere 17 working days later.

So far, so clear. But from there we start to get into uncharted territory on two fronts. One is that some of the crucial mechanisms are not set out; the other is how the operation of the Act could affect political dynamics and party bargaining.
It is conceivable that the election winds up in the UK courts.

Thus, assuming that the Conservatives win more seats but parliament is hung, a first big decision will be Ed Miliband and Labour's choice of whether to join with the SNP to pass a vote of no confidence. Of course, under a minority government such a vote of no confidence could occur at any time. Perhaps Miliband waits for the first big screw-up by the minority government to force a new election in weeks' or months' time.

Of course, if Miliband cannot assemble 323+ no confidence votes, then the point is moot and the coalition continues to govern with David Cameron as PM, but with a continuing risk of the government falling.

2b. Labour wins more seats than the Conservatives

If this occurs, it has the potential to be a game changer. Under this scenario a first big decision shifts to Nick Clegg and the Liberal Democrats. Do they then switch alliances from the Conservatives to Labour? Maybe so, if David Cameron keeps an in/out vote on Europe as a "red line." Clegg could then be known as the man who saved the UK's role in Europe, and perhaps the LibDems as a meaningful political party. Of course, if the UK public really values having this vote, it could swing things the other way come the next election.

Under this scenario there would be essentially no risk of a constitutional crisis, as no conceivable combination of Conservatives and other parties would have the votes to pass a no confidence vote against a Labour government. A Labour-LibDem coalition may be a minority government, but with the SNP as backstop it might as well be a majority. (For those unaware, Labour has ruled out a coalition with the SNP, and the SNP has ruled out supporting any scenario that includes a Conservative government.)

Bottom Line

My sense of the above is that Labour and Ed Miliband are in the driver's seat regardless of who is PM next week. While it is possible that the current coalition receives a mandate for another term, that seems unlikely. What seems more likely, in order of my qualitative estimation, is the following:

1. Cameron hangs on as PM over a minority government. It lasts somewhere between 2 weeks and 6 months before a second election in 2015.

2. Labour and the LibDems form a minority but new, stable coalition government.

3. The UK courts settle a constitutional crisis over the FTPA, as there are not enough votes to form a government or declare no confidence.

For those wanting to dive deeper, here is an analysis of seats to watch as results come in to get a sense of which way things are turning.

Whatever happens, it'll be fun to watch from Boulder, and a wonderful expression of modern democracy in action!

What do you think?

04 May 2015

Focus of Attention in 10 Years of Charlie Hebdo Covers

The graph above and accompanying analysis come from Jean-François Mignot and Céline Goffette, writing in Le Monde last February. The graph shows a content analysis of more than 10 years of Charlie Hebdo covers, from 2005 to 2015. Of 523 covers, religion was the subject of 38, and of those 38, Islam was the subject of just 7.

Charlie Hebdo is in the news this week because its staff has been given the Toni and James C. Goodale Freedom of Expression Courage Award by the PEN American Center, which supports free expression. The award prompted a backlash by some PEN members and others, who think that the award is inappropriate.

Whatever your views on free expression, Charlie Hebdo or the PEN Award, the data shown above are an important part of understanding what Charlie Hebdo does: it is not a magazine obsessed by (or even focused on) religion or Islam.

On the PEN controversy, I think that The Economist gets it just about right:
Unfettered free speech is good for humanity. Charlie Hebdo was firebombed, and its journalists were threatened and attacked for what they wrote—yet they persisted. That they persisted in drawing crass, juvenile cartoons is beside the point. Defending free speech means defending speech you don’t like; otherwise it’s just partisanship, not principle.