Track Record: 2015 Errors

This page first posted 16 May 2015

The headline prediction for the May 2015 election was not accurate. The final prediction was for a hung parliament with Labour/SNP as the largest bloc. The actual result was a small Conservative majority.

In numerical terms, the prediction and the outcome were:

Party | 2010 Votes | 2010 Seats | Pred Votes | Pred Seats | Actual Votes | Actual Seats | Vote Error | Seat Error
SNP   | 1.7% |  6 | 4.1% | 52 | 4.9% | 56 | +0.8% | +4
Plaid | 0.6% |  3 | 0.6% |  3 | 0.6% |  3 |  0.0% |  0
MIN   | 3.4% | 18 | 1.5% | 18 | 0.8% | 18 | -0.7% |  0
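The two error columns are simple differences, actual minus predicted. As a minimal arithmetic sketch, using the SNP row of the table above:

```python
# Vote and seat errors, computed as actual minus predicted (SNP row).
pred_votes, actual_votes = 4.1, 4.9   # national vote share, %
pred_seats, actual_seats = 52, 56

vote_error = actual_votes - pred_votes   # +0.8 percentage points
seat_error = actual_seats - pred_seats   # +4 seats
```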

The Conservative support was significantly underestimated, which caused the number of Conservative seats to be underestimated as well. Although the Labour support figure was quite accurate, the error on the Conservatives meant that the predicted number of Labour seats was too high. Liberal Democrat support was also somewhat overestimated, but the prediction was correct in saying that they would lose the vast majority of their seats.

The predictions for the other parties were relatively good, getting Plaid Cymru, UKIP and the Greens exactly right (including the exact seats held and gained), and being fairly accurate in predicting the landslide win for the SNP in Scotland.

In total sixty-four seats were mis-predicted. This would be a moderately good result if the general trend had been right, but the overall prediction quality was poor at this election. This was mostly due to polling error in the pre-election opinion polls.

We will now look at these and other issues in more detail. The particular topics studied are:

  1. Opinion poll error
  2. Model error
  3. Idiosyncrasies

1. Opinion poll error

To make our prediction, we used an average of the final campaign polls taken by recognised polling organisations (members of the British Polling Council), along with implied support figures taken from the spread betting markets (Sporting Index).

Pollster | Sample dates | Sample size | CON% | LAB% | LIB% | UKIP% | Green% | Error%
TNS BMRB | 30 Apr 2015 - 04 May 2015 | 1,185 | 33 | 32 | 8 | 14 | 6 | 9.0
The Sun/YouGov | 04 May 2015 - 05 May 2015 | 2,148 | 34 | 34 | 9 | 12 | 5 | 9.6
Opinium | 04 May 2015 - 05 May 2015 | 2,960 | 35 | 34 | 8 | 12 | 6 | 8.8
The Guardian/ICM | 03 May 2015 - 06 May 2015 | 2,023 | 35 | 35 | 9 | 11 | 4 | 9.6
Daily Mirror/Survation | 04 May 2015 - 06 May 2015 | 4,088 | 31 | 31 | 10 | 16 | 5 | 13.2
Daily Mail; ITV News/ComRes | 05 May 2015 - 06 May 2015 | 1,007 | 35 | 34 | 9 | 12 | 4 | 7.6
Evening Standard/Ipsos-MORI | 05 May 2015 - 06 May 2015 | 1,186 | 36 | 35 | 8 | 11 | 5 | 8.8
Populus | 05 May 2015 - 07 May 2015 | 3,917 | 34 | 34 | 9 | 13 | 5 | 8.8
Poll Average | 30 Apr 2015 - 07 May 2015 | 18,514 | 33.7 | 33.4 | 8.9 | 13.0 | 5.1 | 8.5
SportingIndex | 06 May 2015 | 18,500 | 33.3 | 29.0 | 13.0 | 12.9 | 5.2 | 13.0
AVERAGE | 30 Apr 2015 - 07 May 2015 | 37,014 | 33.5 | 31.2 | 11.0 | 13.0 | 5.1 | 8.6
Actual Result | 07 May 2015 | | 37.8 | 31.2 | | | |
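The Poll Average row and the Error% column can both be reproduced with short calculations. The sketch below assumes the average is weighted by sample size, and that Error% is the sum of absolute differences from the actual Great Britain result; the Lib Dem, UKIP and Green actual shares used here (8.1%, 12.9%, 3.8%) are assumed figures, as they do not appear in the table above.

```python
# Sketch: reproduce the Poll Average row (assumed: weighted by sample size)
# and the Error% column (assumed: sum of absolute differences from the
# actual GB result). Lib/UKIP/Green actual shares are assumed figures.

polls = [
    # (pollster, sample size, CON, LAB, LIB, UKIP, Green)
    ("TNS BMRB",   1185, 33, 32,  8, 14, 6),
    ("YouGov",     2148, 34, 34,  9, 12, 5),
    ("Opinium",    2960, 35, 34,  8, 12, 6),
    ("ICM",        2023, 35, 35,  9, 11, 4),
    ("Survation",  4088, 31, 31, 10, 16, 5),
    ("ComRes",     1007, 35, 34,  9, 12, 4),
    ("Ipsos-MORI", 1186, 36, 35,  8, 11, 5),
    ("Populus",    3917, 34, 34,  9, 13, 5),
]
actual = (37.8, 31.2, 8.1, 12.9, 3.8)  # GB shares; last three assumed

total = sum(p[1] for p in polls)
average = [sum(p[1] * p[2 + i] for p in polls) / total for i in range(5)]
# average rounds to (33.7, 33.4, 8.9, 13.0, 5.1), matching the table

def error_score(shares):
    """Sum of absolute errors versus the actual result, in points."""
    return sum(abs(s - a) for s, a in zip(shares, actual))

# ComRes, the least inaccurate pollster, scores about 7.6
```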

None of the pollsters had a very good election. Every one had Conservative and Labour within one per cent of each other, compared with the actual gap of more than six per cent. At the time of writing, there is an investigation under way by the British Polling Council (the pollsters' trade body) into the polling errors. Amongst the pollsters, ComRes was the least inaccurate by a short head.

Electoral Calculus also used spread betting market prices from Sporting Index, because these had been successful in 2010. At this election, their performance was mixed. They were more accurate about the Conservative lead over Labour (seeing 4.3% instead of the actual 6.6%), but they overestimated the Liberal Democrats. Their implied seat forecast was: Con 289, Lab 265, Lib 26, UKIP 3, Green 1, SNP 46. Their implied support figures were overall worse than the pollsters' average, mostly because of their mistaken belief in Lib Dem strength (or incumbency). Including them in the overall average made little difference to vote share accuracy, but it helped in terms of seats.

2. Model error

Given the large polling errors, it is hard to tell how well the actual model performed. The model, which converts national support figures into seats, is only as good as its inputs. If the input polling data is bad, then the model output will be bad. This effect is sometimes described as "garbage in, garbage out".
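As a toy illustration of the "national support in, seats out" idea (a minimal uniform-swing sketch, not the actual Electoral Calculus model, and with the seat's 2010 shares invented for the example):

```python
# Toy uniform-swing illustration: add each party's national change since the
# last election to its share in each seat, then award the seat to the new
# leader. This is NOT the actual Electoral Calculus model, just a sketch.

def predict_winner(seat_2010, nat_2010, nat_now):
    """Apply uniform national swing to one seat's last-election shares."""
    swung = {party: share + (nat_now[party] - nat_2010[party])
             for party, share in seat_2010.items()}
    return max(swung, key=swung.get)

seat = {"CON": 38.0, "LAB": 36.0}       # hypothetical marginal seat
nat_2010 = {"CON": 37.0, "LAB": 29.7}   # approximate GB shares, 2010
nat_2015 = {"CON": 37.8, "LAB": 31.2}   # GB shares, 2015

predict_winner(seat, nat_2010, nat_2015)  # CON: 38.8% beats 37.5%
```

Bad inputs propagate directly: feed the function understated Conservative national shares and it will call marginal seats like this one for Labour.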

But we can adjust for this by feeding the actual 2015 support levels, rather than the polling figures, into the model. Then we get the following result:

Party      | CON | LAB | LIB | UKIP | Green | SNP | Plaid | MIN
Seat Error | +9  | -8  | -2  | 0    | 0     | +1  | 0     | 0

This is a much more accurate result, which helps confirm the fact that the prediction was wrong primarily because the pre-election polls were wrong. The prediction is still not quite exact. The Conservatives are a little low, and Labour are a little high, so that the Conservatives are shown just short of a majority, rather than just over. But it's not too bad, and the other parties are also predicted relatively well.

In terms of individual seats, only 36 seats are wrongly predicted, which is a good result. This compares well with 2010, when the equivalent figure was 63 mis-predicted seats, and 2005, when it was 45.

Num | Seat Name | GE2010 | Prediction | GE2015 | County (Area) | Comment
1 | Glasgow North East | LAB-54 | LAB-10 | NAT-24 | Glasgow area (Scotland) | SNP strength
2 | Coatbridge, Chryston and Bellshill | LAB-50 | LAB-06 | NAT-23 | Glasgow area (Scotland) | SNP strength
3 | Kirkcaldy and Cowdenbeath | LAB-50 | LAB-05 | NAT-19 | Fife (Scotland) | SNP strength
4 | Bristol West | LIB-21 | LIB-02 | LAB-09 | Bristol area (South West) | SWest Lib weakness
5 | Gower | LAB-06 | LAB-09 | CON-00 | West Glamorgan (Wales) | Wales Con strength
6 | Vale of Clwyd | LAB-07 | LAB-08 | CON-01 | Clwyd (Wales) | Wales Con strength
7 | Plymouth Moor View | LAB-04 | LAB-05 | CON-02 | Devon (South West) | Con strength
8 | Telford | LAB-02 | LAB-04 | CON-02 | Shropshire (West Midlands) | Con strength
9 | Derby North | LAB-01 | LAB-04 | CON-00 | Derbyshire (East Midlands) | Con strength
10 | Morley and Outwood | LAB-02 | LAB-04 | CON-01 | West Yorkshire (Yorks/Humber) | Con strength
11 | Southampton Itchen | LAB-00 | LAB-02 | CON-05 | Hampshire (South East) | Con strength
12 | Bolton West | LAB-00 | LAB-02 | CON-02 | Western Manchester (North West) | Con strength
13 | Cardiff North | CON-00 | LAB-01 | CON-04 | South Glamorgan (Wales) | Wales Con strength
14 | Warwickshire North | CON-00 | LAB-01 | CON-06 | Warwickshire (East Midlands) | Con strength
15 | Sherwood | CON-00 | LAB-01 | CON-09 | Nottinghamshire (East Midlands) | Con strength
16 | Stockton South | CON-01 | LAB-01 | CON-10 | Teesside (The North) | Con strength
17 | Broxtowe | CON-01 | LAB-01 | CON-08 | Nottinghamshire (East Midlands) | Con strength
18 | Hendon | CON-00 | LAB-01 | CON-08 | Barnet (London) |
19 | Thurrock | CON-00 | LAB-01 | CON-01 | Essex (Anglia) | Marginal
20 | Amber Valley | CON-01 | LAB-00 | CON-09 | Derbyshire (East Midlands) | Con strength
21 | Wolverhampton South West | CON-02 | CON-00 | LAB-02 | Black Country (West Midlands) | Marginal
22 | Dewsbury | CON-03 | CON-01 | LAB-03 | West Yorkshire (Yorks/Humber) | Unsplit anti-Con vote
23 | Brentford and Isleworth | CON-04 | CON-01 | LAB-01 | Hounslow (London) | Marginal
24 | Hove | CON-04 | CON-02 | LAB-02 | East Sussex (South East) | Anti-Con tactical
25 | Enfield North | CON-04 | CON-03 | LAB-02 | Enfield (London) | London Lab strength
26 | Edinburgh South | LAB-01 | NAT-24 | LAB-05 | Edinburgh area (Scotland) | Anti-SNP tactical
27 | Chester, City of | CON-06 | CON-04 | LAB-00 | Cheshire (West Midlands) | Marginal
28 | Wirral West | CON-06 | CON-04 | LAB-01 | Merseyside (North West) | NWest Con weakness
29 | Ealing Central and Acton | CON-08 | CON-06 | LAB-01 | Ealing (London) | London Lab strength
30 | Ilford North | CON-11 | CON-11 | LAB-01 | Redbridge (London) | London Lab strength
31 | Bath | LIB-25 | LIB-08 | CON-08 | Bristol area (South West) | SWest Lib weakness
32 | Dumfriesshire, Clydesdale and Tweeddale | CON-09 | NAT-11 | CON-02 | Dumfries and Galloway (Scotland) | Anti-SNP tactical
33 | Yeovil | LIB-23 | LIB-05 | CON-09 | Somerset (South West) | SWest Lib weakness
34 | Twickenham | LIB-20 | LIB-00 | CON-03 | Richmond Upon Thames (London) | Marginal
35 | Southport | LIB-14 | CON-06 | LIB-03 | Merseyside (North West) | NWest Con weakness
36 | Carshalton and Wallington | LIB-11 | CON-09 | LIB-03 | Sutton (London) |

[This table uses the Slide-O-Meter notation "CON-03" to mean a Conservative majority of 3%. Majorities are rounded to the nearest integer percentage, so "CON-00" means a majority of less than 0.5%.]
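That rounding convention can be sketched in a couple of lines (the function name is illustrative, not part of the site):

```python
# Format a majority as Slide-O-Meter notation, rounding to the nearest
# integer percentage, so anything under 0.5% displays as "-00".
# Hypothetical helper, for illustration only.

def slide_o_meter(party: str, majority_pct: float) -> str:
    return f"{party}-{int(majority_pct + 0.5):02d}"

slide_o_meter("CON", 3.0)   # "CON-03"
slide_o_meter("CON", 0.3)   # "CON-00"
```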

There are a number of stories here. In outline they are:

  1. SNP strength in Scotland, partly offset by anti-SNP tactical voting
  2. Conservative strength in Wales and much of England, with some weakness in the North West
  3. Liberal Democrat weakness in the South West
  4. Labour strength in London, helped by anti-Conservative tactical voting
  5. A handful of very marginal seats which could have gone either way

Summary and Conclusions

The main points that this analysis has shown are:

  1. The overall performance of the prediction was poor, and it was the worst prediction since 1992.
  2. The error was mostly caused by inaccuracy in the pre-election opinion polls, which understated the Conservative lead over Labour.
  3. Fed with the actual vote shares instead of the polls, the model itself performed well, mis-predicting only 36 seats.