Track Record: 2017 Errors

This page first posted 22 June 2017, revised 24 June 2017

The headline prediction for the June 2017 election was not accurate. The final prediction was for a Conservative majority government with a majority of 66. The actual result was that the Conservatives fell eight seats short of a majority.
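The seat arithmetic behind those figures can be checked with a short calculation. This is a simple sketch which assumes all 650 Commons seats count towards the winning line (in practice the Speaker and abstentionist MPs shift the effective threshold slightly):

```python
# Sketch of Commons seat arithmetic. Assumes all 650 MPs take their
# seats; the Speaker and Sinn Fein abstentions are ignored here.
TOTAL_SEATS = 650

def majority(seats, total=TOTAL_SEATS):
    """Overall majority: own seats minus all other parties' seats."""
    return 2 * seats - total

# Predicted: 358 Conservative seats gives a majority of 66.
print(majority(358))          # 66

# Actual: 318 seats against a winning line of 326 is 8 seats short.
winning_line = TOTAL_SEATS // 2 + 1
print(winning_line - 318)     # 8
```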

In numerical terms, the prediction and the outcome were:

Party   2015 Votes   2015 Seats   Pred Votes   Pred Seats   Actual Votes   Actual Seats   Vote Error   Seat Error
CON     37.8%        331          44.3%        358          43.5%          318            −0.8%        −40
LAB     31.2%        232          35.3%        218          41.0%          262            +5.7%        +44
LIB     8.1%         8            7.9%         3            7.6%           12             −0.3%        +9
UKIP    12.9%        1            4.3%         0            1.9%           0              −2.4%        0
Green   3.8%         1            2.0%         1            1.7%           1              −0.3%        0
SNP     4.9%         56           4.1%         49           3.1%           35             −1.0%        −14
Plaid   0.6%         3            0.6%         3            0.5%           4              −0.1%        +1
MIN     0.8%         18           1.6%         18           0.7%           18             −0.9%        0

Labour's support was significantly underestimated, which caused the number of Labour seats also to be underestimated. Although the Conservative support figure was quite accurate, the error in the Labour support caused the predicted number of Conservative seats to be too high. The Liberal Democrats were also underestimated, but the prediction was accurate in saying that they would lose votes overall.

In Scotland, the prediction was correct that the SNP had passed its peak and would lose seats, mostly to the Conservatives. But the prediction did not foresee the full extent of the SNP decline. The predictions for the smaller parties were relatively good, with Plaid Cymru, UKIP and the Greens all predicted fairly accurately.

In total sixty-nine seats were mis-predicted. This would be an acceptable result if the general trend had been right, but the overall prediction quality was poor at this election. This was mostly due to polling error in the pre-election opinion polls.

We will now look at these and other issues in more detail. The particular topics studied are:

  1. Confidence Bounds
  2. Opinion poll error
  3. Error Quantification
  4. Seat-by-seat errors

1. Confidence Bounds

The final prediction involved not just a central forecast, but also confidence bounds within which the result should lie with 90pc confidence. The confidence bounds given were:

Party    Low Seats   Central   High Seats   Actual Seats
CON      314         358       418          318
LAB      166         218       269          262
LIB      0           3         8            12
UKIP     0           0         0            0
Green    0           1         1            1
SNP      34          49        55           35
PC       0           3         3            4
N Ire    18          18        18           18

The actual result was not close to the central forecast, but it was (mostly) inside the confidence bounds. The bounds held for all the parties except the Liberal Democrats, who won four more seats than their upper bound despite losing vote share, and Plaid Cymru, who won one seat more than theirs.
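That check can be reproduced mechanically. A minimal sketch, using the seat numbers from the table above:

```python
# 90pc confidence bounds (low, high) and actual seats, from the table above.
bounds = {
    "CON":   (314, 418, 318),
    "LAB":   (166, 269, 262),
    "LIB":   (0,   8,   12),
    "UKIP":  (0,   0,   0),
    "Green": (0,   1,   1),
    "SNP":   (34,  55,  35),
    "PC":    (0,   3,   4),
    "N Ire": (18,  18,  18),
}

# Parties whose actual result fell outside their stated bounds.
breaches = [party for party, (lo, hi, actual) in bounds.items()
            if not lo <= actual <= hi]
print(breaches)   # ['LIB', 'PC']
```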

[Figure: Election Battleground, 7 June 2017]

The confidence bounds were also shown in the final pre-election battleground graphic. The "Low-pollsters" were closer to the right answer than the "High-pollsters" and the "confidence circle" around them does include a sliver of "Conservative minority" territory. It was around this location that the final result materialised.

In summary, the confidence bounds worked fairly effectively and helped communicate in advance the relatively high degree of uncertainty about the result.

2. Opinion poll error

To make our prediction, we used an average of the final campaign polls taken by recognised polling organisations (members of the British Polling Council), plus a manual adjustment to correct for historic poll bias. This adjustment was defined as half the average error of polls over the last twenty years and was calculated as a swing of 1.1pc from Labour to Conservative.

Pollster                      Sample dates               Sample size   CON%   LAB%   LIB%   UKIP%   Green%   Error%
Opinium                       04 Jun 2017 - 06 Jun 2017  3,002         43     36     8      5       2        9.3
Kantar Public                 01 Jun 2017 - 07 Jun 2017  2,159         43     38     7      4       2        6.5
Panelbase                     02 Jun 2017 - 07 Jun 2017  3,018         44     36     7      5       2        9.5
ComRes/Independent            05 Jun 2017 - 07 Jun 2017  2,051         44     34     9      5       2        12.3
YouGov/The Times              05 Jun 2017 - 07 Jun 2017  2,130         42     35     10     5       2        13.3
ICM/The Guardian              06 Jun 2017 - 07 Jun 2017  1,532         46     34     7      5       2        13.5
Ipsos-MORI/Evening Standard   06 Jun 2017 - 07 Jun 2017  1,291         44     36     7      4       2        8.5
Survation                     06 Jun 2017 - 07 Jun 2017  2,798         41     40     8      2       2        4.3
Poll Average                  04 Jun 2017 - 07 Jun 2017  17,981        43.2   36.4   7.9    4.3     2.0      7.9
Poll Bias correction                                                   +1.1   −1.1
Total Average                 04 Jun 2017 - 07 Jun 2017  17,981        44.3   35.3   7.9    4.3     2.0      9.5
Actual Result                 08 Jun 2017                              43.5   41.0   7.6    1.9     1.7      0.0
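The bias correction step and its effect on the lead can be sketched directly from the averaged figures in the table. This uses the published "Poll Average" row rather than re-averaging the individual polls (the raw poll figures are rounded to whole percentages, so re-averaging them reproduces the table only approximately):

```python
# Figures from the table above (percent).
poll_con, poll_lab = 43.2, 36.4        # poll average
actual_con, actual_lab = 43.5, 41.0    # actual result
swing = 1.1                            # historic poll-bias correction

# The correction is applied as a 1.1pc swing from Labour to Conservative.
adj_con, adj_lab = poll_con + swing, poll_lab - swing
print(round(adj_con, 1), round(adj_lab, 1))   # 44.3 35.3

# Error in the Conservative lead, before and after the adjustment.
actual_lead = actual_con - actual_lab                # 2.5
raw_error = (poll_con - poll_lab) - actual_lead      # ~4.3
adj_error = (adj_con - adj_lab) - actual_lead        # ~6.5
print(round(raw_error, 1), round(adj_error, 1))
```

The adjustment widened the predicted lead from 6.8pc to 9.0pc, moving it further from the actual lead of 2.5pc, which is why the bias correction made the prediction worse this time.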

The pollsters showed a variety of values, some closer to the true result than others. Survation did particularly well with a poll showing a Conservative lead of 1pc, as did Kantar Public with a lead of 5pc. The other pollsters showed leads between 7pc and 12pc. The actual Conservative lead was 2.5pc, compared with an average across all the pollsters of 6.8pc. The Electoral Calculus adjustment for poll bias made our calculations worse, since the poll error was in favour of the Conservatives for the first time in many years.

The overall poll error in the Conservative lead was over 4pc. This is better than 2015 when the equivalent error was more than 6pc, and better also than 2001 and the bad error in 1992. But it was worse than in 1997, 2005 and 2010. So it was around the middle of the distribution of polling errors. The novelty this year was that the error overstated the Conservatives rather than, as is traditional, understating them.

In previous years, Electoral Calculus has also used spread betting market prices from Sporting Index. This year these were not used. On the eve of polling day, the seats market from Sporting Index showed the Conservatives with 363 seats and Labour with 205. This was less accurate even than the polls, so no accuracy was lost by the decision not to use it.

3. Error Quantification

Although poll error was the largest error, there were others. The main other errors were the historic poll bias correction, the problematic EU Referendum model, and the absence of tactical voting from the model.

The table shows the effect of the errors as they are removed one by one:

Scenario                                      CON   LAB   LIB   SNP   Plaid   Error Count
Electoral Calculus final prediction           359   218   3     48    3       108
Without bias correction                       352   226   3     48    2       94
Without poll error, bias                      336   251   4     39    1       44
Without EU Ref Model, poll error, bias        327   257   6     39    3       26
With Tactical Voting, without EU/poll/bias    317   263   15    35    2       8
Actual Election Result                        318   262   12    35    4       0

Here "Error Count" is defined as the sum over the parties of the absolute difference between predicted and actual seats won. This is a larger measure than the number of seats mis-predicted.
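This definition can be sketched directly from the final-prediction figures in the table above (parties predicted exactly right, such as UKIP and the Northern Ireland seats, contribute zero):

```python
# Predicted vs actual seats for the final prediction, from the tables above.
predicted = {"CON": 359, "LAB": 218, "LIB": 3, "UKIP": 0,
             "Green": 1, "SNP": 48, "Plaid": 3, "N Ire": 18}
actual    = {"CON": 318, "LAB": 262, "LIB": 12, "UKIP": 0,
             "Green": 1, "SNP": 35, "Plaid": 4, "N Ire": 18}

# Error count: sum over the parties of the absolute seat difference.
error_count = sum(abs(predicted[p] - actual[p]) for p in predicted)
print(error_count)   # 108
```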

We see that the largest reduction in seat error comes from the poll error which contributes a score of 50 out of the total error of 108. The bias correction was worth 14, the problematic EU Referendum model was worth 18, and the absence of tactical voting was worth a further 18. The residual model error produced an error score of 8, equivalent to a one seat error in predicting the Conservatives and Labour. That residual error is very acceptable and is well within the normal expected bounds for model error.

4. Seat-by-seat errors

Even given all the corrections above, not every seat is predicted correctly. If we use the corrected parameters (removing the bias correction, fixing the pollster errors, removing the EU model and enabling tactical voting), then we get the following result:

                     CON   LAB   LIB   UKIP   Green   SNP   Plaid   N.Ire
Predicted            317   263   15    0      0       35    2       18
Predicted − Actual   −1    +1    +3    0      −1      0     −2      0

This is a much more accurate result overall, but the prediction is still inexact. In terms of individual seats, fifty are wrongly predicted, which is only a fair result. In 2015 only 36 seats were mis-predicted, and in 2010 there were 63. This suggests that the 2017 election had relatively high local variation: many seats deviated from the national trend and were either more Conservative or less Conservative than the model suggested.

Num   Seat Name                                GE2015     Prediction   GE2017     Area            Swing   Comment
1     Ipswich                                  CON-08     CON-02       LAB-02     Anglia          4       Marginal
2     Thurrock                                 CON-01     LAB-03       CON-01     Anglia          4       Marginal
3     Waveney                                  CON-05     LAB-02       CON-17     Anglia          19
4     High Peak                                CON-10     CON-05       LAB-04     East Midlands   9
5     Derbyshire North East                    LAB-04     LAB-09       CON-06     East Midlands   15
6     Mansfield                                LAB-11     LAB-17       CON-02     East Midlands   19
7     Battersea                                CON-16     CON-13       LAB-04     London          17      Anti-CON London
8     Enfield Southgate                        CON-10     CON-08       LAB-09     London          17      Anti-CON London
9     Kensington                               CON-21     CON-17       LAB-00     London          17      Anti-CON London
10    Middlesbrough South and Cleveland East   LAB-05     LAB-07       CON-02     North East      9
11    Stockton South                           CON-10     CON-07       LAB-02     North East      9
12    Bolton West                              CON-02     LAB-01       CON-02     North West      3       Marginal
13    Crewe and Nantwich                       CON-07     CON-05       LAB-00     North West      5       Marginal
14    Warrington South                         CON-05     CON-01       LAB-04     North West      5       Marginal
15    Southport                                LIB-03     LAB-05       CON-06     North West      11
16    Copeland                                 LAB-06     LAB-10       CON-04     North West      14      By-election memory
17    Fife North East                          NAT-10     LIB-00       NAT-00     Scotland        0       Marginal
18    Gordon                                   NAT-15     LIB-00       CON-05     Scotland        5       Marginal
19    Kirkcaldy and Cowdenbeath                NAT-19     NAT-01       LAB-01     Scotland        2       Marginal
20    Midlothian                               NAT-20     NAT-02       LAB-02     Scotland        4       Marginal
21    Ayr Carrick and Cumnock                  NAT-22     NAT-03       CON-06     Scotland        9       Scottish volatility
22    Coatbridge, Chryston and Bellshill       NAT-23     NAT-06       LAB-04     Scotland        10      Scottish volatility
23    Edinburgh North and Leith                NAT-10     LAB-04       NAT-03     Scotland        7       Scottish volatility
24    Edinburgh South West                     NAT-16     CON-04       NAT-02     Scotland        6       Scottish volatility
25    Glasgow North East                       NAT-24     NAT-08       LAB-01     Scotland        9       Scottish volatility
26    Paisley and Renfrewshire South           NAT-12     LAB-05       NAT-06     Scotland        11      Scottish volatility
27    Perth and North Perthshire               NAT-18     CON-10       NAT-00     Scotland        10      Scottish volatility
28    Banff and Buchan                         NAT-31     NAT-07       CON-09     Scotland        16      Scottish volatility
29    Renfrewshire East                        NAT-07     LAB-03       CON-09     Scotland        12      Scottish volatility
30    Ross Skye and Lochaber                   NAT-12     LIB-02       NAT-15     Scotland        17      Scottish volatility
31    Canterbury                               CON-18     CON-05       LAB-00     South East      5       Marginal
32    Hastings and Rye                         CON-09     LAB-00       CON-01     South East      1       Marginal
33    Southampton Itchen                       CON-05     LAB-05       CON-00     South East      5       Marginal
34    Reading East                             CON-13     CON-04       LAB-07     South East      11
35    Brighton Pavilion                        Green-15   LAB-14       Green-25   South East      39      Incumbency
36    Lewes                                    CON-02     LIB-03       CON-10     South East      13
37    Oxford West and Abingdon                 CON-17     CON-11       LIB-01     South East      12
38    Stroud                                   CON-08     CON-01       LAB-01     South West      2       Marginal
39    Bristol North West                       CON-10     CON-01       LAB-09     South West      10
40    Bath                                     CON-08     CON-05       LIB-11     South West      16
41    Plymouth Moor View                       CON-02     LAB-09       CON-11     South West      20
42    Carmarthen East and Dinefwr              NAT-14     LAB-00       NAT-10     Wales           10      West-Wales Plaid strength
43    Ceredigion                               LIB-08     LIB-16       NAT-00     Wales           16      West-Wales Plaid strength
44    Telford                                  CON-02     LAB-01       CON-02     West Midlands   3       Marginal
45    Warwick and Leamington                   CON-13     CON-09       LAB-02     West Midlands   11
46    Stoke-on-Trent South                     LAB-06     LAB-10       CON-02     West Midlands   12
47    Walsall North                            LAB-05     LAB-10       CON-07     West Midlands   17
48    Colne Valley                             CON-09     CON-03       LAB-02     Yorks/Humber    5       Marginal
49    Keighley                                 CON-06     CON-01       LAB-00     Yorks/Humber    1       Marginal
50    Morley and Outwood                       CON-01     LAB-06       CON-04     Yorks/Humber    10      Ed Balls memory

[Note: this table uses the Slide-O-Meter notation "CON-03" to mean a Conservative majority of 3%. Majorities are rounded to the nearest integer percentage, so "CON-00" means a majority of less than 0.5%.]
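As an illustration, the rounding convention can be expressed as a small formatting helper (a hypothetical function, not part of the Electoral Calculus site):

```python
def slide_o_meter(party: str, majority_pct: float) -> str:
    """Format a percentage majority in Slide-O-Meter notation,
    rounding to the nearest integer percentage point.
    Hypothetical helper for illustration only."""
    return f"{party}-{round(majority_pct):02d}"

print(slide_o_meter("CON", 3.2))   # CON-03
print(slide_o_meter("CON", 0.4))   # CON-00  (majority under 0.5%)
```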

There are a number of stories here. In outline they are: close marginal seats falling the "wrong" way; a strong anti-Conservative swing in London; high volatility in Scotland; Plaid Cymru strength in west Wales; and local effects such as incumbency and by-election memories.

Summary and Conclusions

The main points that this analysis has shown are:

  1. The overall performance of the prediction was poor, and the headline call of a Conservative majority was wrong.
  2. Most of the error came from the pre-election opinion polls, which significantly understated Labour support.
  3. The historic poll bias correction made the prediction worse, since the polls overstated the Conservatives for the first time in many years.
  4. The confidence bounds worked fairly effectively and communicated in advance the relatively high degree of uncertainty about the result.
  5. With poll error and the other identified problems removed, the residual model error was small and well within normal expected bounds.