A special post by ComRes’s Adam Ludlow
The polls for the London mayoral election performed strongly across the board, perhaps bringing pollsters some respite following last year’s General Election. But as pollsters, we must be careful never to rest on our laurels: we should review our methods and seek to improve when things go well, as well as when bigger problems occur.
Indeed, at ComRes, the London result was of particular interest to us as it was the first outing in live action for our probabilistic electorate modelling.
For those unfamiliar with the ComRes Likely Electorate Model, it is the innovation we brought in following the General Election. It uses a demographics-based approach to simulate what the electorate that actually turns out to vote is likely to look like, based on historic turnout patterns across different social groups. Respondents are now assigned a probability of voting based on their demographic characteristics, rather than just their own self-reported likelihood to vote.
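To make the idea concrete, here is a minimal sketch of this kind of demographics-based turnout weighting. The turnout rates, the averaging rule, and the function names are all illustrative assumptions of ours, not ComRes's actual figures or method:

```python
# Illustrative turnout rates by age band and social grade.
# These numbers are invented for demonstration, not ComRes's data.
TURNOUT_BY_AGE = {"18-34": 0.45, "35-54": 0.65, "55+": 0.80}
TURNOUT_BY_GRADE = {"AB": 0.75, "C1C2": 0.65, "DE": 0.55}

def turnout_probability(age_band, social_grade):
    """Combine the two demographic rates into one voting probability.
    Here we simply average them; a real model would be fitted to data."""
    return (TURNOUT_BY_AGE[age_band] + TURNOUT_BY_GRADE[social_grade]) / 2

def weighted_vote_shares(respondents):
    """Weight each respondent's stated vote by their turnout probability,
    rather than filtering on self-reported likelihood to vote."""
    totals = {}
    weight_sum = 0.0
    for r in respondents:
        w = turnout_probability(r["age"], r["grade"])
        totals[r["vote"]] = totals.get(r["vote"], 0.0) + w
        weight_sum += w
    return {party: round(100 * t / weight_sum, 1) for party, t in totals.items()}

sample = [
    {"age": "18-34", "grade": "DE",   "vote": "Labour"},
    {"age": "55+",   "grade": "AB",   "vote": "Conservative"},
    {"age": "35-54", "grade": "C1C2", "vote": "Labour"},
]
print(weighted_vote_shares(sample))
```

Because the older, more affluent respondent carries the largest weight, the Conservative share here comes out above the raw one-in-three of the unweighted sample, which is exactly the mechanism described above.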
As a number of pollwatchers have commented, this has tended to mean our polls show slightly better results for the Conservatives than other polling firms’ do. This is primarily because the model boosts the importance of older and more affluent voters, who the data show are by far the most likely to vote and (at the current time) lean heavily towards the Conservatives.
On the face of the evidence from London though, these polling reforms appear to have worked. While all polling companies got the picture at the two party run-off right, there were a range of outcomes predicted for the first round where all parties were involved. Using the Likely Electorate Model, the final ComRes poll got all top five parties to within a point of their actual vote share on both rounds of voting – the only poll to do this.
It was also the only poll not to overstate Labour’s lead over the Conservatives on the first past the post share of the vote (nine points) – the exact problem the polls faced at the General Election.
Of course, there could be a range of explanations for getting the winning margin correct, but reviewing the findings suggests the electorate modelling made a key difference. Rerunning our final London mayoral poll with the pre-GE2015 self-reported likelihood-to-vote filter shows that the poll would have overstated the Labour lead on both the first and second rounds. Rather than correctly showing a nine point lead on the first round, it would have overstated Sadiq on 48% and understated Zac on 33%. On the second round, it would have shown 60% to 40% – a result outside the margin of error. Positively though, the methodological changes ComRes has made since the General Election successfully corrected for the error and accurately reflected the result. There is good news for the industry too: the methods used by other pollsters worked in London as well.
So what does this mean for the upcoming referendum? That we can sit back and wait for the results to come in just in line with our polling? Unfortunately, polling is rarely so easy.
Firstly, while the Likely Electorate Model helped us produce the right result in London, this was using an online survey methodology (as it is more appropriate for contacting London’s young and transient population). We are yet to see how the model interplays with telephone polling, which we are using for polling the EU referendum (for reasons explained here).
Secondly, our polling for the London election used a different output from the Likely Electorate Model than the one used for our national polling. While the computations are the same, they are applied to different data: the London model makes projections based on historic data from mayoral elections, while the national model uses data from General Elections.
As we explored in depth in the run-up to Sadiq Khan’s election victory, the relationship between demographics and likelihood to vote is far weaker at mayoral elections in London than it is nationally at General Elections. In other words, at General Elections, there is a big difference between how likely young people and older people are to vote. At London elections, the difference is far smaller.
Because of this weaker relationship between age and turnout in London, the effect of the modelling was weaker there (demographics explained 38% of the variance in someone’s likelihood to vote) than in our national polling (where they explain nearly 70% of the variance). Another way of putting this is that whereas we can be around 70% sure of someone’s likelihood to vote at a General Election based on their demographic characteristics, we can only be 38% sure of it at London mayoral elections – other, non-demographic “out of model” factors determine whether they vote or not. The weight given to demographic factors was therefore lower in London than in national polling.
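One simple way to picture how explanatory power could scale the weight given to demographics is to blend the demographic turnout estimate with a respondent's self-reported likelihood in proportion to the variance explained. This is a toy illustration under our own assumptions, not a description of ComRes's actual weighting scheme:

```python
def blended_turnout(demographic_p, self_reported_p, r_squared):
    """Lean on the demographic estimate to the extent the model explains
    turnout variance (r_squared), and on self-report for the remainder.
    All inputs are probabilities/proportions in [0, 1]."""
    return r_squared * demographic_p + (1 - r_squared) * self_reported_p

# A hypothetical older respondent: demographics say 0.80, self-report 0.50.
# Nationally, demographics explain ~70% of the variance...
national = blended_turnout(0.80, 0.50, 0.70)  # 0.71 – pulled towards demographics
# ...but only ~38% at London mayoral elections.
london = blended_turnout(0.80, 0.50, 0.38)    # 0.614 – closer to self-report
```

The same respondent's vote therefore counts for less of a demographic boost in a London model than in a national one, which is the effect described above.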
Another effect of there being almost no relationship between age and likelihood to vote in London was that the model was particularly reliant on social grade as a predictor of turnout. Its strong performance is therefore somewhat reassuring in the lead-up to the EU referendum vote.
This is because, despite much talk of turnout helping the anti-EU side thanks to older voters leaning towards Leave, when we applied our Likely Electorate Model to our last referendum poll it actually increased the Remain lead: the significance of AB social grades (who lean heavily towards Remain) slightly outweighed that of older voters. It is early days, but the success of the model in London suggests that the inclusion of affluence in the model – and the importance attributed to it – is justified.
Of course, every election presents its own challenges, and the EU referendum will do so more than most. But for the time being, the electorate modelling has cleared its first hurdle, and gives us some indications about how to approach the second.