

The GE2015 polling fail put down to “unrepresentative samples”

Thursday, January 14th, 2016


Too many LAB supporters interviewed – not enough Tories

A new report published today by NatCen Social Research, authored by the leading psephologist Prof John Curtice, suggests that the polls called the General Election wrong primarily because the samples of people they polled were not adequately representative of the country as a whole.

Rather than other explanations, such as a late swing to the Conservative Party, Labour abstentions, or so-called “shy Tories” not telling pollsters their true voting intentions, the report suggests that the polls’ difficulties arose primarily because they interviewed too many Labour supporters and not enough Conservatives.

Even when the polls went back to their respondents after the election and asked how they had voted, they still largely put the Conservatives neck and neck with Labour. In contrast, today’s report reveals that the 4,328 respondents to NatCen’s 2015 British Social Attitudes (BSA) survey put the Conservatives 6.1 points ahead of Labour, very close to the actual election result of a 6.6 point lead.

BSA was conducted very differently from the polls. It selected its respondents using random sampling, the approach recommended by statistical theory and which is less at risk of producing an unrepresentative sample. BSA’s relative success at replicating the result of the election is in line with that of a similar random sample survey conducted since the election on behalf of the British Election Study.
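The difference between the two approaches can be illustrated with a toy simulation. All figures below are assumptions for illustration, not NatCen’s data: if one party’s supporters are systematically easier to reach, a quick-turnaround convenience sample will misstate the lead even though a simple random sample of the same size gets it roughly right.

```python
import random

random.seed(0)

# Hypothetical electorate with a true Con lead of 7 points
# (illustrative shares only, loosely echoing GE2015).
population = (["Con"] * 38 + ["Lab"] * 31 + ["Other"] * 31) * 10_000

def con_lead(sample):
    """Con lead over Lab in percentage points."""
    return 100 * (sample.count("Con") - sample.count("Lab")) / len(sample)

# Random sampling: every elector is equally likely to be interviewed.
random_sample = random.sample(population, 2_000)

# Convenience sampling: assume Lab supporters answer on the first call
# more often than Con supporters (assumed contact rates, for illustration).
contact_rate = {"Con": 0.5, "Lab": 0.8, "Other": 0.65}
convenience_sample = [v for v in random.sample(population, 4_000)
                      if random.random() < contact_rate[v]][:2_000]

print("True lead:               +7.0")
print(f"Random sample lead:      {con_lead(random_sample):+.1f}")
print(f"Convenience sample lead: {con_lead(convenience_sample):+.1f}")
```

With these assumed contact rates the convenience sample shows Labour ahead even though the electorate leans Conservative, which is the shape of the error the report describes.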

If the polls were wrong because Conservative voters were especially reluctant to declare their preference or Labour supporters unwilling to admit that they had abstained, then both these surveys should also have got the result wrong. Instead BSA and the Election Study show that those who voted Conservative and those who abstained are capable of being found by survey researchers, so long as the right approach is used.

Two sources of error

The report goes on to suggest there are two main reasons why the sample of respondents interviewed by BSA 2015 proved to be more representative than those obtained by the polls.

  1. More time and effort is needed to find Conservative voters. Polls are conducted over just two or three days, which means they are more likely to interview those who are contacted most easily, either over the internet or by phone.

The evidence from BSA suggests that those who are contacted most easily are less likely to be Conservative voters. The survey made repeated efforts over the course of four months to make contact with those who had been selected for interview. Among those who were contacted most easily – that is, they were interviewed the first time an interviewer called – Labour enjoyed a clear lead of no less than six points, a result not accounted for by the social profile of these respondents. In contrast, the Conservatives were eleven points ahead amongst those who were only interviewed after between three and six calls had been made.

  2. Identifying who is going to abstain is crucial. People who are interested in politics are both more likely to respond to polls and more likely to vote. This means the polls are at risk of underestimating crucial differences in the inclination of different groups of voters to turn out and vote.

Just 70% of those who took part in BSA 2015 said that they had voted, only slightly above the official turnout figure of 66%. More importantly the survey shows that those aged 18-24 were around 30% less likely to vote than those aged 65 or more. Most polls, however, anticipated a smaller age gap than this. At the same time, BSA confirms the evidence of other surveys that Labour gained ground amongst younger voters in 2015 while the Conservatives advanced amongst older people. Thus any tendency among polls to overestimate the turnout of younger voters meant that there was a particularly strong risk in 2015 that Labour support would be overestimated.
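A back-of-the-envelope calculation shows why the turnout assumption matters. The vote shares and group sizes below are invented for illustration; only the roughly 30-point age gap in turnout echoes the BSA finding above. Projecting the same vote shares with uniform turnout, as a poll that overstates youth turnout implicitly does, narrows the Conservative lead.

```python
# Illustrative only: assumed electorate shares and vote shares by age
# group (not real data), combined with a ~30-point age gap in turnout.
groups = {
    # name: (share of electorate, Lab %, Con %, turnout)
    "18-24": (0.12, 0.43, 0.27, 0.43),   # ~30 pts below the 65+ rate
    "25-64": (0.64, 0.31, 0.36, 0.66),
    "65+":   (0.24, 0.23, 0.47, 0.78),
}

def projected_lead(turnout_override=None):
    """Con lead over Lab among projected voters, in points."""
    lab = con = total = 0.0
    for share, lab_pct, con_pct, turnout in groups.values():
        t = turnout_override if turnout_override is not None else turnout
        total += share * t
        lab += share * t * lab_pct
        con += share * t * con_pct
    return 100 * (con - lab) / total

print(f"Differential turnout: Con lead {projected_lead():+.1f}")
print(f"Uniform turnout:      Con lead {projected_lead(0.66):+.1f}")
```

On these made-up numbers, assuming everyone votes at the same rate shaves nearly two points off the Conservative lead, which is exactly the direction of error the polls made in May.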

Report author Prof John Curtice, Senior Research Fellow at NatCen, said: “A key lesson of the difficulties faced by the polls in the 2015 general election is that surveys need to ask not only the right questions but also the right people. The polls evidently came up short in that respect in 2015.

“BSA’s relative success in replicating the election result has underlined how random sampling, time-consuming and expensive though it may be, is more likely to produce a sample of people who are representative of Britain as a whole. Using that approach is crucial for any survey, such as BSA, that aims to provide an accurate picture of what the public thinks about the key social and political issues facing Britain and thus ensure we have a proper understanding of the climate of public opinion.”

Kirby Swales, Director of the Survey Research Centre at NatCen, said: “This research shows how difficult it is to secure a sample that is truly representative of the public, without which it is not possible to generalise accurately about what the public thinks. When we are seeking to understand opinions or views on issues of particular importance, such as those of voters during a General Election campaign, a random sample survey like British Social Attitudes should be used wherever possible.”

  • Note: This post was based on a summary of the report issued by the centre – Mike Smithson





In Labour’s entire history just one general election winning leader was the choice of the membership

Saturday, January 2nd, 2016


Watering down the power of MPs has created big problems.

Looking at the current gulf between the Parliamentary Labour Party and its leader, it is perhaps worth reminding ourselves that party members had no say whatsoever in leadership elections until after the 1983 general election, when the party, under Michael Foot, went down to its biggest defeat. He had been chosen by MPs alone and, in the first MP ballot in 1980, chalked up just 31.3% of the vote against Denis Healey’s 42.3%.

By the time of Kinnock’s election three years later a new structure was in place which left the choice in the hands of an electoral college made up of MPs, the unions and the members at large. Kinnock won all three sections convincingly but went on to lead his party to defeat in 1987 and 1992. John Smith won the leadership selection that followed the latter defeat but died two years later. Tony Blair won decisively in 1994 and remains the only leader elected with membership involvement who has actually won general elections.

Since then the system has been refined, with the members getting an increasing say in who gets the top job. The only problem is that the PLP, in whose hands the decision rested until 1983, now has only the power of nomination. An MP’s vote counts the same as that of an ordinary member or a three-quidder. This is the heart of the party’s current problem.

In fact Corbyn only had the backing of 6.5% of the PLP which is the very opposite of a mandate.

Where it goes from here will, apart from the EURef, be one of the big stories of 2016.

Under the rules of every previous leadership election the lack of backing from MPs would have stopped Corbyn’s September victory.

Mike Smithson






Alastair Meeks compares his predictions for 2015 with what actually happened

Wednesday, December 30th, 2015

2015 – the past is a country of which I knew little

Every year I sit down at Christmas and try to work out what will happen in the following twelve months.  I do this not because I have any great confidence in my predictive power – as you’re about to see, that would be an illustration of the Dunning-Kruger effect – but because it is useful to have a record of what I thought might be going to happen and to see where I have gone wrong in the past to see if I can learn any lessons.  This year I have much to chew on.

This time last year I made the following predictions here.

  1. The next election will produce a hung Parliament

I was really confident about this prediction this time last year and I was really wrong.  I take absolutely no comfort from the fact that almost everyone else thought the same, including the main party leaders.  Despite the fact that I have long professed to be very sceptical about opinion polls, it turns out that I placed far more uncritical reliance on them than they merited, even though I intellectually understood their flaws.

Most of my other mistakes flowed from this unconscious reliance on the polls.  I have given more thought to this mistake than any other political point this year.  But I still have no clear view as to how to avoid similar mistakes in the future. 

  2. It will be neck and neck between the SNP and the Lib Dems as to which is the third party

Well this wasn’t right either.  The SNP got 56 seats and the Lib Dems got 8.  It was in fact neck and neck between the Lib Dems and the DUP as to which was fourth.

I did at least spot the SNP were going to break through, though a year ago even I was understating it (and at the time I was an SNP bull).  What I didn’t spot was that the Lib Dems were going to be obliterated.

  3. UKIP will get a good poll rating and few seats to show for it

I got something right then.

  4. The Greens will take precisely one seat: Brighton Pavilion

 Make that two things.

  5. The debates will take place, basically in the format put forward originally by David Cameron

The debates did take place, but in a much-changed format.  I’ll give myself half marks for that.

  6. The election campaign won’t change very much, but a lot of people will try to persuade you otherwise

I’m actually quite happy with this prediction too.  The polls didn’t really alter throughout the campaign.  And both main parties came to the same conclusion in their post-election post mortems – in the words of Cowley and Kavanagh in the British General Election of 2015: “Although the reports differed on details, they largely came to the same broad conclusion: Labour lost not because of things it did in the six weeks of the election campaign or because of events in the year or so before, but because it failed on fundamentals about the economy, spending and immigration.”

Although I misunderstood what was going on, I did at least understand that the election result was in reality determined well in advance.  The election campaign produced much heat (and a surreal excursion into pledge by menhir), but not much movement.

  7. The next government will be a Labour minority government

See point 1.  Enough said.  Cough.

  8. All change at the top (mostly)

This was a better prediction than it looked. Few general elections actually lead to a heavy turnover of party leaders.  What stood out before the last election was the weakness of all of the party leaders’ control over their own parties.  The exception was Nigel Farage, who puts to the sword anyone with the temerity to question his methods.  (Those methods resulted in UKIP getting just one MP who, like the Pirates of the Caribbean, regards UKIP’s code more as guidelines than actual rules.  It remains to be seen whether Nigel Farage’s tight grip is to UKIP’s ultimate advantage.)

This election despatched two party leaders immediately and a third (Nigel Farage) resigned and then unresigned, to much mockery from outside his party.  David Cameron’s surprise victory put him beyond challenge for the rest of 2015.  His internal enemies are for now vanquished.

Reflections

Making predictions is chastening.  It does, however, force you to confront your mistakes.  Last year I made a giant mistake (which in truth I had been pursuing as an idée fixe almost since the 2010 election).  I guess if I have a single lesson that I have drawn so far, it is not to get too attached to a single interpretation.  An idea can be good, well-reasoned, have backing evidence and still be wrong.  I shall be trying to keep my mind more open to alternatives in future.

Alastair Meeks




Three words pollsters would rather you didn’t mention: differential non-response

Monday, December 28th, 2015

A special column by ex-ICM boss & polling pioneer, Nick Sparrow

While trumpeting the fact that samples are representative of the adult population, researchers seldom, if ever, publish response rate data. The truth is that, for telephone polls, response rates are frighteningly low and falling. The reasons for this are varied, but include the fact that many of us have become wary of calls from strangers, having been bombarded with unsolicited sales calls and by “suggers”, the industry term for people selling or list-building under the guise of market research. The Telephone Preference Service, which should stop these calls, is ineffective. Meanwhile, caller identification technology and the increased use of mobiles add to the problems.

Online pollsters cannot provide meaningful response rate data because they use volunteer panels of willing respondents who sign up in the hope of participating in polls. Nevertheless we might guess that respondents to internet panel polls are a vanishingly small proportion of all those who have ever given the pastime any serious thought.

Low response rates are not a problem in themselves, only if certain types of people are more likely to refuse than others. In this respect, research by Pew in the US is not comforting. Unsurprisingly, they found that people who respond to political surveys are more interested in politics, a bias that could not be eliminated by weighting. That finding is also likely to apply here; people are more likely to participate in surveys if the subject matter interests them.

If we think about recent electoral tests here in the UK, the decisions could be simplified to change versus continuity. At the basic level, referenda on membership of the EU, or independence for Scotland, and even the General Election all ask voters to choose between the status quo and something new and different.

    For polls to be accurate, the balance between those wanting change and those wanting continuity needs to be the same among people who are interested in politics as among those who are not. That might happen, but the tendency will be for people who haven’t really thought through all the pros and cons of change to opt in greater numbers for continuity. In other words, the people who can be interviewed may easily have somewhat different attitudes from those who cannot.

If this is a problem for the polls, then it will be most acute among groups we know are least interested in politics. For example, pollsters find it very hard to interview 18-24 year olds, a group far less interested in politics than any other. That could be seen as evidence of differential refusal; the worry being that the willing 18-24 year old respondents are more interested than others in politics generally and in the subject of the poll, more available for interview and more likely to vote. But pollsters treat the problem as one solely of availability, and simply work harder to get the right number into the sample or, cheaper still, just upweight the sample achieved to the correct proportion. While this approach will, on the surface, appear to make the sample representative, it may well exacerbate the problem of differential non-response.
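The upweighting point can be sketched numerically. All figures here are assumptions, not Sparrow’s data: weighting the young respondents you did reach restores the sample’s age profile, but if those respondents are the unusually engaged ones, the bias inside the group survives the weighting.

```python
# Hypothetical poll: 18-24s respond at half their population share, and
# those who do respond are the politically engaged (all figures assumed).
population_share = {"18-24": 0.12, "25+": 0.88}
sample_share     = {"18-24": 0.06, "25+": 0.94}   # young people hard to reach

# Assumed share "certain to vote" among *responding* members of each group.
# For responding 18-24s this is inflated: only the engaged answered at all.
responders_certain = {"18-24": 0.70, "25+": 0.75}
true_certain       = {"18-24": 0.45, "25+": 0.75}  # the full group at large

def implied_turnout(shares, rates):
    """Overall certain-to-vote share implied by group shares and rates."""
    return sum(shares[g] * rates[g] for g in shares)

unweighted = implied_turnout(sample_share, responders_certain)
weighted   = implied_turnout(population_share, responders_certain)
truth      = implied_turnout(population_share, true_certain)

# Weighting fixes the age profile but barely moves the estimate, because
# the upweighted 18-24 responders still look nothing like 18-24s at large.
print(f"unweighted {unweighted:.1%}, weighted {weighted:.1%}, true {truth:.1%}")
```

The weighted estimate looks demographically correct yet still overstates turnout, because no amount of upweighting can conjure the disengaged 18-24s who never answered.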

If this theory is right then polls will contain a few more people in all age groups who are sympathetic to the idea of change than they should, and rather fewer people interested in continuity than they ought. What is the evidence? Polls in advance of the Scottish Referendum, the 2011 AV referendum, the 2015 General Election, and indeed the pollsters other debacle in 1992 all overestimated the appetite for change.

The obvious danger of polls exaggerating the mood for change is that they create a bandwagon effect. Alternatively they may promote a spiral of silence in which those tending to want continuity will perceive that their views are in a minority and thus become inclined to silence, thus further aggravating the polling error.

The problem for pollsters trying to wrestle with this problem is obvious; how can you interview by phone or recruit to an online panel those who aren’t interested in answering your questions?

The solution for vote intention polls must be methods that achieve much higher response rates, and thus include more respondents who are not particularly interested in politics. Achieving that is unlikely to make polls cheap or quick to conduct. Unwelcome news to editors accustomed to a regular diet of headline grabbing news stories from polls predicting sensational (if unlikely) outcomes.

Maybe the answer is to become far less obsessed with who will win, and focus instead on understanding the appeal of the arguments both sides are making, where appropriate the attributes of the personalities involved, and how these attitudes change over time. Such polls might point to likely outcomes, without risking the potentially false and misleading impression of deadly accuracy.

At this point, dare I mention that pollsters can ill afford an EU referendum in which they predict a better result for the “out” camp than it actually achieves?

Nick Sparrow




Clegg’s YouGov ratings just before the election that saw his party almost wiped out were substantially better than those Corbyn is getting now

Monday, December 28th, 2015

[Charts: YouGov leader ratings for Clegg and Corbyn]

One of the polling elements that I’ve been highlighting in recent weeks is how leader ratings have proved to be a better pointer to electoral outcomes than voting intention polling.

The above charts seek to put Corbyn’s latest ratings into context. Clearly there are four and a half years still to go and things can happen, but I can find no example of a leader doing so badly on this scale who went on to turn things round. It is hard to see anything other than another CON majority.

Mike Smithson






To start Christmas week a contender for the Tweet exchange of the year

Monday, December 21st, 2015




The GE2015 polls weren’t wrong – we were just looking at the wrong numbers

Saturday, December 19th, 2015

[Chart: MORI/Ipsos MORI leader satisfaction ratings, 1979–2015]

Leader ratings have proved a far better guide to election outcomes

A month from today, on January 19th, the investigation into what went wrong with the general election polling will announce its findings at a special event in London.

No doubt all sorts of tweaks will come out of it, but the main factor, I’d argue, is that we (and I include myself in that) paid far, far too much attention to the voting intention numbers. In almost every case these are asked right at the start of the interview/questionnaire and are almost akin to finding out what tribe those polled think they are in. Whether they’ll actually vote that way is a different matter.

    2015 was the second general election of the past six at which the voting intention findings were wrong – but on both occasions the leader ratings questions proved spot on in identifying the winner

Look at the chart above showing the MORI then Ipsos MORI LAB and CON leader ratings in every election since 1979. The satisfaction “leader” in each case went on to head the party that won the election.

In 1992, the previous massive polling fail, the leader rating differential between Major and Kinnock was far larger than what we saw in May.

Regular PBers might have noticed that since the election I’ve not been putting much emphasis on voting numbers but have been giving the bulk of the site’s polling coverage to leader ratings. That will continue.

Mike Smithson






The Labour share of the vote in 2020

Sunday, December 13th, 2015

[Chart: share of the vote]

Ladbrokes have a market up on whether Labour’s share of the vote will rise or fall at the next general election.

My initial reaction was to back ‘fall’ because of the appalling personal polling figures that Jeremy Corbyn has, but, to paraphrase Donald Rumsfeld, there are quite a few known unknowns about the next general election that might have an impact on this bet. They are, inter alia:

  1. We don’t know who will be leading the Conservative Party (whoever it is, they won’t have the electoral appeal of David Cameron)
  2. We don’t know for certain that Corbyn will be leading Labour
  3. The impact of the EU referendum (could we see UKIP do what the SNP did after losing a referendum?)
  4. A potential Labour recovery in Scotland, because history suggests that all political parties experience a downturn and the SNP are no different (though I’m not expecting it before 2020, but who knows?)

The other issue is that, despite Corbyn’s dire polling, Labour increased their share of the vote in the Oldham West and Royton by-election, which made me wonder if in 2020 we’d see what we saw in England in May: Labour piling up votes in their safe seats and the Conservatives getting the votes in the marginal seats where it matters most.

This is probably a market I’m going to sit out, but PBers might take a different view and see some value in this market.

 

TSE