Victoria University’s Jack Vowles asks if it matters if the polls are askew
As those of us who pay attention to politics may remember, virtually all the polls got the 2015 British general election badly wrong: instead of the close race they predicted, the Conservatives won a clear majority. That the Conservatives squandered their advantage two years later is another story. A detailed inquiry followed the 2015 polling discrepancy, and most pollsters made significant changes to their methods. The result was that at least some of the polls did much better at the 2017 election.
Perceptions of the difference between a close race and a more lopsided one affect party strategies and voter choices. It does matter if the polls are wrong.
Comparing the last pre-election estimates to election results, the record of polls in New Zealand has been quite a good one. There has been some evidence of ‘house effects’, with Colmar Brunton polls sometimes giving National a somewhat higher level of support than other polls, but the differences have usually been small. Roy Morgan polls tend to show higher Greens support, suggesting that they probably do not sufficiently discount low turnout among the young.
The big discrepancy between Colmar Brunton and Reid Research’s recent polls is larger than any I can recall. Is this evidence of poll failure in New Zealand? Commentators have tended to blame ‘volatility’. But is this just volatility between polls and pollsters, or something that is really happening among those eligible to vote?
There are several reasons why polls conducted by different firms may differ. Sampling strategy is the most obvious. Colmar Brunton remains wedded to landline telephone polling. According to some estimates, landlines are now connected to somewhere between two out of three and three out of four homes in New Zealand. Reid Research shifted to a three-quarter landline, one-quarter online sample in March this year. Online samples for political polls are common elsewhere, but the uptake is more recent in New Zealand. The relatively small size of the New Zealand population makes online sampling more of a challenge than elsewhere. But it may be possible to make it work here if pollsters work hard enough at it.
Looking at poll results from the beginning of 2017, Reid Research polls line up quite well with the two other consistent pollsters, Colmar Brunton and Roy Morgan. Only Reid’s most recent poll, that from September 6-11, has diverged significantly, with a seven or eight point difference for National when compared to two Colmar Brunton polls conducted from September 2-6 and September 9-13. This is outside confidence intervals (margins of error) for both polls, although probably only just.
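To see why a seven or eight point gap sits outside sampling error, a rough check is possible. The sketch below assumes samples of about 1,000 respondents per poll and National support near 45 per cent; neither figure is stated in the article, and real polls apply design weights that widen these intervals somewhat.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a single poll proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed figures (not from the article): National near 45% support,
# roughly 1,000 respondents per poll, typical for NZ public polls.
p, n = 0.45, 1000

# Each poll's own margin of error: about +/- 3.1 points.
moe_single = margin_of_error(p, n)

# Margin of error for the *difference* between two independent polls:
# standard errors add in quadrature, so the interval is wider than
# either poll's own margin -- about +/- 4.4 points here.
moe_diff = 1.96 * math.sqrt(2 * (p * (1 - p) / n))

print(f"single poll: +/- {moe_single:.1%}, difference: +/- {moe_diff:.1%}")
```

Under these assumptions a seven or eight point gap exceeds even the combined margin for the difference, which is consistent with the divergence being more than pure sampling noise, though weighting effects and house differences narrow that conclusion.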
The polling periods are an obvious potential source of difference. There was only a one-day overlap between the first of the two Colmar Brunton polls and that of Reid Research. The overlap is bigger for the second Colmar Brunton poll: three days, or half the polling period.
It is conceivable that Reid Research did capture a temporary upsurge in National party support during the period between the two Colmar Brunton polls; however, that window amounts to only two days, September 7 and 8. Readers might consider what, if anything, could have made a difference over that period: it would have had to be quite a significant shock.
What to make of this? The consistency of the two Colmar Brunton polls gives some grounds for greater confidence in their results. However, hints from party pollsters suggest that their numbers fall somewhere between the two public pollsters. The next Reid poll will be worth looking out for.