Comments on the UK House of Lords Report on Political Polling: In Defense of Numbers

At the WAPOR Annual Conference in Marrakesh, Patrick Sturgis, a member of the House of Lords committee on political polling, organized a panel where he presented the committee’s report (published in April 2018) and its recommendations. The committee’s original report is available online, as is the position WAPOR presented to the committee.

Patrick Sturgis asked some scholars and practitioners to comment on the report and on his presentation. The present text is a synthesis of my personal reactions to the report as I presented them at the conference, together with my further thoughts since then. It therefore is not, and cannot be interpreted as, WAPOR’s position as an organization.

Some Context

In 2017, the UK House of Lords, concerned by recent polling misses – the 2015 UK general election and the 2016 Brexit referendum being among the most publicised – mandated a committee to investigate polling methods and their accuracy and to advise on the advisability of regulating polls during electoral campaigns. WAPOR presented its position to the House of Lords committee. It stressed that governmental regulation of electoral polls was at best ineffective. It also emphasized that all the available research shows no trend towards deterioration in the capacity of polls to accurately predict election outcomes (see Jennings, W. and C. Wlezien, ‘Election polling errors across time and space’, Nature Human Behaviour, published online 12 March 2018, https://www.nature.com/articles/s41562-018-0315-6). This rather good performance occurs despite the fact that conducting polls has become more difficult in recent years, given the increasing diversity of modes of administration and procedures as well as the increasing difficulty of building samples representative of the electorate.

What’s in the Report… and a Criticism

The House of Lords’ report provides a very interesting synthesis of the different viewpoints that were presented before the committee. It concludes with a number of recommendations regarding political polling in the UK.

Having read the report, one is left with the impression that the reporting of polls by the media represents a major concern. This is the main issue on which I will focus in the remainder of this text.

The organizations that represent the pollsters and the media alike tend to acknowledge that media reporting of polls is often problematic and they recognize that it may harm people’s confidence in polls. However, pollsters, through the British Polling Council and the Market Research Society, stress that they only estimate support for the different parties at a particular point in time and that they do not claim to forecast the results of the election. The problem, in their view, lies in the media’s reporting of the numbers they provide.

The media, through various organizations representing print media and broadcasters, stress that statistics as such are difficult for “ordinary” people to understand. As a result, it is very difficult for journalists to describe these statistics in a way that makes them accessible to the general public. Consequently, the media tend to omit technical information – even the margin of error – which is nevertheless essential for understanding polls and interpreting their numbers with sufficient caution.

How should one respond to these arguments? First, one needs to mention that voters, organizations and media alike follow polls during electoral campaigns precisely because they see polls as forecasts. When pollsters get it right, they publicize their “success”, boasting for example that their firm’s estimates were “closest” to the election results. They stress this point although they know – or should know – that the only scientific criterion for assessing the quality of their estimates is whether they fall within “the margin of error” of the election results. Being closest is largely a matter of chance. In short, pollsters sell forecasts.

Second, while stressing the difficulty of their task, the media do not seem to acknowledge that their emphasis on the horserace prevents them from moving away from the current competitive model, aimed at headlines, towards a model that would present poll data more accurately. This is all the more so when the “horses” in the race happen to be neck and neck. The media therefore fail to inform their audience of the caution that should be applied when evaluating quantitative estimates.

What’s Not in the Report

The report fails to address several issues related to political polls, most probably because they were not raised by the various contributors. In terms of publishing poll estimates, it seems that British pollsters sometimes modify their recipe for estimating likely voters during the electoral campaign. This may explain some of the quirks noticed in recent campaigns. In the Scottish referendum on independence, for example, web polls estimated support for the ‘Yes’ side three to five points higher than the other polls until the final six weeks preceding the referendum. Thereafter, their numbers suddenly became similar to those of pollsters using other modes of administration. Is this an effect of pollsters starting to apply a likely voter model? We do not know. Although such changes in methods are supposed to be reported to the British Polling Council, they are not necessarily publicised and commented on in the media. Faced with a similar situation after the major polling miss of the 2002 presidential election, the French “Commission des sondages” (Survey Commission) required all pollsters to declare at the beginning of the campaign how they would adjust their estimates and forbade them from changing their methods during the campaign. One lesson to be drawn from this is that when changes in methods occur, they should be made public.

Regarding media reporting, one issue not raised is that journalists have to report on the polls sponsored by the media organization for which they work. This situation may make it difficult for them to raise questions about the quality of those polls. How do media organizations and journalists’ associations ensure that the journalists who cover polls feel sufficiently at ease to publicly raise questions about polls sponsored by their employer?

Another issue is the tendency of some journalists or media to “cherry-pick” results from the poll estimates published in a given period. For example, they tend to highlight estimates that are outliers compared with the other polls because they tell a “different story”, although the other polls are probably more reliable. If these outliers are the ones pushed forward, they may become game changers despite most likely being biased in statistical terms.

A final issue related to media reporting is the presence of aggregators and pollsters who aim to “translate” voting intention estimates into the number of seats each party is likely to win. These projections rest on several unclear assumptions and recipes. Since the media and the public use these projections to anticipate what will happen concretely on election day (the UK uses a first-past-the-post system with single-member constituencies), the media should require more transparency about the assumptions and recipes used.

Aggregators also publish “probabilities of winning”. These probabilities have a margin of error that is seldom stated clearly by the aggregators themselves. They may give the false impression that the election is already decided and may therefore have an impact on the outcome. In the 2016 U.S. election, for example, the 90% probability of winning attributed to Clinton in the final days may ultimately have worked against her: Clinton supporters appear not to have turned out to vote at the same rate as Trump’s. Probabilities of winning are tricky statistics. We do not know how people interpret them. Since they may have a substantial impact, more transparency is required.
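To illustrate why these headline probabilities are so tricky, here is a minimal sketch. It assumes, for illustration only, the simplest possible model – the estimated lead is normally distributed around the true lead – which is not any particular aggregator’s actual method:

```python
import math

def win_probability(lead, sd):
    """Probability that a candidate's true lead is positive,
    assuming the estimated lead is normally distributed with
    standard deviation sd (a simplifying illustration only)."""
    return 0.5 * (1.0 + math.erf(lead / (sd * math.sqrt(2))))

# A 3-point lead with 2 points of uncertainty already reads as ~93%...
print(round(win_probability(0.03, 0.02), 2))  # → 0.93
# ...while a 1-point lead with the same uncertainty reads as only ~69%.
print(round(win_probability(0.01, 0.02), 2))  # → 0.69
```

Even under this toy model, a two-point shift in the estimated lead moves the headline probability by more than twenty percentage points – one reason such figures are so easily over-interpreted as a settled outcome.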

A Word about the Recommendations

The report recommends an enhanced role for the British Polling Council (BPC), an organization of pollsters aimed not only at advocating good polling practices but also at supporting its members. One may ask whether the BPC already has, or would be given, contradictory roles that could put it in a situation of actual or apparent conflict of interest. The report recommends, for example, that the BPC define the “list of criteria for a survey to become a recognized poll”. One may ask what a “recognized” poll is and who has the authority to decide, in a context where the methods used by pollsters are varied and in constant evolution. If the BPC or a governmental body had a say on such an issue, it could hamper innovation in polling methods.

The report also recommends that the pollsters provide the Electoral Commission with all the information on their polls, including funding sources. This Commission would publish the information on the funding sources for each poll during the campaign. This recommendation looks ill-advised since the transparency required by the BPC Statement of Disclosure – in the same way as the WAPOR Code of Ethics – already includes the requirement that sponsors be publicly identified. Following this recommendation would give a governmental body a role in the publication of electoral polls. Such a role could be perceived as an interference in the electoral campaign.

Finally, the report’s recommendations put much emphasis on the training of journalists. At the same time, however, those who appeared before the committee on behalf of media organizations stressed that statistics are difficult to understand and to explain to the public. The last US presidential election should serve as a lesson on the importance of accurate poll coverage. The great majority of the polls published in the final week showed that voting intentions for the two candidates were not statistically different. Yet not a single media organization seems to have informed its audience that this meant the election could go either way.

The purported difficulty of explaining the margin of error appears to be used as a pretext for not reporting it, even though precisely this information should be a central element of poll reporting. There must be a way to present it so that the general public can understand it. In short, I am unsure whether the problem resides in journalists’ training or in the media’s reluctance to put the necessary effort into presenting this information.
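The underlying arithmetic is, after all, quite simple. A minimal sketch using the standard 95% margin-of-error formula for a simple random sample (real polls, with weighting and design effects, have somewhat larger margins):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# For a typical poll of 1,000 respondents and a 50% estimate,
# the margin of error is about +/-3.1 percentage points.
print(round(100 * margin_of_error(0.50, 1000), 1))  # → 3.1
```

Two candidates whose estimates sit within this margin of each other are, for practical purposes, statistically tied – the very caveat that was missing from most coverage in the final week of the 2016 US campaign.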

Conclusion

The report provides a very interesting overview of the different actors’ thoughts about the situation of political polling in the UK. It also suggests that media reporting is a concern, though the pollsters and media organizations themselves are relatively comfortable with the status quo. It is doubtful that the recommendations made by the committee, if implemented, would help solve this problem unless the organizations responsible for it take the lead and set out to solve it. Finally, the roles that the committee attributes to the BPC and to the Electoral Commission should be called into question. If their roles were defined as proposed, apparent conflicts of interest and accusations of political bias in any of their interventions would likely follow.

In my view, it would be appropriate to add the following recommendations:

1.  Ensure that pollsters publish their estimates for the total sample and not only those using a likely voter model and that they make public any change in the adjustments they use.

2.  Ensure that pollsters and aggregators alike make public their assumptions in estimating the number of ridings likely to go to each party.

3.  Media organizations should assign the task of covering polls to one or two specialized journalists. These journalists can be trained to better understand polls, the methods used to conduct them and their margin of error. They can also be trained to present this information in a way that is accessible to the public. Representatives of pollsters, academics and the media could meet to agree on ways to convey to their audience the uncertainty inherent in polling data.

4.  Journalists’ associations should discuss with their members the issues related to the ethics of reporting poll results, including preferential treatment of employer-sponsored polls and the cherry-picking of results.

To conclude on a humorous note, I would like to propose the launch of a Society for the Protection of Numbers (SPN). If the media do not improve their reporting of polls, we might contemplate a “numbers” strike (!), declining to provide them with figures unless they promise to communicate them appropriately.

Contributed by Claire Durand, WAPOR President, Dept. of Sociology, Université de Montréal