AAPOR/WAPOR Task Force Report on Quality in Comparative Surveys

Chairing Committee:
Lars Lyberg, Demoskop, AAPOR Task Force Chair
Beth-Ellen Pennell, University of Michigan, WAPOR Task Force Chair
Kristen Cibelli Hibben, University of Michigan, Co-Chair
Julie de Jong, University of Michigan, Co-Chair

Contributors:
Dorothée Behr, GESIS – Leibniz Institute for the Social Sciences
Jamie Burnett, Kantar Public
Rory Fitzgerald, City, University of London
Peter Granda, University of Michigan
Linda Luz Guerrero, Social Weather Stations
Hayk Gyuzalyan, Conflict Management Consulting
Tim Johnson, University of Illinois, Chicago
Jibum Kim, Sungkyunkwan University, South Korea
Zeina Mneimneh, University of Michigan
Patrick Moynihan, Pew Research Center
Michael Robbins, Princeton University
Alisú Schoua-Glusberg, Research Support Services
Mandy Sha, www.mandysha.com
Tom W. Smith, NORC at the University of Chicago
Ineke Stoop, The Netherlands
Irina Tomescu-Dubrow, Institute of Philosophy and Sociology, Polish Academy of Sciences (PAN) and CONSIRT at Ohio State University and PAN
Diana Zavala-Rojas, Universitat Pompeu Fabra, Barcelona
Elizabeth J. Zechmeister, Vanderbilt University, LAPOP

This report was commissioned by the AAPOR and WAPOR Executive Councils as a service to the profession. The report was reviewed and accepted by the AAPOR and WAPOR Executive Councils. The opinions expressed in this report are those of the authors and do not necessarily reflect the views of either council. The authors, who retain the copyright to this report, grant AAPOR a non-exclusive perpetual license to the version on the AAPOR website and the right to link to any published versions.

This report is dedicated to the memory of Lars Lyberg, who had a profound and lasting influence on our field. He was a generous collaborator, colleague, and mentor, and a great friend.

Table of Contents
Abbreviations used in the report
Executive Summary
Background
Priority areas for future research

  1. Introduction
  2. Background
    2.1. History of 3MC surveys
    2.2. 3MC surveys in practice
    2.3. The fundamental challenges of 3MC surveys
  3. Quality and comparability in 3MC surveys
  4. Prevailing operational and design challenges
    4.1. Organizational structure
    4.2. Sampling
    4.3. Questionnaire design
    4.4. Translation and adaptation
    4.5. Questionnaire pretesting
    4.6. Field implementation
    4.7. Documentation in 3MC surveys
  5. The changing survey landscape
  6. Summary and recommendations

Appendix 1 – Task Force Charge
Appendix 2 – Table 2 References 
Appendix 3 – Smith’s 2011 TSE and comparison error figure 
Appendix 4 – Pennell et al. 2017 TSE framework adapted for 3MC surveys
Appendix 5 – Bauer’s random route alternatives
Appendix 6 – 3MC Survey documentation standards for study-level and variable-level metadata and auxiliary data
References 

Comparative surveys are surveys that study more than one population in order to compare various characteristics of those populations. They facilitate research on social phenomena across populations and, frequently, over time. Researchers often refer to comparative surveys that take place in multinational, multiregional, and multicultural contexts as “3MC” surveys (Mneimneh et al., forthcoming). To achieve comparability, these surveys need to be carefully designed according to state-of-the-art principles and standards.

There are many 3MC surveys conducted within official statistics and in the academic and private sectors. They have become increasingly important to global and regional decision-making as well as to theory-building. At the same time, these surveys display considerable variation in the methodological and administrative resources available, organizational infrastructure, awareness of error sources and error structures, degree of standardized implementation across populations, and user involvement. These circumstances make 3MC surveys vulnerable from a quality perspective: quality problems present in single-population surveys are magnified in 3MC surveys, and further quality problems, such as those arising from translation, are specific to 3MC surveys.

The wealth of output from such surveys is usually not accompanied by a corresponding effort to inform researchers, decision-makers, and other users about quality shortcomings. This can lead to understated margins of error and estimates that appear more precise than they actually are. There are also cases where researchers are informed about quality shortcomings but opt to ignore them in their research reports. There are, of course, many possible explanations for this state of affairs. One is that 3MC surveys are very expensive, and the formidable planning and implementation effort leaves relatively little room for a comprehensive treatment of quality issues. Another is that survey-taking cultures among survey professionals vary considerably across nations, as manifested in varying degrees of methodological capacity, risk assessment, and willingness to adhere to specifications that are not normally applied.

The literature on data quality in 3MC surveys is scarce compared to the substantive literature. There are exceptions, though, including the Cross-Cultural Survey Guidelines developed by the University of Michigan and members of the International Workshop on Comparative Survey Design and Implementation (CSDI). AAPOR has created a cross-cultural and multilingual research affinity group, and some 3MC surveys have established continuing data quality research programs. Members of the CSDI Workshop have produced three monographs on advances in the field of 3MC surveys, and scattered book chapters and journal articles also discuss quality in 3MC surveys.

The task force has drawn upon this literature and the considerable and varied experience of its members. Many insights into the challenges to, and possible solutions for, strengthening the quality of 3MC data come from cross-national survey methodology. We note, however, that many societies have cultural and linguistic minorities, with considerable diversity among these groups (Harkness et al., 2014). The 3MC issues discussed in this report are therefore also highly relevant to single-country multicultural and multiregional survey research, where comparability is equally important.

With this context in mind, the main purposes of this task force are to identify the most pressing challenges concerning data quality, promote best practices, recommend priorities for future study, and foster dialogue and collaboration on 3MC methodology. The intended audience for this report includes those involved in all aspects of 3MC surveys, including data producers, data archivists, data users, funders and other stakeholders, as well as those who wish to know more about this discipline. The full Task Force charge can be found in Appendix 1.

The focus of this report is comparative surveys of individuals in households, in line with the missions of the American Association for Public Opinion Research (AAPOR) and the World Association for Public Opinion Research (WAPOR). We do not discuss other types of comparative surveys, such as establishment surveys and agricultural surveys.