Eight training workshops will be organized on July 28. Participation in the training workshops is included in the conference registration package (both professional and student). Each participant can choose one workshop in the morning session and one in the afternoon session. More information about the content of the workshops will be available shortly.

Morning session on July 28 (09:00-12:00 local time)

  1. Cross-Cultural Surveys (Henning Silber and Christof Wolf)
  2. Augmenting Surveys with Data from Smartphone Sensors and Apps: Best Practices (Florian Keusch and Bella Struminskaya)
  3. Respondent-Driven Sampling (Sunghee Lee)
  4. Multilevel Regression and Poststratification (Yajuan Si)

Afternoon session on July 28 (13:00-16:00 local time)

  1. Questionnaire Design for Cross-Cultural Surveys (Lydia Repke)
  2. Applied Introduction to the Collection of Paradata and Sensor Data (Jan Karem Höhne)
  3. Web and Multimode Surveys Using Free/Open Source Tools (Adam Zammit)
  4. Communicating Opinion Polls to Journalists (Gary Langer)

Cross-Cultural Surveys

This workshop focuses on cross-cultural survey research and can be combined with the workshop on “Questionnaire Design for Cross-Cultural Surveys.” Using the ISSP and ESS as examples, the workshop emphasizes issues that have to be taken into account when conducting surveys in more than one country or culture. The course begins by positioning cross-cultural survey research in the Total Survey Error framework. From there, we discuss specific problems of measurement and representation in cross-cultural research. Concerning measurement, we briefly discuss questionnaire design from a cross-cultural perspective, including the challenges of translation, harmonization, and applying international standard classifications in the survey context. This is followed by considerations of the mode of data collection and sampling, two crucial elements determining what part of the population is represented by a survey. Then, we turn to aspects of fieldwork monitoring. This leads us to approaches to assessing survey data quality in cross-cultural surveys.

Henning Silber is head of the Survey Operations team at GESIS – Leibniz Institute for the Social Sciences and teaches, as a lecturer (“Privatdozent”), quantitative methods in the social sciences at the University of Mannheim, Germany. His research interests include survey methodology, political sociology, and experimental social science research. His work on cross-cultural surveys includes a book chapter on the association of culture and survey error: Silber, H., & Johnson, T. P. (2020). Culture and response behavior: An overview of cultural mechanisms explaining survey error. In P. Brenner (Ed.), Understanding Survey Methodology: Sociological Theory and Applications (pp. 67-86). Since January 2022, Henning has served as Publications Chair on the Council of the World Association for Public Opinion Research (WAPOR).

Christof Wolf is president of GESIS – Leibniz Institute for the Social Sciences and professor of sociology at the University of Mannheim. He is mainly interested in survey methodology and social stratification. He studied sociology, economics, economic and social history, and statistics at Hamburg University. In 1996 he received his doctorate in sociology from the University of Cologne, and in 2003 he was awarded the venia legendi for sociology. From 2004 to 2015 he was Scientific Director of the department “Monitoring Society and Social Change.” He recently co-edited: Irina Tomescu-Dubrow, Christof Wolf, Kazimierz M. Slomczynski, & J. Craig Jenkins (Eds.). (2024). Survey Data Harmonization in the Social Sciences. Hoboken, New Jersey: Wiley.


Augmenting Surveys with Data from Smartphone Sensors and Apps: Best Practices

Smartphone sensors (e.g., GPS, camera, accelerometer) and apps allow researchers to collect rich behavioral data, potentially with less measurement error and lower respondent burden than self-reports through surveys. Passive mobile data collection (e.g., location tracking, call logs, browsing history) and respondents performing additional tasks on smartphones (e.g., taking pictures, scanning receipts) can augment or replace self-reports. However, there are multiple challenges to collecting these data: participant selectivity, (non)willingness to provide sensor data or perform additional tasks, ethical issues, privacy concerns, the usefulness of these data, and practical issues of in-browser measurement and app development. This course will address these challenges by reviewing state-of-the-art practices in smartphone sensor data collection, ranging from small-scale studies of hard-to-reach populations to large-scale studies producing official statistics, and discuss best practices for designing sensor measurement. Questions addressed will include:

  •   What research questions can be answered using smartphone sensors and apps?
  •   What are participants’ concerns and how to address them?
  •   How to ask for consent for sensor measurements and ensure participation?

This course will discuss methods of assessing data quality and touch upon the analysis of passively collected data. The course will not provide analytic methods for “found” data nor demonstrate how to program smartphone sensor apps.

Who should attend: The course is intended for survey practitioners, researchers, or students who want a practical introduction to smartphone sensor-based research. No prior knowledge of smartphone sensors is required, but a basic understanding of survey practice and survey errors is helpful.

By the end of the course participants will:

  •   know what smartphone sensors are available and what they can measure to facilitate and enhance surveys
  •   be able to identify potential applications of smartphone sensor measurement for their own data collection
  •   be able to anticipate practical issues when implementing smartphone sensor data collection.

Note that this course can be combined with “Applied Introduction to the Collection of Paradata and Sensor Data,” taught by Jan Karem Höhne in the afternoon.

Florian Keusch is Professor of Social Data Science and Methodology in the Department of Sociology at the University of Mannheim and Adjunct Research Professor in the Joint Program in Survey Methodology (JPSM) at the University of Maryland. He studies the quality of digital behavioral data from wearables, apps, and sensors and how they can complement survey data to better measure attitudes, behaviors, and social interactions. He also serves on the Faculty Board of the International Program in Survey and Data Science (IPSDS) and is Associate Editor of Survey Research Methods. He received his PhD in Social and Economic Sciences (Dr.rer.soc.oec.) from WU, Vienna University of Economics and Business, Austria, in 2011. Before joining the University of Mannheim, he was a Post-doc Research Fellow at the Program in Survey Methodology at the University of Michigan’s Institute for Social Research.

Bella Struminskaya is Associate Professor at the Department of Methodology & Statistics at Utrecht University. Prior to joining Utrecht University, she was a senior researcher in the Survey Design and Methodology department at GESIS – Leibniz Institute for the Social Sciences. Her research interests include survey methodology, smartphone surveys, smartphone sensors and passive data collection using mobile devices, online and mixed-mode surveys, nonresponse and measurement errors in surveys, panel effects, and paradata. Bella is a board member of the German Society for Online Research and programme chair of the General Online Research Conference (GOR). She is a member of the Quality Assurance Board of the GESIS Panel and of the Data Collection Committee of ODISSEI (Open Data Infrastructure for Social Science and Economic Innovations). Bella is an associate editor of the Journal of Survey Statistics and Methodology and Survey Research Methods, and a guest editor of Public Opinion Quarterly, Social Science Computer Review, and the Journal of the Royal Statistical Society Series A.


Respondent-Driven Sampling: Overview and Practical Tool

There is no clear, practical, cost-effective solution in probability sampling when data collection targets rare, hidden, and/or elusive populations. Even with unlimited resources, minoritized groups’ low participation presents another challenge. Respondent-driven sampling (RDS) has been proposed and used to fill this gap. RDS is feasible because of human nature—we are connected to other people and form stronger ties with others who share similar characteristics. The peril in RDS implementation, however, is that recruitment success depends on participants’ willingness to recruit others and their subsequent follow-through. These are not under researchers’ control and are difficult to predict before the fieldwork begins.

This course is designed for survey researchers and social scientists with varying levels of familiarity with RDS. Participants will learn the theoretical and practical premises of RDS, design options for RDS studies drawn from existing applications, and how to optimize RDS data collection. The course will also discuss topics related to RDS data confidentiality and data analysis, including the statistical properties of RDS and types of analyses unique to RDS data. Throughout the course, emphasis will be placed on the practice of RDS.

In addition to the course slides, this course will provide a toy RDS dataset and R code used to demonstrate RDS data collection monitoring, data analysis, and data visualization. Moreover, a short tutorial on an R function that guides design decisions will be provided.
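The course materials themselves will use R. Purely as an illustration of the recruitment dynamics described above, a minimal Python sketch (all parameter values and the function name are invented for this sketch, not course content) of coupon-based recruitment across waves might look like this:

```python
import random

def simulate_rds(n_seeds=5, coupons=3, recruit_prob=0.4, max_waves=6, seed=42):
    """Simulate an RDS recruitment tree: each participant receives a fixed
    number of coupons and passes each one on successfully with a given
    probability. Returns the number of recruits per wave (wave 0 = seeds)."""
    rng = random.Random(seed)  # seeded for reproducibility
    per_wave = [n_seeds]
    current = n_seeds
    for _ in range(max_waves):
        # Each current participant tries to redeem each coupon independently.
        recruits = sum(
            1
            for _ in range(current)
            for _ in range(coupons)
            if rng.random() < recruit_prob
        )
        if recruits == 0:  # recruitment chain died out
            break
        per_wave.append(recruits)
        current = recruits
    return per_wave
```

With an expected branching factor of coupons × recruit_prob, such a sketch makes concrete why recruitment success hinges on participants’ follow-through: small changes in the recruitment probability decide whether chains grow or die out.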

Sunghee Lee is a Research Associate Professor and Director of the Program in Survey and Data Science at the Institute for Social Research, University of Michigan. She is a survey methodologist whose research interests revolve around improving data quality through inclusivity, which has profound implications for equity in social programs and policy decisions. Specifically, she has examined two angles of data quality: representation and measurement. Her research focuses on identifying and addressing error sources that affect inclusivity, including coverage, nonresponse, translation, question order, and response style, often at their intersection with cultural norms. She leads the sampling aspect of the Health and Retirement Study and is a principal investigator of multiple federally funded studies that apply RDS to recruiting hard-to-reach populations and population subgroups.


Multilevel Regression and Poststratification

Multilevel regression and poststratification (MRP), a method originally applied to political polls, has become increasingly popular with applications to demography, epidemiology, and many other areas. Adapted from hierarchical models, MRP is an approach to modeling survey or other nonrepresentative sample data that has the potential to adjust for complex design features and nonresponse bias while performing small area estimation. The workshop covers the statistical concepts and practical issues in implementing MRP, with real-life application examples. We will introduce the assumptions and properties of MRP, connecting with small area estimation and data integration methods. Recent developments in MRP for survey weighting and inference from probability/non-probability samples will be covered. We will conclude with some cautions and challenging issues in the application of MRP.
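As a rough illustration of the poststratification step (the “P” in MRP), here is a minimal Python sketch; the cell names, estimates, and counts are invented for this example. Model-based estimates for each poststratification cell are reweighted by known population cell counts:

```python
def poststratify(cell_estimates, cell_counts):
    """Combine model-based estimates for each poststratification cell into a
    population estimate, weighting each cell by its known population count."""
    total = sum(cell_counts.values())
    return sum(cell_estimates[cell] * n for cell, n in cell_counts.items()) / total

# Hypothetical example: the model predicts 60% support among young adults
# and 40% among older adults; the population is 30% young, 70% old.
estimates = {"young": 0.60, "old": 0.40}
counts = {"young": 300, "old": 700}
print(round(poststratify(estimates, counts), 2))  # 0.46
```

In full MRP, the cell estimates come from a multilevel regression fitted to the sample, which stabilizes predictions for sparse cells; the reweighting step itself is just this population-weighted average.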

Yajuan Si is a Research Associate Professor in the Institute for Social Research at the University of Michigan. She received her Ph.D. in Statistical Science from Duke University in 2012. Dr. Si’s research lies in cutting-edge methodology development in Bayesian statistics, linking design- and model-based approaches for survey inference, missing data analysis, confidentiality protection involving the creation and analysis of synthetic datasets, and causal inference with observational data. Yajuan has extensive experience collaborating with interdisciplinary researchers to improve the application of statistics in many substantive fields, and she provides statistical support on sampling design and analysis issues for health and social science surveys.


Questionnaire Design for Cross-Cultural Surveys

In cross-cultural research, special attention to the development of survey items is paramount to ensure that they measure what they intend to measure and that they do so in a comparative way across different cultures and languages. This workshop is an extension of the “Cross-Cultural Surveys” workshop. However, participation in that workshop is not a prerequisite for participation in this workshop, which aims to equip participants with the fundamentals of cross-cultural research, focusing on creating comparable multilingual measurement instruments. In addition, participants will be introduced to techniques and tools for evaluating and pretesting questionnaire items. Finally, the workshop will illuminate key considerations in the translation process, identifying potential pitfalls and sources of error, thereby fostering a comprehensive understanding of the challenges inherent in cross-cultural survey designs.

Lydia Repke is a social scientist who heads the Scale Development and Documentation team and leads the Survey Quality Predictor 3.0 (SQP) project at GESIS – Leibniz Institute for the Social Sciences. Additionally, she is a member of the Young Academy of the Academy of Sciences and Literature | Mainz (German: Akademie der Wissenschaften und der Literatur Mainz, AdW Mainz) and serves as an elected board member of the “Sociological Network Research” section of the German Sociological Association (German: Deutsche Gesellschaft für Soziologie, DGS). Lydia studied Political and Administrative Sciences at the University of Konstanz, Germany, and undertook a research stay at Boğaziçi Üniversitesi in Istanbul, Turkey. She graduated from the double master’s degree program European Master in Government of the University of Konstanz and Universitat Pompeu Fabra, Spain, where she received her doctorate in cultural psychology (2017) and the Special Doctorate Award for her thesis on multicultural identifications and personal social networks. Throughout her studies, she was funded by the Studienstiftung des Deutschen Volkes (German Academic Scholarship Foundation). With her expertise in the data quality of survey questions, egocentric network analysis, and multiculturalism, Lydia is dedicated to knowledge transfer through teaching, consulting, and science communication in these areas.


Applied Introduction to the Collection of Paradata and Sensor Data

Technological advancements, coupled with digital social change, provide novel research avenues for social and behavioral scientists. The global increase in high-speed (mobile) Internet access and electronic device ownership moves daily life into digital spheres. In this short course, I give an applied introduction to the collection of various paradata (e.g., response times, window/browser tab switching, and user agent strings) and sensor data. The latter are divided into passively collected sensor data (e.g., acceleration and Global Positioning System data) and actively collected sensor data (e.g., image and voice data). For this purpose, I present program code (mainly JavaScript) and show how to implement it to collect paradata and sensor data. Importantly, the course includes applied data collection exercises in which participants:

  1. work with the codes,
  2. learn to collect the various data themselves,
  3. accumulate knowledge about data characteristics (e.g., structured and unstructured),
  4. and get novel insights into data handling.

As a basis for the data-driven showcases, I will use the survey software Unipark (www.unipark.com). Previous knowledge of digital data and/or programming skills is not required. This course also pairs well with the course “Augmenting Surveys with Data from Smartphone Sensors and Apps: Best Practices” by Bella Struminskaya and Florian Keusch. Participants are encouraged to prepare questions concerning current and future studies and to bring a laptop for the exercises. They will also receive access to all program code introduced during the short course.

At the end of the workshop, the participants will be able to:

  1. independently employ codes for collecting paradata and sensor data,
  2. decide on best practices when it comes to data handling and processing,
  3. and critically reflect upon the merits and limits of various digital data and their suitability for empirical studies in the framework of social and behavioral science.
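The course’s collection code is JavaScript; as a small, hypothetical illustration of the downstream data handling and processing mentioned above (the field names and threshold are invented for this sketch, not course material), response-time paradata can be screened for speeding like this:

```python
from statistics import median

def flag_speeders(response_times_ms, threshold_ms=2000):
    """Flag respondents whose median per-item response time (in milliseconds)
    falls below a speeding threshold, a common use of response-time paradata."""
    return {
        respondent: median(times) < threshold_ms
        for respondent, times in response_times_ms.items()
    }

# Hypothetical paradata: per-respondent lists of item response times in ms.
paradata = {"r1": [500, 800, 700], "r2": [3000, 4000, 2500]}
print(flag_speeders(paradata))  # {'r1': True, 'r2': False}
```

The median is used rather than the mean so that a single long pause does not mask consistently fast, low-effort responding.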

Jan Karem Höhne is junior professor at Leibniz University Hannover in association with the German Centre for Higher Education Research and Science Studies and head of the CS3 lab for Computational Survey and Social Science. Currently, he is visiting professor at the Department of Methodology and Statistics at Utrecht University and permanent research fellow at the Research and Expertise Centre for Survey Methodology at the University Pompeu Fabra in Barcelona. Before that, he was senior postdoc in the ERC-funded POLITSOLID project at the Political Science Department at the University of Duisburg-Essen, postdoc at the German Internet Panel at the University of Mannheim, research fellow of the German Academic Exchange Service at the Institute for Social Research at the University of Michigan, and Fulbright fellow at the Department of Communication at Stanford University. His research combines survey research and computational social science. He employs Automatic Speech Recognition, Natural Language Processing, and generative AI applications to measure political and social attitudes. This also includes the exploration of new digital data sources and forms for empirical social research.


Web & Multimode Surveys Using Free/Open Source Tools

Having software that is free of cost is an easy sell, but source-code freedom has many other advantages: the ability to self-host, full control of data and privacy, and the ability to modify the software to suit your needs, to name a few.

The workshop will begin with the installation of a suite of free and open source software tools for conducting surveys in multiple modes on your own device, then run through the use of these tools. The core tool is LimeSurvey, a powerful, free, web-based package for questionnaire authoring and online data collection. Also demonstrated will be the Android Offline Surveys app for CAPI, queXS* for CATI, and queXF* for PAPI data entry.

*Disclosure: queXS and queXF are developed by the author of this presentation.

Course objectives and planned hands-on activities:

– What is free / open source software and why does it matter
– Obtaining and installing the software (please bring a laptop, otherwise you can watch as a demonstration)
– Setting up a “base” questionnaire in LimeSurvey
– Using the suite of tools to deliver the questionnaire in multiple modes (CAPI, CATI, CAWI, PAPI)
– Limitations of the tools

The workshop consists mainly of hands-on activities, including the installation and use of the tools, followed by a Q&A session.

Adam Zammit is the Director of Operations at the Australian Consortium for Social and Political Research (ACSPRI). Adam is the lead developer and maintainer of a suite of free / open source tools for survey research (queXML, queXF, queXS and queXC) and is a contributor to the LimeSurvey project. Adam also manages many survey projects, including web, telephone and paper based studies. Current projects include data collection for the Australian Survey of Social Attitudes (AuSSA) – a national postal survey that incorporates the ISSP module for Australia.


Communicating Opinion Polls to Journalists

Public opinion researchers and journalists often cross paths, and for good reason: they share the core goal of producing meaningful information about the issues of our day and the condition of our societies. They share a common approach as well: survey researchers go to their best sources, ask their best questions, and report what they find. Journalists do precisely the same.

Yet communication between the two camps often is suboptimal. The reasons again are clear: While their goals and broad approach are shared, their methods diverge. Researchers’ work is circumscribed by the demands of scientific inquiry, shaped by methodology, subject to uncertainty and infused with complexity. Journalists need a straightforward story, a snappy headline – and confidence they’re not being played.

Our workshop will seek to bridge the gap between the two, seeking a common language and understanding. We’ll look at examples of journalists misreporting polls – and of pollsters misreporting news. We’ll examine best practices on both sides – for researchers in talking to journalists about polls, and for reporters in making poll coverage decisions.

The aim of our workshop is not to endorse or extend the use of polling as a public relations tool. Quite the opposite: We’ll explore the ways in which survey researchers can contribute to the aims of journalism, through clarity, accuracy and transparency, in order to help both disciplines achieve their shared goal of an informed society.

The latter portion of the workshop will include group discussion of successes and failures in communication with journalists. Participants are encouraged to bring in short descriptions of surveys they’ve produced that they see as newsworthy, for use in role play with a skeptical journalist.

Gary Langer is founder and president of Langer Research Associates, a New York-based survey research firm that produces news polls for the ABC News Television Network, investigates complex public policy issues for nonprofits and foundations, and manages international surveys for Pew Research Center and other partners. Langer previously served as longtime director of polling at ABC News and as a newsman at The Associated Press. His work has been recognized with two news Emmy awards, 10 Emmy nominations, and the 2010 Policy Impact Award of the American Association for Public Opinion Research. He is vice chair of the Roper Center for Public Opinion Research and WAPOR’s U.S. representative.