Statistical Science

Options for Conducting Web Surveys

Matthias Schonlau and Mick P. Couper

Abstract

Web surveys can be conducted relatively quickly and at relatively low cost. However, Web surveys are often conducted with nonprobability samples, so a major concern is generalizability. There are two main approaches to addressing this concern: first, find ways to conduct Web surveys on probability samples without losing most of the cost and speed advantages (e.g., by using mixed-mode approaches or probability-based panel surveys); second, adjust nonprobability samples using auxiliary variables (e.g., through propensity scoring, post-stratification, or generalized regression (GREG) estimation). We review both of these approaches, as well as lesser-known ones such as respondent-driven sampling. There are many different ways Web surveys can address the challenge of generalizability; rather than adopting a one-size-fits-all approach, we conclude that the choice of approach should be commensurate with the purpose of the study.

Article information

Source
Statist. Sci., Volume 32, Number 2 (2017), 279–292.

Dates
First available in Project Euclid: 11 May 2017

Permanent link to this document
https://projecteuclid.org/euclid.ss/1494489816

Digital Object Identifier
doi:10.1214/16-STS597

Mathematical Reviews number (MathSciNet)
MR3648960

Zentralblatt MATH identifier
1381.62030

Keywords
Convenience sample; Internet survey

Citation

Schonlau, Matthias; Couper, Mick P. Options for Conducting Web Surveys. Statist. Sci. 32 (2017), no. 2, 279–292. doi:10.1214/16-STS597. https://projecteuclid.org/euclid.ss/1494489816

