
Multipoint vs slider: a protocol for experiments

  • Venera Tomaselli
  • Giulio Giacomo Cantone

Since the broad diffusion of computer-assisted survey tools (i.e., web surveys), a lively debate about innovative measurement scales has arisen among social scientists and practitioners. The implications are relevant for applied statistics and evaluation research: while traditional scales collect ordinal observations, data from sliders can be interpreted as continuous. The literature, however, reports excessive completion times for slider tasks in web surveys. This experimental protocol is aimed at testing hypotheses on the accuracy in prediction and the dispersion of estimates from anonymous participants who are recruited online and randomly assigned to tasks of recognising shades of colour. The treatment variable is the scale: a traditional 0-10 multipoint scale vs. a 0-100 slider. Each shade has a unique parametrisation (true value), and participants have to guess the true value through the scale. These tasks are designed to recreate situations of uncertainty among participants while minimising the subjective component of a perceptual assessment and maximising information about scale-driven differences and biases. We propose to test statistical differences between the treatment groups in: (i) mean absolute error from the true value; (ii) time of completion of the task. To correct biases due to the variance in the number of completed tasks among participants, data about participants can be collected both through pre-task acceptance of web cookies and through post-task explicit questions.
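The comparison proposed in the abstract can be illustrated with a minimal simulation sketch. This is not code from the chapter: the Gaussian perceptual-noise model, the noise level, and the 1,000-participant sample size are all assumptions introduced here. It shows how a 0-10 multipoint response (rescaled to 0-100) and a 0-100 slider response against a known true value would each yield a mean absolute error, the first of the two test statistics the protocol names.

```python
import random

random.seed(42)

TRUE_VALUE = 62  # hypothetical shade parametrisation on a 0-100 scale


def simulate_guess(scale):
    """Draw one noisy guess of the true value, recorded on the given scale.

    'multipoint' snaps to the 11-point 0-10 grid (shown here rescaled to
    0-100); 'slider' records integers on the full 0-100 range.
    """
    noise = random.gauss(0, 8)           # assumed perceptual error
    raw = min(max(TRUE_VALUE + noise, 0), 100)
    if scale == "multipoint":
        return round(raw / 10) * 10      # coarser granularity
    return round(raw)


def mean_absolute_error(guesses):
    """MAE of a group's guesses from the shade's true value."""
    return sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)


# One simulated group per treatment arm.
multipoint = [simulate_guess("multipoint") for _ in range(1000)]
slider = [simulate_guess("slider") for _ in range(1000)]

print(f"MAE multipoint: {mean_absolute_error(multipoint):.2f}")
print(f"MAE slider:     {mean_absolute_error(slider):.2f}")
```

In an actual analysis the two MAE values would be compared with a formal test rather than by inspection; the sketch only makes explicit that the slider's finer granularity is what could reduce the error from the true value.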

  • Keywords: slider scales, colour recognition, web-survey design

Venera Tomaselli

University of Catania, Italy - ORCID: 0000-0002-2287-7343

Giulio Giacomo Cantone

University of Catania, Italy - ORCID: 0000-0001-7149-5213

PDF
  • Year of publication: 2021
  • Pages: 91-96

XML
  • Year of publication: 2021

Chapter information

Chapter title

Multipoint vs slider: a protocol for experiments

Authors

Venera Tomaselli, Giulio Giacomo Cantone

Language

English

DOI

10.36253/978-88-5518-304-8.19

Peer-reviewed work

Year of publication

2021

Copyright

© 2021 Author(s)

License

CC BY 4.0

Metadata license

CC0 1.0

Bibliographic information

Book title

ASA 2021 Statistics and Information Systems for Policy Evaluation

Book subtitle

Book of short papers of the opening conference

Editors

Bruno Bertaccini, Luigi Fabbris, Alessandra Petrucci

Peer-reviewed work

Year of publication

2021

Copyright

© 2021 Author(s)

License

CC BY 4.0

Metadata license

CC0 1.0

Publisher

Firenze University Press

DOI

10.36253/978-88-5518-304-8

eISBN (pdf)

978-88-5518-304-8

eISBN (xml)

978-88-5518-305-5

Series

Proceedings e report

Series ISSN

2704-601X

Series e-ISSN

2704-5846
