Types of Visual Response Requirement
Point-selection is the most standard way to present scales: either a continuous line or a set of categorical options is provided, from which the respondent points to and selects the desired choice.
A slider is a linear implementation in which the respondent moves a marker to give a rating.
Text-box input is a typing space in which respondents type their answer.
A drop-down menu shows the list of response options only after the respondent clicks on the rectangular box, i.e. before clicking the respondent does not see the whole list of options, and sometimes has to scroll down to select the desired option.
Drag-and-drop refers to a technique in which respondents drag an element to the desired position.
Theoretical arguments
- The box format does not give a clear sense of the range of the options (Buskirk et al. 2015).*
- Numeric text-box inputs are preferable because drop-down menus become cumbersome when a large number of possible options is listed (Christian et al. 2007a).*
- The box format is closer to how questions are asked over the telephone, where no visual display is provided (Christian et al. 2009).*
- Drop boxes require added effort from respondents who have to click and scroll simply to see the answer options (Couper et al. 2004).*
- Drop-down menus are more burdensome for respondents (De Leeuw et al. 2008).*
- Respondents are more frustrated with drop-down menus as they require a two-step process (Dillman and Bowker 2001).*
- Sliders are more demanding: they require more hand-eye coordination than point-selection and make it harder to identify non-substantive responses (Funke et al. 2011).*
- Drag-and-drop may prevent systematic response tendencies, since respondents need to spend more time on the task (Kunz 2015).*
- The required hand movement is longer than for other types of scales (Reips 2002).*
- Sliders are more fun and engaging and produce better data than point-selection scales (Roster et al. 2015).*
Empirical evidence on data quality
*DeCastellarnau, A. Qual Quant (2018) 52: 1523. doi: 10.1007/s11135-017-0533-4
- Differences in selecting the lowest, middle or highest options and in missing data between sliders, radio-button scales and box format [Satisficing bias and Item-nonresponse] (Buskirk et al. 2015) → YES*
- Responses are comparable between point-selection and number box scales [Response style through distribution comparison] (Christian et al. 2007b) → NO*
- Box entry has a significant impact on responses compared to point-selection [Response style bias through distribution comparison] (Christian et al. 2009) → YES*
- Sliders show no difference in reliability compared to Likert-type rating scales [Score reliability] (Cook et al. 2001) → NO*
- Nonresponse was comparable between drop-down menu and point-selection [Item-nonresponse] (Couper et al. 2004) → NO*
- There are more missing data in the slider than in the radio button or text input scale [Item-nonresponse] (Couper et al. 2006) → YES*
- Drag-and-drop scales suffer from higher item-nonresponse compared to radio-button scales [Item-nonresponse] (Kunz 2015) → YES*
- Item-nonresponse does not differ significantly between drop-down and text-box input [Item-nonresponse] (Liu and Conrad 2016) → NO*
- Drop-down menus do not influence answering behaviour compared to radio-button scales [Response style through distribution comparison] (Reips 2002) → NO*
- Response rates between sliders and radio-button scales do not differ significantly [Item-nonresponse] (Roster et al. 2015) → NO*
References
Buskirk, T.D., Saunders, T., Michaud, J. (2015). Are sliders too slick for surveys? An experiment comparing slider and radio button scales for smartphone, tablet and computer based surveys. Methods Data Anal. 9, 229–260. doi: 10.12758/mda.2015.013
Christian, L.M., Dillman, D.A., Smyth, J.D. (2007b). The effects of mode and format on answers to scalar questions in telephone and web surveys. In: Lepkowski, J.M., Tucker, C., Brick, M., De Leeuw, E.D., Japec, L., Lavrakas, P.J., Link, M.W., Sangster, R.L. (eds.) Advances in Telephone Survey Methodology, pp. 250–275. Wiley, Hoboken.
Christian, L.M., Parsons, N.L., Dillman, D.A. (2009). Designing scalar questions for web surveys. Sociol. Methods Res. 37, 393–425. doi: 10.1177/0049124108330004
Cook, C., Heath, F., Thompson, R.L., Thompson, B. (2001). Score reliability in web- or internet-based surveys: unnumbered graphic rating scales versus Likert-type scales. Educ. Psychol. Meas. 61, 697–706. doi: 10.1177/00131640121971356
Couper, M.P., Tourangeau, R., Conrad, F.G., Crawford, S.D. (2004). What they see is what we get: response options for web surveys. Soc. Sci. Comput. Rev. 22, 111–127. doi: 10.1177/0894439303256555
Couper, M.P., Tourangeau, R., Conrad, F.G., Singer, E. (2006). Evaluating the effectiveness of visual analog scales: a web experiment. Soc. Sci. Comput. Rev. 24, 227–245. doi: 10.1177/0894439305281503
De Leeuw, E.D., Hox, J.J., Dillman, D.A. (2008). International Handbook of Survey Methodology. Routledge, New York
Dillman, D., Bowker, D. (2001). The web questionnaire challenge to survey methodologists. In: Reips, U.D., Bosnjak, M. (eds.) Dimensions of Internet Science. Pabst Science Publishers, Lengerich.
Funke, F., Reips, U.-D., Thomas, R.K. (2011). Sliders for the smart: type of rating scale on the web interacts with educational level. Soc. Sci. Comput. Rev. 29, 221–231. doi: 10.1177/0894439310376896
Kunz, T. (2015). Rating scales in Web surveys: a test of new drag-and-drop rating procedures. Ph.D. Thesis, Technische Universität Darmstadt.
Liu, M., Conrad, F.G. (2016). An experiment testing six formats of 101-point rating scales. Comput. Hum. Behav. 55, 364–371. doi: 10.1016/j.chb.2015.09.036
Reips, U.-D. (2002). Context effects in web-surveys. In: Batinic, B., Reips, U.-D., Bosnjak, M. (eds.) Online Social Sciences, pp. 69–79. Hogrefe & Huber, Cambridge.
Roster, C.A., Lucianetti, L., Albaum, G. (2015). Exploring slider vs. categorical response formats in web-based surveys. J. Res. Pract. 11. http://jrp.icaap.org/index.php/jrp/article/view/509/413