Objectives: After completing this module you should be able to:
- Discriminate between manifest (or observable) and latent variables
- Explain the differences between a questionnaire, an index, and a scale
- Know when to use a questionnaire versus an index versus a scale
- Explain why social scientists need to use multi-item measures -- why we cannot usually use one single item or question to measure what we want to measure (quantitatively or qualitatively)
Our goal this week is for you to obtain a good understanding of key concepts that apply to three groups of instruments that share much in common: questionnaires, indices, and scales. I sometimes call these "check the box" instruments because they rely heavily on closed-response items, while interviews and focus groups rely heavily on open-response items. There are big differences between questionnaires, indices, and scales, and your job is to understand those differences and demonstrate your understanding in your assignments. You have probably heard all three of these instruments called "surveys." Please do not use the term "survey" in this class. I will deduct points if you do, because I will assume that you cannot distinguish between these three and other instruments.

Required Readings: Please use these materials in the group project and throughout the remainder of the course in the assignments. Use, cite, and reference the materials you use in responding to the assignments, using APA style.

What is the Difference: Questionnaire, Index or Scale? ThoughtCo has short, introductory readings about questionnaires, indices, and scales, as well as other concise pieces that could be useful to you. These are introductory, but they are a good place to start. They will prepare you for Bernard's more sophisticated discussion of scales and scaling. REQUIRED
What kinds of data do the different instruments produce? Refer to pp. 281-308, Scales & Scaling, in the textbook by H. Russell Bernard. Everyone read: (1) Simple Scales: Single Indicators (pp. 281-282), (2) Complex Scales: Multiple Indicators (pp. 282-283), and (3) Fifteen Rules for Question Wording and Format (pp. 230-236). Seven of you will be assigned one other short section in the textbook to read, typically 3 or 4 pages at the most. See the Week 4 Discussion Board for your assignment. If your name is NOT on the list of assigned readings on the Canvas Discussion Board -- have no fear. You will be asked to provide the same kind of information next week. Be prepared to BRIEFLY state the key features of the type of instrument, technique, or measurement assigned to you. I want a few sentences only, because this must be easy for people to find and use during our work this semester. Post your comments to the Week 4 discussion board before class on January 24.

What can go wrong? Qualtrics has an excellent and short piece about how to get people to respond to "surveys" (their word, not ours). Their piece discusses two major problems in "survey research," both of which are affected by the traits of the instrument itself -- traits that YOU the researcher build into the instrument. This is a really good advice piece, and I will be looking at your instruments with all of the considerations they raise in mind. Make sure you understand the difference between response rate and completion rate. A poorly constructed instrument decreases both of these, often leaving the researcher with a very questionable data set. For a more scholarly and in-depth discussion of factors affecting completion rates, use this reference. It has several very important findings that you can apply to all of your work, even very simple instruments.

Liu, M. & Wronski, L. (2018) Examining completion rates in web surveys via over 25,000 real-world surveys. Social Science Computer Review 36(1), 116-124. DOI: 10.1177/0894439317695581
Additional Materials to Use in the Assignments
Afolabi, O.A. (2017) Indigenous emotional intelligence scale: Development and validation. Psychological Thought 10(1), 138-154. DOI: 10.5964/psyct.v10i1.184
This is a good example and discussion of why it is important to consider context in developing research instruments, and why we cannot assume that instruments that provide reliable and valid data in one context will do so in another.

Bendixen, M. & Ottesen Kennair, L.E. (2017) When less is more: Psychometric properties of Norwegian short-forms of the Ambivalent Sexism Scales (ASI and AMI) and the Illinois Rape Myth Acceptance (IRMA) Scale. Scandinavian Journal of Psychology 58, 541-554. DOI: 10.1177/0013164416658325
Because lengthy instruments suffer from non-response and drop-out during completion, there is a continual quest to develop shortened forms of "long" instruments that have demonstrated a good ability to generate reliable, valid data. This is one example of an attempt to develop a "short form."

Berg, C.J., Nehl, E., Sterling, K., Buchanan, T. et al. (2011) The development and validation of a scale assessing individual schemas used in classifying a smoker: Implications for research and practice. Nicotine & Tobacco Research 13(12), 1257-1265.
Ignore the topic -- smoking. Focus on the use of discriminant and convergent validity. Note that there are examples of several kinds of statistical tests you can use to test for reliability and validity. It also shows how to use demographic characteristics to test for the contextual appropriateness of an instrument.

Deng, L. & Chan, W. (2017) Testing the difference between reliability coefficients Alpha and Omega. Educational & Psychological Measurement 77(2), 185-203. DOI: 10.1177/0013164416658325
Focuses on the use of various measures of reliability, with a good discussion of Cronbach's alpha.

Dijkstra, W. & Ongena, Y. (2006) Question-answer sequences in survey-interviews. Quality & Quantity 40(6), 983-1011. DOI: 10.1007/s11135-005-5076-4
This is a nice piece that examines why respondents do not answer questions as we "expect them to." Some good ideas you can use for all of your projects.

Freund, P.A., Tietjens, M. & Strauss, B. (2013) Using rating scales for the assessment of physical self-concept: Why the number of response categories matters. Measurement in Physical Education & Exercise Science 17, 249-263. DOI: 10.1080/1091367X.2013.807265
Discusses item response theory, which you need to include in your discussion in Assignment 1. This is a good discussion of the issue of how many response categories to include.

Galasinski, D. & Kozlowska, O. (2013) Interacting with a questionnaire: Respondents' constructions of questionnaire completion. Quality & Quantity 47(6), 3509-3520.
A very good piece that takes us beyond cognitive testing to understand the processes that people use as they try to answer our questions.

Garb, H.N., Wood, J.M. & Fiedler, E.R. (2011) A comparison of three strategies for scale construction to predict a specific behavioral outcome. Assessment 18(4), 399-411.
In all honesty, I provide you with only one of several ways of assessing the validity and reliability of scores produced by an instrument. This article compares and contrasts three ways of doing so, only the first of which I have included in my instructions for the assignments. You may want to use one of the other two in your semester project. To be quite honest, I selected the internal assessment because it was "doable" in the context of a one-semester course.

Hohne, W. & Ongena, Y. (2017) Investigating cognitive effort and response quality of question formats in web surveys using paradata. Field Methods 29(4), 365-382. DOI: 10.1177/1525822X17710640

Joo, M.-H. & Dennen, V.P. (2017) Measuring university students' group work contribution: Scale development and validation. Small Group Research 48(3), 288-310. DOI: 10.1177/1046496416685159
I suspect the topic may be interesting, given that you are doing group work.
However, I selected this reading because it provides a very detailed discussion of how to use statistical tests for validity and discriminatory power.

Kelly, P., Fitzsimons, C. & Baker, G. (2016) Should we reframe how we think about physical activity and sedentary behaviour measurement? Validity and reliability reconsidered. International Journal of Behavioral Nutrition and Physical Activity 13:32. DOI: 10.1186/s12966-016-0351-4

Pelli Paiva, P.C., Neves de Paiva, H., Messias de Oliveira Filho, P., Lamounier, J.A., Ferreira e Ferreira, E., Conceicao Ferreira, R., Kawachi, I. & Zarzar, M. (2014) Development and validation of a social capital questionnaire for adolescent students (SCQ-AS). PLoS ONE 9(8), e103785. DOI: 10.1371/journal.pone.0103785

Priede, C. & Farrall, S. (2011) Comparing results from different styles of cognitive interviewing: "Verbal probing" vs. "thinking aloud." International Journal of Social Research Methodology 14(4), 271-287.
There are lots of specific techniques one can use in cognitive interviewing, but this article provides a good explanation of two quite distinct general approaches.

Revilla, M.A., Saris, W.E. & Krosnick, J.A. (2014) Choosing the number of categories in agree-disagree scales. Sociological Methods & Research 43(1), 73-97. DOI: 10.1177/0049124113509605
This article discusses some of the issues involved in the "Likert-type response" approach to measurement. I personally find this approach cumbersome and overused, and it is criticized by many for the high intellectual demand it places on respondents. This reading specifically addresses how many response categories to use, which bears directly on that intellectual demand.

Saylor, R. (2013) Concepts, measures, and measuring well: An alternative outlook. Sociological Methods & Research 42(3), 354-391. DOI: 10.1177/0049124113500476
A nice analysis of our failure to consider the first key steps in measurement when we focus on making sure we are measuring the right things.

Xu, H. & Tracey, T.J.G. (2017) Use of multi-group confirmatory factor analysis in examining measurement invariance in counseling psychology research. European Journal of Counselling Psychology 6(1), 75-82. DOI: 10.5964/ejcop.v6i1.120

Zhang, W. & Watanabe-Galloway, S. (2014) Using mixed methods effectively in prevention science: Designs, procedures and examples. Prevention Science 14, 654-662. DOI: 10.1007/s11121-013-0415-5