This is a review session, and we will be stress testing for reliability and validity. I have listed some materials from previous weeks so that you can think about how to use them in the final component of your project. Use this session to make sure you are prepared to complete the reflective assessment of the quality of your index and to identify the lessons learned during this project.

Viswanathan, M. (2005). Measurement Error and Research Design. Sage, Thousand Oaks, CA. Pp. 1-41. e-reserve. We will be using the procedures described in this reading for your group project. Make sure you understand the basics of those procedures. Use the information in this document in your projects -- both the group index project and your individual semester project.
Bernard, Chapter 2, pp. 27-60 -- You may want to review the materials on pp. 48-53, which focus on procedures to assess validity and reliability, but you do NOT need to re-read anything if you feel comfortable with your understanding.

Steps in Instrument Development is one of my cheat sheets. You will need to use this for the Group Project. Please read it for detail. If my discussion of a topic is confusing, use one of the resources listed below on the same topic. You do NOT have to use Cronbach's alpha in the group project, but you do have to conduct an expert review and a cognitive test. Therefore, you must understand what these measures are, how to conduct specific tests of reliability and validity, and how to interpret the results of such tests.

Reliability and Validity is a short (8-minute) video about various tests for reliability and validity. It complements last week's video and readings and will help you understand my cheat sheet on techniques for creating and testing instruments. It does show you how to conduct specific statistical tests to assess reliability and validity.

Test Procedures for this Course provides instructions for conducting specific tests of instrument validity and reliability. It explains how to complete each required test. I am NOT requiring that you complete the procedures, but you do need to understand what procedures are available to you and how to interpret the results. You may select these or other procedures if you so desire.

Calculating and Interpreting Cronbach's Alpha Using SPSS (8.2 minutes) shows you exactly how to calculate Cronbach's alpha. You have to do this for your group project. He also provides a good discussion of how to interpret the output.

Bialosiewicz, S., Murphy, K. & Berry, T. (2013). Do Our Measures Measure Up? The Critical Role of Measurement Invariance. Demonstration Session, American Evaluation Association, October 2013, Washington, D.C.
Available at http://comm.eval.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=63758fed-a490-43f2-8862-2de0217a08b8. This is the simplest explanation of measurement invariance, and of how to test for it, that I have seen, and the body of the document is very short. They include a printout of the results produced by tests for invariance, which is helpful if you use the same stats package to run the analysis.

Deng, L. & Chan, W. (2017). Testing the difference between reliability coefficients Alpha and Omega. Educational & Psychological Measurement 77(2), 185-203. DOI: 10.1177/0013164416658325. Focuses on the use of various measures of reliability, with a good discussion of Cronbach's alpha.

Dijkstra, W. & Ongena, Y. (2006). Question-answer sequences in survey-interviews. Quality & Quantity 40(6), 983-1011. DOI: 10.1007/s11135-005-5076-4. This is a nice piece that examines why respondents do not answer questions as we "expect them to." It offers some good ideas you can use in all of your projects.

Freund, P.A., Tietjens, M. & Strauss, B. (2013). Using rating scales for the assessment of physical self-concept: Why the number of response categories matters. Measurement in Physical Education & Exercise Science 17, 249-263. DOI: 10.1080/1091367X.2013.807265. Discusses item response theory, which you need to include in your discussion in Assignment 1. This is a good discussion of how many response categories to include.

Galasinski, D. & Kozlowska, O. (2013). Interacting with a questionnaire: Respondents' constructions of questionnaire completion. Quality & Quantity 47(6), 3509-3520. A very good piece that takes us beyond cognitive testing to understand the processes people use as they try to answer our questions.

Garb, H.N., Wood, J.M. & Fiedler, E.R. (2011). A comparison of three strategies for scale construction to predict a specific behavioral outcome. Assessment 18(4), 399-411.
I provide you with only one of several ways of assessing the validity and reliability of the scores produced by an instrument. This article compares and contrasts three ways of doing so, only the first of which I have included in my instructions for the assignments. You may want to use one of the other two in your semester project. To be quite honest, I selected the internal assessment because it was "doable" in the context of a one-semester course.

Joo, Min-Ho & Dennen, Vanessa P. (2017). Measuring university students' group work contribution: Scale development and validation. Small Group Research 48(3), 288-310. DOI: 10.1177/1046496416685159. I suspect the topic may be interesting given that you are doing group work. However, I selected this reading because it provides a very detailed discussion of how to use statistical tests for validity and discriminatory power.

Revilla, M.A., Saris, W.E. & Krosnick, J.A. (2014). Choosing the number of categories in agree-disagree scales. Sociological Methods & Research 43(1), 73-97. DOI: 10.1177/0049124113509605. This article discusses some of the issues involved in the "Likert-type response" approach to measurement. I personally find this approach cumbersome and overused, and many criticize it for the high intellectual demand it places on respondents. This reading specifically addresses how many response categories to use, which bears directly on that intellectual-demand issue.

Saylor, R. (2013). Concepts, measures, and measuring well: An alternative outlook. Sociological Methods & Research 42(3), 354-391. DOI: 10.1177/0049124113500476. A nice analysis of the failure to consider the first key steps in measurement when we focus on making sure we are measuring the right things.

Xu, H. & Tracey, T.J.G. (2017). Use of multi-group confirmatory factor analysis in examining measurement invariance in counseling psychology research.
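Since the SPSS video above shows only the point-and-click route to Cronbach's alpha, it may help to see the arithmetic the software performs. The following is a minimal sketch in Python with made-up example data (the function name and the 5-respondents-by-3-items matrix are my own illustration, not course material); alpha is the number of items k times 1 minus the ratio of summed item variances to total-score variance, scaled by k/(k-1).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 3 Likert-type items
scores = np.array([
    [1, 2, 1],
    [2, 2, 3],
    [3, 3, 3],
    [4, 5, 4],
    [5, 5, 5],
])
alpha = cronbach_alpha(scores)  # about 0.97 for these highly consistent items
```

Running your own data through a sketch like this is a useful cross-check on the SPSS output: the "Cronbach's Alpha" value in the reliability table should match this calculation.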
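The Joo & Dennen reading above discusses statistical tests for discriminatory power. One common check of that kind is the corrected item-total correlation: each item is correlated with the sum of the remaining items, and items with low or negative correlations discriminate poorly. A minimal sketch under my own assumptions (hypothetical function name and data; this is one illustrative technique, not the article's full procedure):

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the *other* items."""
    k = items.shape[1]
    rs = []
    for j in range(k):
        rest = np.delete(items, j, axis=1).sum(axis=1)  # total score excluding item j
        rs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(rs)

# Hypothetical responses: 5 respondents x 3 items
scores = np.array([
    [1, 2, 1],
    [2, 2, 3],
    [3, 3, 3],
    [4, 5, 4],
    [5, 5, 5],
])
r = corrected_item_total(scores)
# Items with low or negative values of r are candidates for revision or removal.
```

Excluding the item itself from the total (the "corrected" part) matters: correlating an item with a total that includes it inflates the coefficient, especially on short scales like a three-item index.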