Cross-Sectional Designs

Objectives

After completing this module, you will be able to:

  • Understand the kinds of research questions that can be answered through cross-sectional designs;
  • Identify inappropriate or inadequate uses of cross-sectional designs, including multiple-point-in-time cross-sectional studies, which have a time component but are not equivalent to longitudinal designs;
  • Take steps to reduce the threats to internal and external validity inherent in cross-sectional designs, particularly threats that arise from unanticipated or unaccounted-for differences among comparison groups that undermine the validity of results;
  • Assess the explanatory power and the internal and external validity of cross-sectional designs;
  • Evaluate the quality of sampling strategies used in these designs and develop sampling strategies that help ensure samples are adequate and comparison groups are reasonable;
  • Create cross-sectional designs to answer research and evaluation questions.

Topic 1: What is a cross-sectional design?

You need to explore the resources about these designs in depth. We are at a point in this class where you should be able to understand and apply the concepts in these last modules well. They should make sense to you now. If they do not, come to class prepared to ask questions. Anything that is not clear to you is probably not clear to others as well.

This piece by the Institute for Work & Health provides a very short explanation of the differences between cross-sectional and longitudinal designs. What Researchers Mean by Cross-Sectional vs. Longitudinal Studies

Slide show: Cross Sectional Designs. My comments, for what they are worth -- which may not be very much.

Have a copy of Comparative Characteristics of Design Groups in class.

Some Clarifications about Multiple Comparison Populations in Cross-Sectional Designs This is my cheat sheet about the use of multiple comparison groups. I think the overview of cross-sectional designs from Johns Hopkins (just a few slides) is better. They use an example of disease prevalence in multiple populations, but the same principles apply to any outcome variable, and the outcome variable typically defines the populations of interest in an observational design.
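If it helps to see that principle concretely, here is a tiny sketch of what "prevalence of an outcome in multiple comparison populations" looks like with cross-sectional data: each person is observed once, and the outcome is summarized separately for each comparison group. The group labels, records, and counts below are invented purely for illustration; they are not drawn from the Johns Hopkins slides or any real data.

```python
# Hypothetical illustration: prevalence of an outcome in several comparison
# populations measured at a single point in time (a cross-sectional snapshot).
# Group labels and outcome values are made up for this example.

from collections import defaultdict

# Each record is one respondent observed once: (comparison group, outcome present?)
records = [
    ("group_A", True), ("group_A", False), ("group_A", True),
    ("group_B", False), ("group_B", False), ("group_B", True),
    ("group_C", True), ("group_C", True), ("group_C", False),
]

counts = defaultdict(lambda: {"cases": 0, "n": 0})
for group, has_outcome in records:
    counts[group]["n"] += 1
    counts[group]["cases"] += int(has_outcome)

for group, c in sorted(counts.items()):
    prevalence = c["cases"] / c["n"]
    print(f"{group}: prevalence = {prevalence:.2f} ({c['cases']}/{c['n']})")
```

The same tabulation works for any outcome variable, which is the point of the Johns Hopkins example: the design compares groups on an outcome at one point in time, and everything rests on how those groups and that sample were obtained.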

Topic 2: What can we do to make cross-sectional designs better? Or should we just give them up altogether?

Improving the Internal Validity of Cross-Sectional Designs

Bernard. Review Ch. 5, which we already used back in week 5 when we talked about types of samples. This week, also come to class prepared to discuss Chapter 6 on sampling theory (pp. 146-161). Sampling is absolutely critical to both internal and external validity in cross-sectional designs. I know you are probably sick of hearing about sampling, but sampling is critical to all science. Try to complete Bernard's exercises on pp. 144 and 160, and make SURE you are clear about all of the summary points he makes (also on pp. 144 and 160). If you cannot understand why these points are important, ask in class.
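If sampling theory still feels abstract, it can help to see it in a few lines of code. The sketch below is a minimal illustration (not taken from Bernard) contrasting a simple random sample with a proportionate stratified sample drawn from an invented sampling frame; the strata, frame size, and sample size are all assumptions made up for the example.

```python
# Hypothetical sketch: simple random vs. proportionate stratified sampling
# from an invented sampling frame of 1,000 units split into three strata.
import random
from collections import Counter

random.seed(42)

# Invented frame: each unit is tagged with a stratum (e.g., agency size).
frame = (["small"] * 600) + (["medium"] * 300) + (["large"] * 100)
units = [(i, stratum) for i, stratum in enumerate(frame)]
sample_size = 100

# Simple random sample: every unit has an equal chance of selection,
# so small strata may be under- or over-represented by chance.
srs = random.sample(units, sample_size)
print("SRS stratum counts:", Counter(s for _, s in srs))

# Proportionate stratified sample: sample within each stratum in proportion
# to its share of the frame, which guarantees each stratum's representation.
stratified = []
for stratum in ("small", "medium", "large"):
    members = [u for u in units if u[1] == stratum]
    n_stratum = round(sample_size * len(members) / len(units))
    stratified.extend(random.sample(members, n_stratum))
print("Stratified stratum counts:", Counter(s for _, s in stratified))
```

Run it a few times with different seeds and notice how the simple random sample's stratum counts bounce around while the stratified sample's stay fixed; that is the practical payoff of stratification for comparison groups in a cross-sectional design.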

Learning Guide to Cross-Sectional Designs

Recommended Materials - Cross-sectional designs. These are a couple of good pieces that do not quite rise to the level of required materials. The Duncan & Magnuson piece really helps you understand the fundamental differences in what you can conclude, how confident you can be in your conclusions, and how much you can generalize your conclusions if you use non-experimental designs (any non-experimental design, not just a cross-sectional one). It is good food for thought. The Wheaton piece provides an excellent overview of some of the important limitations of cross-sectional designs. I strongly encourage you to consult these materials as you complete Assignment 5. They will be helpful, no matter whether you select a cross-sectional, longitudinal, or case study design.

Duncan, G.J. & Magnuson, K.A. (2003) The promise of random assignment social experiments for understanding well-being and behavior. Current Sociology 51(5), 529-541. Compares cross-sectional designs to experiments -- why you cannot conclude that A causes B.

Wheaton, B. (2003) When methods make a difference. Current Sociology 51(5), 543-571. Insightful piece about some pitfalls of cross-sectional designs.

Additional Materials: Sampling is critical to the internal validity, external validity, and explanatory power of conclusions reached through cross-sectional designs. Consult the appropriate materials from this list for Assignment 3 -- a crucial component in my assessment criteria.

Almutairi, A.F., Gardner, G.E. & McCarthy, A. (2014) Practical guidance for the use of pattern-matching technique in case-study research: A case presentation. Nursing & Health Sciences 16(2), 239-244. DOI: 10.1111/nhs.12096.

Bennett, C., Khangura, S., Brehaut, J.C. et al. (2011) Reporting guidelines for survey research: An analysis of published guidance and reporting practices. PLoS Medicine 8(8), 1-11. DOI: 10.1371/journal.pmed.1001069.

Bethlehem, J. (2016) Solving the nonresponse problem with sample matching? Social Science Computer Review 34(1), 59-77. DOI: 10.1177/0894439315573926.

Cronin, C. (2014) Using case study research as a rigorous form of inquiry. Nurse Researcher 21(5), 19-27.

De Boni, R., do Nascimento Silva, P.L., Bastos, F.I. et al. (2012) Reaching the hard-to-reach: A probability sampling method for assessing prevalence of driving under the influence after drinking in alcohol outlets. PLoS ONE 7(4), 1-9. DOI: 10.1371/journal.pone.0034104.

Draugalis, J.R. & Plaza, C.M. (2009) Best practices for survey research reports revisited: Implications of target population, probability sampling, and response rate. American Journal of Pharmaceutical Education 73(8), 1-3.

Elman, C., Gerring, J. & Mahoney, J. (2016) Case study research: Putting the quant into the qual. Sociological Methods & Research 45(3), 375-391. DOI: 10.1177/0049124116644273.

Freeman Herreid, C., Prud'homme-Généreux, A., Schiller, A. et al. (2016) What makes a good case, revisited: The Survey Monkey tells all. Journal of College Science Teaching 45(1), 60-65.

Hayward, M.W., Boitani, L., Burrows, N.D. et al. (2015) Ecologists need robust survey designs, sampling and analytical methods. Journal of Applied Ecology 52(2), 286-290. DOI: 10.1111/1365-2664.12408.

Houghton, C., Casey, D., Shaw, D. & Murphy, K. (2013). Rigour in qualitative case-study research. Nurse Researcher 20(4), 12-17.

Kamholz, B.W., Gulliver, S.B., Helstrom, A. et al. (2009) Implications of participant self-selection for generalizability: Who participates in smoking laboratory research. Substance Use and Misuse 44(3), 343-356. DOI: 10.1080/10826080802345051.

McInroy, L.B. (2016) Pitfalls, potentials and ethics of online survey research: LGBTQ and other marginalized and hard-to-access youths. Social Work Research 40(2), 83-93. DOI: 10.1093/swr/svw005

Miller, P.G., Johnston, J., Dunn, M. et al. (2010) Comparing probability and non-probability sampling methods in ecstasy research: Implications for the Internet as a research tool. Substance Use & Misuse 45(3), 437-450. DOI: 10.3109/10826080903452470.

Rule, P. & John, V.M. (2015) A necessary dialogue: Theory in case study research. International Journal of Qualitative Methods 14(4), 1-11. DOI: 10.1177/1609406915611575.

Tingley, D. (2014) Survey research in international political economy: Motivations, designs, methods. International Interactions 40(3), 443-451. DOI: 10.1080/03050629.2014.900614.

Unicomb, R., Colyvas, K., Harrison, E. & Hewat, S. (2015) Assessment of reliable change using 95% credible intervals for the differences in proportions: A statistical analysis for case-study methodology. Journal of Speech, Language & Hearing Research 58(3), 728-739. DOI: 10.1044/2015_JSLHR-S-14-0158.

Walia, R., Bhansali, A., Ravikiran, M. et al. (2014) Self weighing and non-probability samples. Indian Journal of Medical Research 140(1), 150-151.

West, B.T., Sakshaug, J.W. & Aurelien, G.A.S. (2016) How big of a problem is analytic error in secondary analyses of survey data? PLoS ONE 11(6), e0158120. DOI: 10.1371/journal.pone.0158120.

Yeager, D.S., Krosnick, J.A., Chang, L.C. et al. (2011) Comparing the accuracy of RDD telephone surveys and internet surveys conducted with probability and non-probability samples. Public Opinion Quarterly 75(4), 709-747.