Testing for Measurement Invariance with Many Groups

Chapter 2 Comparative Survey Research

Cross-national and cross-cultural comparative surveys are a key resource for the social sciences. According to the Overview of Comparative Surveys Worldwide, more than 90 cross-national comparative surveys have been conducted around the world since 1948.

Although surveys serve different purposes, they generally aim to estimate population means, totals, distributions, or relationships between variables. A comparative survey additionally aims to compare these levels or relationships across groups (national or otherwise).

Figure 2.1: Comparative percentages by country regarding immigration tolerance, from the European Social Survey Round 7. Source: [Dimiter Toshkov 2020](https://dimiter.eu/Visualizations_files/ESS/Visualizing_ESS_data.html#saving_the_visualization)

Figure 2.1 shows a common application of a comparative survey: the groups, in this case European countries, are compared on the percentage distribution of their answers to a question about allowing more immigrants into the country.

However, what we see in this graph is only the final product of a very long process that surveys, and cross-national surveys in particular, must go through. This process is sometimes called the survey lifecycle, and it runs from design to dissemination.

Figure 2.2: Survey Lifecycle. Source: [Cross Cultural Survey Guidelines](https://ccsg.isr.umich.edu/chapters)

2.1 Survey Error

Survey error is any error arising from the survey process that contributes to the deviation of an estimate from its true parameter value (Biemer 2016).

Regardless of how much we try to prevent them, survey errors in one form or another will always occur, and they may affect both the estimates and their comparability.

This applies both when we compare data from different surveys and when we compare sub-groups within the same survey.

The comparability of survey measurements is an issue that should be thoughtfully considered before drawing substantive conclusions from comparative surveys.

Figure 2.3: Slightly problematic survey question. Source: [badsurveyq](https://twitter.com/badsurveyq)

Survey error can be classified into two components:

  • Random error is caused by any factors that randomly affect measurement of the variable
  • Systematic error is caused by any factors that systematically affect measurement of the variable

Survey group comparability problems come from systematic error, or “bias”.¹

What is particular about comparative surveys is that there are at least two versions of every survey statistic, one per group, and each of them is subject to its own sources of error. If the statistics are affected by error to different degrees, this will cause some form of “bias” in the comparison.

In other words, besides substantive differences between survey statistics, there might be systematic differences caused by survey error.
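The distinction matters because the two error components behave differently under aggregation. The following minimal simulation (a sketch added for illustration; the item, groups, and all numbers are invented) shows how random error washes out of a group mean while a systematic shift survives and masquerades as a substantive group difference:

```python
import numpy as np

rng = np.random.default_rng(42)

# True population mean of an attitude item, identical in both countries.
true_mean = 5.0
n = 2000

# Country A: random measurement error only (mean zero).
country_a = true_mean + rng.normal(0.0, 1.0, n)

# Country B: the same random error plus a systematic shift of +0.5,
# e.g. a translation that makes agreement easier to endorse.
country_b = true_mean + rng.normal(0.0, 1.0, n) + 0.5

# Random error averages out across respondents; systematic error does not.
print(round(country_a.mean(), 2))  # ~5.0
print(round(country_b.mean(), 2))  # ~5.5: looks substantive, but is pure bias
```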

2.2 Total Survey Error framework

Total survey error is the accumulation of all errors that may arise in the design, collection, processing and analysis of survey data.

The Total Survey Error (TSE) framework offers an elegant way to describe survey errors and the various sources of error rooted in the survey lifecycle.

Figure 2.4: Source: Groves and Lyberg (2010)

As can be seen in Figure 2.4, the framework distinguishes a “representation” side and a “measurement” side.

Systematic representation errors include:

  • coverage error
  • sampling error
  • nonresponse

Systematic errors in measurement include:

  • measurement error

Measurement error includes: response error, interviewer-induced response effects, social desirability, method effects, and response styles.

Here we will focus on the measurement side.

2.3 The “Bias” framework

The TSE error classification is analogous to the “bias” framework in the field of cross-cultural psychology. Under this framework, Vijver and Leung (1997) distinguished between “construct”, “item”, and “method” bias, which correspond roughly to the TSE's validity, measurement error, and the remaining error sources, respectively.

The bias framework is developed from the perspective of cross-cultural psychology and attempts to provide a comprehensive taxonomy of all systematic sources of error that can challenge the inferences drawn from cross-cultural studies (Vijver and Leung 1997, 2000; Van de Vijver and Poortinga 1997; Vijver and Tanzer 2004).

2.3.1 Construct Bias

Construct bias is present if the underlying construct measured is not the same across cultures.

  • It can occur if a construct is differently defined or only has a partial overlap across cultural groups.

An example is the varying definition of happiness in Western and East Asian cultures (Uchida, Norasakkunkit, and Kitayama 2004): in Western cultures, happiness tends to be defined in terms of individual achievement, whereas in East Asian cultures it is defined in terms of interpersonal connectedness.

2.3.2 Method Bias

Sample bias: the incomparability of samples due to cross-cultural variation in sample characteristics, such as different educational levels, students versus the general population, or urban versus rural residents.

Instrument bias: systematic errors derived from instrument characteristics, such as self-report bias in Likert-type scale measures. The systematic tendency of respondents to endorse certain response options on some basis other than the target construct (i.e., response styles) may affect the validity of cross-cultural comparisons (Herk, Poortinga, and Verhallen 2004); see the response-style sketch after this section.

Administration bias: stems from administration conditions (e.g., data collection modes, group versus individual assessment), ambiguous instructions, interactions between administrators and respondents (e.g., halo effects), and communication problems (e.g., language differences, taboo topics).
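One simple diagnostic for response styles such as acquiescence is to compute, for each respondent, the share of “agree” answers across a heterogeneous set of items, and then compare the distribution of that index across groups. The sketch below is illustrative only: the data are simulated, the column names are invented, and real applications require carefully chosen, content-heterogeneous item sets.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical 5-point Likert responses (1 = disagree ... 5 = agree)
# from 100 respondents on 10 content-heterogeneous items.
items = pd.DataFrame(rng.integers(1, 6, size=(100, 10)),
                     columns=[f"item_{i}" for i in range(10)])

# Acquiescence index: share of "agree" answers (4 or 5) regardless
# of item content. High values flag yea-saying respondents.
acquiescence = (items >= 4).mean(axis=1)

# Comparing this index across countries hints at differential
# response styles that could bias substantive comparisons.
print(acquiescence.describe())
```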

2.3.3 Item Bias

Item bias occurs when an item has a different meaning across cultures. An item of a scale is biased if persons with the same level of the target trait, but coming from different cultures, are not equally likely to endorse the item (Vijver and Leung 1997; Vijver 2013).

Item bias can arise from poor translation, inapplicability of item contents in different cultures, or from items that trigger additional traits or have words with ambiguous connotations.
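A common way to screen items for such bias is a logistic-regression test for differential item functioning (DIF): regress item endorsement on a proxy for the trait (e.g., the scale total) plus group membership; a significant group coefficient, conditional on the trait, flags the item. The sketch below simulates a deliberately biased item to show the mechanics; it is an illustration under invented parameters, not the procedure of any cited author.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
group = rng.integers(0, 2, n)   # two cultural groups, coded 0/1
trait = rng.normal(0, 1, n)     # proxy for the target trait level

# Simulate a biased item: at the same trait level, group 1 is
# more likely to endorse it (the +0.8 term is the item bias).
logit = -0.5 + 1.2 * trait + 0.8 * group
endorsed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# DIF test: a significant group coefficient, conditional on the
# trait, signals (uniform) item bias.
X = sm.add_constant(np.column_stack([trait, group]))
fit = sm.Logit(endorsed, X).fit(disp=0)
print(fit.summary())
```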

2.4 Preventing survey comparability problems

Following the TSE framework, the best way to reduce potential comparability issues in survey data is to reduce survey error to a minimum and to ensure that any remaining errors are similar across groups.

There is a vast literature discussing how to reduce TSE. However, two issues are particularly relevant to cross-cultural/national surveys.

2.4.1 Translation

TRAPD - Translation, Review, Adjudication, Pretesting, and Documentation

This method was proposed by Harkness, Vijver, and Mohler (2003).

Team approach to survey translation:

**T**ranslators produce, independently from each other, initial translations
**R**eviewers review the translations together with the translators
**A**djudicator (one or more) decides whether the translation is ready
**P**retesting is the next step before going out to the field
**D**ocumentation should be constant during the entire process

2.4.2 Question coding system: SQP

The Survey Quality Predictor (SQP) offers an additional way to check question comparability by taking into account the characteristics of the questions in the original and adapted versions.

https://sqp.upf.edu/

Biemer, Paul P. 2016. “Total Survey Error Paradigm: Theory and Practice.” In The Sage Handbook of Survey Methodology, 122–41. London: SAGE Publications Ltd. https://doi.org/10.4135/9781473957893.n10.

Groves, R. M., and L. Lyberg. 2010. “Total Survey Error: Past, Present, and Future.” Public Opinion Quarterly 74 (5): 849–79. https://doi.org/10.1093/poq/nfq065.

Harkness, Janet A., Fons J. R. van de Vijver, and Peter Ph. Mohler. 2003. Cross-Cultural Survey Methods. Hoboken, NJ: Wiley.

Herk, Hester van, Ype H. Poortinga, and Theo M. M. Verhallen. 2004. “Response Styles in Rating Scales: Evidence of Method Bias in Data from Six EU Countries.” Journal of Cross-Cultural Psychology 35 (3): 346–60. https://doi.org/10.1177/0022022104264126.

Uchida, Yukiko, Vinai Norasakkunkit, and Shinobu Kitayama. 2004. “Cultural Constructions of Happiness: Theory and Empirical Evidence.” Journal of Happiness Studies 5 (February): 223–39. https://doi.org/10.1007/s10902-004-8785-9.

Van de Vijver, Fons, and Ype Poortinga. 1997. “Towards an Integrated Analysis of Bias in Cross-Cultural Assessment.” European Journal of Psychological Assessment 13 (January): 29–37. https://doi.org/10.1027/1015-5759.13.1.29.

Vijver, Fons J. R. van de. 2013. “Item Bias.” Major Reference Works. https://doi.org/10.1002/9781118339893.wbeccp309.

Vijver, Fons J. R. van de, and Kwok Leung. 1997. Methods and Data Analysis for Cross-Cultural Research. Cross-Cultural Psychology Series, Vol. 1. Thousand Oaks, CA: Sage Publications.

Vijver, Fons J. R. van de, and Kwok Leung. 2000. “Methodological Issues in Psychological Research on Culture.” Journal of Cross-Cultural Psychology 31 (1): 33–51. https://doi.org/10.1177/0022022100031001004.

Vijver, Fons van de, and Norbert K. Tanzer. 2004. “Bias and Equivalence in Cross-Cultural Assessment: An Overview.” European Review of Applied Psychology / Revue Européenne de Psychologie Appliquée 54 (2): 119–35. https://doi.org/10.1016/j.erap.2003.12.004.

¹ Systematic error and “bias” are terms used interchangeably in the literature; they refer to deviations that are not due to chance alone.

Types of Research Designs Compared | Guide & Examples

Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023.

When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do.

There are many ways to categorize different types of research. The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by:

  • The type of knowledge you aim to produce
  • The type of data you will collect and analyze
  • The sampling methods, timescale and location of the research

This article takes a look at some common distinctions made between different types of research and outlines the key differences between them.

Types of research aims

The first thing to consider is what kind of knowledge your research aims to contribute.


Types of research data

The next thing to consider is what type of data you will collect. Each kind of data is associated with a range of specific research methods and procedures.

Types of sampling, timescale, and location

Finally, you have to consider three closely related questions: how will you select the subjects or participants of the research? When and how often will you collect data from your subjects? And where will the research take place?

Keep in mind that the methods that you choose bring with them different risk factors and types of research bias . Biases aren’t completely avoidable, but can heavily impact the validity and reliability of your findings if left unchecked.

Choosing between all these different research types is part of the process of creating your research design , which determines exactly how your research will be conducted. But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study.



Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 10 Methods for Comparative Studies

Francis Lau and Anne Holbrook.

10.1. Introduction

In eHealth evaluation, comparative studies aim to find out whether group differences in eHealth system adoption make a difference in important outcomes. These groups may differ in their composition, the type of system in use, and the setting where they work over a given time duration. The comparisons are to determine whether significant differences exist for some predefined measures between these groups, while controlling for as many of the conditions as possible such as the composition, system, setting and duration.

According to the typology by Friedman and Wyatt (2006), comparative studies take on an objective view where events such as the use and effect of an eHealth system can be defined, measured and compared through a set of variables to prove or disprove a hypothesis. For comparative studies, the design options are experimental versus observational and prospective versus retrospective. The quality of eHealth comparative studies depends on such aspects of methodological design as the choice of variables, sample size, sources of bias, confounders, and adherence to quality and reporting guidelines.

In this chapter we focus on experimental studies as one type of comparative study and their methodological considerations that have been reported in the eHealth literature. Also included are three case examples to show how these studies are done.

10.2. Types of Comparative Studies

Experimental studies are one type of comparative study, where a sample of participants is identified and assigned to different conditions for a given time duration, then compared for differences. An example is a hospital with two care units, where one is assigned a CPOE system to process medication orders electronically while the other continues its usual practice without a CPOE system. The participants in the unit assigned to the CPOE system are called the intervention group, and those assigned to usual practice are the control group. The comparison can be performance- or outcome-focused, such as the ratio of correct orders processed or the occurrence of adverse drug events in the two groups during the given time period. Experimental studies can take on a randomized or non-randomized design. These are described below.

10.2.1. Randomized Experiments

In a randomized design, the participants are randomly assigned to two or more groups using a known randomization technique such as a random number table. The design is prospective in nature since the groups are assigned concurrently, after which the intervention is applied, then measured and compared. Three types of experimental designs seen in eHealth evaluation are described below (Friedman & Wyatt, 2006; Zwarenstein & Treweek, 2009).

  • Randomized controlled trials (RCTs) – In RCTs participants are randomly assigned to an intervention or a control group. The randomization can occur at the patient, provider or organization level, which is known as the unit of allocation. For instance, at the patient level one can randomly assign half of the patients to receive EMR reminders while the other half do not. At the provider level, one can assign half of the providers to receive the reminders while the other half continue with their usual practice. At the organization level, such as a multisite hospital, one can randomly assign EMR reminders to some of the sites but not others.
  • Cluster randomized controlled trials (cRCTs) – In cRCTs, clusters of participants are randomized rather than individual participants, since they are found in naturally occurring groups, such as living in the same communities. For instance, clinics in one city may be randomized as a cluster to receive EMR reminders while clinics in another city continue their usual practice.
  • Pragmatic trials – Unlike RCTs, which seek to find out if an intervention such as a CPOE system works under ideal conditions, pragmatic trials are designed to find out if the intervention works under usual conditions. The goal is to make the design and findings relevant to and practical for decision-makers to apply in usual settings. As such, pragmatic trials have few criteria for selecting study participants, flexibility in implementing the intervention, usual practice as the comparator, the same compliance and follow-up intensity as usual practice, and outcomes that are relevant to decision-makers.
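The unit of allocation can be made concrete with a short sketch (illustrative only; the clinic names and counts are invented). Patient-level randomization assigns each patient independently, whereas cluster randomization assigns whole clinics, with every patient inheriting their clinic's arm, as in a cRCT:

```python
import numpy as np

rng = np.random.default_rng(0)

# Patient-level randomization: 20 patients, 10 per arm, assigned independently.
patient_arm = rng.permutation(np.repeat(["intervention", "control"], 10))
print(patient_arm[:6])

# Cluster randomization: whole clinics are assigned; every patient in
# a clinic inherits that clinic's arm.
clinics = ["clinic_A", "clinic_B", "clinic_C", "clinic_D"]
clinic_arm = dict(zip(clinics, rng.permutation(
    ["intervention", "control", "intervention", "control"])))
print(clinic_arm)
```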

10.2.2. Non-randomized Experiments

A non-randomized design is used when it is neither feasible nor ethical to randomize participants into groups for comparison. It is sometimes referred to as a quasi-experimental design. The design can involve the use of prospective or retrospective data, from the same or different participants, as the control group. Three types of non-randomized designs are described below (Harris et al., 2006).

  • Intervention group only with pretest and post-test design – This design involves only one group, where a pretest or baseline measure is taken as the control period, the intervention is implemented, and a post-test measure is taken as the intervention period for comparison. For example, one can compare the rates of medication errors before and after the implementation of a CPOE system in a hospital. To increase study quality, one can add a second pretest period to decrease the probability that the pretest and post-test difference is due to chance, such as an unusually low medication error rate in the first pretest period. Other ways to increase study quality include adding an unrelated outcome such as patient case-mix that should not be affected, removing the intervention to see if the difference remains, and removing then re-implementing the intervention to see if the differences vary accordingly.
  • Intervention and control groups with post-test design – This design involves two groups, where the intervention is implemented in one group and compared with a second group without the intervention, based on a post-test measure from both groups. For example, one can implement a CPOE system in one care unit as the intervention group with a second unit as the control group and compare the post-test medication error rates in both units over six months. To increase study quality, one can add one or more pretest periods to both groups, or implement the intervention in the control group at a later time to measure for similar but delayed effects.
  • Interrupted time series (ITS) design – In an ITS design, multiple measures are taken from one group at equal time intervals, interrupted by the implementation of the intervention. The multiple pretest and post-test measures decrease the probability that the differences detected are due to chance or unrelated effects. An example is to take six consecutive monthly medication error rates as the pretest measures, implement the CPOE system, then take another six consecutive monthly medication error rates as the post-test measures for comparison of error rate differences over 12 months. To increase study quality, one may add a concurrent control group for comparison, to be more convinced that the intervention produced the change.
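For the ITS design, one standard analysis (a common choice, not one prescribed by this chapter) is segmented regression, with terms for the pre-intervention trend, the level change at implementation, and the slope change afterwards. A minimal sketch with simulated monthly medication error rates:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# 12 pretest and 12 post-test monthly medication error rates.
months = np.arange(24)
post = (months >= 12).astype(int)            # 1 after CPOE go-live
time_since = np.where(post, months - 12, 0)

# Simulated data with a true level drop of 2.0 at implementation.
errors = 10 - 0.05 * months - 2.0 * post + rng.normal(0, 0.5, 24)

# Segmented regression: intercept, pre-trend, level change, slope change.
X = sm.add_constant(np.column_stack([months, post, time_since]))
print(sm.OLS(errors, X).fit().params)
```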

10.3. Methodological Considerations

The quality of comparative studies is dependent on their internal and external validity. Internal validity refers to the extent to which conclusions can be drawn correctly from the study setting, participants, intervention, measures, analysis and interpretations. External validity refers to the extent to which the conclusions can be generalized to other settings. The major factors that influence validity are described below.

10.3.1. Choice of Variables

Variables are specific measurable features that can influence validity. In comparative studies, the choice of dependent and independent variables, and whether they are categorical and/or continuous in value, can affect the type of questions, study design and analysis to be considered. These are described below (Friedman & Wyatt, 2006).

  • Dependent variables – These refer to the outcomes of interest; they are also known as outcome variables. An example is the rate of medication errors as an outcome in determining whether CPOE can improve patient safety.
  • Independent variables – These refer to variables that can explain the measured values of the dependent variables. For instance, the characteristics of the setting, participants and intervention can influence the effects of CPOE.
  • Categorical variables – These refer to variables with measured values in discrete categories or levels. Examples are the type of providers (e.g., nurses, physicians and pharmacists), the presence or absence of a disease, and pain scale (e.g., 0 to 10 in increments of 1). Categorical variables are analyzed using non-parametric methods such as chi-square and odds ratio.
  • Continuous variables – These refer to variables that can take on infinite values within an interval, limited only by the desired precision. Examples are blood pressure, heart rate and body temperature. Continuous variables are analyzed using parametric methods such as the t-test, analysis of variance or multiple regression.
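As a minimal sketch of the two analysis families just mentioned (all numbers are invented for illustration): a t-test for a continuous outcome and a chi-square test for a categorical one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Continuous outcome (e.g., systolic blood pressure): parametric t-test.
bp_intervention = rng.normal(132, 15, 80)
bp_control = rng.normal(140, 15, 80)
print(stats.ttest_ind(bp_intervention, bp_control))

# Categorical outcome (e.g., error / no error): chi-square test on
# the 2x2 contingency table of counts.
table = np.array([[12, 68],   # intervention: errors, no errors
                  [25, 55]])  # control:      errors, no errors
print(stats.chi2_contingency(table))
```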

10.3.2. Sample Size

Sample size is the number of participants to include in a study. It can refer to patients, providers or organizations, depending on how the unit of allocation is defined. There are four parts to calculating sample size; they are described below (Noordzij et al., 2010).

  • Significance level – The probability that a positive finding is due to chance alone. It is usually set at 0.05, which means having a less than 5% chance of drawing a false positive conclusion.
  • Power – The ability to detect the true effect based on a sample from the population. It is usually set at 0.8, which means having at least an 80% chance of drawing a correct conclusion.
  • Effect size – The minimal clinically relevant difference that can be detected between comparison groups. For continuous variables, the effect is a numerical value such as a 10-kilogram weight difference between two groups. For categorical variables, it is a percentage such as a 10% difference in medication error rates.
  • Variability – The population variance of the outcome of interest, which is often unknown and is estimated by way of the standard deviation (SD) from pilot or previous studies for continuous outcomes.
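These four ingredients are exactly what off-the-shelf power routines consume. As a sketch (reusing the Appendix blood-pressure example, with the effect expressed as a standardized difference d = 15/20 = 0.75):

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size given effect size, alpha and power.
n = TTestIndPower().solve_power(effect_size=0.75, alpha=0.05,
                                power=0.80, alternative='two-sided')
print(round(n))  # ~29 per group (t-based, slightly above the normal approximation)
```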

Table 10.1. Sample Size Equations for Comparing Two Groups with Continuous and Categorical Outcome Variables.

An example of a sample size calculation for an RCT to examine the effect of CDS on improving the systolic blood pressure of hypertensive patients is provided in the Appendix. Refer to the Biomath website from Columbia University (n.d.) for a simple Web-based sample size / power calculator.

10.3.3. Sources of Bias

There are five common sources of bias in comparative studies: selection, performance, detection, attrition and reporting biases (Higgins & Green, 2011). These biases, and the ways to minimize them, are described below (Vervloet et al., 2012).

  • Selection or allocation bias – Differences between the composition of comparison groups in terms of their response to the intervention. An example is having sicker or older patients in the control group than in the intervention group when evaluating the effect of EMR reminders. To reduce selection bias, one can apply randomization and concealment when assigning participants to groups and ensure their compositions are comparable at baseline.
  • Performance bias – Differences between groups in the care they received, aside from the intervention being evaluated. An example is the different ways by which reminders are triggered and used within and across groups, such as electronic, paper and phone reminders for patients and providers. To reduce performance bias, one may standardize the intervention and blind participants from knowing whether an intervention was received and which intervention was received.
  • Detection or measurement bias – Differences between groups in how outcomes are determined. An example is where outcome assessors pay more attention to outcomes of patients known to be in the intervention group. To reduce detection bias, one may blind assessors from participants when measuring outcomes and ensure the same timing for assessment across groups.
  • Attrition bias – Differences between groups in the ways that participants are withdrawn from the study. An example is a low rate of participant response in the intervention group despite having received reminders for follow-up care. To reduce attrition bias, one needs to acknowledge the dropout rate and analyze data according to an intent-to-treat principle (i.e., include data from those who dropped out in the analysis).
  • Reporting bias – Differences between reported and unreported findings. Examples include biases in publication, time lag, citation, language and outcome reporting, depending on the nature and direction of the results. To reduce reporting bias, one may make the study protocol available with all pre-specified outcomes and report all expected outcomes in published results.

10.3.4. Confounders

Confounders are factors other than the intervention of interest that can distort the effect because they are associated with both the intervention and the outcome. For instance, in a study to demonstrate whether the adoption of a medication order entry system led to lower medication costs, there can be a number of potential confounders that can affect the outcome. These may include the severity of illness of the patients, provider knowledge and experience with the system, and hospital policy on prescribing medications (Harris et al., 2006). Another example is the evaluation of the effect of an antibiotic reminder system on the rate of post-operative deep venous thromboses (DVTs). The confounders can be general improvements in clinical practice during the study, such as prescribing patterns and post-operative care, that are not related to the reminders (Friedman & Wyatt, 2006).

To control for confounding effects, one may consider the use of matching, stratification and modelling. Matching involves the selection of similar groups with respect to their composition and behaviours. Stratification involves the division of participants into subgroups by selected variables, such as a comorbidity index to control for severity of illness. Modelling involves the use of statistical techniques such as multiple regression to adjust for the effects of specific variables such as age, sex and/or severity of illness (Higgins & Green, 2011).
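The modelling option can be sketched in a few lines: simulate a confounder (severity of illness) that drives both system adoption and medication cost, then recover the intervention effect by including the confounder in a multiple regression. All parameters are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 500

# Severity of illness confounds: sicker units adopt order entry more
# often AND have higher costs.
severity = rng.normal(0, 1, n)
order_entry = rng.binomial(1, 1 / (1 + np.exp(-severity)))
cost = 100 + 20 * severity - 5 * order_entry + rng.normal(0, 10, n)

# Adjusting for severity in a multiple regression recovers the true
# intervention effect (about -5); a crude comparison would not.
X = sm.add_constant(np.column_stack([order_entry, severity]))
print(sm.OLS(cost, X).fit().params)
```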

10.3.5. Guidelines on Quality and Reporting

There are guidelines on the quality and reporting of comparative studies. The GRADE (Grading of Recommendations Assessment, Development and Evaluation) guidelines provide explicit criteria for rating the quality of studies in randomized trials and observational studies (Guyatt et al., 2011). The extended CONSORT (Consolidated Standards of Reporting Trials) Statements for non-pharmacologic trials (Boutron, Moher, Altman, Schulz, & Ravaud, 2008), pragmatic trials (Zwarenstein et al., 2008), and eHealth interventions (Baker et al., 2010) provide reporting guidelines for randomized trials.

The GRADE guidelines offer a system for rating the quality of evidence in systematic reviews and guidelines. In this approach, when supporting estimates of intervention effects, RCTs start as high-quality evidence and observational studies as low-quality evidence. For each outcome in a study, five factors may rate down the quality of evidence. The final quality of evidence for each outcome falls into one of four levels: high, moderate, low, or very low quality. These factors are listed below (for more details on the rating system, refer to Guyatt et al., 2011).

  • Design limitations – For RCTs these cover lack of allocation concealment, lack of blinding, large loss to follow-up, trials stopped early, or selective outcome reporting.
  • Inconsistency of results – Variations in outcomes due to unexplained heterogeneity. An example is the unexpected variation of effects across subgroups of patients by severity of illness in the use of preventive care reminders.
  • Indirectness of evidence – Reliance on indirect comparisons due to restrictions in study populations, intervention, comparator or outcomes. An example is the 30-day readmission rate as a surrogate outcome for the quality of computer-supported emergency care in hospitals.
  • Imprecision of results – Studies with small sample sizes and few events typically have wide confidence intervals and are considered of low quality.
  • Publication bias – The selective reporting of results at the individual study level is already covered under design limitations, but is included here for completeness, as it is relevant when rating the quality of evidence across studies in systematic reviews.

The original CONSORT Statement has 22 checklist items for reporting RCTs. For non-pharmacologic trials, extensions have been made to 11 items; for pragmatic trials, to eight items. These items are listed below. For further details, readers can refer to Boutron and colleagues (2008) and the CONSORT website (CONSORT, n.d.).

  • Title and abstract – one item on the means of randomization used.
  • Introduction – one item on background, rationale, and the problem addressed by the intervention.
  • Methods – 10 items on participants, interventions, objectives, outcomes, sample size, randomization (sequence generation, allocation concealment, implementation), blinding (masking), and statistical methods.
  • Results – seven items on participant flow, recruitment, baseline data, numbers analyzed, outcomes and estimation, ancillary analyses, and adverse events.
  • Discussion – three items on interpretation, generalizability, and overall evidence.

The CONSORT Statement for eHealth interventions describes the relevance of the CONSORT recommendations to the design and reporting of eHealth studies, with an emphasis on Internet-based interventions for direct use by patients, such as online health information resources, decision aids and PHRs. Of particular importance is the need to clearly define the intervention components, their role in the overall care process, the target population, the implementation process, primary and secondary outcomes, denominators for outcome analyses, and real-world potential (for details refer to Baker et al., 2010).

10.4. Case Examples

10.4.1. Pragmatic RCT in Vascular Risk Decision Support

Holbrook and colleagues (2011) conducted a pragmatic RCT to examine the effects of a CDS intervention on vascular care and outcomes for older adults. The study is summarized below.

  • Setting – Community-based primary care practices with EMRs in one Canadian province.
  • Participants – English-speaking patients 55 years of age or older with diagnosed vascular disease, no cognitive impairment and not living in a nursing home, who had a provider visit in the past 12 months.
  • Intervention – A Web-based individualized vascular tracking and advice CDS system for eight top vascular risk factors and two diabetic risk factors, for use by both providers and patients and their families. Providers and staff could update the patient's profile at any time, and the CDS algorithm ran nightly to update recommendations and the colour highlighting used in the tracker interface. Intervention patients had Web access to the tracker, a print version mailed to them prior to the visit, and telephone support on advice.
  • Design – Pragmatic, one-year, two-arm, multicentre RCT, with randomization upon patient consent by phone, using an allocation-concealed online program. Randomization was by patient with stratification by provider using a block size of six. Trained reviewers examined EMR data and conducted patient telephone interviews to collect risk factors, vascular history, and vascular events. Providers completed questionnaires on the intervention at study end. Patients had final 12-month lab checks on urine albumin, low-density lipoprotein cholesterol, and A1c levels.
  • Outcomes – The primary outcome was based on change in a process composite score (PCS), computed as the sum of the frequency-weighted process scores for each of the eight main risk factors, with a maximum score of 27. The process was considered met if a risk factor had been checked. PCS was measured at baseline and study end, with the difference as the individual primary outcome score. The main secondary outcome was a clinical composite score (CCS) based on the same eight risk factors, compared in two ways: a comparison of the mean number of clinical variables on target, and the percentage of patients with improvement between the two groups. Other secondary outcomes were actual vascular event rates, individual PCS and CCS components, ratings of usability, continuity of care, patient ability to manage vascular risk, and quality of life using the EuroQol five dimensions questionnaire (EQ-5D).
  • Analysis – 1,100 patients were needed to achieve 90% power in detecting a one-point PCS difference between groups, with a standard deviation of five points, a two-tailed t-test for mean difference at the 5% significance level, and a withdrawal rate of 10%. The PCS, CCS and EQ-5D scores were analyzed using a generalized estimating equation accounting for clustering within providers. Descriptive statistics and χ² tests or exact tests were used for the other outcomes.
  • Findings – 1,102 patients and 49 providers enrolled in the study. The intervention group with 545 patients had significant PCS improvement, with a difference of 4.70 (p < .001) on a 27-point scale. The intervention group also had significantly higher odds of rating improvements in their continuity of care (4.178, p < .001) and in their ability to improve their vascular health (3.07, p < .001). There was no significant change in vascular events, clinical variables or quality of life. Overall, the CDS intervention led to reduced vascular risks but not to improved clinical outcomes in a one-year follow-up.

10.4.2. Non-randomized Experiment in Antibiotic Prescribing in Primary Care

Mainous, Lambourne, and Nietert (2013) conducted a prospective non-randomized trial to examine the impact of a CDS system on antibiotic prescribing for acute respiratory infections (ARIs) in primary care. The study is summarized below.

  • Setting – A primary care research network in the United States whose members use a common EMR and pool data quarterly for quality improvement and research studies.
  • Participants – An intervention group with nine practices across nine states, and a control group with 61 practices.
  • Intervention – A point-of-care CDS tool implemented as customizable progress note templates based on existing EMR features. The CDS recommendations reflect Centers for Disease Control and Prevention (CDC) guidelines based on a patient's predominant presenting symptoms and age. The CDS was used to assist in ARI diagnosis, prompt antibiotic use, record diagnosis and treatment decisions, and access printable patient and provider education resources from the CDC.
  • Design – The intervention group received a multi-method intervention to facilitate provider CDS adoption that included quarterly audit and feedback, best practice dissemination meetings, academic detailing site visits, performance review and CDS training. The control group did not receive information on the intervention, the CDS or education. Baseline data collection lasted three months, with follow-up of 15 months after CDS implementation.
  • Outcomes – The outcomes were the frequency of inappropriate prescribing during an ARI episode, broad-spectrum antibiotic use, and diagnostic shift. Inappropriate prescribing was computed by dividing the number of ARI episodes with diagnoses in the inappropriate category that had an antibiotic prescription by the total number of ARI episodes with diagnoses for which antibiotics are inappropriate. Broad-spectrum antibiotic use was computed by dividing all ARI episodes with a broad-spectrum antibiotic prescription by the total number of ARI episodes with an antibiotic prescription. Antibiotic drift was computed in two ways: dividing the number of ARI episodes with diagnoses where antibiotics are appropriate by the total number of ARI episodes with an antibiotic prescription; and dividing the number of ARI episodes where antibiotics were inappropriate by the total number of ARI episodes. Process measures included the frequency of CDS template use and whether the outcome measures differed by CDS usage.
  • Analysis – Outcomes were measured quarterly for each practice, weighted by the number of ARI episodes during the quarter to assign greater weight to practices and periods with greater numbers of relevant episodes. Weighted means and 95% CIs were computed separately for adult and pediatric (less than 18 years of age) patients for each time period for both groups. Baseline means in outcome measures were compared between the two groups using weighted independent-sample t-tests. Linear mixed models were used to compare changes over the 18-month period. The models included time and intervention status, and were adjusted for practice characteristics such as specialty, size, region and baseline ARIs. Random practice effects were included to account for clustering of repeated measures on practices over time. P-values of less than 0.05 were considered significant.
  • Findings – For adult patients, inappropriate prescribing in ARI episodes declined more in the intervention group (−0.6%) than in the control group (4.2%) (p = 0.03), and prescribing of broad-spectrum antibiotics declined by 16.6% in the intervention group versus an increase of 1.1% in the control group (p < 0.0001). For pediatric patients, there was a similar decline of 19.7% in the intervention group versus an increase of 0.9% in the control group (p < 0.0001).

In summary, the CDS had a modest effect in reducing inappropriate prescribing for adults, but had a substantial effect in reducing the prescribing of broad-spectrum antibiotics in adult and pediatric patients.

10.4.3. Interrupted Time Series on EHR Impact in Nursing Care

Dowding, Turley, and Garrido (2012) conducted a prospective ITS study to examine the impact of EHR implementation on nursing care processes and outcomes. The study is summarized below.

  • Setting – Kaiser Permanente (KP), a large not-for-profit integrated healthcare organization in the United States.
  • Participants – 29 KP hospitals in the northern and southern regions of California.
  • Intervention – An integrated EHR system implemented at all hospitals with CPOE, nursing documentation and risk assessment tools. The nursing component for risk assessment documentation of pressure ulcers and falls was consistent across hospitals and was developed by clinical nurses and informaticists by consensus.
  • Design – An ITS design with monthly data on pressure ulcers and quarterly data on fall rates and risk collected over seven years between 2003 and 2009. All data were collected at the unit level for each hospital.
  • Outcomes – Process measures were the proportion of patients with a fall risk assessment done, and the proportion with a hospital-acquired pressure ulcer (HAPU) risk assessment done, within 24 hours of admission. Outcome measures were fall and HAPU rates as part of the unit-level nursing care process and nursing-sensitive outcome data collected routinely for all California hospitals. Fall rate was defined as the number of unplanned descents to the floor per 1,000 patient days, and HAPU rate was the percentage of patients with a stage I–IV or unstageable ulcer on the day of data collection.
  • Analysis – Fall and HAPU risk data were synchronized using the month in which the EHR was implemented at each hospital as time zero, and aggregated across hospitals for each time period. Multivariate regression analysis was used to examine the effect of time, region and EHR.
  • Findings – The EHR was associated with a significant increase in documentation rates for HAPU risk (2.21; 95% CI 0.67 to 3.75) and a non-significant increase for fall risk (0.36; −3.58 to 4.30). The EHR was associated with a 13% decrease in HAPU rates (−0.76; −1.37 to −0.16) but no change in fall rates (−0.091; −0.29 to 0.11). Hospital region was a significant predictor of variation for HAPU (0.72; 0.30 to 1.14) and fall rates (0.57; 0.41 to 0.72). During the study period, HAPU rates decreased significantly (−0.16; −0.20 to −0.13) but fall rates did not (0.0052; −0.01 to 0.02). In summary, EHR implementation was associated with a reduction in the number of HAPUs but not patient falls, and changes over time and hospital region also affected outcomes.

10.5. Summary

In this chapter we introduced randomized and non-randomized experimental designs as two types of comparative studies used in eHealth evaluation. Randomization is the highest-quality design as it reduces bias, but it is not always feasible. The methodological issues addressed include the choice of variables, sample size, sources of bias, confounders, and adherence to reporting guidelines. Three case examples were included to show how eHealth comparative studies are done.

  • Baker, T. B., Gustafson, D. H., Shaw, B., Hawkins, R., Pingree, S., Roberts, L., & Strecher, V. (2010). Relevance of CONSORT reporting criteria for research on eHealth interventions. Patient Education and Counseling, 81(suppl. 7), 77–86.
  • Boutron, I., Moher, D., Altman, D. G., Schulz, K. F., & Ravaud, P., CONSORT Group. (2008). Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: Explanation and elaboration. Annals of Internal Medicine, 148(4), 295–309.
  • Columbia University. (n.d.). Statistics: sample size / power calculation. Biomath (Division of Biomathematics/Biostatistics), Department of Pediatrics. New York: Columbia University Medical Centre. Retrieved from http://www.biomath.info/power/index.htm
  • CONSORT Group. (n.d.). The CONSORT statement. Retrieved from http://www.consort-statement.org/
  • Dowding, D. W., Turley, M., & Garrido, T. (2012). The impact of an electronic health record on nurse sensitive patient outcomes: an interrupted time series analysis. Journal of the American Medical Informatics Association, 19(4), 615–620.
  • Friedman, C. P., & Wyatt, J. C. (2006). Evaluation methods in biomedical informatics (2nd ed.). New York: Springer Science + Business Media.
  • Guyatt, G., Oxman, A. D., Akl, E. A., Kunz, R., Vist, G., Brozek, J., ... Schunemann, H. J. (2011). GRADE guidelines: 1. Introduction – GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology, 64(4), 383–394.
  • Harris, A. D., McGregor, J. C., Perencevich, E. N., Furuno, J. P., Zhu, J., Peterson, D. E., & Finkelstein, J. (2006). The use and interpretation of quasi-experimental studies in medical informatics. Journal of the American Medical Informatics Association, 13(1), 16–23.
  • Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions (Version 5.1.0, updated March 2011). London: The Cochrane Collaboration. Retrieved from http://handbook.cochrane.org/
  • Holbrook, A., Pullenayegum, E., Thabane, L., Troyan, S., Foster, G., Keshavjee, K., ... Curnew, G. (2011). Shared electronic vascular risk decision support in primary care: Computerization of medical practices for the enhancement of therapeutic effectiveness (COMPETE III) randomized trial. Archives of Internal Medicine, 171(19), 1736–1744.
  • Mainous III, A. G., Lambourne, C. A., & Nietert, P. J. (2013). Impact of a clinical decision support system on antibiotic prescribing for acute respiratory infections in primary care: quasi-experimental trial. Journal of the American Medical Informatics Association, 20(2), 317–324.
  • Noordzij, M., Tripepi, G., Dekker, F. W., Zoccali, C., Tanck, M. W., & Jager, K. J. (2010). Sample size calculations: basic principles and common pitfalls. Nephrology Dialysis Transplantation, 25(5), 1388–1393.
  • Vervloet, M., Linn, A. J., van Weert, J. C. M., de Bakker, D. H., Bouvy, M. L., & van Dijk, L. (2012). The effectiveness of interventions using electronic reminders to improve adherence to chronic medication: A systematic review of the literature. Journal of the American Medical Informatics Association, 19(5), 696–704.
  • Zwarenstein, M., & Treweek, S. (2009). What kind of randomized trials do we need? Canadian Medical Association Journal, 180(10), 998–1000.
  • Zwarenstein, M., Treweek, S., Gagnier, J. J., Altman, D. G., Tunis, S., Haynes, B., Oxman, A. D., & Moher, D., for the CONSORT and Pragmatic Trials in Healthcare (Practihc) groups. (2008). Improving the reporting of pragmatic trials: an extension of the CONSORT statement. British Medical Journal, 337, a2390.

Appendix. Example of Sample Size Calculation

This is an example of a sample size calculation for an RCT that examines the effect of a CDS system on reducing systolic blood pressure in hypertensive patients. The case is adapted from the example described in the publication by Noordzij et al. (2010).

(a) Systolic blood pressure as a continuous outcome measured in mmHg

Based on similar studies in the literature with similar patients, the systolic blood pressure values from the comparison groups are expected to be normally distributed with a standard deviation of 20 mmHg. The evaluator wishes to detect a clinically relevant difference of 15 mmHg in systolic blood pressure between the intervention group with CDS and the control group without CDS. Assuming a significance level (alpha) of 0.05 for a two-tailed t-test and power of 0.80, the corresponding multipliers¹ are 1.96 and 0.842, respectively. Using the sample size equation for a continuous outcome below, we can calculate the sample size needed for the above study.

n = 2[(a + b)²σ²] / (μ₁ − μ₂)²  where

n = sample size for each group

μ₁ = population mean of systolic blood pressure in the intervention group

μ₂ = population mean of systolic blood pressure in the control group

μ₁ − μ₂ = desired difference in mean systolic blood pressure between the groups

σ = population standard deviation (whose square, σ², is the population variance)

a = multiplier for the significance level (alpha)

b = multiplier for power (1 − beta)

Plugging the values into the equation gives a sample size (n) of 28 per group:

n = 2[(1.96 + 0.842)² × 20²] / 15² ≈ 28 samples per group
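The same calculation can be scripted directly from the formula; the multipliers come from the standard normal quantile function (a sketch mirroring the numbers above):

```python
import math
from scipy.stats import norm

# Multipliers for alpha = 0.05 (two-tailed) and power = 0.80.
a = norm.ppf(1 - 0.05 / 2)   # 1.96
b = norm.ppf(0.80)           # 0.842

sd = 20.0      # expected standard deviation, mmHg
diff = 15.0    # clinically relevant difference, mmHg

n = 2 * ((a + b) ** 2 * sd ** 2) / diff ** 2
print(math.ceil(n))  # 28 per group
```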

(b) Systolic blood pressure as a categorical outcome measured as below or above 140 mmHg (i.e., hypertension yes/no)

In this example, a systolic blood pressure above 140 mmHg is considered an event, i.e., a patient with hypertension. Based on the published literature, the proportion of patients in the general population with hypertension is 30%. The evaluator wishes to detect a clinically relevant difference of 10% in this outcome between the intervention group with CDS and the control group without CDS. This means the expected proportion of patients with hypertension is 20% (p₁ = 0.2) in the intervention group and 30% (p₂ = 0.3) in the control group. Assuming a significance level (alpha) of 0.05 for a two-tailed test and power of 0.80, the corresponding multipliers are 1.96 and 0.842, respectively. Using the sample size equation for a categorical outcome below, we can calculate the sample size needed for the above study.

n = [(a + b)²(p₁q₁ + p₂q₂)] / x²  where

p₁ = proportion of patients with hypertension in the intervention group

q₁ = proportion of patients without hypertension in the intervention group (1 − p₁)

p₂ = proportion of patients with hypertension in the control group

q₂ = proportion of patients without hypertension in the control group (1 − p₂)

x = desired difference in the proportion of hypertensive patients between the two groups

Plugging the values into the equation gives a sample size (n) of 291 per group:

n = [(1.96 + 0.842)² × ((0.2)(0.8) + (0.3)(0.7))] / (0.1)² ≈ 291 samples per group
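And the categorical version, scripted the same way (a sketch mirroring the numbers above):

```python
import math
from scipy.stats import norm

a = norm.ppf(1 - 0.05 / 2)   # 1.96
b = norm.ppf(0.80)           # 0.842

p1, p2 = 0.2, 0.3            # expected proportions with hypertension
q1, q2 = 1 - p1, 1 - p2
x = p1 - p2                  # desired difference in proportions

n = (a + b) ** 2 * (p1 * q1 + p2 * q2) / x ** 2
print(math.ceil(n))  # 291 per group
```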

¹ From Table 3 on p. 1392 of Noordzij et al. (2010).

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/


Characteristics of a Comparative Research Design

Hannah Richardson, 28 Jun 2018

Comparative research essentially compares two groups in an attempt to draw conclusions about them. Researchers attempt to identify and analyze similarities and differences between groups, and these studies are most often cross-national, comparing two separate people groups. Comparative studies can be used to increase understanding between cultures and societies and create a foundation for compromise and collaboration. These studies employ both quantitative and qualitative research methods.


1 Comparative Quantitative

Quantitative, or experimental, research is characterized by the manipulation of an independent variable to measure and explain its influence on a dependent variable. Because comparative research studies analyze two different groups -- which may have very different social contexts -- it is difficult to establish the parameters of research. Such studies might seek to compare, for example, large amounts of demographic or employment data from different nations that define or measure relevant research elements differently.

However, the methods for statistical analysis of data inherent in quantitative research are still helpful in establishing correlations in comparative studies. Also, the need for a specific research question in quantitative research helps comparative researchers narrow down and establish a more specific comparative research question.

2 Comparative Qualitative

Qualitative, or nonexperimental, research is characterized by observation and the recording of outcomes without manipulation. In comparative research, data are collected primarily by observation, and the goal is to determine similarities and differences that are related to the particular situation or environment of the two groups. These similarities and differences are identified through qualitative observation methods. Additionally, some researchers have favored designing comparative studies around a variety of case studies in which individuals are observed and behaviors are recorded. The results of each case are then compared across people groups.

3 When to Use It

Comparative research studies should be used when comparing two people groups, often cross-nationally. These studies analyze the similarities and differences between these two groups in an attempt to better understand both groups. Comparisons lead to new insights and better understanding of all participants involved. These studies also require collaboration, strong teams, advanced technologies and access to international databases, making them more expensive. Use comparative research design when the necessary funding and resources are available.

4 When Not to Use It

Do not use comparative research design with little funding, limited access to necessary technology and few team members. Because of the larger scale of these studies, they should be conducted only if adequate population samples are available. Additionally, data within these studies require extensive measurement analysis; if the necessary organizational and technological resources are not available, a comparative study should not be used. Do not use a comparative design if data are not able to be measured accurately and analyzed with fidelity and validity.



Internationally Comparative Research Designs in the Social Sciences: Fundamental Issues, Case Selection Logics, and Research Limitations

Achim Goerres, Markus B. Siewert and Claudius Wagemann

Published: 29 April 2019. Volume 71, pages 75–97 (2019).

Abstract

This paper synthesizes methodological knowledge derived from comparative survey research and comparative politics and aims to enable researchers to make prudent research decisions. Starting from the data structures that can occur in international comparisons at different levels, it suggests basic definitions for cases and contexts, i.e. the main ingredients of international comparison. The paper then goes on to discuss the full variety of case selection strategies in order to highlight their relative advantages and disadvantages. Finally, it presents the limitations of internationally comparative social science research. Overall, the paper suggests that comparative research designs must be crafted cautiously, with careful regard to a variety of issues, and emphasizes the idea that there can be no one-size-fits-all solution.



One could argue that there are no N = 1 studies at all, and that every case study is “comparative”. The rationale for such a view is that it is hard to imagine a case study conducted without any reference to other cases, including theoretically possible (but factually nonexistent) ideal cases, paradigmatic cases, counterfactual cases, etc.

This exposition might suggest that only the combinations of “most independent variables vary and the outcome is similar between cases” and “most independent variables are similar and the outcome differs between cases” are possible. Ragin’s (1987, 2000, 2008) proposal of QCA (see also Schneider and Wagemann 2012) however shows that diversity (Ragin 2008, p. 19) can also lie on both sides. Only those designs in which nothing varies, i.e. where the cases are similar and also have similar outcomes, do not seem to be very analytically interesting.

Beach, Derek, and Rasmus Brun Pedersen. 2016a. Causal case study methods: foundations and guidelines for comparing, matching, and tracing. Ann Arbor, MI: University of Michigan Press.

Beach, Derek, and Rasmus Brun Pedersen. 2016b. Selecting appropriate cases when tracing causal mechanisms. Sociological Methods & Research, online first (January). https://doi.org/10.1177/0049124115622510.

Beach, Derek, and Rasmus Brun Pedersen. 2019. Process-tracing methods: Foundations and guidelines. 2nd ed. Ann Arbor: University of Michigan Press.

Behnke, Joachim. 2005. Lassen sich Signifikanztests auf Vollerhebungen anwenden? Einige essayistische Anmerkungen. (Can significance tests be applied to full population surveys? Some essayistic remarks.) Politische Vierteljahresschrift 46:1–15. https://doi.org/10.1007/s11615-005-0240-y.

Bennett, Andrew, and Jeffrey T. Checkel. 2015. Process tracing: From philosophical roots to best practices. In Process tracing. From metaphor to analytic tool, eds. Andrew Bennett and Jeffrey T. Checkel, 3–37. Cambridge: Cambridge University Press.

Bennett, Andrew, and Colin Elman. 2006. Qualitative research: Recent developments in case study methods. Annual Review of Political Science 9:455–76. https://doi.org/10.1146/annurev.polisci.8.082103.104918 .

Berg-Schlosser, Dirk. 2012. Mixed methods in comparative politics: Principles and applications. Basingstoke: Palgrave Macmillan.

Berg-Schlosser, Dirk, and Gisèle De Meur. 2009. Comparative research design: Case and variable selection. In Configurational comparative methods: Qualitative comparative analysis, 19–32. Thousand Oaks: SAGE Publications, Inc.

Berk, Richard A., Bruce Western and Robert E. Weiss. 1995. Statistical inference for apparent populations. Sociological Methodology 25:421–458.

Blatter, Joachim, and Markus Haverland. 2012. Designing case studies: Explanatory approaches in small-n research . Basingstoke: Palgrave Macmillan.

Brady, Henry E., and David Collier. Eds. 2004. Rethinking social inquiry: Diverse tools, shared standards. 1st ed. Lanham, Md: Rowman & Littlefield Publishers.

Brady, Henry E., and David Collier. Eds. 2010. Rethinking social inquiry: Diverse tools, shared standards. 2nd ed. Lanham, Md: Rowman & Littlefield Publishers.

Broscheid, Andreas, and Thomas Gschwend. 2005. Zur statistischen Analyse von Vollerhebungen. (On the statistical analysis of full population surveys.) Politische Vierteljahresschrift 46:16–26. https://doi.org/10.1007/s11615-005-0241-x.

Caporaso, James A., and Alan L. Pelowski. 1971. Economic and Political Integration in Europe: A Time-Series Quasi-Experimental Analysis. American Political Science Review 65(2):418–433.

Coleman, James S. 1990. Foundations of social theory. Cambridge: The Belknap Press of Harvard University Press.

Collier, David. 2014. Symposium: The set-theoretic comparative method—critical assessment and the search for alternatives. SSRN Scholarly Paper ID 2463329. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2463329 .

Collier, David, and Robert Adcock. 1999. Democracy and dichotomies: A pragmatic approach to choices about concepts. Annual Review of Political Science 2:537–565.

Collier, David, and James Mahoney. 1996. Insights and pitfalls: Selection bias in qualitative research. World Politics 49:56–91. https://doi.org/10.1353/wp.1996.0023 .

Collier, David, Jason Seawright and Gerardo L. Munck. 2010. The quest for standards: King, Keohane, and Verba’s designing social inquiry. In Rethinking social inquiry. Diverse tools, shared standards, eds. Henry E. Brady and David Collier, 2nd edition, 33–64. Lanham: Rowman & Littlefield Publishers.

Dahl, Robert A. Ed. 1966. Political opposition in western democracies. New Haven: Yale University Press.

Dion, Douglas. 2003. Evidence and inference in the comparative case study. In Necessary conditions: Theory, methodology, and applications, eds. Gary Goertz and Harvey Starr, 127–45. Lanham, Md: Rowman & Littlefield Publishers.

Eckstein, Harry. 1975. Case study and theory in political science. In Handbook of political science, eds. Fred I. Greenstein and Nelson W. Polsby, 79–137. Reading: Addison-Wesley.

Eijk, Cees van der, and Mark N. Franklin. 1996. Choosing Europe? The European electorate and national politics in the face of union. Ann Arbor: The University of Michigan Press.

Fearon, James D., and David D. Laitin. 2008. Integrating qualitative and quantitative methods. In The Oxford handbook of political methodology , eds. Janet M. Box-Steffensmeier, Henry E. Brady and David Collier. Oxford; New York: Oxford University Press.

Franklin, James C. 2008. Shame on you: The impact of human rights criticism on political repression in Latin America. International Studies Quarterly 52:187–211. https://doi.org/10.1111/j.1468-2478.2007.00496.x .

Galiani, Sebastian, Stephen Knack, Lixin Colin Xu and Ben Zou. 2017. The effect of aid on growth: Evidence from a quasi-experiment. Journal of Economic Growth 22:1–33. https://doi.org/10.1007/s10887-016-9137-4 .

Ganghof, Steffen. 2005. Vergleichen in Qualitativer und Quantitativer Politikwissenschaft: X‑Zentrierte Versus Y‑Zentrierte Forschungsstrategien. (Comparison in qualitative and quantitative political science. X‑centered v. Y‑centered research strategies) In Vergleichen in Der Politikwissenschaft, eds. Sabine Kropp and Michael Minkenberg, 76–93. Wiesbaden: VS Verlag.

Geddes, Barbara. 1990. How the cases you choose affect the answers you get: Selection bias in comparative politics. Political Analysis 2:131–150.

George, Alexander L., and Andrew Bennett. 2005. Case studies and theory development in the social sciences. Cambridge, Mass: The MIT Press.

Gerring, John. 2007. Case study research: Principles and practices. Cambridge; New York: Cambridge University Press.

Goerres, Achim, and Markus Tepe. 2010. Age-based self-interest, intergenerational solidarity and the welfare state: A comparative analysis of older people’s attitudes towards public childcare in 12 OECD countries. European Journal of Political Research 49:818–51. https://doi.org/10.1111/j.1475-6765.2010.01920.x .

Goertz, Gary. 2006. Social science concepts: A user’s guide. Princeton; Oxford: Princeton University Press.

Goertz, Gary. 2017. Multimethod research, causal mechanisms, and case studies: An integrated approach. Princeton, NJ: Princeton University Press.

Goertz, Gary, and James Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton, N.J: Princeton University Press.

Goldthorpe, John H. 1997. Current issues in comparative macrosociology: A debate on methodological issues. Comparative Social Research 16:1–26.

Jahn, Detlef. 2006. Globalization as “Galton’s problem”: The missing link in the analysis of diffusion patterns in welfare state development. International Organization 60. https://doi.org/10.1017/S0020818306060127 .

King, Gary, Robert O. Keohane and Sidney Verba. 1994. Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.

Kittel, Bernhard. 2006. A crazy methodology?: On the limits of macro-quantitative social science research. International Sociology 21:647–77. https://doi.org/10.1177/0268580906067835 .

Lazarsfeld, Paul. 1937. Some remarks on typological procedures in social research. Zeitschrift für Sozialforschung 6:119–39.

Lieberman, Evan S. 2005. Nested analysis as a mixed-method strategy for comparative research. American Political Science Review 99:435–52. https://doi.org/10.1017/S0003055405051762 .

Lijphart, Arend. 1971. Comparative politics and the comparative method. American Political Science Review 65:682–93. https://doi.org/10.2307/1955513.

Lundsgaarde, Erik, Christian Breunig and Aseem Prakash. 2010. Instrumental philanthropy: Trade and the allocation of foreign aid. Canadian Journal of Political Science 43:733–61.

Maggetti, Martino, Claudio Radaelli and Fabrizio Gilardi. 2013. Designing research in the social sciences. Thousand Oaks: SAGE.

Mahoney, James. 2003. Strategies of causal assessment in comparative historical analysis. In Comparative historical analysis in the social sciences , eds. Dietrich Rueschemeyer and James Mahoney, 337–72. Cambridge; New York: Cambridge University Press.

Mahoney, James. 2010. After KKV: The new methodology of qualitative research. World Politics 62:120–47. https://doi.org/10.1017/S0043887109990220 .

Mahoney, James, and Gary Goertz. 2004. The possibility principle: Choosing negative cases in comparative research. The American Political Science Review 98:653–69.

Mahoney, James, and Gary Goertz. 2006. A tale of two cultures: Contrasting quantitative and qualitative research. Political Analysis 14:227–49. https://doi.org/10.1093/pan/mpj017 .

Marks, Gary, Liesbet Hooghe, Moira Nelson and Erica Edwards. 2006. Party competition and European integration in the east and west. Comparative Political Studies 39:155–75. https://doi.org/10.1177/0010414005281932 .

Merton, Robert. 1957. Social theory and social structure. New York: Free Press.

Merz, Nicolas, Sven Regel and Jirka Lewandowski. 2016. The manifesto corpus: A new resource for research on political parties and quantitative text analysis. Research & Politics 3:205316801664334. https://doi.org/10.1177/2053168016643346 .

Michels, Robert. 1962. Political parties: A sociological study of the oligarchical tendencies of modern democracy. New York: Collier Books.

Nielsen, Richard A. 2016. Case selection via matching. Sociological Methods & Research 45:569–97. https://doi.org/10.1177/0049124114547054 .

Porta, Donatella della, and Michael Keating. 2008. How many approaches in the social sciences? An epistemological introduction. In Approaches and methodologies in the social sciences. A pluralist perspective, eds. Donatella della Porta and Michael Keating, 19–39. Cambridge; New York: Cambridge University Press.

Powell, G. Bingham, Russell J. Dalton and Kaare Strom. 2014. Comparative politics today: A world view. 11th ed. Boston: Pearson Educ.

Przeworski, Adam, and Henry J. Teune. 1970. The logic of comparative social inquiry. New York: John Wiley & Sons Inc.

Ragin, Charles C. 1987. The comparative method: Moving beyond qualitative and quantitative strategies. Berkley: University of California Press.

Ragin, Charles C. 2000. Fuzzy-set social science. Chicago: University of Chicago Press.

Ragin, Charles C. 2004. Turning the tables: How case-oriented research challenges variable-oriented research. In Rethinking social inquiry: Diverse tools, shared standards, eds. Henry E. Brady and David Collier, 123–38. Lanham, Md: Rowman & Littlefield Publishers.

Ragin, Charles C. 2008. Redesigning social inquiry: Fuzzy sets and beyond. Chicago: University of Chicago Press.

Ragin, Charles C., and Howard S. Becker. 1992. What is a case?: Exploring the foundations of social inquiry. Cambridge University Press.

Rohlfing, Ingo. 2012. Case studies and causal inference: An integrative framework. Basingstoke: Palgrave Macmillan.

Rohlfing, Ingo, and Carsten Q. Schneider. 2013. Improving research on necessary conditions: Formalized case selection for process tracing after QCA. Political Research Quarterly 66:220–35.

Rohlfing, Ingo, and Carsten Q. Schneider. 2016. A unifying framework for causal analysis in set-theoretic multimethod research. Sociological Methods & Research, online first (March). https://doi.org/10.1177/0049124115626170 .

Rueschemeyer, Dietrich. 2003. Can one or a few cases yield theoretical gains? In Comparative historical analysis in the social sciences , eds. Dietrich Rueschemeyer and James Mahoney, 305–36. Cambridge; New York: Cambridge University Press.

Sartori, Giovanni. 1970. Concept misformation in comparative politics. American Political Science Review 64:1033–53. https://doi.org/10.2307/1958356 .

Schmitter, Philippe C. 2008. The design of social and political research. Chinese Political Science Review. https://doi.org/10.1007/s41111-016-0044-9.

Schneider, Carsten Q., and Ingo Rohlfing. 2016. Case studies nested in fuzzy-set QCA on sufficiency: Formalizing case selection and causal inference. Sociological Methods & Research 45:526–68. https://doi.org/10.1177/0049124114532446 .

Schneider, Carsten Q., and Claudius Wagemann. 2012. Set-theoretic methods for the social sciences: A guide to qualitative comparative analysis. Cambridge: Cambridge University Press.

Seawright, Jason, and David Collier. 2010. Glossary. In Rethinking social inquiry. Diverse tools, shared standards, eds. Henry E. Brady and David Collier, 2nd ed., 313–60. Lanham, Md: Rowman & Littlefield Publishers.

Seawright, Jason, and John Gerring. 2008. Case selection techniques in case study research, a menu of qualitative and quantitative options. Political Research Quarterly 61:294–308.

Shapiro, Ian. 2002. Problems, methods, and theories in the study of politics, or what’s wrong with political science and what to do about it. Political Theory 30:588–611.

Simmons, Beth A., and Zachary Elkins. 2004. The globalization of liberalization: Policy diffusion in the international political economy. American Political Science Review 98:171–89. https://doi.org/10.1017/S0003055404001078 .

Skocpol, Theda, and Margaret Somers. 1980. The uses of comparative history in macrosocial inquiry. Comparative Studies in Society and History 22:174–97.

Snyder, Richard. 2001. Scaling down: The subnational comparative method. Studies in Comparative International Development 36:93–110. https://doi.org/10.1007/BF02687586 .

Steenbergen, Marco, and Bradford S. Jones. 2002. Modeling multilevel data structures. American Journal of Political Science 46:218–37.

Wagemann, Claudius, Achim Goerres and Markus Siewert. Eds. 2019. Handbuch Methoden der Politikwissenschaft. Wiesbaden: Springer. Available online at https://link.springer.com/referencework/10.1007/978-3-658-16937-4.

Weisskopf, Thomas E. 1975. China and India: Contrasting Experiences in Economic Development. The American Economic Review 65:356–364.

Weller, Nicholas, and Jeb Barnes. 2014. Finding pathways: Mixed-method research for studying causal mechanisms. Cambridge: Cambridge University Press.

Wright Mills, C. 1959. The sociological imagination. Oxford: Oxford University Press.

Acknowledgements

Equal authors listed in alphabetical order. We would like to thank Ingo Rohlfing, Anne-Kathrin Fischer, Heiner Meulemann and Hans-Jürgen Andreß for their detailed feedback, and all the participants of the book workshop for their further comments. We are grateful to Jonas Elis for his linguistic suggestions.

Author information

Authors and Affiliations

Fakultät für Gesellschaftswissenschaften, Institut für Politikwissenschaft, Universität Duisburg-Essen, Lotharstr. 65, 47057, Duisburg, Germany

Achim Goerres

Fachbereich Gesellschaftswissenschaften, Institut für Politikwissenschaft, Goethe-Universität Frankfurt, Theodor-W.-Adorno Platz 6, 60323, Frankfurt am Main, Germany

Markus B. Siewert & Claudius Wagemann

Corresponding author

Correspondence to Achim Goerres.

About this article

Goerres, A., Siewert, M.B. & Wagemann, C. Internationally Comparative Research Designs in the Social Sciences: Fundamental Issues, Case Selection Logics, and Research Limitations. Köln Z Soziol 71 (Suppl 1), 75–97 (2019). https://doi.org/10.1007/s11577-019-00600-2


Keywords

  • International comparison
  • Comparative designs
  • Quantitative and qualitative comparisons
  • Case selection


CSDI Workshop


We are pleased to announce that the SHARE BERLIN Institute GmbH will host the 2024 CSDI Workshop. The workshop will take place March 18–20 at the institute in Berlin, Germany.

Call for Individual Abstracts 

Below is a list of suggested topics. If your topic is not listed, please feel free to submit a session abstract for any topic that relates to comparative survey design and implementation.

The individual abstracts are due January 5, 2024.

  • Equivalency measures
  • Achieving comparability
  • Questionnaire development and testing
  • Translation, adaptation, and assessment
  • Minimizing measurement error
  • Interviewer effects
  • Sampling innovations
  • Data collection challenges and solutions
  • Quality control
  • Innovative uses of technology
  • Paradata use across the lifecycle
  • Metadata use
  • Comparative standard demographics
  • Data curation and dissemination
  • Comparative analyses
  • Computational comparability measures

To submit your abstract (up to 300 words), please use this link: Submit Individual Abstract.

For your convenience, here is a list of important dates: 

  • January 5, 2024 – Individual abstracts due (session organizers are responsible for ensuring their session participants submit individual abstracts) 
  • January 15, 2024 – Abstract notification sent, and online registration opens
  • March 1, 2024 – Online registration closes
  • March 18-20, 2024 – CSDI Workshop

Updating CSDI Database

The Comparative Survey Design and Implementation (CSDI) group is doing a little housekeeping and we are in the process of updating contact information for people who have attended past events (e.g., CSDI Workshops or 3MC Conferences) or who have asked to be added to our email list to receive announcements and updates.

If you would like to continue to receive emails from CSDI, please take a minute (I promise it will only take a minute!) to update your contact information using this form. Please complete the form by December 1st to ensure you do not miss out on future announcements and updates.

Advances in Comparative Survey Methods: Multinational, Multiregional and Multicultural Contexts (3MC)

Since the publication of the last 3MC monograph (2010), there have been substantial methodological, operational, and technical advances in comparative survey research. There are also whole new areas of methodological development, including the collection of biomarkers, the human-subject regulatory environment, innovations in data collection methodology and sampling techniques, the use of paradata across the survey lifecycle, metadata standards for dissemination, and new analytical techniques. This new monograph follows the survey lifecycle and includes chapters on study design and considerations, sampling, questionnaire design (assuming multi-language surveys), translation, mixed mode, the regulatory environment, data collection, quality assurance and control, analysis techniques, and data documentation and dissemination. The table of contents can be accessed here.

Johnson, T. P., Pennell, B.-E., Stoop, I., & Dorer, B. (Eds.). (2018). Advances in comparative survey methods: Multinational, multiregional and multicultural contexts (3MC). Hoboken, New Jersey: John Wiley & Sons Inc.

Survey Methods in Multicultural, Multinational, and Multiregional Contexts

Harkness, J. A., Braun, M., Edwards, B., Johnson, T. P., & Lyberg, L. E. (2010). Survey methods in multicultural, multinational, and multiregional contexts. Hoboken, New Jersey: John Wiley & Sons.

New and Expanded Cross-Cultural Survey Guidelines

First published in 2008, the Cross-Cultural Survey Guidelines have recently undergone a significant update and expansion (Beta release: July 2016). The new edition includes over 800 pages of content with major updates and the expansion of all existing chapters, as well as the addition of new chapters on study design, study management, paradata, and statistical analysis. More than 70 professionals from 35 organizations contributed to this effort. The senior editor was Tom W. Smith of NORC at the University of Chicago. See http://ccsg.isr.umich.edu/index.php/about-us/contributions for a complete list of contributors.

The Cross-Cultural Survey Guidelines were developed to provide information on best practices across the survey lifecycle in a world in which the number and scope of studies covering multiple cultures, languages, nations, or regions has increased significantly. They were the product of an initiative of the International Workshop on Comparative Survey Design and Implementation (http://www.csdiworkshop.org/), led by Beth-Ellen Pennell, currently the director of international survey operations at the Survey Research Center, Institute for Social Research at the University of Michigan. The aim of the initiative was to develop and promote internationally recognized guidelines that highlight best practice for the conduct of comparative survey research across cultures and countries.

The guidelines address the gap in the existing literature on the details of implementing surveys that are specifically designed for comparative research, including which aspects should be standardized and when local adaptation is appropriate. The intended audience includes researchers and survey practitioners planning or engaged in what are increasingly referred to as multinational, multiregional, or multicultural (3MC) surveys, although much of the material is also relevant for single-country surveys.

The guidelines cover all aspects of the survey life cycle and include the following chapters: Study Design and Organizational Structure; Study Management; Tenders, Bids and Contracts; Sample Design; Questionnaire Design; Adaptation; Translation; Instrument Technical Design; Interviewer Recruitment, Selection, and Training; Pretesting; Data Collection; Paradata and Other Auxiliary Data; Data Harmonization; Data Processing and Statistical Adjustment; Data Dissemination; Survey Quality and Ethical Considerations. The guidelines can be found at http://ccsg.isr.umich.edu. We welcome feedback and suggestions.

Janet A. Harkness Student Paper Award

http://wapor.org/janet-a-harkness-student-paper-award


Descriptive Research Design – Types, Methods and Examples


Definition:

Descriptive research design is a type of research methodology that aims to describe or document the characteristics, behaviors, attitudes, opinions, or perceptions of a group or population being studied.

Descriptive research design does not attempt to establish cause-and-effect relationships between variables or make predictions about future outcomes. Instead, it focuses on providing a detailed and accurate representation of the data collected, which can be useful for generating hypotheses, exploring trends, and identifying patterns in the data.

Types of Descriptive Research Design

Types of Descriptive Research Design are as follows:

Cross-sectional Study

This involves collecting data at a single point in time from a sample or population to describe their characteristics or behaviors. For example, a researcher may conduct a cross-sectional study to investigate the prevalence of certain health conditions among a population, or to describe the attitudes and beliefs of a particular group.
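
For instance, a cross-sectional prevalence estimate is usually reported together with a confidence interval. The sketch below is a minimal illustration with invented numbers, using a simple normal-approximation interval for a proportion:

```python
# Sketch: prevalence estimate from a hypothetical cross-sectional sample.
import math

n, cases = 500, 85                      # invented sample size and case count
p = cases / n                           # point estimate of prevalence
se = math.sqrt(p * (1 - p) / n)         # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se   # approximate 95% confidence interval
print(f"prevalence = {p:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```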

Longitudinal Study

This involves collecting data over an extended period of time, often through repeated observations or surveys of the same group or population. Longitudinal studies can be used to track changes in attitudes, behaviors, or outcomes over time, or to investigate the effects of interventions or treatments.

Case Study

This involves an in-depth examination of a single individual, group, or situation to gain a detailed understanding of its characteristics or dynamics. Case studies are often used in psychology, sociology, and business to explore complex phenomena or to generate hypotheses for further research.

Survey Research

This involves collecting data from a sample or population through standardized questionnaires or interviews. Surveys can be used to describe attitudes, opinions, behaviors, or demographic characteristics of a group, and can be conducted in person, by phone, or online.

Observational Research

This involves observing and documenting the behavior or interactions of individuals or groups in a natural or controlled setting. Observational studies can be used to describe social, cultural, or environmental phenomena, or to investigate the effects of interventions or treatments.

Correlational Research

This involves examining the relationships between two or more variables to describe their patterns or associations. Correlational studies can be used to identify potential causal relationships or to explore the strength and direction of relationships between variables.

Data Analysis Methods

Descriptive research design data analysis methods depend on the type of data collected and the research question being addressed. Here are some common methods of data analysis for descriptive research:

Descriptive Statistics

This method involves analyzing data to summarize and describe the key features of a sample or population. Descriptive statistics can include measures of central tendency (e.g., mean, median, mode) and measures of variability (e.g., range, standard deviation).
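
A minimal sketch of these summary measures, using Python's standard library on an invented age variable:

```python
# Sketch: common descriptive statistics for a hypothetical sample of ages.
import statistics

ages = [23, 35, 35, 42, 51, 28, 35, 60, 47, 31]

print("mean:  ", statistics.mean(ages))             # central tendency
print("median:", statistics.median(ages))
print("mode:  ", statistics.mode(ages))
print("range: ", max(ages) - min(ages))             # variability
print("stdev: ", round(statistics.stdev(ages), 2))  # sample standard deviation
```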

Cross-tabulation

This method involves analyzing data by creating a table that shows the frequency of two or more variables together. Cross-tabulation can help identify patterns or relationships between variables.
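
A small sketch of a cross-tabulation, assuming pandas and invented gender-by-response data:

```python
# Sketch: cross-tabulating two categorical variables with pandas.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "response": ["yes", "no", "yes", "yes", "no", "no", "yes", "no"],
})

# Frequency of each response category by gender.
print(pd.crosstab(df["gender"], df["response"]))
```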

Content Analysis

This method involves analyzing qualitative data (e.g., text, images, audio) to identify themes, patterns, or trends. Content analysis can be used to describe the characteristics of a sample or population, or to identify factors that influence attitudes or behaviors.

Qualitative Coding

This method involves analyzing qualitative data by assigning codes to segments of data based on their meaning or content. Qualitative coding can be used to identify common themes, patterns, or categories within the data.

Visualization

This method involves creating graphs or charts to represent data visually. Visualization can help identify patterns or relationships between variables and make it easier to communicate findings to others.
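
As a sketch, the following plots a histogram of invented test scores with matplotlib; the data and labels are illustrative only:

```python
# Sketch: visualizing a distribution with a histogram.
import matplotlib.pyplot as plt

scores = [55, 62, 68, 70, 71, 74, 75, 78, 80, 83, 85, 90, 94]

plt.hist(scores, bins=5, edgecolor="black")  # histogram of the sample
plt.xlabel("Test score")
plt.ylabel("Number of students")
plt.title("Distribution of scores")
plt.show()
```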

Comparative Analysis

This method involves comparing data across different groups or time periods to identify similarities and differences. Comparative analysis can help describe changes in attitudes or behaviors over time or differences between subgroups within a population.
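
A minimal comparative-analysis sketch with invented data, computing mean outcomes by group and time period with pandas:

```python
# Sketch: comparing a mean outcome across groups and time periods.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "year":  [2020, 2020, 2020, 2020, 2021, 2021, 2021, 2021],
    "score": [3.1, 3.5, 4.0, 4.2, 3.4, 3.8, 4.1, 4.5],
})

# Mean score per group and year; the unstacked table makes between-group
# differences and change over time easy to read.
print(df.groupby(["group", "year"])["score"].mean().unstack("year"))
```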

Applications of Descriptive Research Design

Descriptive research design has numerous applications in various fields. Some of the common applications of descriptive research design are:

  • Market research: Descriptive research design is widely used in market research to understand consumer preferences, behavior, and attitudes. This helps companies to develop new products and services, improve marketing strategies, and increase customer satisfaction.
  • Health research: Descriptive research design is used in health research to describe the prevalence and distribution of a disease or health condition in a population. This helps healthcare providers to develop prevention and treatment strategies.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs. This helps educators to improve teaching methods and develop effective educational programs.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs. This helps researchers to understand social behavior and develop effective policies.
  • Public opinion research: Descriptive research design is used in public opinion research to understand the opinions and attitudes of the general public on various issues. This helps policymakers to develop effective policies that are aligned with public opinion.
  • Environmental research: Descriptive research design is used in environmental research to describe the environmental conditions of a particular region or ecosystem. This helps policymakers and environmentalists to develop effective conservation and preservation strategies.

Descriptive Research Design Examples

Here are some real-time examples of descriptive research designs:

  • A restaurant chain wants to understand the demographics and attitudes of its customers. They conduct a survey asking customers about their age, gender, income, frequency of visits, favorite menu items, and overall satisfaction. The survey data is analyzed using descriptive statistics and cross-tabulation to describe the characteristics of their customer base.
  • A medical researcher wants to describe the prevalence and risk factors of a particular disease in a population. They conduct a cross-sectional study in which they collect data from a sample of individuals using a standardized questionnaire. The data is analyzed using descriptive statistics and cross-tabulation to identify patterns in the prevalence and risk factors of the disease.
  • An education researcher wants to describe the learning outcomes of students in a particular school district. They collect test scores from a representative sample of students in the district and use descriptive statistics to calculate the mean, median, and standard deviation of the scores. They also create visualizations such as histograms and box plots to show the distribution of scores.
  • A marketing team wants to understand the attitudes and behaviors of consumers towards a new product. They conduct a series of focus groups and use qualitative coding to identify common themes and patterns in the data. They also create visualizations such as word clouds to show the most frequently mentioned topics.
  • An environmental scientist wants to describe the biodiversity of a particular ecosystem. They conduct an observational study in which they collect data on the species and abundance of plants and animals in the ecosystem. The data is analyzed using descriptive statistics to describe the diversity and richness of the ecosystem.

How to Conduct Descriptive Research Design

To conduct a descriptive research design, you can follow these general steps:

  • Define your research question: Clearly define the research question or problem that you want to address. Your research question should be specific and focused to guide your data collection and analysis.
  • Choose your research method: Select the most appropriate research method for your research question. As discussed earlier, common research methods for descriptive research include surveys, case studies, observational studies, cross-sectional studies, and longitudinal studies.
  • Design your study: Plan the details of your study, including the sampling strategy, data collection methods, and data analysis plan. Determine the sample size and sampling method, decide on the data collection tools (such as questionnaires, interviews, or observations), and outline your data analysis plan.
  • Collect data: Collect data from your sample or population using the data collection tools you have chosen. Ensure that you follow ethical guidelines for research and obtain informed consent from participants.
  • Analyze data: Use appropriate statistical or qualitative analysis methods to analyze your data. As discussed earlier, common data analysis methods for descriptive research include descriptive statistics, cross-tabulation, content analysis, qualitative coding, visualization, and comparative analysis.
  • Interpret results: Interpret your findings in light of your research question and objectives. Identify patterns, trends, and relationships in the data, and describe the characteristics of your sample or population.
  • Draw conclusions and report results: Draw conclusions based on your analysis and interpretation of the data. Report your results in a clear and concise manner, using appropriate tables, graphs, or figures to present your findings. Ensure that your report follows accepted research standards and guidelines.

When to Use Descriptive Research Design

Descriptive research design is used in situations where the researcher wants to describe a population or phenomenon in detail. It is used to gather information about the current status or condition of a group or phenomenon without making any causal inferences. Descriptive research design is useful in the following situations:

  • Exploratory research: Descriptive research design is often used in exploratory research to gain an initial understanding of a phenomenon or population.
  • Identifying trends: Descriptive research design can be used to identify trends or patterns in a population, such as changes in consumer behavior or attitudes over time.
  • Market research: Descriptive research design is commonly used in market research to understand consumer preferences, behavior, and attitudes.
  • Health research: Descriptive research design is useful in health research to describe the prevalence and distribution of a disease or health condition in a population.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs.

Purpose of Descriptive Research Design

The main purpose of descriptive research design is to describe and measure the characteristics of a population or phenomenon in a systematic and objective manner. It involves collecting data that describe the current status or condition of the population or phenomenon of interest, without manipulating or altering any variables.

The purpose of descriptive research design can be summarized as follows:

  • To provide an accurate description of a population or phenomenon: Descriptive research design aims to provide a comprehensive and accurate description of a population or phenomenon of interest. This can help researchers to develop a better understanding of the characteristics of the population or phenomenon.
  • To identify trends and patterns: Descriptive research design can help researchers to identify trends and patterns in the data, such as changes in behavior or attitudes over time. This can be useful for making predictions and developing strategies.
  • To generate hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • To establish a baseline: Descriptive research design can establish a baseline or starting point for future research. This can be useful for comparing data from different time periods or populations.

Characteristics of Descriptive Research Design

Descriptive research design has several key characteristics that distinguish it from other research designs. Some of the main characteristics of descriptive research design are:

  • Objective: Descriptive research design is objective in nature, which means that it focuses on collecting factual and accurate data without any personal bias. The researcher aims to report the data objectively without any personal interpretation.
  • Non-experimental: Descriptive research design is non-experimental, which means that the researcher does not manipulate any variables. The researcher simply observes and records the behavior or characteristics of the population or phenomenon of interest.
  • Quantitative: Descriptive research design is typically quantitative in nature, which means that it involves collecting numerical data that can be analyzed using statistical techniques. This helps to provide a more precise and accurate description of the population or phenomenon.
  • Cross-sectional: Descriptive research design is often cross-sectional, which means that the data is collected at a single point in time. This can be useful for understanding the current state of the population or phenomenon, but it may not provide information about changes over time.
  • Large sample size: Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Systematic and structured: Descriptive research design involves a systematic and structured approach to data collection, which helps to ensure that the data is accurate and reliable. This involves using standardized procedures for data collection, such as surveys, questionnaires, or observation checklists.

Advantages of Descriptive Research Design

Descriptive research design has several advantages that make it a popular choice for researchers. Some of the main advantages of descriptive research design are:

  • Provides an accurate description: Descriptive research design is focused on accurately describing the characteristics of a population or phenomenon. This can help researchers to develop a better understanding of the subject of interest.
  • Easy to conduct: Descriptive research design is relatively easy to conduct and requires minimal resources compared to other research designs. It can be conducted quickly and efficiently, and data can be collected through surveys, questionnaires, or observations.
  • Useful for generating hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • Large sample size: Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Can be used to monitor changes: Descriptive research design can be used to monitor changes over time in a population or phenomenon. This can be useful for identifying trends and patterns, and for making predictions about future behavior or attitudes.
  • Can be used in a variety of fields: Descriptive research design can be used in a variety of fields, including social sciences, healthcare, business, and education.

Limitation of Descriptive Research Design

Descriptive research design also has some limitations that researchers should consider before using this design. Some of the main limitations of descriptive research design are:

  • Cannot establish cause and effect: Descriptive research design cannot establish cause and effect relationships between variables. It only provides a description of the characteristics of the population or phenomenon of interest.
  • Limited generalizability: The results of a descriptive study may not be generalizable to other populations or situations. This is because descriptive research design often involves a specific sample or situation, which may not be representative of the broader population.
  • Potential for bias: Descriptive research design can be subject to bias, particularly if the researcher is not objective in their data collection or interpretation. This can lead to inaccurate or incomplete descriptions of the population or phenomenon of interest.
  • Limited depth: Descriptive research design may provide a superficial description of the population or phenomenon of interest. It does not delve into the underlying causes or mechanisms behind the observed behavior or characteristics.
  • Limited utility for theory development: Descriptive research design may not be useful for developing theories about the relationship between variables. It only provides a description of the variables themselves.
  • Relies on self-report data: Descriptive research design often relies on self-report data, such as surveys or questionnaires. This type of data may be subject to biases, such as social desirability bias or recall bias.


