Testing for Measurement Invariance with Many Groups

Chapter 2: Comparative survey research

Cross-national and cross-cultural comparative surveys are a key resource for the social sciences. According to the Overview of Comparative Surveys Worldwide, more than 90 cross-national comparative surveys have been conducted around the world since 1948.

Although surveys can serve different purposes, they generally aim to estimate population means, totals, or distributions, or relationships between variables. A comparative survey additionally aims to compare these levels or relationships across groups (national or otherwise).

Figure 2.1: Comparative percentages by country regarding immigration tolerance, from the European Social Survey Round 7. Source: [Dimiter Toshkov 2020](https://dimiter.eu/Visualizations_files/ESS/Visualizing_ESS_data.html#saving_the_visualization)

Figure 2.1 shows a rather common application of a comparative survey: the groups, in this case European countries, are compared on the percentage of respondents choosing each answer to the question about allowing more immigrants.

However, what we see in this graph is only the final abstraction of a very long process that surveys, and most particularly cross-national surveys, must go through. This process is known as the survey lifecycle and runs from design to dissemination.

Figure 2.2: Survey Lifecycle. Source: [Cross Cultural Survey Guidelines](https://ccsg.isr.umich.edu/chapters)

2.1 Survey Error

Survey error is any error arising from the survey process that contributes to the deviation of an estimate from its true parameter value (Biemer 2016).

Regardless of how much we try to prevent them, survey errors in one form or another will always occur, and they may affect both the estimates and their comparability.

This applies both when we compare data from different surveys and when we compare sub-groups within the same survey.

The comparability of survey measurements is an issue that should be thoughtfully considered before drawing substantive conclusions from comparative surveys.

Figure 2.3: Slightly problematic survey question. Source: [badsurveyq](https://twitter.com/badsurveyq)

Survey error can be classified into two components:

  • Random error is caused by factors that randomly affect the measurement of the variable.
  • Systematic error is caused by factors that systematically affect the measurement of the variable.

Comparability problems between survey groups come from systematic error, or “bias” 1.
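To make the distinction concrete, the following is a minimal measurement sketch (my notation, not the chapter's): the observed response of person $i$ in group $g$ can be written as

$$
y_{ig} = \tau_{ig} + b_g + \varepsilon_{ig},
$$

where $\tau_{ig}$ is the true value, $b_g$ a systematic error shared within group $g$, and $\varepsilon_{ig}$ random error with mean zero. In a group mean the random errors cancel out in expectation, so an observed mean difference equals $(\bar\tau_1 - \bar\tau_2) + (b_1 - b_2)$: random error only adds noise, whereas a difference in systematic error distorts the comparison itself.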

What is particular about comparative surveys is that there are at least two survey statistics being compared, and each of them is subject to its own sources of error. If the statistics are affected differently by these errors, the comparison itself will be distorted by some form of “bias”.

In other words, besides substantive differences between survey statistics, there might be systematic differences caused by survey error.
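A small simulation makes the same point numerically (an illustrative sketch with invented numbers, not data from any survey discussed here): both groups share the same true mean, but one group's responses carry an additional systematic shift, e.g. an acquiescent response style.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000                                   # respondents per group
true_mean = 5.0                            # identical true attitude in both groups
systematic_error = {"A": 0.0, "B": 0.4}    # hypothetical bias in group B only

observed_means = {}
for group, bias in systematic_error.items():
    true_scores = rng.normal(true_mean, 1.0, n)   # substantive variation
    random_error = rng.normal(0.0, 0.5, n)        # noise, averages out
    observed_means[group] = (true_scores + bias + random_error).mean()

diff = observed_means["B"] - observed_means["A"]
print(f"Observed mean difference B - A: {diff:.2f}")  # ~0.4, an artifact of bias
```

Although the true group difference is zero, the observed comparison reproduces the difference in systematic error almost exactly.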

2.2 Total Survey Error framework

Total survey error is the accumulation of all errors that may arise in the design, collection, processing and analysis of survey data.
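In statistical terms, the total error of a survey estimator $\hat\theta$ of a parameter $\theta$ is commonly summarized by its mean squared error, which separates systematic from random components (a standard decomposition, stated here for reference):

$$
\operatorname{MSE}(\hat\theta) = E\big[(\hat\theta - \theta)^2\big] = \operatorname{Bias}(\hat\theta)^2 + \operatorname{Var}(\hat\theta).
$$

Each of the error sources listed below contributes to one or both of these components.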

The Total Survey Error (TSE) framework offers an elegant way to describe survey errors and the several sources of error rooted in the survey lifecycle.

Figure 2.4: The Total Survey Error framework. Source: Groves and Lyberg (2010)

As can be seen in Figure 2.4, the framework distinguishes a “representation” side and a “measurement” side.

Systematic representation errors include:

  • coverage error
  • sampling error
  • nonresponse

Systematic errors on the measurement side include:

  • measurement error

Measurement error includes response error, interviewer-induced response effects, social desirability, method effects, and response styles.

Here we will focus on the measurement side.

2.3 The “Bias” framework

The TSE error classification is analogous to the “bias” framework in the field of cross-cultural psychology. Under this framework, Vijver and Leung (1997) distinguish between “construct”, “item”, and “method” bias, which are essentially analogous to the TSE's validity, measurement error, and all remaining errors, respectively.

The bias framework was developed from the perspective of cross-cultural psychology and attempts to provide a comprehensive taxonomy of all systematic sources of error that can challenge the inferences drawn from cross-cultural studies (Vijver and Leung 1997, 2000; Van de Vijver and Poortinga 1997; Vijver and Tanzer 2004).

2.3.1 Construct Bias

Construct bias is present if the underlying construct measured is not the same across cultures.

  • It can occur if a construct is defined differently, or only partially overlaps, across cultural groups.
  • Example: varying definitions of happiness in Western and East Asian cultures (Uchida, Norasakkunkit, and Kitayama 2004). In Western cultures, happiness tends to be defined in terms of individual achievement, whereas in East Asian cultures it is defined in terms of interpersonal connectedness.

2.3.2 Method Bias

Sample bias: the incomparability of samples due to cross-cultural variation in sample characteristics, such as different educational levels, students versus the general population, or urban versus rural residents.

Instrument bias: systematic errors derived from instrument characteristics, such as self-report bias in Likert-type scale measures. The systematic tendency of respondents to endorse certain response options on some basis other than the target construct (i.e., response styles) may affect the validity of cross-cultural comparisons (Herk, Poortinga, and Verhallen 2004).

Administration bias: stems from administration conditions (e.g., data collection modes, group versus individual assessment), ambiguous instructions, interaction between administrators and respondents (e.g., halo effects), and communication problems (e.g., language differences, taboo topics).

2.3.3 Item Bias

Item bias occurs when an item has a different meaning across cultures. An item of a scale is biased if persons with the same level of the target trait, but coming from different cultures, are not equally likely to endorse the item (Vijver and Leung 1997; Vijver 2013).
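Formally, this definition can be written as a conditional-independence condition (a standard formulation of the absence of item bias, not a formula from the cited texts): an item $j$ is unbiased across groups when the probability of endorsing it depends only on the target trait $\eta$ and not on group membership $g$,

$$
P(y_j = 1 \mid \eta, g) = P(y_j = 1 \mid \eta) \quad \text{for all } g.
$$

Any remaining dependence on $g$ after conditioning on $\eta$ is what the literature on differential item functioning seeks to detect.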

Item bias can arise from poor translation, inapplicability of item contents in different cultures, or from items that trigger additional traits or have words with ambiguous connotations.

2.4 Preventing survey comparability problems

Following the TSE framework, the best way to reduce potential comparability issues in survey data is to reduce survey error to the very minimum and to ensure that any remaining errors are similar across the groups.

There is a vast literature discussing how to reduce TSE. However, two issues are particularly relevant to cross-cultural and cross-national surveys.

2.4.1 Translation

TRAPD - Translation, Review, Adjudication, Pretesting, and Documentation

This method was proposed by Harkness, Vijver, and Mohler (2003).

Team approach to survey translation:

  • Translators produce, independently from each other, initial translations.
  • Reviewers review the translations together with the translators.
  • An Adjudicator (one or more) decides whether the translation is ready.
  • Pretesting is the next step before going out to the field.
  • Documentation should be constant during the entire process.

2.4.2 Question coding system: SQP

The Survey Quality Predictor (SQP) offers an additional way to check question comparability by taking into account the different characteristics of the questions in the original and adapted versions.

https://sqp.upf.edu/
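As a purely illustrative sketch of how such a check might be used (the numbers, the helper function, and the quality summary below are hypothetical, not output of the SQP system), one could compare predicted quality estimates for a source question and its adaptation and flag large gaps as a comparability concern:

```python
# Hypothetical quality predictions for a source question and its adaptation.
# In practice such estimates would come from coding the questions in SQP
# (https://sqp.upf.edu/); the values below are invented for illustration.
predictions = {
    "source (English)":  {"reliability": 0.80, "validity": 0.90},
    "adapted (Spanish)": {"reliability": 0.70, "validity": 0.88},
}

def quality(reliability: float, validity: float) -> float:
    """Illustrative summary of question quality as the product of the two coefficients."""
    return reliability * validity

scores = {name: quality(**coefs) for name, coefs in predictions.items()}
gap = abs(scores["source (English)"] - scores["adapted (Spanish)"])

print(scores)
if gap > 0.05:   # arbitrary illustrative threshold
    print(f"Quality gap of {gap:.2f}: review the adapted wording before comparing groups.")
```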

Biemer, Paul P. 2016. “Total Survey Error Paradigm: Theory and Practice.” In The Sage Handbook of Survey Methodology, 122–41. London: SAGE Publications Ltd. https://doi.org/10.4135/9781473957893.n10.

Groves, R. M., and L. Lyberg. 2010. “Total Survey Error: Past, Present, and Future.” Public Opinion Quarterly 74 (5): 849–79. https://doi.org/10.1093/poq/nfq065.

Harkness, Janet A., Fons J. R. van de Vijver, and Peter Ph. Mohler. 2003. Cross-Cultural Survey Methods. Hoboken, NJ: Wiley.

Herk, Hester van, Ype H. Poortinga, and Theo M. M. Verhallen. 2004. “Response Styles in Rating Scales: Evidence of Method Bias in Data from Six EU Countries.” Journal of Cross-Cultural Psychology 35 (3): 346–60. https://doi.org/10.1177/0022022104264126.

Uchida, Yukiko, Vinai Norasakkunkit, and Shinobu Kitayama. 2004. “Cultural Constructions of Happiness: Theory and Empirical Evidence.” Journal of Happiness Studies 5 (February): 223–39. https://doi.org/10.1007/s10902-004-8785-9.

Van de Vijver, Fons, and Ype Poortinga. 1997. “Towards an Integrated Analysis of Bias in Cross-Cultural Assessment.” European Journal of Psychological Assessment 13 (January): 29–37. https://doi.org/10.1027/1015-5759.13.1.29.

Vijver, Fons J. R. van de. 2013. “Item Bias.” Major Reference Works. https://doi.org/10.1002/9781118339893.wbeccp309.

Vijver, Fons J. R. van de, and Kwok Leung. 1997. Methods and Data Analysis for Cross-Cultural Research. Cross-Cultural Psychology Series, Vol. 1. Thousand Oaks, CA: Sage Publications.

Vijver, Fons J. R. van de, and Kwok Leung. 2000. “Methodological Issues in Psychological Research on Culture.” Journal of Cross-Cultural Psychology 31 (1): 33–51. https://doi.org/10.1177/0022022100031001004.

Vijver, Fons van de, and Norbert K. Tanzer. 2004. “Bias and Equivalence in Cross-Cultural Assessment: An Overview.” European Review of Applied Psychology / Revue Européenne de Psychologie Appliquée 54 (2): 119–35. https://doi.org/10.1016/j.erap.2003.12.004.

Systematic error and “bias” are terms used interchangeably in the literature; they refer to deviations that are not due to chance alone. ↩︎

Source: Perspect Clin Res 9(4), Oct-Dec 2018

Study designs: Part 1 – An overview and classification

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Mumbai, Maharashtra, India

Rakesh Aggarwal

Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, Uttar Pradesh, India

There are several types of research study designs, each with its inherent strengths and flaws. The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on “study designs,” we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

INTRODUCTION

Research study design is a framework, or the set of methods and procedures used to collect and analyze data on variables specified in a particular research problem.

Research study designs are of many types, each with its advantages and limitations. The type of study design used to answer a particular research question is determined by the nature of question, the goal of research, and the availability of resources. Since the design of a study can affect the validity of its results, it is important to understand the different types of study designs and their strengths and limitations.

Some terms are used frequently when classifying study designs; these are described in the following sections.

Variables

A variable represents a measurable attribute that varies across study units, for example, individual participants in a study, or at times even when measured in the same individual over time. Some examples of variables include age, sex, weight, height, health status, alive/dead, diseased/healthy, annual income, smoking yes/no, and treated/untreated.

Exposure (or intervention) and outcome variables

A large proportion of research studies assess the relationship between two variables. Here, the question is whether one variable is associated with or responsible for change in the value of the other variable. Exposure (or intervention) refers to the risk factor whose effect is being studied. It is also referred to as the independent or the predictor variable. The outcome (or predicted or dependent) variable develops as a consequence of the exposure (or intervention). Typically, the term “exposure” is used when the “causative” variable is naturally determined (as in observational studies – examples include age, sex, smoking, and educational status), and the term “intervention” is preferred where the researcher assigns some or all participants to receive a particular treatment for the purpose of the study (experimental studies – e.g., administration of a drug). If a drug had been started in some individuals but not in the others, before the study started, this counts as exposure, and not as intervention – since the drug was not started specifically for the study.

Observational versus interventional (or experimental) studies

Observational studies are those where the researcher is documenting a naturally occurring relationship between the exposure and the outcome that he/she is studying. The researcher does not do any active intervention in any individual, and the exposure has already been decided naturally or by some other factor. For example, looking at the incidence of lung cancer in smokers versus nonsmokers, or comparing the antenatal dietary habits of mothers with normal and low-birth weight babies. In these studies, the investigator did not play any role in determining the smoking or dietary habit in individuals.

For an exposure to determine the outcome, it must precede the latter. Any variable that occurs simultaneously with or following the outcome cannot be causative, and hence is not considered as an “exposure.”

Observational studies can be either descriptive (nonanalytical) or analytical (inferential) – this is discussed later in this article.

Interventional studies are experiments where the researcher actively performs an intervention in some or all members of a group of participants. This intervention could take many forms – for example, administration of a drug or vaccine, performance of a diagnostic or therapeutic procedure, and introduction of an educational tool. For example, a study could randomly assign persons to receive aspirin or placebo for a specific duration and assess the effect on the risk of developing cerebrovascular events.

Descriptive versus analytical studies

Descriptive (or nonanalytical) studies, as the name suggests, merely try to describe the data on one or more characteristics of a group of individuals. These do not try to answer questions or establish relationships between variables. Examples of descriptive studies include case reports, case series, and cross-sectional surveys (please note that cross-sectional surveys may be analytical studies as well – this will be discussed in the next article in this series). Examples include a survey of dietary habits among pregnant women or a case series of patients with an unusual reaction to a drug.

Analytical studies attempt to test a hypothesis and establish causal relationships between variables. In these studies, the researcher assesses the effect of an exposure (or intervention) on an outcome. As described earlier, analytical studies can be observational (if the exposure is naturally determined) or interventional (if the researcher actively administers the intervention).

Directionality of study designs

Based on the direction of inquiry, study designs may be classified as forward-direction or backward-direction. In forward-direction studies, the researcher starts with determining the exposure to a risk factor and then assesses whether the outcome occurs at a future time point. This design is known as a cohort study. For example, a researcher can follow a group of smokers and a group of nonsmokers to determine the incidence of lung cancer in each. In backward-direction studies, the researcher begins by determining whether the outcome is present (cases vs. noncases [also called controls]) and then traces the presence of prior exposure to a risk factor. These are known as case–control studies. For example, a researcher identifies a group of normal-weight babies and a group of low-birth weight babies and then asks the mothers about their dietary habits during the index pregnancy.

Prospective versus retrospective study designs

The terms “prospective” and “retrospective” refer to the timing of the research in relation to the development of the outcome. In retrospective studies, the outcome of interest has already occurred (or not occurred – e.g., in controls) in each individual by the time s/he is enrolled, and the data are collected either from records or by asking participants to recall exposures. There is no follow-up of participants. By contrast, in prospective studies, the outcome (and sometimes even the exposure or intervention) has not occurred when the study starts and participants are followed up over a period of time to determine the occurrence of outcomes. Typically, most cohort studies are prospective studies (though there may be retrospective cohorts), whereas case–control studies are retrospective studies. An interventional study has to be, by definition, a prospective study since the investigator determines the exposure for each study participant and then follows them to observe outcomes.

The terms “prospective” versus “retrospective” studies can be confusing. Let us think of an investigator who starts a case–control study. To him/her, the process of enrolling cases and controls over a period of several months appears prospective. Hence, the use of these terms is best avoided. Or, at the very least, one must be clear that the terms relate to work flow for each individual study participant, and not to the study as a whole.

Classification of study designs

Figure 1 depicts a simple classification of research study designs. The Centre for Evidence-based Medicine has put forward a useful three-point algorithm which can help determine the design of a research study from its methods section[1] (a small code sketch of this decision logic is given after the list):

Figure 1: Classification of research study designs

  • Does the study describe the characteristics of a sample or does it attempt to analyze (or draw inferences about) the relationship between two variables? – If no, then it is a descriptive study, and if yes, it is an analytical (inferential) study
  • If analytical, did the investigator determine the exposure? – If no, it is an observational study, and if yes, it is an experimental study
  • If observational, when was the outcome determined? – at the start of the study (case–control study), at the end of a period of follow-up (cohort study), or simultaneously (cross-sectional study).
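The three questions above translate directly into a small decision procedure. The sketch below is one way to express that logic in code (the function and label names are mine, not the article's):

```python
def classify_study_design(analyzes_relationship, investigator_assigns_exposure=None,
                          outcome_determined=None):
    """Classify a study design using the three-point algorithm described above.

    outcome_determined should be "start", "follow-up", or "simultaneous"
    and is only needed for observational analytical studies.
    """
    if not analyzes_relationship:
        return "descriptive study"
    if investigator_assigns_exposure:
        return "experimental (interventional) study"
    return {
        "start": "case-control study",
        "follow-up": "cohort study",
        "simultaneous": "analytical cross-sectional study",
    }[outcome_determined]


# Example: following smokers and nonsmokers to compare lung cancer incidence.
print(classify_study_design(analyzes_relationship=True,
                            investigator_assigns_exposure=False,
                            outcome_determined="follow-up"))   # -> cohort study
```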

In the next few pieces in the series, we will discuss various study designs in greater detail.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


The Oxford Handbook of Political Behavior


48 Comparative Opinion Surveys

John Curtice is Professor of Politics at Strathclyde University, Glasgow.

Published: 02 September 2009

This article discusses comparative opinion surveys and comparative survey research. It first identifies the problems of comparative surveys and how these problems can be solved or overcome. It then considers whether such surveys should include more or fewer countries during the research process. The last two sections discuss the need for data on the countries included in the survey or data set and how the comparative survey data are analyzed.

And what should they know of England, who only England know? (Kipling 1910)

1 Why do Comparative Opinion Research?

The study of mass political behavior has a deceptively simple objective—to establish the causes and consequences of the political values and behaviors of the general population. 1 It faces one major obstacle—the sheer size of (most) general populations. Its ability to overcome that obstacle rests heavily on the power of the sample survey. Statistical theory demonstrates that inferences about the characteristics of a large population can in fact be drawn from the evidence of relatively small samples drawn randomly from that population. True, there is some uncertainty associated with those inferences, but its degree is known. Moreover that uncertainty is centered on the true value in the population as a whole. So, for example, if 50 percent of a random sample of 1,000 people has a particular characteristic, there is a 95 percent chance that between 47 and 53 percent actually have that characteristic amongst the population from which the sample was drawn. Armed with that knowledge, the student of political behavior has been able since the advent of the sample survey in the 1930s to make empirically substantiated statements about mass publics on the basis of information gathered from just a thousand people or so.
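The “47 to 53 percent” figure is the usual normal-approximation 95 percent confidence interval for a proportion (a standard textbook calculation, reproduced here for completeness, assuming simple random sampling):

$$
\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p)}{n}} = 0.50 \pm 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.50 \pm 0.031,
$$

that is, roughly 47 to 53 percent.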

This approach does, however, beg one important question—what is the population about which we want to make empirically substantiated statements? Sample surveys are commonly conducted within the confines of a particular state. For many purposes this is perfectly acceptable. If, for example, we want to understand why people vote the way that they do in US presidential elections, a survey based on a random sample of the population of the United States is likely to be perfectly adequate. But the study of political behavior has loftier ambitions than simply explaining how people behave in particular countries. It wishes to be able to make statements about behavior in general. Yet we cannot assume that what is true in one country necessarily holds elsewhere. Perhaps, for example, how people vote in US presidential elections is influenced by circumstances that are particular to those elections and thus is different from how people vote in other elections elsewhere.

So survey research that crosses the boundaries of the nation‐state is essential to the study of political behavior, as Kipling recognized in the above quotation. At minimum if we are to be able to make statements about how people behave politically that are generally empirically substantiated, they need to be tested in a wide variety of social and political environments. Yet in practice cross‐national research has much more to offer than this. If political behavior is influenced by the circumstances in which it takes place, its study needs to be pursued using a research design in which those circumstances vary. This condition is often not fulfilled by research conducted within one country. For example, it is often argued that people are more likely to participate in elections if a system of proportional representation is in place than if a majoritarian system is in use (see chapter by Blais in this volume; Blais and Dobryzynska 1998 ; Franklin 2002 ; Norris 2004 ). Yet, elections within any one country are typically held using either one kind of system or the other, not a mixture of the two. Thus, the study of the impact of electoral systems on turnout at elections almost inevitably requires us to undertake comparative research. Much the same can be said of almost any characteristic that does not usually vary within a country.

Moreover, there is more than one way in which circumstances that vary from one country to another can affect behavior. For example, one way in which the use of proportional representation might increase turnout is simply by increasing the proportion of all kinds of people who vote—young and old, those interested in politics and those less so, strong party identifiers and weak identifiers, etc.—by more or less the same amount. In short, a particular kind of electoral system simply influences the overall level of turnout rather than the kind of person who votes. But an alternative possibility is that the use of proportional representation increases turnout amongst some groups more than others. Perhaps, for example, it has more impact on those belonging to groups who are less likely to vote, such as younger people, the politically uninterested and weak identifiers (Fisher et al., 2006) . If that happens, then the electoral system a country uses not only affects the level of participation but its relationship with other variables, such as age, political interest, and strength of identification. The ability of comparative survey research to assess the degree to which relationships may be contingent on national circumstance is at least as important as the opportunities it opens up to assess the impact of national circumstance on the overall incidence of behaviors and attitudes.
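One common way of formalizing this distinction (a sketch of a typical specification, not a model estimated in this chapter) is to let the electoral-system indicator enter a turnout model both as a main effect and in interaction with an individual-level characteristic such as age:

$$
\operatorname{logit}\Pr(\text{vote}_{ic}=1) = \beta_0 + \beta_1\,\text{age}_{ic} + \beta_2\,\text{PR}_c + \beta_3\,(\text{age}_{ic}\times\text{PR}_c) + u_c,
$$

where $\beta_2$ captures a uniform shift in turnout under proportional representation, $\beta_3$ captures how PR changes the relationship between age and turnout, and $u_c$ is a country-level error term.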

Indeed, the ability of comparative research to uncover the impact of circumstances that vary between countries but do not differ within them means that such research should be of interest even to the student whose concern is confined to understanding behavior in one particular country. If, for example, some of the most profound influences upon how people vote in US presidential elections are features that are common across the United States, such as the relative weakness of the country's political parties or the ability of candidates to purchase air time on television, then US presidential elections can only be adequately analyzed and understood if they are compared with elections elsewhere where these circumstances do not pertain. In other words, comparative survey research can enable us better to understand not only the general but the particular as well.

So comparative cross‐national mass political research brings three main benefits. First, it enables us to assess the empirical generalizability of claims that we might make about the causes and consequences of political attitudes and behavior. Second, it enables us to widen the range of contextual influences on attitudes and behavior that can be analyzed, in particular making it possible to assess the impact of influences that are largely invariant within countries but do vary between them. Third, it can even contribute to the study of behavior within a particular country by providing points of comparison that make it possible to assess the impact of that country's particular social and political circumstances on behavior within that country.

2 The Problems of Comparative Surveys

The benefits of comparative analyses rest on a crucial assumption—that we can make valid comparisons between the results obtained by surveys conducted in different countries. All forms of survey research are subject to potential sources of bias that mean they fail to provide as reliable a guide to the characteristics of a population as sampling theory would lead us to expect ( Groves 1987 , 1989 ). In particular, some parts of a population may be omitted from or under‐represented in the coverage of a survey. Meanwhile, the questions asked in a survey may fail to measure accurately what the researcher may have been aiming to measure. The particular difficulty that arises in comparative cross‐national research is not that these problems exist, but that their incidence may vary from country to country. As a result we may wonder whether the differences between the results of a survey conducted in two or more countries reflect artifactual differences in how the survey was conducted in the two countries rather than real differences between their populations (Heath, Fisher, and Smith 2005) .

Between‐country variation in measurement error is perhaps the most obvious pitfall of comparative survey research. Clearly if different questions are asked in different countries, any differences in response may simply reflect differences in question wording. Thus most exercises in comparative survey research are based on a common questionnaire that ideally is administered in an identical manner in every country. But one immediately obvious limitation to the fulfillment of this ideal is that people in different countries speak different languages. Even if a questionnaire is translated faithfully from its internationally agreed original, differences in the structure of different languages and in the connotations associated with different words in different languages may well mean that the cognitive and affective meaning of a question in the minds of respondents varies from country to country. Meanwhile attempts to develop batteries of questions designed to measure adherence to underlying values such as equality or social liberalism may be undermined by the fact that the degree to which any particular question taps adherence to such values varies from country to country. More difficult still is the possibility that a concept may not exist in certain cultures. Previous research on attitudes towards religion, for example, has had to cope with the difficulty that the concept of “God” does not exist in Japanese culture (Jowell 1998) .

Less obvious, but no less important however, are differences between countries in survey practice. Survey research is commonly organized on a national basis. That is, while one fieldwork organization will usually be responsible for conducting a comparative survey in its country, a different organization will undertake the survey in another country, etc. In any event, even if this were not the case the way in which a survey is conducted may have to vary from country to country because of national differences of practice and circumstance. For example, the ability to undertake pure random sampling depends on the existence of (and access to) a full (and accurate) population register, or failing that a full list of households or addresses from which a random sample of individuals can be generated. Sampling procedures must inevitably vary depending on the existence and the form of such information. Meanwhile, in some countries securing interviews in rural areas may be difficult either because such populations are geographically widely spread or because of relatively low rates of literacy. Equally, differences of geography may mean that the degree to which samples are geographically clustered in order to keep survey costs down has to vary. Of course, both differences of national circumstance and in the quality of interviewing may produce substantial differences in response rates—and thus different levels of exposure to the possibility that samples may be biased because of differential non‐response.

In short—and the above discussion is far from exhaustive—comparative opinion research is exposed to the severe problem that alongside the substantive differences between countries whose impact such research is designed to discern there may well coexist methodological differences that in themselves make countries appear more or less similar to each other than in reality is the case. The comparative researcher's task is already often difficult enough because countries typically do not differ substantively in just one respect, but several, thereby potentially making it difficult to discern which substantive difference might account for any particular difference between the results of surveys conducted in two or more countries. Now it appears that in practice we cannot be sure that any difference we might uncover is not simply an artifact of difference in survey method rather than a real difference. The apparent analytic power of comparative opinion research appears to have crumbled all too readily in our hands.

3 Overcoming the Problems

How might that power possibly be restored? One obvious possibility is to reduce the degree of heterogeneity in survey practice. This is very much the approach that has been adopted by one recently instigated cross‐national collaboration, the European Social Survey (European Social Survey n.d. a ). One of its avowed aims has been to ensure that each participating survey is conducted to more or less the same high standard. Thus, for example, not only are strict guidelines for the implementation of random sampling laid down, but how it is proposed to implement those guidelines in each country has to be agreed by a coordinating methodological committee (Lynn et al. 2004) . Amongst the key features of these guidelines are that all interviewing has to take place face to face, with both a minimum target response rate of 70 percent, and a minimum effective sample size (that is after taking into account the impact of any geographical clustering of interviews and any unequal selection probabilities), of 1,500 (European Social Survey n.d. b ). Meanwhile, questionnaires are independently translated by two native speakers and then any differences are resolved by those with knowledge of survey design and the research topic as well as the languages in question. (For more on translation strategies see Smith 2002.)
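The effective-sample-size requirement can be made concrete with the standard design-effect approximations (a sketch using Kish's formulas and invented inputs, not figures from the ESS documentation):

```python
import math

def design_effect(avg_cluster_size, intraclass_corr, cv_weights):
    """Approximate design effect from clustering and unequal selection probabilities.

    Clustering (Kish): deff_c = 1 + (b - 1) * rho
    Weighting:         deff_w = 1 + cv(w)**2
    """
    deff_clustering = 1 + (avg_cluster_size - 1) * intraclass_corr
    deff_weighting = 1 + cv_weights ** 2
    return deff_clustering * deff_weighting

# Hypothetical design: 10 interviews per sampling point, intraclass correlation 0.05,
# moderate variation in design weights.
deff = design_effect(avg_cluster_size=10, intraclass_corr=0.05, cv_weights=0.3)
effective_n = 2500 / deff
print(f"deff = {deff:.2f}; 2,500 interviews yield an effective sample of about {effective_n:.0f}")
print(f"nominal sample needed for an effective 1,500: {math.ceil(1500 * deff)}")
```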

Such an approach to comparative survey research places a premium on the quality of the work in those countries that do participate in a survey rather than on ensuring coverage of as many countries as possible—though in the event when the first ESS survey was conducted in 2002 no less than twenty‐two European countries endeavored to meet the organizers' exacting standards while twenty‐six did so in 2004. (Surveys are being conducted every other year.) In any case, there are other reasons why we might limit the range of countries included in a program of comparative opinion research. First, the more diverse the countries being covered, the greater the difficulty of ensuring that the questions asked have the same meaning to respondents across language and culture. Even if there are attempts to ensure similarity of meaning, when there is a diverse set of countries comparability may only be achieved at the cost of producing questions that are so general and abstract that respondents everywhere may have some difficulty discerning their meaning (Kuechler 1998) . Second, the larger the number of countries included, the greater the likelihood that those who attempt to analyze the resulting data do not have sufficient understanding of the social, political, economic, and cultural attributes of each country to be able to interpret the data sensitively and sensibly (Jowell 1998) . In short it could be argued that the approach of the first ever major piece of comparative opinion research, Almond and Verba's Civic Culture study (Almond and Verba 1963) , which confined its attention to five countries chosen for their theoretical interest, provides a model as to how comparative survey research should be conducted.

Indeed, it is interesting to note that many of the more recently instigated programs of comparative survey research have been “regional” rather than “global” in character (also see chapter by Kittilson in this volume). The trend began with the instigation of a range of “barometer” surveys in central and eastern Europe following the collapse of the Berlin Wall (Centre for the Study of Public Policy n.d.). An annual “Latinobarometer” that now covers eighteen countries in Latin America began in 1995 (Latinobarómetro Corporation n.d.), an “Afrobarometer” started in 1999 with twelve countries, with as many as eighteen participating in the third round conducted in 2005–6 (Afrobarometer n.d. a ), while no less than two collaborations, the East Asia Barometer and the Asia Barometer, have been instigated (in 2001 and annually since 2003 respectively), each covering around a dozen or so countries in overlapping parts of eastern and southern Asia (East Asia Barometer n.d.; Asia Barometer n.d.). These regional collaborations vary in the degree to which they have attempted to impose similarity of methodological rigor across their component countries, with perhaps the most impressive being the attempts of the Afrobarometer to promote the same high standards in countries that often lack a tradition of high‐quality survey research or indeed much of the infrastructure required to implement random sampling (Afrobarometer n.d. b ). Such regional collaborations enable these surveys to focus on those topics that are of common concern in their parts of the world rather than pursuing an agenda that simply reflects the intellectual concerns and assumptions of advanced industrial democracies.

In fact, the ability of the European Social Survey to obtain methodological consistency and rigor is dependent not so much on its regional character as on the organizational structure it has been able to develop. Unlike other collaborations, it has access to funding from cross‐national political and scientific institutions such as the European Union and the European Science Foundation. 2 Amongst other things this ensures that it has a comparatively well‐funded permanent secretariat and central infrastructure. Meanwhile, funding for the various national surveys typically comes from national scientific funding councils who are willing to support the high standards that ESS demands. In contrast, most comparative research projects are voluntary collaborations between national survey teams and have a relatively limited secretariat that may well be provided by just one country.

Indeed, of the three truly "global" comparative survey research collaborations that currently exist, two do not even attempt to undertake complete surveys within each country. Rather, both the International Social Survey Programme (ISSP), a collaboration between general social surveys, and the Comparative Study of Electoral Systems Project (CSES), a collaboration between academic national election studies, consist of national surveys that pursue their own domestic agendas but agree to devote a part of their survey to a module of common questions. Indeed, many of the surveys involved in these two collaborations are well-established enterprises. Thus the ISSP was instigated in 1984 by three existing social surveys, the US General Social Survey, the British Social Attitudes survey, and the German ALLBUS, together with survey researchers from the Australian National University, though it now has as many as forty-one participants who collaborate annually. The CSES, which began in 1996, includes amongst its membership most of the long-running national election studies in the well-established democracies, as well as studies in countries where democratic elections, let alone election studies, have a much shorter history. Altogether, its first module covered elections in as many as thirty-four countries, the second in over forty.

Collaborations between existing national studies have one key advantage. They are relatively inexpensive, as they do not require the full cost of mounting a survey to be found in every participating country. But they are inevitably limited in the degree of methodological consistency that can be achieved. Faced with the choice between maintaining existing domestic practices and changing those practices to meet international requirements, the pull of existing practice will tend to be the stronger, especially if changing the way an established survey is conducted might compromise the integrity of a domestic time series. Given that constraint, the ISSP still maintains relatively strict requirements that a survey must be able to satisfy before it is admitted to membership (International Social Survey Programme 2003). All surveys are, for example, meant to use random sampling and undertake at least a thousand interviews. Meanwhile, the questions in each annual module (with each module covering a rotating cycle of subjects) are agreed collaboratively by all of the participating members, thereby helping to ensure that they are crafted with sensitivity to cultural and linguistic differences. They are also all asked together in the same order in a block, albeit either face to face or as part of a self-completion supplement. Nevertheless, the program does not have any rules on how questionnaires should be translated, while an examination of the methodology actually being employed by its members revealed that not all of them necessarily followed the principles of random sampling that the program was meant to uphold (Park and Jowell 1997).

The CSES is even less rigorous. While it supposedly requires its members to administer the module as a whole in a block, not all of them follow this requirement. The module may be administered face to face, by telephone, or by self-completion questionnaire (either as a supplement to a face-to-face survey or as part of a mail-back survey). It does not insist on random sampling, nor does it have any rules on translation. This latitude in part reflects the fact that, as we have already noted, the project has had to accept that already well-established national election studies are less willing to compromise their own domestic time series by changing how their surveys are conducted. In part, too, the secretariat lacks either the resources or indeed the authority in what is a voluntary collaboration to insist on greater conformity to a set of common standards. The one crucial requirement that the project does have, however, is that fieldwork should take place in the period immediately after a national parliamentary or presidential election, thereby enabling the project to capture as accurately as possible in each country the attitudes and behavior of the electorate on the occasion of an election. This makes the CSES a unique resource for the comparative study of electoral behavior, though it does mean that the fieldwork for each module has to be spread out over a five-year period in order to ensure that an election has been held in each country that wishes to participate.

The comparative project covering political attitudes and behavior that has the widest reach of all, however, is the World Values Survey. This began life as a solely European survey (the European Values Survey) in the early 1980s, designed to look at social and moral values in what was then western Europe. However, it was then promoted by Ronald Inglehart at the University of Michigan and adopted in a dozen non-European countries. Thereafter the collaboration has blossomed (though the European countries continue to have their own organization and secretariat). Now, after four rounds, each around five years apart, it is being conducted in nearly eighty countries. While, unlike both the CSES and the ISSP, a whole survey is commissioned especially for the purpose, the project relies on teams within each country to raise the necessary funds, and in practice it is not notable for the similarity of methods employed in each country. Thus, while the survey work is nearly always undertaken face to face, it only aspires to follow random sampling "as closely as possible" (World Values Survey n.d. b), while some samples are not fully nationally representative (Inglehart 1997, 346). Meanwhile, amongst the thirty-two European countries that fielded the fourth round of the survey in 1999, around half back-translated the questionnaire into English, while the other half did not. Nearly two-thirds added one or more country-specific questions in the middle of the common module. Equally, a third used some form of quota control at some point in the sampling process and around two-thirds allowed some form of substitution for non-contacts, while the remainder did not implement such procedures (European Values Study 1999).

4 Many Countries or Few?

One feature that these three truly intercontinental projects have in common is that they challenge the earlier notion that in comparative survey research less may be better. They all suffer, even if rather less so in the case of ISSP, from between‐country methodological pluralism. They also appear to encourage the user to analyze data from countries about which he or she may know little or nothing. Yet, despite the undoubted disadvantages of their methodological diversity, these exercises are still highly valuable.

To see why this is the case, we may need to remind ourselves of the analytic purchase that, as we argued earlier, comparative opinion research brings. This is that comparative survey research enables us to examine the links between circumstances that vary between countries (but usually not within them) and both the incidence of various political attitudes and behaviors and the relationships between them. From this perspective, our interest in, for example, the United States lies not in the United States per se but in the politically relevant attributes that it has, such as the fact that it is a federal country, has a presidential system of government, uses a single-member plurality electoral system, or that health care is primarily funded by private insurance. Equally, our interest in, for example, Sweden may lie in the fact that it is a unitary state, has a parliamentary system of government, uses a party list electoral system, or that the state funds most health care.

In short, countries may be regarded as cases with theoretically relevant attributes. We can then assess the impact of these attributes by coding each country accordingly and including the resulting variables in our data analyses. While the coding of each attribute needs to be conducted accurately it does not require expert knowledge of the social, economic, political, and cultural attributes of a country. Note further that in such analyses any particular attribute may well be present in more than one country, and indeed will probably be present in several. Thus instead of being reliant on the evidence of just one country to assess the impact of a particular attribute we should have available to us the evidence of a number. This means we can begin to assess whether a general relationship exists between the presence of a particular circumstance and a particular attitude or behavior—and it is identifying the existence of such relationships that we have argued is the central task of the study of political behavior.
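In practical terms, treating countries as cases with theoretically relevant attributes amounts to building a small country-level table and merging it onto the pooled individual-level file. The sketch below assumes a pandas workflow; the countries, attribute codings, and variable names are invented for illustration and are not authoritative codings.

```python
import pandas as pd

# Hypothetical country-level coding of theoretically relevant attributes
# (the values here are purely illustrative).
countries = pd.DataFrame({
    "country": ["US", "SE", "DE", "FR"],
    "federal": [1, 0, 1, 0],           # federal vs. unitary state
    "presidential": [1, 0, 0, 1],      # (semi-)presidential vs. parliamentary government
    "plurality_system": [1, 0, 0, 0],  # single-member plurality vs. proportional rules
})

# Hypothetical pooled individual-level survey records.
respondents = pd.DataFrame({
    "country": ["US", "US", "SE", "DE", "FR"],
    "turnout": [1, 0, 1, 1, 0],
})

# Attaching the country attributes turns them into ordinary variables
# that can enter any individual-level analysis.
analysis_data = respondents.merge(countries, on="country", how="left")
print(analysis_data)
```

Once merged, the country attributes behave like any other explanatory variable, and each attribute is typically shared by several countries rather than identifying a single one.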

How does this mitigate the dangers of methodological pluralism? Quite simply because there is safety in numbers. While the results of a survey in any one country with a particular attribute may be more artifact than fact, the probability that this is true of all countries with that attribute is far less. So long as differences of methodological approach are not strongly correlated with the presence or absence of the attribute of interest, then those differences of approach cannot be responsible for any relationship that may be uncovered between that attribute and a particular attitude or behavior. The presence of methodological diversity will probably result in greater error variance between countries and, as a result, real relationships may well be attenuated in the survey data. Nevertheless, the more countries that a program of comparative research covers, the more likely it is to be insulated against the danger that substantive conclusions are drawn on the basis of artifactual differences.
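The "safety in numbers" argument can be illustrated with a toy simulation: give each country a survey artifact that is unrelated to the attribute of interest, and the estimated attribute effect stays centered on the true value while its spread shrinks as more countries are added. Everything below (the true effect, the size of the artifacts, the numbers of countries) is an assumption chosen only to make the point visible, not an empirical claim.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_EFFECT = 0.10   # assumed true difference associated with the attribute

def estimated_effect(n_countries):
    # Half the countries have the attribute (e.g. a plurality electoral system), half do not.
    attribute = np.repeat([0, 1], n_countries // 2)
    # Country-specific survey artifacts: sizeable, but uncorrelated with the attribute.
    artifact = rng.normal(0.0, 0.15, size=n_countries)
    observed = 0.50 + TRUE_EFFECT * attribute + artifact
    return observed[attribute == 1].mean() - observed[attribute == 0].mean()

for k in (6, 20, 80):
    estimates = [estimated_effect(k) for _ in range(2000)]
    print(f"{k:3d} countries: mean estimate {np.mean(estimates):+.3f}, spread {np.std(estimates):.3f}")
```

The artifacts inflate the spread of the estimates but do not shift their center, and the spread falls as the number of countries grows, which is the sense in which more countries insulate against artifactual conclusions.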

5 The Need for Data about Countries

One important implication, however, flows from this approach. Comparative opinion research cannot be conducted using survey data alone. Rather it needs to analyze survey data alongside systematically collected and coded data that give details of the attributes of the countries included in the data set. In short, measurement of the (particularly national) context within which the survey data have been collected should be an integral part of any exercise in comparative opinion research. Note indeed that such data could also include information on the key attributes of the methodology deployed in each country, thereby making it possible to include in analyses the possible impact of between‐country methodological differences.

In practice, however, only two of the projects referred to so far include in their activities the provision to the wider community of relevant data about the context within which the data have been collected. The Comparative Study of Electoral Systems Project not only provides extensive information on the electoral system and constitutional structure of each country that is surveyed, but also information on the political parties, the issues at each election, and the election outcome, including some data at the level of the electoral district rather than just the country as a whole. Meanwhile the European Social Survey provides some social and economic indicators for each participating country, including some population data at regional level, as well as information on key events that took place in each country during the course of fieldwork. At the same time, the ESS has compiled an impressive set of web links to sources of data and information about individual countries, including data provided by key international organizations such as UNESCO and the OECD (European Social Survey 2003) . 3

6 Analyzing Comparative Survey Data

The challenges of comparative survey research are not, however, confined to the conduct of fieldwork or the collection of contextual data. There are also important questions about how best to analyze such data. Note first of all that the data may be regarded as either a sample of individuals or a sample of countries. In the former case, however, a pooled individual-level data set from a comparative survey research project cannot be regarded as a simple random sample of individuals. The respondents to the surveys are not independent of each other, but rather are clustered by nation. This has to be allowed for either by using a multi-level model (Snijders and Bosker 1999) or by using statistical routines that take into account the clustered nature of the samples (Seligson 2004). Meanwhile, in the case of the latter approach, at least some consideration has to be given to the weight that each country's sample should have in the analysis. If some countries have included more respondents in their surveys than others, respondents from those countries will have more impact on any estimates derived from the survey data unless this imbalance is corrected. One possibility is that the sample sizes should be weighted to be proportionate to population; this might be done if there is a wish to make statements about some coherent geographical entity such as the European Union. Another possibility is to regard each national sample as a separate reading of the phenomena under investigation and to equalize the sample sizes for each country.
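The two weighting options described above can be written down directly. In the sketch below the countries, sample sizes, and population figures are all invented; the point is only the contrast between weights that restore population shares and weights that give every national sample equal influence.

```python
import pandas as pd

# Hypothetical pooled file; achieved sample sizes and population figures are illustrative only.
df = pd.DataFrame({
    "country":   ["DE", "DE", "DE", "LU", "LU", "LU"],
    "satisfied": [1, 0, 1, 1, 1, 0],
})
population = {"DE": 83_000_000, "LU": 640_000}

n_by_country = df["country"].map(df["country"].value_counts())

# Option 1: weight respondents so that each country's share of the pooled sample matches its
# population share (appropriate when describing a coherent entity such as the EU as a whole).
df["w_population"] = df["country"].map(population) / n_by_country

# Option 2: weight so that every national sample carries the same total weight (each country
# treated as one independent reading of the phenomenon under investigation).
df["w_equal"] = 1.0 / n_by_country

print(df.groupby("country")[["w_population", "w_equal"]].sum())
```

Which option is appropriate depends on whether the target of inference is a population of individuals or a set of national readings, as the surrounding text argues.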

In addition, comparative survey data can be regarded as a sample of countries. At its simplest, this means deriving frequencies and means from the individual-level data for each country. The relationships between these readings across countries may then be analyzed, or analyzed in tandem with data about those countries from other sources. This, for example, is the approach that has been adopted by Inglehart in some of his most striking analyses of data from the World Values surveys, such as examining the relationships between the importance of postmaterialist values and affluence (Inglehart and Abramson 1995), between civic norms and the longevity of democratic institutions (Inglehart 1997), and between the degree of emphasis placed on self-expression and the openness and accountability of a country's political institutions (Inglehart and Welzel 2004). Such analyses may well uncover relationships that do not appear at the individual level—or indeed fail to corroborate relationships that do appear at the individual level. Such instances can tell us a great deal about the nature of the processes that underlie such relationships, and thus both forms of analysis need to be conducted if the full power of comparative opinion research is to be utilized.
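A minimal version of this country-level strategy is to collapse the micro-data to one reading per country and then relate those readings to an external indicator. In the sketch below the survey values and the GDP-per-capita figures are invented, and GDP per capita simply stands in for whatever country-level measure (affluence, institutional age, and so on) is of theoretical interest.

```python
import pandas as pd

# Hypothetical micro-data and a country-level indicator; all values are invented for illustration.
survey = pd.DataFrame({
    "country":         ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "postmaterialist": [1, 1, 0, 1, 1, 1, 0, 0, 1],
})
gdp_per_capita = pd.Series({"A": 45_000, "B": 52_000, "C": 18_000}, name="gdp_per_capita")

# Step 1: collapse the survey to one reading per country.
country_level = (survey.groupby("country")["postmaterialist"]
                       .mean()
                       .rename("share_postmaterialist"))

# Step 2: analyse these readings alongside country-level data from other sources.
merged = pd.concat([country_level, gdp_per_capita], axis=1)
print(merged)
print(merged.corr())
```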

Indeed, not only should individual‐level and aggregate‐level analysis be conducted, but they should also be brought together. Earlier in this chapter we noted that one of the possible roles of comparative opinion research is to identify the degree to which relationships, such as that between interest in politics and turnout, are contingent upon circumstance, such as the kind of electoral system in place. This implies bringing together aggregate data about a country (which may either be derived from the survey itself or from another source) and individual‐level data about the strength of the relationship between two or more variables. This may be done in more than one way (Franzese 2005) . One is to undertake a pooled individual‐level analysis in which interaction terms between aggregate‐level national circumstance and one or more independent individual‐level variables are included in the modeling. Another is to estimate the individual‐level relationship in each country, and then analyze the resulting data alongside other relevant country‐level data at the aggregate (country) level (Curtice forthcoming; Lewis and Linzer 2005 ; Jusko and Shiveley 2005 ).
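Both strategies described above can be sketched on simulated data: a pooled model with a cross-level interaction term, and a two-step analysis in which within-country slopes are estimated first and then regressed on a country-level attribute. The data-generating process, the variable names (turnout, interest, plurality), and the use of ordinary least squares throughout are illustrative assumptions, not the specifications used in the studies cited above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for a pooled comparative survey; the data-generating process is assumed
# purely for illustration.
rng = np.random.default_rng(0)
countries = pd.DataFrame({"country": list("ABCDEF"), "plurality": [1, 1, 1, 0, 0, 0]})

rows = []
for _, c in countries.iterrows():
    interest = rng.normal(size=300)
    slope = 0.20 + 0.15 * c["plurality"]          # interest assumed to matter more under plurality rules
    turnout = 0.50 + slope * interest + rng.normal(scale=0.5, size=300)
    rows.append(pd.DataFrame({"country": c["country"], "plurality": c["plurality"],
                              "interest": interest, "turnout": turnout}))
data = pd.concat(rows, ignore_index=True)

# Strategy 1: pooled individual-level model with a cross-level interaction between the national
# circumstance (plurality) and the individual-level variable (interest).
pooled = smf.ols("turnout ~ interest * plurality", data=data).fit()
print(pooled.params[["interest", "interest:plurality"]])

# Strategy 2: estimate the individual-level relationship country by country, then analyse the
# resulting slopes at the aggregate (country) level.
slopes = (data.groupby("country")
              .apply(lambda g: smf.ols("turnout ~ interest", data=g).fit().params["interest"]))
slopes = slopes.rename("slope").reset_index().merge(countries, on="country")
print(smf.ols("slope ~ plurality", data=slopes).fit().params)
```

The interaction coefficient in the pooled model and the plurality coefficient in the second-step regression both recover the assumed difference in slopes, which is the sense in which the two strategies answer the same question.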

7 Conclusion

Comparative opinion research is potentially a highly powerful instrument. Indeed it is difficult to see how the aspiration of political science to be able to make empirically sustained generalizations about what influences and structures political behavior can be achieved without it. Thus the substantial increase in the amount of such research that has occurred over the last decade or so represents a significant organizational advance in the study of political behavior.

Yet at the same time it is also methodologically at least a potentially fragile endeavor. Most comparative survey research is a voluntary collaboration between national teams, each of which operates in different circumstances and cultures. As a result there is a tendency for survey research to be undertaken differently in different countries—and even if the same survey instrument is administered in the same manner everywhere there is no guarantee that it has the same meaning for respondents everywhere. Such methodological diversity means that our attempts to study what makes countries different run the risk of being confounded by differences between countries in how surveys are conducted. It certainly means that much comparative opinion research tolerates a degree of methodological inconsistency that would not usually be tolerated on a national survey.

There are two possible responses to this problem. One is to attempt to secure greater consistency of methodological approach—and at a high standard. This is the route that has been taken by the European Social Survey, which is undoubtedly methodologically the most impressive exercise in comparative opinion research that has been undertaken to date. Yet it remains to be seen whether such an exercise can be conducted outside the unique circumstances created in Europe by the existence of a relatively powerful cross‐national institution such as the European Union. And if we are to reap fully the benefits of comparative opinion research we need to maximize the variety of countries (and thus of circumstance) that are covered. This suggests that, despite their methodological diversity, there will continue to be an important role for the substantively more diverse global endeavors too.

Afrobarometer. n.d. a . Home page at www.afrobarometer.org/index.html

—— n.d. b Sampling at www.afrobarometer.org/sampling.html

Almond, G. , and Verba, S.   1963 . The Civic Culture: Political Attitudes and Democracy in Five Nations . Princeton: Princeton University Press.


Asia Barometer. n.d. Home page at http://avatoli.ioc.u-tokyo.ac.jp/~asiabarometer/pages/english/index.html

Blais, A., and Carty, K. 1990. Does proportional representation foster voter turnout? European Journal of Political Research, 18: 167–91. 10.1111/j.1475-6765.1990.tb00227.x

—— and Dobrzynska, A.   1998 . Turnout in electoral democracies.   European Journal of Political Research , 33: 239–61. 10.1111/1475-6765.00382

Centre for the Study of Public Policy. n.d. Barometer Surveys at www.cspp.strath.ac.uk

Comparative Study of Electoral Systems. n.d. Home page at www.cses.org/

Curtice, J. Forthcoming. Elections as beauty contests: do the rules matter? In Political Leaders and Democratic Elections , ed. K. Aarts , A. Blais , and H. Schmitt . Oxford: Oxford University Press.

East Asia Barometer. n.d. Home page at http://eacsurvey.law.ntu.edu.tw/

Eijk, C. van der , and Franklin, M.   1996 . Choosing Europe? The European Electorate and National Politics in the Face of Union . Ann Arbor: Univ. of Michigan Press.

European Commission. n.d. Public Opinion at http://europa.eu.int/comm/public_opinion/index_en.htm

European Elections Study. n.d. Home page at www.europeanelectionstudies.net/

European Social Survey. n.d. a . Home page at www.europeansocialsurvey.org/

—— n.d. b . European Social Survey, Round 2: Specification for participating countries. Available at http://naticent02.uuhost.uk.uu.net/proj-spec/round_2/r2_spec_participating_countries.doc

—— 2003. Overview websites with free information on European countries at www.scp.nl/users/stoop/ess_events/links_contextual_data2003.htm

European Values Study. 1999. Methodological questionnaire. Available at www.za.uni-koeln.de/data/add_studies/kat50/EVS_1999_2000/ZA3811fb.pdf

Fisher, S. , Lessard‐Phillips, L. , Hobolt, S. , and Curtice, J. 2006. How the effect of political knowledge on turnout differs in plurality electoral systems. Paper presented at the Annual Meeting of the American Political Science Association, 2006.

Franklin, M.   2002 . The dynamics of electoral participation. Pp. 148–68 in Comparing Democracies 2 , ed. L. LeDuc , R. Niemi , and P. Norris . London: Sage.

Franzese, R.   2005 . Empirical strategies for various manifestations of multilevel data.   Political Analysis , 13: 430–46. 10.1093/pan/mpi024

Groves, R. 1987. Research on survey data quality. Public Opinion Quarterly . 50th anniversary issue: S156–72.

——  1989 . Survey Errors and Survey Costs . New York: John Wiley. 10.1002/0471725277

Heath, A., Fisher, S., and Smith, S. 2005. The globalization of public opinion research. Annual Review of Political Science, 8: 297–333. 10.1146/annurev.polisci.8.090203.103000

Inglehart, R.   1990 . Culture Shift in Advanced Industrial Society . Princeton: Princeton University Press.

——  1997 . Modernization and Postmodernization: Cultural, Economic and Political Change in 43 Societies . Princeton: Princeton University Press.

—— and Abramson, P.   1995 . Value Change in Global Perspective . Ann Arbor: University of Michigan Press.

—— and Welzel, C.   2004 . What insights can multi‐country surveys provide about people and societies?   APSA‐CP Newsletter , 15 (2): 14–18.

International Social Survey Programme. n.d. Home page at www.issp.org/homepage.htm

—— 2003. Working principles. Available at www.issp.org/Documents/isspchar.pdf

Jackman, R.   1987 . Political institutions and voter turnout in industrialized democracies.   American Political Science Review , 81: 405–23. 10.2307/1961959

Jowell, R.   1998 . How comparative is comparative research?   American Behavioral Scientist , 42: 168–77. 10.1177/0002764298042002004

Jusko, K. , and Shiveley, W.   2005 . Applying a two‐step strategy to the analysis of cross‐national public opinion data.   Political Analysis , 13: 327–44. 10.1093/pan/mpi030

Kaase, M. , and Newton, K.   1995 . Beliefs in Government . Oxford: Oxford University Press.

Katz, R.   1997 . Democracy and Elections . New York: Oxford University Press.

Kipling, R.   1910 . The English flag. In R. Kipling , Departmental Ditties and Ballads and Barrack‐room Ballads . New York: Doubleday.

Kuechler, M. 1998. The survey method: an indispensable tool for social science research everywhere? American Behavioral Scientist, 42: 178–200. 10.1177/0002764298042002005

Latinobarómetro Corporation. n.d. Home page at www.latinobarometro.org/index.php?id=150

Lewis, J. , and Linzer, D.   2005 . Estimating regression models in which the dependent variable is based on estimates.   Political Analysis , 13: 345–64. 10.1093/pan/mpi026

Lynn, P. , Häder, S. , Gabler, S. , and Laaksonen, S.   2004 . Methods for achieving equivalence of samples in cross‐national surveys: the European Social Survey experience. Working Papers of the Institute for Social and Economic Research . Paper 2004‐09. Colchester: University of Essex.

Norris, P.   2004 . Electoral Engineering: Voting Rules and Political Behavior . New York: Cambridge University Press.

Park, A. , and Jowell, R.   1997 . Consistencies and Differences in a Cross‐National Survey . London: Social and Community Planning Research.

Powell, G. B.   1986 . American voter turnout in comparative perspective.   American Political Science Review , 80: 17–43. 10.2307/1957082

Seligson, M.   2004 . Comparative survey research: is there a problem?   APSA‐CP Newsletter , 15 (2): 11–14.

Smith, T.   2002 . Developing comparable questions in cross‐national surveys. Pp. 69–91 in Cross‐Cultural Survey Methods , ed. J. Harkness , F. van de Vijver , and P. Mohler . London: Wiley Europe.

Snijders, T., and Bosker, R. 1999. Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. London: Sage.

World Values Survey. n.d. a . Home page at www.worldvaluessurvey.org/

—— n.d. b . Constitution for the World Values Association. Available at www.worldvaluessurvey.org/organization/constitution.pdf

This chapter focuses on the methods and logic of comparative survey research. The chapter by Kittilson describes the wealth of comparative surveys that are available to researchers.

The influence that the existence of the European Union has had on comparative survey research is underlined by the fact that the Union has commissioned, funded, and undertaken the longest‐running and most intense program of cross‐national research anywhere in the world (European Commission n.d.). While its Eurobarometer surveys (including associated surveys in candidate members of the Union) are primarily concerned with the policy need of the Union to chart public support for its activities and institutions, the series has proved to be an invaluable resource in academic research ( e.g. Inglehart 1990 ; Kaase and Newton 1995 ). The surveys have been conducted twice a year since 1973. The initial survey covered the nine countries that were members of the EU at that time; it now covers the current twenty‐five members.

The introduction of direct elections to the Union's European Parliament in 1979 has also stimulated cross‐national survey research on voting behavior and attitudes at those elections, some of which research used the Eurobarometer as its survey platform, under the auspices of the European Election Study (European Elections Study n.d.; van der Eijk and Franklin 1996) .

The European Election Study has on one or more occasions undertaken content analysis of both media output at the time of European Parliament elections and of party manifestos, thereby making it possible to link survey results to the different national media contexts to which voters were exposed. The 1994 study also undertook surveys of candidates, thereby facilitating a study of political representation in the EU.


Characteristics of a Comparative Research Design

Hannah Richardson, 28 June 2018

Comparative research essentially compares two groups in an attempt to draw conclusions about them. Researchers attempt to identify and analyze similarities and differences between the groups, and these studies are most often cross-national, comparing two separate groups of people. Comparative studies can be used to increase understanding between cultures and societies and to create a foundation for compromise and collaboration. Such studies may draw on both quantitative and qualitative research methods.

1 Comparative Quantitative

Quantitative, or experimental, research is characterized by the manipulation of an independent variable to measure and explain its influence on a dependent variable. Because comparative research studies analyze two different groups -- which may have very different social contexts -- it is difficult to establish the parameters of research. Such studies might seek to compare, for example, large amounts of demographic or employment data from different nations that define or measure relevant research elements differently.

However, the methods for statistical analysis of data inherent in quantitative research are still helpful in establishing correlations in comparative studies. Also, the need for a specific research question in quantitative research helps comparative researchers narrow down and establish a more specific comparative research question.

2 Comparative Qualitative

Qualitative, or nonexperimental, research is characterized by observing and recording outcomes without manipulation. In comparative research, data are collected primarily by observation, and the goal is to determine similarities and differences that are related to the particular situation or environment of the two groups. These similarities and differences are identified through qualitative observation methods. Additionally, some researchers have favored designing comparative studies around a variety of case studies in which individuals are observed and behaviors are recorded. The results of each case are then compared across the groups.

3 When to Use It

Comparative research studies should be used when comparing two groups of people, often cross-nationally. These studies analyze the similarities and differences between the two groups in an attempt to better understand both. Comparisons lead to new insights and a better understanding of all participants involved. These studies also require collaboration, strong teams, advanced technologies, and access to international databases, which makes them more expensive. Use a comparative research design when the necessary funding and resources are available.

4 When Not to Use It

Do not use a comparative research design with little funding, limited access to the necessary technology, or few team members. Because of the larger scale of these studies, they should be conducted only if adequate population samples are available. Additionally, the data in these studies require extensive measurement analysis; if the necessary organizational and technological resources are not available, a comparative study should not be attempted. Do not use a comparative design if the data cannot be measured accurately and analyzed reliably and validly.



Internationally Comparative Research Designs in the Social Sciences: Fundamental Issues, Case Selection Logics, and Research Limitations



Achim Goerres, Markus B. Siewert, and Claudius Wagemann


This paper synthesizes methodological knowledge derived from comparative survey research and comparative politics and aims to enable researchers to make prudent research decisions. Starting from the data structures that can occur in international comparisons at different levels, it suggests basic definitions for cases and contexts, i.e., the main ingredients of international comparison. The paper then goes on to discuss the full variety of case selection strategies in order to highlight their relative advantages and disadvantages. Finally, it presents the limitations of internationally comparative social science research. Overall, the paper suggests that comparative research designs must be crafted cautiously, with careful regard to a variety of issues, and emphasizes that there can be no one-size-fits-all solution.



One could argue that there are no N  = 1 studies at all, and that every case study is “comparative”. The rationale for such an opinion is that it is hard to imagine a case study which is conducted without any reference to other cases, including theoretically possible (but factually nonexisting) ideal cases, paradigmatic cases, counterfactual cases, etc.

This exposition might suggest that only the combinations of “most independent variables vary and the outcome is similar between cases” and “most independent variables are similar and the outcome differs between cases” are possible. Ragin’s ( 1987 , 2000 , 2008 ) proposal of QCA (see also Schneider and Wagemann 2012 ) however shows that diversity (Ragin 2008 , p. 19) can also lie on both sides. Only those designs in which nothing varies, i. e. where the cases are similar and also have similar outcomes, do not seem to be very analytically interesting.

Beach, Derek, and Rasmus Brun Pedersen. 2016a. Causal case study methods: foundations and guidelines for comparing, matching, and tracing. Ann Arbor, MI: University of Michigan Press.


Beach, Derek, and Rasmus Brun Pedersen. 2016b. Selecting appropriate cases when tracing causal mechanisms. Sociological Methods & Research, online first (January). https://doi.org/10.1177/0049124115622510.


Beach, Derek, and Rasmus Brun Pedersen. 2019. Process-tracing methods: Foundations and guidelines. 2. ed. Ann Arbor: University of Michigan Press.

Behnke, Joachim. 2005. Lassen sich Signifikanztests auf Vollerhebungen anwenden? Einige essayistische Anmerkungen. (Can significance tests be applied to fully-fledged surveys? A few essayist remarks) Politische Vierteljahresschrift 46:1–15. https://doi.org/10.1007/s11615-005-0240-y .


Bennett, Andrew, and Jeffrey T. Checkel. 2015. Process tracing: From philosophical roots to best practices. In Process tracing. From metaphor to analytic tool, eds. Andrew Bennett and Jeffrey T. Checkel, 3–37. Cambridge: Cambridge University Press.

Bennett, Andrew, and Colin Elman. 2006. Qualitative research: Recent developments in case study methods. Annual Review of Political Science 9:455–76. https://doi.org/10.1146/annurev.polisci.8.082103.104918 .

Berg-Schlosser, Dirk. 2012. Mixed methods in comparative politics: Principles and applications . Basingstoke: Palgrave Macmillan.

Berg-Schlosser, Dirk, and Gisèle De Meur. 2009. Comparative research design: Case and variable selection. In Configurational comparative methods: Qualitative comparative analysis, 19–32. Thousand Oaks: SAGE Publications, Inc.


Berk, Richard A., Bruce Western and Robert E. Weiss. 1995. Statistical inference for apparent populations. Sociological Methodology 25:421–458.

Blatter, Joachim, and Markus Haverland. 2012. Designing case studies: Explanatory approaches in small-n research . Basingstoke: Palgrave Macmillan.

Brady, Henry E., and David Collier. Eds. 2004. Rethinking social inquiry: Diverse tools, shared standards. 1st ed. Lanham, Md: Rowman & Littlefield Publishers.

Brady, Henry E., and David Collier. Eds. 2010. Rethinking social inquiry: Diverse tools, shared standards. 2nd ed. Lanham, Md: Rowman & Littlefield Publishers.

Broscheid, Andreas, and Thomas Gschwend. 2005. Zur statistischen Analyse von Vollerhebungen. (On the statistical analysis of fully-fledged surveys) Politische Vierteljahresschrift 46:16–26. https://doi.org/10.1007/s11615-005-0241-x .

Caporaso, James A., and Alan L. Pelowski. 1971. Economic and Political Integration in Europe: A Time-Series Quasi-Experimental Analysis. American Political Science Review 65(2):418–433.

Coleman, James S. 1990. Foundations of social theory. Cambridge: The Belknap Press of Harvard University Press.

Collier, David. 2014. Symposium: The set-theoretic comparative method—critical assessment and the search for alternatives. SSRN Scholarly Paper ID 2463329. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2463329 .

Collier, David, and Robert Adcock. 1999. Democracy and dichotomies: A pragmatic approach to choices about concepts. Annual Review of Political Science 2:537–565.

Collier, David, and James Mahoney. 1996. Insights and pitfalls: Selection bias in qualitative research. World Politics 49:56–91. https://doi.org/10.1353/wp.1996.0023 .

Collier, David, Jason Seawright and Gerardo L. Munck. 2010. The quest for standards: King, Keohane, and Verba’s designing social inquiry. In Rethinking social inquiry. Diverse tools, shared standards, eds. Henry E. Brady and David Collier, 2nd edition, 33–64. Lanham: Rowman & Littlefield Publishers.

Dahl, Robert A. Ed. 1966. Political opposition in western democracies. Yale: Yale University Press.

Dion, Douglas. 2003. Evidence and inference in the comparative case study. In Necessary conditions: Theory, methodology, and applications , ed. Gary Goertz and Harvey Starr, 127–45. Lanham, Md: Rowman & Littlefield Publishers.

Eckstein, Harry. 1975. Case study and theory in political science. In Handbook of political science, eds. Fred I. Greenstein and Nelson W. Polsby, 79–137. Reading: Addison-Wesley.

Eijk, Cees van der, and Mark N. Franklin. 1996. Choosing Europe? The European electorate and national politics in the face of union. Ann Arbor: The University of Michigan Press.

Fearon, James D., and David D. Laitin. 2008. Integrating qualitative and quantitative methods. In The Oxford handbook of political methodology , eds. Janet M. Box-Steffensmeier, Henry E. Brady and David Collier. Oxford; New York: Oxford University Press.

Franklin, James C. 2008. Shame on you: The impact of human rights criticism on political repression in Latin America. International Studies Quarterly 52:187–211. https://doi.org/10.1111/j.1468-2478.2007.00496.x .

Galiani, Sebastian, Stephen Knack, Lixin Colin Xu and Ben Zou. 2017. The effect of aid on growth: Evidence from a quasi-experiment. Journal of Economic Growth 22:1–33. https://doi.org/10.1007/s10887-016-9137-4 .

Ganghof, Steffen. 2005. Vergleichen in Qualitativer und Quantitativer Politikwissenschaft: X‑Zentrierte Versus Y‑Zentrierte Forschungsstrategien. (Comparison in qualitative and quantitative political science. X‑centered v. Y‑centered research strategies) In Vergleichen in Der Politikwissenschaft, eds. Sabine Kropp and Michael Minkenberg, 76–93. Wiesbaden: VS Verlag.

Geddes, Barbara. 1990. How the cases you choose affect the answers you get: Selection bias in comparative politics. Political Analysis 2:131–150.

George, Alexander L., and Andrew Bennett. 2005. Case studies and theory development in the social sciences. Cambridge, Mass: The MIT Press.

Gerring, John. 2007. Case study research: Principles and practices. Cambridge; New York: Cambridge University Press.

Goerres, Achim, and Markus Tepe. 2010. Age-based self-interest, intergenerational solidarity and the welfare state: A comparative analysis of older people’s attitudes towards public childcare in 12 OECD countries. European Journal of Political Research 49:818–51. https://doi.org/10.1111/j.1475-6765.2010.01920.x .

Goertz, Gary. 2006. Social science concepts: A user’s guide. Princeton; Oxford: Princeton University Press.

Goertz, Gary. 2017. Multimethod research, causal mechanisms, and case studies: An integrated approach. Princeton, NJ: Princeton University Press.

Goertz, Gary, and James Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton, N.J: Princeton University Press.

Goldthorpe, John H. 1997. Current issues in comparative macrosociology: A debate on methodological issues. Comparative Social Research 16:1–26.

Jahn, Detlef. 2006. Globalization as “Galton’s problem”: The missing link in the analysis of diffusion patterns in welfare state development. International Organization 60. https://doi.org/10.1017/S0020818306060127 .

King, Gary, Robert O. Keohane and Sidney Verba. 1994. Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.

Kittel, Bernhard. 2006. A crazy methodology?: On the limits of macro-quantitative social science research. International Sociology 21:647–77. https://doi.org/10.1177/0268580906067835 .

Lazarsfeld, Paul. 1937. Some remarks on typological procedures in social research. Zeitschrift für Sozialforschung 6:119–39.

Lieberman, Evan S. 2005. Nested analysis as a mixed-method strategy for comparative research. American Political Science Review 99:435–52. https://doi.org/10.1017/S0003055405051762 .

Lijphart, Arend. 1971. Comparative politics and the comparative method . American Political Science Review 65:682–93. https://doi.org/10.2307/1955513 .

Lundsgaarde, Erik, Christian Breunig and Aseem Prakash. 2010. Instrumental philanthropy: Trade and the allocation of foreign aid. Canadian Journal of Political Science 43:733–61.

Maggetti, Martino, Claudio Radaelli and Fabrizio Gilardi. 2013. Designing research in the social sciences. Thousand Oaks: SAGE.

Mahoney, James. 2003. Strategies of causal assessment in comparative historical analysis. In Comparative historical analysis in the social sciences , eds. Dietrich Rueschemeyer and James Mahoney, 337–72. Cambridge; New York: Cambridge University Press.

Mahoney, James. 2010. After KKV: The new methodology of qualitative research. World Politics 62:120–47. https://doi.org/10.1017/S0043887109990220 .

Mahoney, James, and Gary Goertz. 2004. The possibility principle: Choosing negative cases in comparative research. The American Political Science Review 98:653–69.

Mahoney, James, and Gary Goertz. 2006. A tale of two cultures: Contrasting quantitative and qualitative research. Political Analysis 14:227–49. https://doi.org/10.1093/pan/mpj017 .

Marks, Gary, Liesbet Hooghe, Moira Nelson and Erica Edwards. 2006. Party competition and European integration in the east and west. Comparative Political Studies 39:155–75. https://doi.org/10.1177/0010414005281932 .

Merton, Robert. 1957. Social theory and social structure. New York: Free Press.

Merz, Nicolas, Sven Regel and Jirka Lewandowski. 2016. The manifesto corpus: A new resource for research on political parties and quantitative text analysis. Research & Politics 3:205316801664334. https://doi.org/10.1177/2053168016643346 .

Michels, Robert. 1962. Political parties: A sociological study of the oligarchical tendencies of modern democracy . New York: Collier Books.

Nielsen, Richard A. 2016. Case selection via matching. Sociological Methods & Research 45:569–97. https://doi.org/10.1177/0049124114547054 .

Porta, Donatella della, and Michael Keating. 2008. How many approaches in the social sciences? An epistemological introduction. In Approaches and methodologies in the social sciences. A pluralist perspective, eds. Donatella della Porta and Michael Keating, 19–39. Cambridge; New York: Cambridge University Press.

Powell, G. Bingham, Russell J. Dalton and Kaare Strom. 2014. Comparative politics today: A world view. 11th ed. Boston: Pearson Educ.

Przeworski, Adam, and Henry J. Teune. 1970. The logic of comparative social inquiry . New York: John Wiley & Sons Inc.

Ragin, Charles C. 1987. The comparative method: Moving beyond qualitative and quantitative strategies. Berkley: University of California Press.

Ragin, Charles C. 2000. Fuzzy-set social science. Chicago: University of Chicago Press.

Ragin, Charles C. 2004. Turning the tables: How case-oriented research challenges variable-oriented research. In Rethinking social inquiry : Diverse tools, shared standards , eds. Henry E. Brady and David Collier, 123–38. Lanham, Md: Rowman & Littlefield Publishers.

Ragin, Charles C. 2008. Redesigning social inquiry: Fuzzy sets and beyond. Chicago: University of Chicago Press.

Ragin, Charles C., and Howard S. Becker. 1992. What is a case?: Exploring the foundations of social inquiry. Cambridge University Press.

Rohlfing, Ingo. 2012. Case studies and causal inference: An integrative framework . Basingstokes: Palgrave Macmillan.

Rohlfing, Ingo, and Carsten Q. Schneider. 2013. Improving research on necessary conditions: Formalized case selection for process tracing after QCA. Political Research Quarterly 66:220–35.

Rohlfing, Ingo, and Carsten Q. Schneider. 2016. A unifying framework for causal analysis in set-theoretic multimethod research. Sociological Methods & Research, online first (March). https://doi.org/10.1177/0049124115626170 .

Rueschemeyer, Dietrich. 2003. Can one or a few cases yield theoretical gains? In Comparative historical analysis in the social sciences , eds. Dietrich Rueschemeyer and James Mahoney, 305–36. Cambridge; New York: Cambridge University Press.

Sartori, Giovanni. 1970. Concept misformation in comparative politics. American Political Science Review 64:1033–53. https://doi.org/10.2307/1958356 .

Schmitter, Philippe C. 2008. The design of social and political research. Chinese Political Science Review . https://doi.org/10.1007/s41111-016-0044-9 .

Schneider, Carsten Q., and Ingo Rohlfing. 2016. Case studies nested in fuzzy-set QCA on sufficiency: Formalizing case selection and causal inference. Sociological Methods & Research 45:526–68. https://doi.org/10.1177/0049124114532446 .

Schneider, Carsten Q., and Claudius Wagemann. 2012. Set-theoretic methods for the social sciences: A guide to qualitative comparative analysis. Cambridge: Cambridge University Press.

Seawright, Jason, and David Collier. 2010. Glossary. In Rethinking social inquiry. Diverse tools, shared standards, eds. Henry E. Brady and David Collier, 2nd ed., 313–60. Lanham, Md: Rowman & Littlefield Publishers.

Seawright, Jason, and John Gerring. 2008. Case selection techniques in case study research, a menu of qualitative and quantitative options. Political Research Quarterly 61:294–308.

Shapiro, Ian. 2002. Problems, methods, and theories in the study of politics, or what’s wrong with political science and what to do about it. Political Theory 30:588–611.

Simmons, Beth A., and Zachary Elkins. 2004. The globalization of liberalization: Policy diffusion in the international political economy. American Political Science Review 98:171–89. https://doi.org/10.1017/S0003055404001078 .

Skocpol, Theda, and Margaret Somers. 1980. The uses of comparative history in macrosocial inquriy. Comparative Studies in Society and History 22:174–97.

Snyder, Richard. 2001. Scaling down: The subnational comparative method. Studies in Comparative International Development 36:93–110. https://doi.org/10.1007/BF02687586 .

Steenbergen, Marco, and Bradford S. Jones. 2002. Modeling multilevel data structures. American Journal of Political Science 46:218–37.

Wagemann, Claudius, Achim Goerres and Markus Siewert. Eds. 2019. Handbuch Methoden der Politikwissenschaft, Wiesbaden: Springer, online available at https://link.springer.com/referencework/10.1007/978-3-658-16937-4

Weisskopf, Thomas E. 1975. China and India: Contrasting Experiences in Economic Development. The American Economic Review 65:356–364.

Weller, Nicholas, and Jeb Barnes. 2014. Finding pathways: Mixed-method research for studying causal mechanisms . Cambridge: Cambridge University Press.

Wright Mills, C. 1959. The sociological imagination . Oxford: Oxford University Press.


Goerres, A., Siewert, M.B. & Wagemann, C. Internationally Comparative Research Designs in the Social Sciences: Fundamental Issues, Case Selection Logics, and Research Limitations. Köln Z Soziol 71 (Suppl 1), 75–97 (2019). https://doi.org/10.1007/s11577-019-00600-2




We are pleased to announce that the SHARE BERLIN Institute GmbH will host the 2024 CSDI Workshop. The workshop will take place March 18th – 20th at the SHARE BERLIN Institute GmbH in Berlin, Germany.

Call for Individual Abstracts 

Below is a list of suggested topics. If your topic is not listed, please feel free to submit a session abstract for any topic that relates to comparative survey design and implementation.

The individual abstracts are due January 5, 2024 .

  • Equivalency measures
  • Achieving comparability
  • Questionnaire development and testing
  • Translation, adaptation, and assessment
  • Minimizing measurement error
  • Interviewer effects
  • Sampling innovations
  • Data collection challenges and solutions
  • Quality control
  • Innovative uses of technology
  • Paradata use across the lifecycle
  • Metadata use
  • Comparative standard demographics
  • Data curation and dissemination
  • Comparative analyses
  • Computational comparability measures

To submit your abstract (up to 300 words), please use this link: Submit Individual Abstract.

For your convenience, here is a list of important dates:

  • January 5, 2024 – Individual abstracts due (session organizers are responsible for ensuring their session participants submit individual abstracts) 
  • January 15, 2024 – Abstract notification sent, and online registration opens
  • March 1, 2024 – Online registration closes
  • March 18-20, 2024 – CSDI Workshop

Updating CSDI Database

The Comparative Survey Design and Implementation (CSDI) group is doing a little housekeeping, and we are in the process of updating contact information for people who have attended past events (e.g., CSDI Workshops or 3MC Conferences) or who have asked to be added to our email list to receive announcements and updates.

If you would like to continue to receive emails from CSDI, please take a minute (I promise it will only take a minute!) to update your contact information using this form. Please complete the form by December 1st to ensure you do not miss out on future announcements and updates.

Advances in Comparative Survey Methods: Multinational, Multiregional and Multicultural Contexts (3MC)


Since the publication of the last 3MC monograph (2010), there have been substantial methodological, operational, and technical advances in comparative survey research. There are also whole new areas of methodological development, including collecting biomarkers, the human-subject regulatory environment, innovations in data collection methodology and sampling techniques, the use of paradata across the survey lifecycle, metadata standards for dissemination, and new analytical techniques. This new monograph follows the survey lifecycle and includes chapters on study design and considerations, sampling, questionnaire design (assuming multi-language surveys), translation, mixed mode, the regulatory environment, data collection, quality assurance and control, analysis techniques, and data documentation and dissemination. The table of contents can be accessed here.

Johnson, T. P., Pennell, B.-E., Stoop, I., & Dorer, B. (Eds.). (2018).  Advances in comparative survey methods: Multinational, multiregional and multicultural contexts (3MC) . Hoboken, New Jersey: John Wiley & Sons Inc.

Survey Methods in Multicultural, Multinational, and Multiregional Contexts


Harkness, J. A., Braun, M., Edwards, B., Johnson, T. P., & Lyberg, L. E. (2010).  Survey methods in multicultural, multinational, and multiregional contexts . Hoboken, New Jersey: John Wiley & Sons.

New and Expanded Cross-Cultural Survey Guidelines

First published in 2008, the Cross-Cultural Survey Guidelines have recently undergone a significant update and expansion (Beta release: July 2016). The new edition includes over 800 pages of content, with major updates and expansions of all existing chapters as well as the addition of new chapters on study design, study management, paradata, and statistical analysis. More than 70 professionals from 35 organizations contributed to this effort. The senior editor was Tom W. Smith of NORC at the University of Chicago. See http://ccsg.isr.umich.edu/index.php/about-us/contributions for a complete list of contributors.

The Cross-Cultural Survey Guidelines were developed to provide information on best practices across the survey lifecycle in a world in which the number and scope of studies covering multiple cultures, languages, nations, or regions has increased significantly. They were the product of an initiative of the International Workshop on Comparative Survey Design and Implementation (http://www.csdiworkshop.org/), led by Beth-Ellen Pennell, currently the director of international survey operations at the Survey Research Center, Institute for Social Research at the University of Michigan. The aim of the initiative was to develop and promote internationally recognized guidelines that highlight best practice for the conduct of comparative survey research across cultures and countries. The guidelines address the gap in the existing literature on the details of implementing surveys that are specifically designed for comparative research, including what aspects should be standardized and when local adaptation is appropriate. The intended audience includes researchers and survey practitioners planning or engaged in what are increasingly referred to as multinational, multiregional, or multicultural (3MC) surveys, although much of the material is also relevant for single-country surveys.

The guidelines cover all aspects of the survey lifecycle and include the following chapters: Study Design and Organizational Structure; Study Management; Tenders, Bids and Contracts; Sample Design; Questionnaire Design; Adaptation; Translation; Instrument Technical Design; Interviewer Recruitment, Selection, and Training; Pretesting; Data Collection; Paradata and Other Auxiliary Data; Data Harmonization; Data Processing and Statistical Adjustment; Data Dissemination; Survey Quality and Ethical Considerations. The guidelines can be found at http://ccsg.isr.umich.edu. We welcome feedback and suggestions.

Janet A. Harkness Student Paper Award


http://wapor.org/janet-a-harkness-student-paper-award


Descriptive Research Design – Types, Methods and Examples


Definition:

Descriptive research design is a type of research methodology that aims to describe or document the characteristics, behaviors, attitudes, opinions, or perceptions of a group or population being studied.

Descriptive research design does not attempt to establish cause-and-effect relationships between variables or make predictions about future outcomes. Instead, it focuses on providing a detailed and accurate representation of the data collected, which can be useful for generating hypotheses, exploring trends, and identifying patterns in the data.

Types of Descriptive Research Design

Types of Descriptive Research Design are as follows:

Cross-sectional Study

This involves collecting data at a single point in time from a sample or population to describe their characteristics or behaviors. For example, a researcher may conduct a cross-sectional study to investigate the prevalence of certain health conditions among a population, or to describe the attitudes and beliefs of a particular group.

Longitudinal Study

This involves collecting data over an extended period of time, often through repeated observations or surveys of the same group or population. Longitudinal studies can be used to track changes in attitudes, behaviors, or outcomes over time, or to investigate the effects of interventions or treatments.

Case Study

This involves an in-depth examination of a single individual, group, or situation to gain a detailed understanding of its characteristics or dynamics. Case studies are often used in psychology, sociology, and business to explore complex phenomena or to generate hypotheses for further research.

Survey Research

This involves collecting data from a sample or population through standardized questionnaires or interviews. Surveys can be used to describe attitudes, opinions, behaviors, or demographic characteristics of a group, and can be conducted in person, by phone, or online.

Observational Research

This involves observing and documenting the behavior or interactions of individuals or groups in a natural or controlled setting. Observational studies can be used to describe social, cultural, or environmental phenomena, or to investigate the effects of interventions or treatments.

Correlational Research

This involves examining the relationships between two or more variables to describe their patterns or associations. Correlational studies can be used to identify potential causal relationships or to explore the strength and direction of relationships between variables.
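As a rough illustration, the short Python sketch below (assuming pandas is available; the data and variable names are entirely made up) computes pairwise Pearson correlations for two variables:

```python
import pandas as pd

# Hypothetical data: weekly hours of news consumption and a 0-10 tolerance score
df = pd.DataFrame({
    "news_hours": [1, 3, 5, 2, 4, 6, 0, 3],
    "tolerance": [4, 6, 7, 5, 6, 8, 3, 5],
})

# Pairwise Pearson correlations describe the strength and direction of the
# linear association between the variables, but say nothing about causation.
print(df.corr(method="pearson"))
```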

Data Analysis Methods

The data analysis methods used in a descriptive research design depend on the type of data collected and the research question being addressed. Here are some common methods of data analysis for descriptive research:

Descriptive Statistics

This method involves analyzing data to summarize and describe the key features of a sample or population. Descriptive statistics can include measures of central tendency (e.g., mean, median, mode) and measures of variability (e.g., range, standard deviation).
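For example, the following Python sketch (using pandas on a small invented data set) computes the usual measures of central tendency and variability:

```python
import pandas as pd

# Hypothetical survey responses: age in years, satisfaction on a 1-5 scale
df = pd.DataFrame({
    "age": [23, 35, 31, 42, 28, 35, 50, 29],
    "satisfaction": [4, 5, 3, 4, 2, 5, 4, 3],
})

# Measures of central tendency
print(df.mean())          # mean of each variable
print(df.median())        # median of each variable
print(df.mode().iloc[0])  # one modal value per variable

# Measures of variability
print(df.max() - df.min())  # range
print(df.std())             # standard deviation
```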

Cross-tabulation

This method involves analyzing data by creating a table that shows the frequency of two or more variables together. Cross-tabulation can help identify patterns or relationships between variables.
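A minimal sketch, again in Python with pandas and invented data, shows how such a joint frequency table (with optional row percentages) can be produced:

```python
import pandas as pd

# Hypothetical categorical responses from two groups
df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "opinion": ["agree", "disagree", "agree", "agree",
                "disagree", "agree", "disagree", "disagree"],
})

# Joint frequency table of the two variables
print(pd.crosstab(df["group"], df["opinion"]))

# Row proportions make groups of different sizes easier to compare
print(pd.crosstab(df["group"], df["opinion"], normalize="index"))
```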

Content Analysis

This method involves analyzing qualitative data (e.g., text, images, audio) to identify themes, patterns, or trends. Content analysis can be used to describe the characteristics of a sample or population, or to identify factors that influence attitudes or behaviors.
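As a very simple, hypothetical illustration of the quantitative side of content analysis, the Python sketch below counts word frequencies in a handful of invented open-ended answers; a real analysis would rely on an explicit coding scheme rather than raw word counts:

```python
import re
from collections import Counter

# Hypothetical open-ended survey answers
answers = [
    "I worry about the economy and jobs",
    "Jobs and wages are my main concern",
    "Health care costs worry me the most",
]

# Crude tokenization and word counts give a first look at recurring themes
tokens = []
for text in answers:
    tokens.extend(re.findall(r"[a-z']+", text.lower()))

print(Counter(tokens).most_common(5))
```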

Qualitative Coding

This method involves analyzing qualitative data by assigning codes to segments of data based on their meaning or content. Qualitative coding can be used to identify common themes, patterns, or categories within the data.

Visualization

This method involves creating graphs or charts to represent data visually. Visualization can help identify patterns or relationships between variables and make it easier to communicate findings to others.
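For instance, a basic histogram of invented test scores can be drawn with matplotlib (assuming it is installed); the labels and bin count are arbitrary choices:

```python
import matplotlib.pyplot as plt

# Hypothetical test scores for a sample of students
scores = [55, 62, 68, 70, 71, 74, 75, 78, 80, 82, 85, 88, 90, 93]

# A histogram shows the shape of the score distribution at a glance
plt.hist(scores, bins=5, edgecolor="black")
plt.xlabel("Test score")
plt.ylabel("Number of students")
plt.title("Distribution of test scores")
plt.show()
```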

Comparative Analysis

This method involves comparing data across different groups or time periods to identify similarities and differences. Comparative analysis can help describe changes in attitudes or behaviors over time or differences between subgroups within a population.
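The following Python sketch (pandas, invented data) illustrates one simple form of comparative analysis, comparing group means across two survey waves:

```python
import pandas as pd

# Hypothetical satisfaction scores from two survey waves and two subgroups
df = pd.DataFrame({
    "wave": [2019, 2019, 2019, 2023, 2023, 2023],
    "group": ["A", "B", "A", "A", "B", "B"],
    "score": [3.8, 4.1, 3.5, 4.4, 4.0, 4.6],
})

# Mean score by wave and group; the differences describe change over time
# and between subgroups without implying any causal explanation.
print(df.groupby(["wave", "group"])["score"].mean().unstack())
```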

Applications of Descriptive Research Design

Descriptive research design has numerous applications in various fields. Some of the common applications of descriptive research design are:

  • Market research: Descriptive research design is widely used in market research to understand consumer preferences, behavior, and attitudes. This helps companies to develop new products and services, improve marketing strategies, and increase customer satisfaction.
  • Health research: Descriptive research design is used in health research to describe the prevalence and distribution of a disease or health condition in a population. This helps healthcare providers to develop prevention and treatment strategies.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs. This helps educators to improve teaching methods and develop effective educational programs.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs. This helps researchers to understand social behavior and develop effective policies.
  • Public opinion research: Descriptive research design is used in public opinion research to understand the opinions and attitudes of the general public on various issues. This helps policymakers to develop effective policies that are aligned with public opinion.
  • Environmental research: Descriptive research design is used in environmental research to describe the environmental conditions of a particular region or ecosystem. This helps policymakers and environmentalists to develop effective conservation and preservation strategies.

Descriptive Research Design Examples

Here are some real-time examples of descriptive research designs:

  • A restaurant chain wants to understand the demographics and attitudes of its customers. They conduct a survey asking customers about their age, gender, income, frequency of visits, favorite menu items, and overall satisfaction. The survey data is analyzed using descriptive statistics and cross-tabulation to describe the characteristics of their customer base.
  • A medical researcher wants to describe the prevalence and risk factors of a particular disease in a population. They conduct a cross-sectional study in which they collect data from a sample of individuals using a standardized questionnaire. The data is analyzed using descriptive statistics and cross-tabulation to identify patterns in the prevalence and risk factors of the disease.
  • An education researcher wants to describe the learning outcomes of students in a particular school district. They collect test scores from a representative sample of students in the district and use descriptive statistics to calculate the mean, median, and standard deviation of the scores. They also create visualizations such as histograms and box plots to show the distribution of scores.
  • A marketing team wants to understand the attitudes and behaviors of consumers towards a new product. They conduct a series of focus groups and use qualitative coding to identify common themes and patterns in the data. They also create visualizations such as word clouds to show the most frequently mentioned topics.
  • An environmental scientist wants to describe the biodiversity of a particular ecosystem. They conduct an observational study in which they collect data on the species and abundance of plants and animals in the ecosystem. The data is analyzed using descriptive statistics to describe the diversity and richness of the ecosystem.

How to Conduct Descriptive Research Design

To conduct a descriptive research design, you can follow these general steps:

  • Define your research question: Clearly define the research question or problem that you want to address. Your research question should be specific and focused to guide your data collection and analysis.
  • Choose your research method: Select the most appropriate research method for your research question. As discussed earlier, common research methods for descriptive research include surveys, case studies, observational studies, cross-sectional studies, and longitudinal studies.
  • Design your study: Plan the details of your study, including the sampling strategy, data collection methods, and data analysis plan. Determine the sample size and sampling method, decide on the data collection tools (such as questionnaires, interviews, or observations), and outline your data analysis plan.
  • Collect data: Collect data from your sample or population using the data collection tools you have chosen. Ensure that you follow ethical guidelines for research and obtain informed consent from participants.
  • Analyze data: Use appropriate statistical or qualitative analysis methods to analyze your data. As discussed earlier, common data analysis methods for descriptive research include descriptive statistics, cross-tabulation, content analysis, qualitative coding, visualization, and comparative analysis.
  • Interpret results: Interpret your findings in light of your research question and objectives. Identify patterns, trends, and relationships in the data, and describe the characteristics of your sample or population.
  • Draw conclusions and report results: Draw conclusions based on your analysis and interpretation of the data. Report your results in a clear and concise manner, using appropriate tables, graphs, or figures to present your findings. Ensure that your report follows accepted research standards and guidelines.

When to Use Descriptive Research Design

Descriptive research design is used in situations where the researcher wants to describe a population or phenomenon in detail. It is used to gather information about the current status or condition of a group or phenomenon without making any causal inferences. Descriptive research design is useful in the following situations:

  • Exploratory research: Descriptive research design is often used in exploratory research to gain an initial understanding of a phenomenon or population.
  • Identifying trends: Descriptive research design can be used to identify trends or patterns in a population, such as changes in consumer behavior or attitudes over time.
  • Market research: Descriptive research design is commonly used in market research to understand consumer preferences, behavior, and attitudes.
  • Health research: Descriptive research design is useful in health research to describe the prevalence and distribution of a disease or health condition in a population.
  • Social science research: Descriptive research design is used in social science research to describe social phenomena such as cultural norms, values, and beliefs.
  • Educational research: Descriptive research design is used in educational research to describe the performance of students, schools, or educational programs.

Purpose of Descriptive Research Design

The main purpose of descriptive research design is to describe and measure the characteristics of a population or phenomenon in a systematic and objective manner. It involves collecting data that describe the current status or condition of the population or phenomenon of interest, without manipulating or altering any variables.

The purpose of descriptive research design can be summarized as follows:

  • To provide an accurate description of a population or phenomenon: Descriptive research design aims to provide a comprehensive and accurate description of a population or phenomenon of interest. This can help researchers to develop a better understanding of the characteristics of the population or phenomenon.
  • To identify trends and patterns: Descriptive research design can help researchers to identify trends and patterns in the data, such as changes in behavior or attitudes over time. This can be useful for making predictions and developing strategies.
  • To generate hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • To establish a baseline: Descriptive research design can establish a baseline or starting point for future research. This can be useful for comparing data from different time periods or populations.

Characteristics of Descriptive Research Design

Descriptive research design has several key characteristics that distinguish it from other research designs. Some of the main characteristics of descriptive research design are:

  • Objective: Descriptive research design is objective in nature, which means that it focuses on collecting factual and accurate data without any personal bias. The researcher aims to report the data objectively without any personal interpretation.
  • Non-experimental: Descriptive research design is non-experimental, which means that the researcher does not manipulate any variables. The researcher simply observes and records the behavior or characteristics of the population or phenomenon of interest.
  • Quantitative: Descriptive research design is quantitative in nature, which means that it involves collecting numerical data that can be analyzed using statistical techniques. This helps to provide a more precise and accurate description of the population or phenomenon.
  • Cross-sectional: Descriptive research design is often cross-sectional, which means that the data is collected at a single point in time. This can be useful for understanding the current state of the population or phenomenon, but it may not provide information about changes over time.
  • Large sample size: Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Systematic and structured: Descriptive research design involves a systematic and structured approach to data collection, which helps to ensure that the data is accurate and reliable. This involves using standardized procedures for data collection, such as surveys, questionnaires, or observation checklists.

Advantages of Descriptive Research Design

Descriptive research design has several advantages that make it a popular choice for researchers. Some of the main advantages of descriptive research design are:

  • Provides an accurate description: Descriptive research design is focused on accurately describing the characteristics of a population or phenomenon. This can help researchers to develop a better understanding of the subject of interest.
  • Easy to conduct: Descriptive research design is relatively easy to conduct and requires minimal resources compared to other research designs. It can be conducted quickly and efficiently, and data can be collected through surveys, questionnaires, or observations.
  • Useful for generating hypotheses: Descriptive research design can be used to generate hypotheses or research questions that can be tested in future studies. For example, if a descriptive study finds a correlation between two variables, this could lead to the development of a hypothesis about the causal relationship between the variables.
  • Large sample size: Descriptive research design typically involves a large sample size, which helps to ensure that the data is representative of the population of interest. A large sample size also helps to increase the reliability and validity of the data.
  • Can be used to monitor changes: Descriptive research design can be used to monitor changes over time in a population or phenomenon. This can be useful for identifying trends and patterns, and for making predictions about future behavior or attitudes.
  • Can be used in a variety of fields: Descriptive research design can be used in a variety of fields, including social sciences, healthcare, business, and education.

Limitation of Descriptive Research Design

Descriptive research design also has some limitations that researchers should consider before using this design. Some of the main limitations of descriptive research design are:

  • Cannot establish cause and effect: Descriptive research design cannot establish cause and effect relationships between variables. It only provides a description of the characteristics of the population or phenomenon of interest.
  • Limited generalizability: The results of a descriptive study may not be generalizable to other populations or situations. This is because descriptive research design often involves a specific sample or situation, which may not be representative of the broader population.
  • Potential for bias: Descriptive research design can be subject to bias, particularly if the researcher is not objective in their data collection or interpretation. This can lead to inaccurate or incomplete descriptions of the population or phenomenon of interest.
  • Limited depth: Descriptive research design may provide a superficial description of the population or phenomenon of interest. It does not delve into the underlying causes or mechanisms behind the observed behavior or characteristics.
  • Limited utility for theory development: Descriptive research design may not be useful for developing theories about the relationship between variables. It only provides a description of the variables themselves.
  • Relies on self-report data: Descriptive research design often relies on self-report data, such as surveys or questionnaires. This type of data may be subject to biases, such as social desirability bias or recall bias.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



COMMENTS

  1. Chapter 2 Comparative survey research

    Chapter 2 Comparative survey research. Cross-national and cross-cultural comparative surveys are a very important resource for the Social Sciences. According to the Overview of Comparative Surveys Worldwide, more than 90 cross-national comparative surveys have been conducted around the world since 1948.. Even though surveys can aim to fulfill different purposes, generally they aim to estimate ...

  2. 15

    What makes a study comparative is not the particular techniques employed but the theoretical orientation and the sources of data. All the tools of the social scientist, including historical analysis, fieldwork, surveys, and aggregate data analysis, can be used to achieve the goals of comparative research. So, there is plenty of room for the ...

  3. Comparative Research Methods

    The communication field is likely to see more comparative survey research in the future, partly due to the growing data availability on media use from projects such as the World Internet Project. But even in this project, which was designed with a comparative goal from the outset, meaningful conclusions can only be drawn after careful tests of ...

  4. PDF SURVEY AND CORRELATIONAL RESEARCH DESIGNS

    The survey research design is the use of a survey, administered either in written form or orally, to quan-tify, describe, or characterize an individual or a group. A survey is a series of questions or statements, called items, used in a questionnaire or an interview to mea-sure the self-reports or responses of respondents.

  5. (PDF) A Short Introduction to Comparative Research

    A comparative study is a kind of method that analyzes phenomena and then put them together. to find the points of differentiation and similarity (MokhtarianPour, 2016). A comparative perspective ...

  6. Study designs: Part 1

    The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on "study designs," we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

  7. Comparative research

    Comparative research is a research methodology in the social sciences exemplified in cross-cultural or comparative studies that aims to make comparisons across different countries or cultures.A major problem in comparative research is that the data sets in different countries may define categories differently (for example by using different definitions of poverty) or may not use the same ...

  8. Comparative survey research: Goal and challenges.

    We spend some time therefore on issues of standardization and implementation, on question design and on question adaptation and translation. Among the topics not dealt with here, but of obvious relevance for comparative survey research, are sampling, analysis, instrument testing, study documentation, and ethical considerations.

  9. Comparative Survey Methodology

    The chapter considers the special nature of comparative surveys, and how comparability may drive design decisions. It also considers recent changes in comparative survey research methods and practice. The final section of the chapter considers ongoing challenges and the current outlook. Controlled Vocabulary Terms. data collection

  10. Comparative Research Methods

    Comparative Research Methods FRANK ESSER University of Zurich, Switzerland ... the research design. Second, the macro-level units of comparison must be clearly delineated, irrespective of how the boundaries are defined. ... In comparative survey research, a problem on the side of. COMPARATIVERESEARCH METHODS 11

  11. Comparative Designs

    A comparative design involves studying variation by comparing a limited number of cases without using statistical probability analyses. Such designs are particularly useful for knowledge development when we lack the conditions for control through variable-centred, quasi-experimentaldesigns. Comparative designs often combine different research ...

  12. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  13. Editorial: Comparative Survey Research

    Only by dealing with these challenges on top of the standard study-design issues can scientifically-credible comparative research emerge (Smith, 2002, 2004). The basic goal of cross-national survey research is to construct questionnaires that are functionally equivalent across populations. 1 Questions need not only be valid, but also must have ...

  14. PDF Internationally Comparative Research Designs in the Social Sciences

    parative survey research and comparative politics and aims to enable researches to make prudent research decisions. Starting from the data structure that can occur in ... that a research design can, and even must, undergo necessary adjustments during the course of research (Schmitter 2008), (iii) and that, at the end of the day, ev-

  15. Designing Research With Qualitative Comparative Analysis (QCA

    Berg-Schlosser D., De Meur G. 2009. "Comparative Research Design: Case and Variable Selection." Pp. 19-32 in Configurational Comparative ... Walter A. 2014. "QCA, the Truth Table Analysis and Large-N Survey Data: The Benefits of Calibration and the Importance of Robustness Tests." COMPASS Working Paper 2014-2079. Available at ...

  16. 48 Comparative Opinion Surveys

    This article discusses comparative opinion surveys and comparative survey research. It first identifies the problems of comparative surveys and how these problems can be solved or overcome. It then tries to determine whether these surveys should include more or fewer countries during the research process. The last two sections discuss the need ...

  17. Cross-National Comparative Research—Analytical Strategies, Results, and

    This introductory article reviews the history of cross-national comparative research, discusses its typical research designs and research questions, and ul ... and methodologies (see Goerres et al. 2019), most of them can basically be regarded as examples of four types of research design. ... Advances in comparative survey methods ...

  18. Characteristics of a Comparative Research Design

    Comparative research essentially compares two groups in an attempt to draw a conclusion about them. Researchers attempt to identify and analyze similarities and differences between groups, and these studies are most often cross-national, comparing two separate people groups. ... making them more expensive. Use comparative research design when ...

  19. Types of Research Designs Compared

    Types of Research Designs Compared | Guide & Examples. Published on June 20, 2019 by Shona McCombes.Revised on June 22, 2023. When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do.. There are many ways to categorize different types of research.

  20. Internationally Comparative Research Designs in the Social ...

    This paper synthesizes methodological knowledge derived from comparative survey research and comparative politics and aims to enable researches to make prudent research decisions. Starting from the data structure that can occur in international comparisons at different levels, it suggests basic definitions for cases and contexts, i. e. the main ingredients of international comparison. The ...

  21. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  22. CSDI Workshop

    Updating CSDI Database. The Comparative Survey Design and Implementation (CSDI) group is doing a little housekeeping and we are in the process of updating contact information for people who have attended past events (e.g., CSDI Workshops or 3MC Conferences) or who have asked to be added to our email list to receive announcements and updates.

  23. Descriptive Research Design

    Descriptive research design is a type of research methodology that aims to describe or document the characteristics, behaviors, attitudes, opinions, or perceptions of a group or population being studied. ... Survey Research. ... Comparative analysis can help describe changes in attitudes or behaviors over time or differences between subgroups ...