
The Definition of Random Assignment According to Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.


Random assignment refers to the use of chance procedures in psychology experiments to ensure that each participant has the same chance of being assigned to any given group, eliminating potential sources of bias at the outset. Participants are randomly assigned to different groups, such as the treatment group versus the control group. In clinical research, randomized clinical trials are considered the gold standard for meaningful results.

Simple random assignment techniques might involve flipping a coin, drawing names out of a hat, rolling dice, or assigning random numbers to a list of participants. It is important to note that random assignment differs from random selection.

While random selection refers to how participants are randomly chosen from a target population as representatives of that population, random assignment refers to how those chosen participants are then assigned to experimental groups.
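To make the distinction concrete, here is a minimal Python sketch (the population size, sample size, and group names are hypothetical, chosen only for illustration): random selection draws a sample from the population, and random assignment then splits that sample into groups.

```python
import random

random.seed(42)  # for a reproducible illustration

# Hypothetical population of 1,000 potential participants
population = [f"person_{i}" for i in range(1000)]

# Random SELECTION: draw a representative sample from the population
sample = random.sample(population, k=100)

# Random ASSIGNMENT: shuffle the sample and split it into two groups
random.shuffle(sample)
control_group = sample[:50]
experimental_group = sample[50:]

print(len(control_group), len(experimental_group))  # 50 50
```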

Random Assignment In Research

To determine if changes in one variable will cause changes in another variable, psychologists must perform an experiment. Random assignment is a critical part of the experimental design that helps ensure the reliability of the study outcomes.

Researchers often begin by forming a testable hypothesis predicting that one variable of interest will have some predictable impact on another variable.

The variable that the experimenters manipulate in the experiment is known as the independent variable, while the variable that they then measure for different outcomes is known as the dependent variable. While there are different ways to look at relationships between variables, an experiment is the best way to determine whether there is a cause-and-effect relationship between two or more variables.

Once researchers have formulated a hypothesis, conducted background research, and chosen an experimental design, it is time to find participants for their experiment. How exactly do researchers decide who will be part of an experiment? As mentioned previously, this is often accomplished through something known as random selection.

Random Selection

In order to generalize the results of an experiment to a larger group, it is important to choose a sample that is representative of the qualities found in that population. For example, if the total population is 60% female and 40% male, then the sample should reflect those same percentages.

Choosing a representative sample is often accomplished by randomly picking people from the population to be participants in a study. Random selection means that everyone in the group stands an equal chance of being chosen to minimize any bias. Once a pool of participants has been selected, it is time to assign them to groups.

By randomly assigning the participants into groups, the experimenters can be fairly sure that each group will have the same characteristics before the independent variable is applied.

Participants might be randomly assigned to the control group, which does not receive the treatment in question. The control group may receive a placebo or receive the standard treatment. Participants may also be randomly assigned to the experimental group, which receives the treatment of interest. In larger studies, there can be multiple treatment groups for comparison.

There are simple methods of random assignment, like rolling a die. However, there are more complex techniques that involve random number generators to reduce human error.

There can also be random assignment to groups with pre-established rules or parameters. For example, if you want to have an equal number of men and women in each of your study groups, you might separate your sample into two groups (by sex) before randomly assigning each of those groups into the treatment group and control group.
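A rough Python sketch of that stratified approach (the participant data and function name here are invented for illustration, not taken from the article): stratify by sex first, then randomize within each stratum so both groups end up balanced.

```python
import random

def stratified_assign(participants, key, seed=0):
    """Randomly assign participants to 'treatment' or 'control',
    balancing the groups within each stratum (e.g., by sex)."""
    rng = random.Random(seed)
    groups = {"treatment": [], "control": []}
    # Collect participants into strata by the given attribute
    strata = {}
    for p in participants:
        strata.setdefault(p[key], []).append(p)
    # Within each stratum, shuffle and split evenly
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        groups["treatment"].extend(members[:half])
        groups["control"].extend(members[half:])
    return groups

people = [{"name": f"p{i}", "sex": "F" if i % 2 else "M"} for i in range(20)]
result = stratified_assign(people, key="sex")
print(len(result["treatment"]), len(result["control"]))  # 10 10
```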

Random assignment is essential because it increases the likelihood that the groups are the same at the outset. With all characteristics being equal between groups, other than the application of the independent variable, any differences found between group outcomes can be more confidently attributed to the effect of the intervention.

Example of Random Assignment

Imagine that a researcher is interested in learning whether or not drinking caffeinated beverages prior to an exam will improve test performance. After randomly selecting a pool of participants, each person is randomly assigned to either the control group or the experimental group.

The participants in the control group consume a placebo drink prior to the exam that does not contain any caffeine. Those in the experimental group, on the other hand, consume a caffeinated beverage before taking the test.

Participants in both groups then take the test, and the researcher compares the results to determine if the caffeinated beverage had any impact on test performance.

A Word From Verywell

Random assignment plays an important role in the psychology research process. Not only does this process help eliminate possible sources of bias, but it also makes it easier to generalize the results of a tested sample of participants to a larger population.

Random assignment helps ensure that members of each group in the experiment are the same, which means that the groups are also likely more representative of what is present in the larger population of interest. Through the use of this technique, psychology researchers are able to study complex phenomena and contribute to our understanding of the human mind and behavior.


Random Assignment in Psychology: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She began studying for a Master's Degree in Counseling for Mental Health and Wellness in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

In psychology, random assignment refers to the practice of allocating participants to different experimental groups in a study in a completely unbiased way, ensuring each participant has an equal chance of being assigned to any group.

In experimental research, random assignment, or random placement, organizes participants from your sample into different groups using randomization. 

Random assignment uses chance procedures to ensure that each participant has an equal opportunity of being assigned to either a control or experimental group.

The control group does not receive the treatment in question, whereas the experimental group does receive the treatment.

When using random assignment, neither the researcher nor the participant can choose the group to which the participant is assigned. This ensures that any differences between and within the groups are not systematic at the onset of the study. 

In a study to test the success of a weight-loss program, investigators randomly assigned a pool of participants to one of two groups.

Group A participants participated in the weight-loss program for 10 weeks and took a class where they learned about the benefits of healthy eating and exercise.

Group B participants read a 200-page book explaining the benefits of weight loss.

The researchers found that those who participated in the program and took the class were more likely to lose weight than those in the other group that received only the book.

Importance 

Random assignment ensures that each group in the experiment is identical before applying the independent variable.

In experiments, researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. Random assignment increases the likelihood that the treatment groups are the same at the onset of a study.

Thus, any changes that result from the independent variable can be assumed to be a result of the treatment of interest. This is particularly important for eliminating sources of bias and strengthening the internal validity of an experiment.

Random assignment is the best method for inferring a causal relationship between a treatment and an outcome.

Random Selection vs. Random Assignment 

Random selection (also called probability sampling or random sampling) is a way of randomly selecting members of a population to be included in your study.

On the other hand, random assignment is a way of sorting the sample participants into control and treatment groups. 

Random selection ensures that everyone in the population has an equal chance of being selected for the study. Once the pool of participants has been chosen, experimenters use random assignment to assign participants into groups. 

Random assignment is only used in between-subjects experimental designs, while random selection can be used in a variety of study designs.

Random Assignment vs. Random Sampling

Random sampling refers to selecting participants from a population so that each individual has an equal chance of being chosen. This method enhances the representativeness of the sample.

Random assignment, on the other hand, is used in experimental designs once participants are selected. It involves allocating these participants to different experimental groups or conditions randomly.

This helps ensure that any differences in results across groups are due to manipulating the independent variable, not preexisting differences among participants.

When to Use Random Assignment

Random assignment is used in experiments with a between-groups or independent measures design.

In these research designs, researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables.

There is usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable at the onset of the study.

How to Use Random Assignment

There are a variety of ways to randomly assign participants to study groups. Here are a handful of popular methods (a short code sketch of the first follows the list):

  • Random Number Generator: Give each member of the sample a unique number; use a computer program to randomly generate a number from the list for each group.
  • Lottery: Give each member of the sample a unique number. Place all numbers in a hat or bucket and draw numbers at random for each group.
  • Flipping a Coin: Flip a coin for each participant to decide if they will be in the control group or experimental group (this method can only be used when you have just two groups).
  • Roll a Die: For each person on the list, roll a die to decide which group they will be in. For example, rolling a 1, 2, or 3 places them in the control group, and rolling a 4, 5, or 6 places them in the experimental group.
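As a minimal sketch of the first method (the participant numbers and group sizes are hypothetical), a computerized random number generator can shuffle the numbered list and draw the groups from it:

```python
import random

participant_ids = list(range(1, 21))  # 20 numbered participants

rng = random.Random()          # the computer's random number generator
rng.shuffle(participant_ids)   # put the numbers in a random order

# First half drawn for the control group, second half for the experimental group
control = sorted(participant_ids[:10])
experimental = sorted(participant_ids[10:])
print("Control:", control)
print("Experimental:", experimental)
```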

When Is Random Assignment Not Used?

  • When it is not ethically permissible: Randomization is only ethical if the researcher has no evidence that one treatment is superior to the other or that one treatment might have harmful side effects. 
  • When answering non-causal questions: If the researcher is just interested in predicting the probability of an event, the causal relationship between the variables is not important and observational designs would be more suitable than random assignment.
  • When studying the effect of variables that cannot be manipulated: Some risk factors cannot be manipulated and so it would not make any sense to study them in a randomized trial. For example, we cannot randomly assign participants into categories based on age, gender, or genetic factors.

Drawbacks of Random Assignment

While randomization assures an unbiased assignment of participants to groups, it does not guarantee the equality of those groups. Extraneous variables may still differ between groups, and group differences can arise purely by chance.

Thus, researchers cannot produce perfectly equal groups for every study. Differences between the treatment group and control group might still exist, and the results of a randomized trial may sometimes be wrong, but this is an accepted limitation of the method.

Scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when data is aggregated in a meta-analysis.

Additionally, external validity (i.e., the extent to which the researcher can use the results of the study to generalize to the larger population) can be compromised with random assignment.

Random assignment is challenging to implement outside of controlled laboratory conditions and might not represent what would happen in the real world at the population level. 

Random assignment can also be more costly than simple observational studies, where an investigator is just observing events without intervening with the population.

Randomization also can be time-consuming and challenging, especially when participants refuse to receive the assigned treatment or do not adhere to recommendations. 

What is the difference between random sampling and random assignment?

Random sampling refers to randomly selecting a sample of participants from a population. Random assignment refers to randomly assigning participants to treatment groups from the selected sample.

Does random assignment increase internal validity?

Yes, random assignment ensures that there are no systematic differences between the participants in each group, enhancing the study’s internal validity.

Does random assignment reduce sampling error?

With random assignment, participants have an equal chance of being assigned to either a control group or an experimental group, so the resulting groups are, in theory, comparable at the outset.

Random assignment does not reduce sampling error, however, because a sample only approximates the population from which it is drawn. Random sampling, not random assignment, is the way to minimize sampling error.

When is random assignment not possible?

Random assignment is not possible when the experimenters cannot control the treatment or independent variable.

For example, if you want to compare how men and women perform on a test, you cannot randomly assign subjects to these groups.

Participants are not randomly assigned to different groups in this study, but instead assigned based on their characteristics.

Does random assignment eliminate confounding variables?

In expectation, yes: random assignment distributes confounding variables at random among the study groups, so no systematic relationship remains between a confounding variable and the treatment. In any single study, however, chance imbalances can still occur.

Why is random assignment of participants to treatment conditions in an experiment used?

Random assignment is used to ensure that all groups are comparable at the start of a study. This allows researchers to conclude that the outcomes of the study can be attributed to the intervention at hand and to rule out alternative explanations for study results.

Further Reading

  • Bogomolnaia, A., & Moulin, H. (2001). A new solution to the random assignment problem. Journal of Economic Theory, 100(2), 295-328.
  • Krause, M. S., & Howard, K. I. (2003). What random assignment does and does not do. Journal of Clinical Psychology, 59(7), 751-766.


Random Assignment in Psychology (Definition + 40 Examples)


Have you ever wondered how researchers discover new ways to help people learn, make decisions, or overcome challenges? A hidden hero in this adventure of discovery is a method called random assignment, a cornerstone in psychological research that helps scientists uncover the truths about the human mind and behavior.

Random Assignment is a process used in research where each participant has an equal chance of being placed in any group within the study. This technique is essential in experiments as it helps to eliminate biases, ensuring that the different groups being compared are similar in all important aspects.

By doing so, researchers can be confident that any differences observed are likely due to the variable being tested, rather than other factors.

In this article, we’ll explore the intriguing world of random assignment, diving into its history, principles, real-world examples, and the impact it has had on the field of psychology.

History of Random Assignment


Stepping back in time, we delve into the origins of random assignment, which finds its roots in the early 20th century.

The pioneering mind behind this innovative technique was Sir Ronald A. Fisher, a British statistician and biologist. Fisher introduced the concept of random assignment in the 1920s, aiming to improve the quality and reliability of experimental research.

His contributions laid the groundwork for the method's evolution and its widespread adoption in various fields, particularly in psychology.

Fisher’s groundbreaking work on random assignment was motivated by his desire to control for confounding variables – those pesky factors that could muddy the waters of research findings.

By assigning participants to different groups purely by chance, he realized that the influence of these confounding variables could be minimized, paving the way for more accurate and trustworthy results.

Early Studies Utilizing Random Assignment

Following Fisher's initial development, random assignment started to gain traction in the research community. Early studies adopting this methodology focused on a variety of topics, from agriculture (which was Fisher’s primary field of interest) to medicine and psychology.

The approach allowed researchers to draw stronger conclusions from their experiments, bolstering the development of new theories and practices.

One notable early study utilizing random assignment was conducted in the field of educational psychology. Researchers were keen to understand the impact of different teaching methods on student outcomes.

By randomly assigning students to various instructional approaches, they were able to isolate the effects of the teaching methods, leading to valuable insights and recommendations for educators.

Evolution of the Methodology

As the decades rolled on, random assignment continued to evolve and adapt to the changing landscape of research.

Advances in technology introduced new tools and techniques for implementing randomization, such as computerized random number generators, which offered greater precision and ease of use.

The application of random assignment expanded beyond the confines of the laboratory, finding its way into field studies and large-scale surveys.

Researchers across diverse disciplines embraced the methodology, recognizing its potential to enhance the validity of their findings and contribute to the advancement of knowledge.

From its humble beginnings in the early 20th century to its widespread use today, random assignment has proven to be a cornerstone of scientific inquiry.

Its development and evolution have played a pivotal role in shaping the landscape of psychological research, driving discoveries that have improved lives and deepened our understanding of the human experience.

Principles of Random Assignment

Delving into the heart of random assignment, we uncover the theories and principles that form its foundation.

The method is steeped in the basics of probability theory and statistical inference, ensuring that each participant has an equal chance of being placed in any group, thus fostering fair and unbiased results.

Basic Principles of Random Assignment

Understanding the core principles of random assignment is key to grasping its significance in research. There are three principles: equal probability of selection, reduction of bias, and ensuring representativeness.

The first principle, equal probability of selection, ensures that every participant has an identical chance of being assigned to any group in the study. This randomness is crucial as it mitigates the risk of bias and establishes a level playing field.

The second principle focuses on the reduction of bias. Random assignment acts as a safeguard, ensuring that the groups being compared are alike in all essential aspects before the experiment begins.

This similarity between groups allows researchers to attribute any differences observed in the outcomes directly to the independent variable being studied.

Lastly, ensuring representativeness is a vital principle. When participants are assigned randomly, the resulting groups are more likely to be representative of the larger population.

This characteristic is crucial for the generalizability of the study’s findings, allowing researchers to apply their insights broadly.

Theoretical Foundation

The theoretical foundation of random assignment lies in probability theory and statistical inference.

Probability theory deals with the likelihood of different outcomes, providing a mathematical framework for analyzing random phenomena. In the context of random assignment, it helps in ensuring that each participant has an equal chance of being placed in any group.

Statistical inference, on the other hand, allows researchers to draw conclusions about a population based on a sample of data drawn from that population. It is the mechanism through which the results of a study can be generalized to a broader context.

Random assignment enhances the reliability of statistical inferences by reducing biases and ensuring that the sample is representative.

Differentiating Random Assignment from Random Selection

It’s essential to distinguish between random assignment and random selection, as the two terms, while related, have distinct meanings in the realm of research.

Random assignment refers to how participants are placed into different groups in an experiment, aiming to control for confounding variables and support causal conclusions.

In contrast, random selection pertains to how individuals are chosen to participate in a study. This method is used to ensure that the sample of participants is representative of the larger population, which is vital for the external validity of the research.

While both methods are rooted in randomness and probability, they serve different purposes in the research process.

Understanding the theories, principles, and distinctions of random assignment illuminates its pivotal role in psychological research.

This method, anchored in probability theory and statistical inference, serves as a beacon of reliability, guiding researchers in their quest for knowledge and ensuring that their findings stand the test of validity and applicability.

Methodology of Random Assignment


Implementing random assignment in a study is a meticulous process that involves several crucial steps.

The initial step is participant selection, where individuals are chosen to partake in the study. This stage is critical to ensure that the pool of participants is diverse and representative of the population the study aims to generalize to.

Once the pool of participants has been established, the actual assignment process begins. In this step, each participant is allocated randomly to one of the groups in the study.

Researchers use various tools, such as random number generators or computerized methods, to ensure that this assignment is genuinely random and free from biases.

Monitoring and adjusting form the final step in the implementation of random assignment. Researchers need to continuously observe the groups to ensure that they remain comparable in all essential aspects throughout the study.

If any significant discrepancies arise, adjustments might be necessary to maintain the study’s integrity and validity.
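Here is a minimal Python sketch of those steps (the participant data and balance check are hypothetical, purely illustrative): assign participants by shuffling, then verify that the groups remain comparable on a key characteristic such as age, standing in for the monitoring step.

```python
import random
import statistics

rng = random.Random(7)

# Hypothetical participant pool with an attribute we want balanced
pool = [{"id": i, "age": rng.randint(18, 65)} for i in range(60)]

# Assignment: shuffle and split into two groups
rng.shuffle(pool)
group_a, group_b = pool[:30], pool[30:]

# Monitoring: check that the groups stayed comparable on age
mean_a = statistics.mean(p["age"] for p in group_a)
mean_b = statistics.mean(p["age"] for p in group_b)
print(f"Mean age A: {mean_a:.1f}, B: {mean_b:.1f}")
```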

Tools and Techniques Used

The evolution of technology has introduced a variety of tools and techniques to facilitate random assignment.

Random number generators, both manual and computerized, are commonly used to assign participants to different groups. These generators ensure that each individual has an equal chance of being placed in any group, upholding the principle of equal probability of selection.

In addition to random number generators, researchers often use specialized computer software designed for statistical analysis and experimental design.

These software programs offer advanced features that allow for precise and efficient random assignment, minimizing the risk of human error and enhancing the study’s reliability.

Ethical Considerations

The implementation of random assignment is not devoid of ethical considerations. Informed consent is a fundamental ethical principle that researchers must uphold.

Informed consent means that every participant should be fully informed about the nature of the study, the procedures involved, and any potential risks or benefits, ensuring that they voluntarily agree to participate.

Beyond informed consent, researchers must conduct a thorough risk and benefit analysis. The potential benefits of the study should outweigh any risks or harms to the participants.

Safeguarding the well-being of participants is paramount, and any study employing random assignment must adhere to established ethical guidelines and standards.

Conclusion of Methodology

The methodology of random assignment, while seemingly straightforward, is a multifaceted process that demands precision, fairness, and ethical integrity. From participant selection to assignment and monitoring, each step is crucial to ensure the validity of the study’s findings.

The tools and techniques employed, coupled with a steadfast commitment to ethical principles, underscore the significance of random assignment as a cornerstone of robust psychological research.

Benefits of Random Assignment in Psychological Research

The impact and importance of random assignment in psychological research cannot be overstated. It is fundamental for ensuring the study is accurate, allowing the researchers to determine if their study actually caused the results they saw, and making sure the findings can be applied to the real world.

Facilitating Causal Inferences

When participants are randomly assigned to different groups, researchers can be more confident that the observed effects are due to the independent variable being changed, and not other factors.

This ability to determine the cause is called causal inference.

This confidence allows for the drawing of causal relationships, which are foundational for theory development and application in psychology.

Ensuring Internal Validity

One of the foremost impacts of random assignment is its ability to enhance the internal validity of an experiment.

Internal validity refers to the extent to which a researcher can assert that changes in the dependent variable are solely due to manipulations of the independent variable, and not due to confounding variables.

By ensuring that each participant has an equal chance of being in any condition of the experiment, random assignment helps control for participant characteristics that could otherwise complicate the results.

Enhancing Generalizability

Beyond internal validity, random assignment also plays a crucial role in enhancing the generalizability of research findings.

When done correctly, it ensures that the sample groups are representative of the larger population, allowing researchers to apply their findings more broadly.

This representative nature is essential for the practical application of research, impacting policy, interventions, and psychological therapies.

Limitations of Random Assignment

Potential for Implementation Issues

While the principles of random assignment are robust, the method can face implementation issues.

Among the most common problems are logistical constraints. Some studies, due to their nature or the specific population being studied, find it challenging to implement random assignment effectively.

For instance, in educational settings, logistical issues such as class schedules and school policies might prevent the random allocation of students to different teaching methods.

Ethical Dilemmas

Random assignment, while methodologically sound, can also present ethical dilemmas.

In some cases, withholding a potentially beneficial treatment from one of the groups of participants can raise serious ethical questions, especially in medical or clinical research where participants' well-being might be directly affected.

Researchers must navigate these ethical waters carefully, balancing the pursuit of knowledge with the well-being of participants.

Generalizability Concerns

Even when implemented correctly, random assignment does not always guarantee generalizable results.

The types of people in the participant pool, the specific context of the study, and the nature of the variables being studied can all influence the extent to which the findings can be applied to the broader population.

Researchers must be cautious in making broad generalizations from studies, even those employing strict random assignment.

Practical and Real-World Limitations

In the real world, many variables cannot be manipulated for ethical or practical reasons, limiting the applicability of random assignment.

For instance, researchers cannot randomly assign individuals to different levels of intelligence, socioeconomic status, or cultural backgrounds.

This limitation necessitates the use of other research designs, such as correlational or observational studies, when exploring relationships involving such variables.

Response to Critiques

In response to these critiques, people in favor of random assignment argue that the method, despite its limitations, remains one of the most reliable ways to establish cause and effect in experimental research.

They acknowledge the challenges and ethical considerations but emphasize the rigorous frameworks in place to address them.

The ongoing discussion around the limitations and critiques of random assignment contributes to the evolution of the method, making sure it is continuously relevant and applicable in psychological research.

While random assignment is a powerful tool in experimental research, it is not without its critiques and limitations. Implementation issues, ethical dilemmas, generalizability concerns, and real-world limitations can pose significant challenges.

However, the continued discourse and refinement around these issues underline the method's enduring significance in the pursuit of knowledge in psychology.

By applying it carefully and ethically, researchers keep random assignment a really important part of studying how people act and think.

Real-World Applications and Examples


Random assignment has been employed in many studies across various fields of psychology, leading to significant discoveries and advancements.

Here are some real-world applications and examples illustrating the diversity and impact of this method:

  • Medicine and Health Psychology: Randomized Controlled Trials (RCTs) are the gold standard in medical research. In these studies, participants are randomly assigned to either the treatment or control group to test the efficacy of new medications or interventions.
  • Educational Psychology: Studies in this field have used random assignment to explore the effects of different teaching methods, classroom environments, and educational technologies on student learning and outcomes.
  • Cognitive Psychology: Researchers have employed random assignment to investigate various aspects of human cognition, including memory, attention, and problem-solving, leading to a deeper understanding of how the mind works.
  • Social Psychology: Random assignment has been instrumental in studying social phenomena, such as conformity, aggression, and prosocial behavior, shedding light on the intricate dynamics of human interaction.

Let's get into some specific examples. You'll need to know one term though, and that is "control group." A control group is a set of participants in a study who do not receive the treatment or intervention being tested, serving as a baseline to compare with the group that does, in order to assess the effectiveness of the treatment.

  • Smoking Cessation Study: Researchers used random assignment to put participants into two groups. One group received a new anti-smoking program, while the other did not. This helped determine if the program was effective in helping people quit smoking.
  • Math Tutoring Program: A study on students used random assignment to place them into two groups. One group received additional math tutoring, while the other continued with regular classes, to see if the extra help improved their grades.
  • Exercise and Mental Health: Adults were randomly assigned to either an exercise group or a control group to study the impact of physical activity on mental health and mood.
  • Diet and Weight Loss: A study randomly assigned participants to different diet plans to compare their effectiveness in promoting weight loss and improving health markers.
  • Sleep and Learning: Researchers randomly assigned students to either a sleep extension group or a regular sleep group to study the impact of sleep on learning and memory.
  • Classroom Seating Arrangement: Teachers used random assignment to place students in different seating arrangements to examine the effect on focus and academic performance.
  • Music and Productivity: Employees were randomly assigned to listen to music or work in silence to investigate the effect of music on workplace productivity.
  • Medication for ADHD: Children with ADHD were randomly assigned to receive either medication, behavioral therapy, or a placebo to compare treatment effectiveness.
  • Mindfulness Meditation for Stress: Adults were randomly assigned to a mindfulness meditation group or a waitlist control group to study the impact on stress levels.
  • Video Games and Aggression: A study randomly assigned participants to play either violent or non-violent video games and then measured their aggression levels.
  • Online Learning Platforms: Students were randomly assigned to use different online learning platforms to evaluate their effectiveness in enhancing learning outcomes.
  • Hand Sanitizers in Schools: Schools were randomly assigned to use hand sanitizers or not to study the impact on student illness and absenteeism.
  • Caffeine and Alertness: Participants were randomly assigned to consume caffeinated or decaffeinated beverages to measure the effects on alertness and cognitive performance.
  • Green Spaces and Well-being: Neighborhoods were randomly assigned to receive green space interventions to study the impact on residents’ well-being and community connections.
  • Pet Therapy for Hospital Patients: Patients were randomly assigned to receive pet therapy or standard care to assess the impact on recovery and mood.
  • Yoga for Chronic Pain: Individuals with chronic pain were randomly assigned to a yoga intervention group or a control group to study the effect on pain levels and quality of life.
  • Flu Vaccines Effectiveness: Different groups of people were randomly assigned to receive either the flu vaccine or a placebo to determine the vaccine’s effectiveness.
  • Reading Strategies for Dyslexia: Children with dyslexia were randomly assigned to different reading intervention strategies to compare their effectiveness.
  • Physical Environment and Creativity: Participants were randomly assigned to different room setups to study the impact of physical environment on creative thinking.
  • Laughter Therapy for Depression: Individuals with depression were randomly assigned to laughter therapy sessions or control groups to assess the impact on mood.
  • Financial Incentives for Exercise: Participants were randomly assigned to receive financial incentives for exercising to study the impact on physical activity levels.
  • Art Therapy for Anxiety: Individuals with anxiety were randomly assigned to art therapy sessions or a waitlist control group to measure the effect on anxiety levels.
  • Natural Light in Offices: Employees were randomly assigned to workspaces with natural or artificial light to study the impact on productivity and job satisfaction.
  • School Start Times and Academic Performance: Schools were randomly assigned different start times to study the effect on student academic performance and well-being.
  • Horticulture Therapy for Seniors: Older adults were randomly assigned to participate in horticulture therapy or traditional activities to study the impact on cognitive function and life satisfaction.
  • Hydration and Cognitive Function: Participants were randomly assigned to different hydration levels to measure the impact on cognitive function and alertness.
  • Intergenerational Programs: Seniors and young people were randomly assigned to intergenerational programs to study the effects on well-being and cross-generational understanding.
  • Therapeutic Horseback Riding for Autism: Children with autism were randomly assigned to therapeutic horseback riding or traditional therapy to study the impact on social communication skills.
  • Active Commuting and Health: Employees were randomly assigned to active commuting (cycling, walking) or passive commuting to study the effect on physical health.
  • Mindful Eating for Weight Management: Individuals were randomly assigned to mindful eating workshops or control groups to study the impact on weight management and eating habits.
  • Noise Levels and Learning: Students were randomly assigned to classrooms with different noise levels to study the effect on learning and concentration.
  • Bilingual Education Methods: Schools were randomly assigned different bilingual education methods to compare their effectiveness in language acquisition.
  • Outdoor Play and Child Development: Children were randomly assigned to different amounts of outdoor playtime to study the impact on physical and cognitive development.
  • Social Media Detox: Participants were randomly assigned to a social media detox or regular usage to study the impact on mental health and well-being.
  • Therapeutic Writing for Trauma Survivors: Individuals who experienced trauma were randomly assigned to therapeutic writing sessions or control groups to study the impact on psychological well-being.
  • Mentoring Programs for At-risk Youth: At-risk youth were randomly assigned to mentoring programs or control groups to assess the impact on academic achievement and behavior.
  • Dance Therapy for Parkinson’s Disease: Individuals with Parkinson’s disease were randomly assigned to dance therapy or traditional exercise to study the effect on motor function and quality of life.
  • Aquaponics in Schools: Schools were randomly assigned to implement aquaponics programs to study the impact on student engagement and environmental awareness.
  • Virtual Reality for Phobia Treatment: Individuals with phobias were randomly assigned to virtual reality exposure therapy or traditional therapy to compare effectiveness.
  • Gardening and Mental Health: Participants were randomly assigned to engage in gardening or other leisure activities to study the impact on mental health and stress reduction.

Each of these studies exemplifies how random assignment is utilized in various fields and settings, shedding light on the multitude of ways it can be applied to glean valuable insights and knowledge.

Real-world Impact of Random Assignment


Random assignment is like a key tool in the world of learning about people's minds and behaviors. It’s super important and helps in many different areas of our everyday lives. It helps make better rules, creates new ways to help people, and is used in lots of different fields.

Health and Medicine

In health and medicine, random assignment has helped doctors and scientists make lots of discoveries. It’s a big part of tests that help create new medicines and treatments.

By putting people into different groups by chance, scientists can really see if a medicine works.

This has led to new ways to help people with all sorts of health problems, like diabetes, heart disease, and mental health issues like depression and anxiety.

Education

Schools and education have also learned a lot from random assignment. Researchers have used it to look at different ways of teaching, what kind of classrooms are best, and how technology can help learning.

This knowledge has helped make better school rules, develop what we learn in school, and find the best ways to teach students of all ages and backgrounds.

Workplace and Organizational Behavior

Random assignment helps us understand how people act at work and what makes a workplace good or bad.

Studies have looked at different kinds of workplaces, how bosses should act, and how teams should be put together. This has helped companies make better rules and create places to work that are helpful and make people happy.

Environmental and Social Changes

Random assignment is also used to see how changes in the community and environment affect people. Studies have looked at community projects, changes to the environment, and social programs to see how they help or hurt people’s well-being.

This has led to better community projects, efforts to protect the environment, and programs to help people in society.

Technology and Human Interaction

In our world where technology is always changing, studies with random assignment help us see how tech like social media, virtual reality, and online stuff affect how we act and feel.

This has helped make better and safer technology and rules about using it so that everyone can benefit.

The effects of random assignment go far and wide, way beyond just a science lab. It helps us understand lots of different things, leads to new and improved ways to do things, and really makes a difference in the world around us.

From making healthcare and schools better to creating positive changes in communities and the environment, the real-world impact of random assignment shows just how important it is in helping us learn and make the world a better place.

So, what have we learned? Random assignment is like a super tool in learning about how people think and act. It's like a detective helping us find clues and solve mysteries in many parts of our lives.

From creating new medicines to helping kids learn better in school, and from making workplaces happier to protecting the environment, it’s got a big job!

This method isn’t just something scientists use in labs; it reaches out and touches our everyday lives. It helps make positive changes and teaches us valuable lessons.

Whether we are talking about technology, health, education, or the environment, random assignment is there, working behind the scenes, making things better and safer for all of us.

In the end, the simple act of putting people into groups by chance helps us make big discoveries and improvements. It’s like throwing a small stone into a pond and watching the ripples spread out far and wide.

Thanks to random assignment, we are always learning, growing, and finding new ways to make our world a happier and healthier place for everyone!


6.2 Experimental Design

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 college students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called random assignment, which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
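In code, the strict version of these two criteria is simply an independent random draw per participant. The Python sketch below mirrors the coin-flip procedure for the two conditions, A and B, described above (the function name is illustrative):

```python
import random

def assign_by_coin_flip(n_participants, seed=None):
    """Assign each participant to condition 'A' or 'B' independently,
    each with probability 0.5 (strict random assignment)."""
    rng = random.Random(seed)
    return ["A" if rng.random() < 0.5 else "B" for _ in range(n_participants)]

conditions = assign_by_coin_flip(10)
print(conditions)  # e.g., ['B', 'A', 'A', 'B', ...] -- group sizes may be unequal
```

Note that, as the next paragraph explains, independent draws like these can easily produce unequal group sizes.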

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. Table 6.2 “Block Randomization Sequence for Assigning Nine Participants to Three Conditions” shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website (http://www.randomizer.org) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 6.2 Block Randomization Sequence for Assigning Nine Participants to Three Conditions
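A sequence like the one in Table 6.2 can be generated with a short Python sketch (the function name and seed handling are illustrative; the Research Randomizer site mentioned above offers the same service through a web interface):

```python
import random

def block_randomization(conditions, n_blocks, seed=None):
    """Generate an assignment sequence in which every condition occurs
    once per block, in a random order within each block."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# Nine participants, three conditions -> three blocks of three
print(block_randomization(["A", "B", "C"], n_blocks=3))
# e.g., ['C', 'A', 'B', 'B', 'C', 'A', 'A', 'B', 'C']
```

Each new participant is then simply given the next condition in the generated sequence.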

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition, in which they receive the treatment, or a control condition, in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial.

There are different types of control conditions. In a no-treatment control condition, participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008).

Placebo effects are interesting in their own right (see Note 6.28 “The Powerful Placebo”), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions”) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 6.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions


Fortunately, there are several solutions to this problem. One is to include a placebo control condition , in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This is what is shown by a comparison of the two outer bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” .

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition , in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999). There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002). The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

[Photo: doctors treating a patient in surgery]

Research has shown that patients with osteoarthritis of the knee who receive a “sham surgery” experience reductions in pain and improvement in knee function similar to those of patients who receive a real surgery.

Army Medicine – Surgery – CC BY 2.0.

Within-Subjects Experiments

In a within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect , where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect , where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This is called a context effect . For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing , which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of being randomly assigned to conditions, participants are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
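
To make this concrete, here is a minimal sketch in Python of counterbalancing combined with random assignment to orders. The three conditions (A, B, C) and the twelve participant labels are hypothetical:

```python
import itertools
import random

conditions = ["A", "B", "C"]

# All six possible orders of three conditions: ABC, ACB, BAC, BCA, CAB, CBA.
orders = list(itertools.permutations(conditions))

participants = [f"P{i}" for i in range(1, 13)]

# Repeat each order equally often, then shuffle, so participants are
# assigned to orders randomly while the orders stay balanced overall.
balanced_orders = orders * (len(participants) // len(orders))
random.shuffle(balanced_orders)

assignment = dict(zip(participants, balanced_orders))
for participant, order in assignment.items():
    print(participant, "->", "".join(order))
```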

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this, he asked one group of participants to rate how large the number 9 was on a 1-to-10 rating scale and another group to rate how large the number 221 was on the same 1-to-10 rating scale (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
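
A minimal sketch of this approach for the defendant example, using hypothetical stimulus labels: each participant receives their own random ordering of all 20 defendants, with the two types mixed together.

```python
import random

# 10 attractive and 10 unattractive defendants (hypothetical labels).
defendants = [f"attractive_{i}" for i in range(1, 11)] + \
             [f"unattractive_{i}" for i in range(1, 11)]

def presentation_order():
    """Return a fresh random ordering of all 20 stimuli for one participant."""
    order = defendants.copy()
    random.shuffle(order)  # mixes the two types into one random sequence
    return order

print(presentation_order())
```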

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly this.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.

Discussion: For each of the following topics, list the pros and cons of a between-subjects and within-subjects design and decide which would be better.

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g., dog ) are recalled better than abstract nouns (e.g., truth ).

Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.

Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4 , 243–249.

Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347 , 81–88.

Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59 , 565–590.

Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician . Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Explore Psychology

What Is Random Assignment in Psychology?


Random assignment means that every participant has the same chance of being chosen for the experimental or control group. It involves using procedures that rely on chance to assign participants to groups. Doing this means that every participant in a study has an equal opportunity to be assigned to any group.

For example, in a psychology experiment, participants might be assigned to either a control or experimental group. Some experiments might only have one experimental group, while others may have several treatment variations.

Using random assignment means that each participant has the same chance of being assigned to any of these groups.


How to Use Random Assignment

So what type of procedures might psychologists utilize for random assignment? Strategies can include the following (a short code sketch of two of these strategies appears after the list):

  • Flipping a coin
  • Assigning random numbers
  • Rolling dice
  • Drawing names out of a hat
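
As a minimal sketch, assuming two hypothetical groups labeled "experimental" and "control", the coin-flip and names-in-a-hat strategies might look like this in Python:

```python
import random

def coin_flip():
    # Flipping a coin: heads -> experimental, tails -> control.
    return "experimental" if random.random() < 0.5 else "control"

def names_from_a_hat(names):
    # Drawing names out of a hat: shuffle all the names, then split
    # them into two equally sized groups.
    shuffled = list(names)
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

print(names_from_a_hat(["Ana", "Ben", "Cal", "Dee", "Eli", "Fay"]))
```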

How Does Random Assignment Work?

A psychology experiment aims to determine if changes in one variable lead to changes in another variable. Researchers will first begin by coming up with a hypothesis. Once researchers have an idea of what they think they might find in a population, they will come up with an experimental design and then recruit participants for their study.

Once they have a pool of participants representative of the population they are interested in looking at, they will randomly assign the participants to their groups.

  • Control group: Some participants will end up in the control group, which serves as a baseline and does not receive the independent variable.
  • Experimental group: Other participants will end up in one of the experimental groups, each of which receives some form of the independent variable.

By using random assignment, the researchers make it more likely that the groups are equal at the start of the experiment. Since the groups are the same on other variables, it can be assumed that any changes that occur are the result of varying the independent variables.

After a treatment has been administered, the researchers will then collect data in order to determine if the independent variable had any impact on the dependent variable.

Random Assignment vs. Random Selection

It is important to remember that random assignment is not the same thing as random selection , also known as random sampling.

Random selection instead involves how people are chosen to be in a study. Using random selection, every member of a population stands an equal chance of being chosen for a study or experiment.

So random sampling affects how participants are chosen for a study, while random assignment affects how participants are then assigned to groups.

Examples of Random Assignment

Imagine that a psychology researcher is conducting an experiment to determine if getting adequate sleep the night before an exam results in better test scores.

Forming a Hypothesis

They hypothesize that participants who get 8 hours of sleep will do better on a math exam than participants who only get 4 hours of sleep.

Obtaining Participants

The researcher starts by obtaining a pool of participants. They find 100 participants from a local university. Half of the participants are female, and half are male.

Randomly Assign Participants to Groups

The researcher then assigns random numbers to each participant and uses a random number generator to randomly assign each number to either the 4-hour or 8-hour sleep groups.
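
A minimal sketch of this assignment step, assuming 100 hypothetical participant numbers; the shuffle plays the role of the random number generator:

```python
import random

participants = list(range(1, 101))  # the numbers assigned to the 100 participants

random.shuffle(participants)        # chance alone determines the ordering

four_hour_group = participants[:50]   # first half -> 4-hour sleep group
eight_hour_group = participants[50:]  # second half -> 8-hour sleep group
```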

Conduct the Experiment

Those in the 8-hour sleep group agree to sleep for 8 hours that night, while those in the 4-hour group agree to wake up after only 4 hours. The following day, all of the participants meet in a classroom.

Collect and Analyze Data

Everyone takes the same math test. The test scores are then compared to see if the amount of sleep the night before had any impact on test scores.

Why Is Random Assignment Important in Psychology Research?

Random assignment is important in psychology research because it helps improve a study’s internal validity. Internal validity means that researchers can be more confident that the study demonstrates a cause-and-effect relationship between an independent and dependent variable.

Random assignment improves the internal validity by minimizing the risk that there are systematic differences in the participants who are in each group.

Key Points to Remember About Random Assignment

  • Random assignment in psychology involves each participant having an equal chance of being chosen for any of the groups, including the control and experimental groups.
  • It helps control for potential confounding variables, reducing the likelihood of pre-existing differences between groups.
  • This method enhances the internal validity of experiments, allowing researchers to draw more reliable conclusions about cause-and-effect relationships.
  • Random assignment is crucial for creating comparable groups and increasing the scientific rigor of psychological studies.


As previously mentioned, one of the characteristics of a true experiment is that researchers use a random process to decide which participants are tested under which conditions. Random assignation is a powerful research technique that addresses the assumption of pre-test equivalence – that the experimental and control group are equal in all respects before the administration of the independent variable (Palys & Atchison, 2014).

Random assignation is the primary way that researchers attempt to control extraneous variables across conditions. Random assignation is associated with experimental research methods. In its strictest sense, random assignment should meet two criteria.  One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus, one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands on the heads side, the participant is assigned to Condition A, and if it lands on the tails side, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and, if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested.
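
In code, these two procedures might look like the following sketch (condition labels are hypothetical); note that each participant's assignment is independent of every other participant's:

```python
import random

def assign_two_conditions():
    # Coin flip: heads -> Condition A, tails -> Condition B.
    return "A" if random.random() < 0.5 else "B"

def assign_three_conditions():
    # Random integer from 1 to 3 -> Condition A, B, or C.
    return {1: "A", 2: "B", 3: "C"}[random.randint(1, 3)]

# In practice, a full sequence is usually generated ahead of time, and
# each new participant takes the next condition in the sequence.
sequence = [assign_three_conditions() for _ in range(30)]
print(sequence)
```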

However, one problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible.

One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. When the procedure is computerized, the computer program often handles the random assignment, which is obviously much easier. You can also find programs online to help you carry out random assignation. For example, the Research Randomizer website will generate block randomization sequences for any number of participants and conditions ( Research Randomizer ).
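
A minimal sketch of block randomization, assuming three hypothetical conditions; because each block contains every condition exactly once in a random order, group sizes can never differ by more than one:

```python
import random

def block_randomize(conditions, n_participants):
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        random.shuffle(block)      # random order within each block
        sequence.extend(block)
    return sequence[:n_participants]

print(block_randomize(["A", "B", "C"], 10))
```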

Random assignation is not guaranteed to control all extraneous variables across conditions. It is always possible that, just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this may not be a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design. Note: Do not confuse random assignation with random sampling. Random sampling is a method for selecting a sample from a population; we will talk about this in Chapter 7.

Research Methods, Data Collection and Ethics Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Random Assignment in Psychology (Intro for Students)


Random assignment is a research procedure used to randomly assign participants to different experimental conditions (or ‘groups’). This introduces the element of chance, ensuring that each participant has an equal likelihood of being placed in any condition group for the study.

It is absolutely essential that the treatment condition and the control condition are the same in all ways except for the variable being manipulated.

Using random assignment to place participants in different conditions helps to achieve this.

It helps ensure that the conditions are equivalent, on average, with regard to all potential confounding variables and extraneous factors .

Why Researchers Use Random Assignment

Researchers use random assignment to control for confounds in research.

Confounds refer to unwanted and often unaccounted-for variables that might affect the outcome of a study. These confounding variables can skew the results, rendering the experiment unreliable.

For example, below is a study with two groups. Note how there are more ‘red’ individuals in the first group than the second:

[Figure: a treatment condition in which 12 of the members are 'red']

There is likely a confounding variable in this experiment explaining why more red people ended up in the treatment condition and fewer in the control condition. The red people might have self-selected into one group, for example, leading to their overrepresentation in that condition.

Ideally, we’d want a more even distribution, like below:

[Figure: a treatment condition in which 4 of the members are 'red']

To achieve better balance in our two conditions, we use random assignment.

Fact File: Experiments 101

Random assignment is used in the type of research called the experiment.

An experiment involves manipulating the level of one variable and examining how it affects another variable. These are the independent and dependent variables :

  • Independent Variable: The variable manipulated is called the independent variable (IV)
  • Dependent Variable: The variable that it is expected to affect is called the dependent variable (DV).

The most basic form of the experiment involves two conditions: the treatment and the control .

  • The Treatment Condition: The treatment condition involves the participants being exposed to the IV.
  • The Control Condition: The control condition involves the absence of the IV. Therefore, the IV has two levels: zero and some quantity.

Researchers utilize random assignment to determine which participants go into which conditions.

Methods of Random Assignment

There are several procedures that researchers can use to randomly assign participants to different conditions.

1. Random number generator

There are several websites that offer computer-generated random numbers. Simply indicate how many conditions are in the experiment and then click. If there are 4 conditions, the program will randomly generate a number between 1 and 4 each time it is clicked.

2. Flipping a coin

If there are two conditions in an experiment, then the simplest way to implement random assignment is to flip a coin for each participant. Heads means being assigned to the treatment and tails means being assigned to the control (or vice versa).

3. Rolling a die

Rolling a single die is another way to randomly assign participants. If the experiment has three conditions, then numbers 1 and 2 mean being assigned to the control; numbers 3 and 4 mean treatment condition one; and numbers 5 and 6 mean treatment condition two.

4. Condition names in a hat

In some studies, the researcher will write the name of the treatment condition(s) or control on slips of paper and place them in a hat. If there are 4 conditions and 1 control, then there are 5 slips of paper.

The researcher closes their eyes and selects one slip for each participant. That person is then assigned to one of the conditions in the study and that slip of paper is placed back in the hat. Repeat as necessary.

There are other ways of trying to ensure that the groups of participants are equal in all ways with the exception of the IV. However, random assignment is the most often used because it is so effective at reducing confounds.
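
As a rough sketch, the die-rolling scheme from method 3 above could be implemented like this (condition labels are hypothetical):

```python
import random

def die_roll_assignment():
    roll = random.randint(1, 6)   # roll a single six-sided die
    if roll in (1, 2):
        return "control"
    if roll in (3, 4):
        return "treatment 1"
    return "treatment 2"

print([die_roll_assignment() for _ in range(10)])
```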

Read About More Methods and Examples of Random Assignment Here

Potential Confounding Effects

Random assignment is all about minimizing confounding effects.

Here are six types of confounds that can be controlled for using random assignment:

  • Individual Differences: Participants in a study will naturally vary in terms of personality, intelligence, mood, prior knowledge, and many other characteristics. If one group happens to have more people with a particular characteristic, this could affect the results. Random assignment ensures that these individual differences are spread out equally among the experimental groups, making it less likely that they will unduly influence the outcome.
  • Temporal or Time-Related Confounds: Events or situations that occur at a particular time can influence the outcome of an experiment. For example, a participant might be tested after a stressful event, while another might be tested after a relaxing weekend. Random assignment ensures that such effects are equally distributed among groups, thus controlling for their potential influence.
  • Order Effects: If participants are exposed to multiple treatments or tests, the order in which they experience them can influence their responses. Randomly assigning the order of treatments for different participants helps control for this.
  • Location or Environmental Confounds: The environment in which the study is conducted can influence the results. One group might be tested in a noisy room, while another might be in a quiet room. Randomly assigning participants to different locations can control for these effects.
  • Instrumentation Confounds: These occur when there are variations in the calibration or functioning of measurement instruments across conditions. If one group’s responses are being measured using a slightly different tool or scale, it can introduce a confound. Random assignment can ensure that any such potential inconsistencies in instrumentation are equally distributed among groups.
  • Experimenter Effects: Sometimes, the behavior or expectations of the person administering the experiment can unintentionally influence the participants’ behavior or responses. For instance, if an experimenter believes one treatment is superior, they might unconsciously communicate this belief to participants. Randomly assigning experimenters or using a double-blind procedure (where neither the participant nor the experimenter knows the treatment being given) can help control for this.

Random assignment helps balance out these and other potential confounds across groups, ensuring that any observed differences are more likely due to the manipulated independent variable rather than some extraneous factor.

Limitations of the Random Assignment Procedure

Although random assignment is extremely effective at eliminating the presence of participant-related confounds, there are several scenarios in which it cannot be used.

  • Ethics: The most obvious scenario is when it would be unethical. For example, if wanting to investigate the effects of emotional abuse on children, it would be unethical to randomly assign children to either receive abuse or not. Even if a researcher were to propose such a study, it would not receive approval from the Institutional Review Board (IRB), which oversees research by university faculty.
  • Practicality: Other scenarios involve matters of practicality. For example, randomly assigning people to specific types of diet over a 10-year period would be interesting, but it would be highly unlikely that participants would be diligent enough to make the study valid. This is why examining these types of subjects has to be carried out through observational studies . The data is correlational, which is informative, but falls short of the scientist’s ultimate goal of identifying causality.
  • Small Sample Size: The smaller the sample size being assigned to conditions, the more likely it is that the two groups will be unequal. For example, if you flip a coin many times in a row then you will notice that sometimes there will be a string of heads or tails that come up consecutively. This means that one condition may have a build-up of participants that share the same characteristics. However, if you continue flipping the coin, over the long-term, there will be a balance of heads and tails. Unfortunately, how large a sample size is necessary has been the subject of considerable debate (Bloom, 2006; Shadish et al., 2002).

“It is well known that larger sample sizes reduce the probability that random assignment will result in conditions that are unequal” (Goldberg, 2019, p. 2).

Applications of Random Assignment

The importance of random assignment has been recognized in a wide range of scientific and applied disciplines (Bloom, 2006).

Random assignment began as a tool in agricultural research by Fisher (1925, 1935). After WWII, it became extensively used in medical research to test the effectiveness of new treatments and pharmaceuticals (Marks, 1997).

Today it is widely used in industrial engineering (Box, Hunter, & Hunter, 2005), educational research (Lindquist, 1953; Ong-Dean et al., 2011), psychology (Myers, 1972), and social policy studies (Boruch, 1998; Orr, 1999).

One of the biggest obstacles to the validity of an experiment is the confound. If the group of participants in the treatment condition is substantially different from the group in the control condition, then it is impossible to determine whether the IV has an effect or whether the confound has an effect.

Thankfully, random assignment is highly effective at eliminating confounds, both known and unknown. Because each participant has an equal chance of being placed in each condition, participant characteristics are, on average, equally distributed across conditions.

There are several ways of implementing random assignment, including flipping a coin or using a random number generator.

Random assignment has become an essential procedure in research in a wide range of subjects such as psychology, education, and social policy.

Alferes, V. R. (2012). Methods of randomization in experimental design . Sage Publications.

Bloom, H. S. (2008). The core analytics of randomized experiments for social research. The SAGE Handbook of Social Research Methods , 115-133.

Boruch, R. F. (1998). Randomized controlled experiments for evaluation and planning. Handbook of applied social research methods , 161-191.

Box, G. E., Hunter, W. G., & Hunter, J. S. (2005). Statistics for experimenters: Design, innovation, and discovery (2nd ed.). Wiley.

Dehue, T. (1997). Deception, efficiency, and random groups: Psychology and the gradual origination of the random group design. Isis , 88 (4), 653-673.

Fisher, R.A. (1925). Statistical methods for research workers (11th ed. rev.). Oliver and Boyd: Edinburgh.

Fisher, R. A. (1935). The Design of Experiments. Edinburgh: Oliver and Boyd.

Goldberg, M. H. (2019). How often does random assignment fail? Estimates and recommendations. Journal of Environmental Psychology , 66 , 101351.

Jamison, J. C. (2019). The entry of randomized assignment into the social sciences. Journal of Causal Inference , 7 (1), 20170025.

Lindquist, E. F. (1953). Design and analysis of experiments in psychology and education . Boston: Houghton Mifflin Company.

Marks, H. M. (1997). The progress of experiment: Science and therapeutic reform in the United States, 1900-1990 . Cambridge University Press.

Myers, J. L. (1972). Fundamentals of experimental design (2nd ed.). Allyn & Bacon.

Ong-Dean, C., Huie Hofstetter, C., & Strick, B. R. (2011). Challenges and dilemmas in implementing random assignment in educational research. American Journal of Evaluation , 32 (1), 29-49.

Orr, L. L. (1999). Social experiments: Evaluating public programs with experimental methods . Sage.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference . Boston, MA: Houghton Mifflin.

Stigler, S. M. (1992). A historical view of statistical concepts in psychology and educational research. American Journal of Education , 101 (1), 60-70.


Dave Cornell (PhD)

Dr. Cornell has worked in education for more than 20 years. His work has involved designing teacher certification for Trinity College in London and in-service training for state governments in the United States. He has trained kindergarten teachers in 8 countries and helped businessmen and women open baby centers and kindergartens in 3 countries.


Chris Drew (PhD)

This article was peer-reviewed and edited by Chris Drew (PhD). The review process on Helpful Professor involves having a PhD level expert fact check, edit, and contribute to articles. Reviewers ensure all content reflects expert academic consensus and is backed up with reference to academic studies. Dr. Drew has published over 20 academic articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education and holds a PhD in Education from ACU.




Random Assignment – A Simple Introduction with Examples


Completing a research paper or thesis is more work than most students imagine; for instance, you often must conduct experiments before drawing conclusions. Random assignment, a key methodology in academic research, ensures every participant has an equal chance of being placed in any group within an experiment. In experimental studies, the random assignment of participants is a vital element, which this article will discuss.

Table of Contents

  • 1 Random Assignment – In a Nutshell
  • 2 Definition: Random assignment
  • 3 Importance of random assignment
  • 4 Random assignment vs. random sampling
  • 5 How to use random assignment
  • 6 When random assignment is not used

Random Assignment – In a Nutshell

  • Random assignment is where you randomly place research participants into specific groups.
  • This method eliminates bias in the results by ensuring that all participants have an equal chance of getting into either group.
  • Random assignment is usually used in independent measures or between-group experiment designs.

Definition: Random assignment

Random assignment is the random placement of participants into different groups in experimental research. It is used within an experimental design that entails a sample, a control group , and one or more experimental conditions, and it ensures that chance alone determines which condition each participant experiences.


Importance of random assignment

Random assignment is essential for strengthening the internal validity of experimental research. Internal validity helps make a causal relationship’s conclusions reliable and trustworthy.

In experimental research, researchers isolate and manipulate independent variables and assess their impact while controlling for other variables. To achieve this, they administer different levels of the independent variable to different groups of participants. This experimental design is called an independent measures or between-group design.

Example: Different levels of independent variables

  • In a medical study, you can research the impact of a nutrient supplement on immune function (nutrient supplement = independent variable, immune function = dependent variable)

Three levels of the independent variable are applicable here:

  • Control group (given a zero dose of the supplement)
  • The first experimental group (given a low dose)
  • The second experimental group (given a high dose)

Random assignment ensures that there is no bias between the treatment groups at the start of the trial. If you do not use this technique, you will not be able to rule out alternative explanations for your findings.

In the research experiment above, you could recruit participants by handing out flyers at public spaces like gyms, cafés, and community centers. Then, rather than sorting people by where they were recruited (which would introduce systematic differences between the groups), you would randomly assign each recruit to one of the three groups:

  • The control group
  • The low-dosage experimental group
  • The high-dosage experimental group

Even with random participant assignment, extraneous variables may still create some bias in the results. However, these variations are usually small and should not undermine your research. Using random placement in experiments is therefore highly advisable wherever it is ethical and practical for your research subject.
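
A minimal sketch of that assignment step, using hypothetical participant IDs; shuffling and splitting into thirds keeps the three groups equal in size:

```python
import random

recruits = [f"participant_{i:02d}" for i in range(1, 31)]  # hypothetical IDs

random.shuffle(recruits)             # chance alone determines group membership
control_group = recruits[:10]        # zero dose
low_dose_group = recruits[10:20]     # low dose
high_dose_group = recruits[20:]      # high dose
```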

Random assignment vs. random sampling

Simple random sampling is a method of choosing the participants for a study. On the other hand, the random assignment involves sorting the participants selected through random sampling. Another difference between random sampling and random assignment is that the former is used in several types of studies, while the latter is only applied in between-subject experimental designs.

Your study researches the impact of technology on productivity in a specific company.

In such a case, you have contact with the entire staff. So, you can assign each employee a number and apply a random number generator to pick a specific sample.

For instance, from 500 employees, you can pick 200. So, the full sample is 200.

Random sampling enhances external validity, as it helps ensure that the study sample is unbiased and representative of the population. This way, you can conclude that the results of your study generalize to that population.

After determining the full sample, you can break it down into two groups using random assignment. In this case, the groups are:

  • The control group (does not get access to technology)
  • The experimental group (gets access to technology)

Using random assignment assures you that any differences in productivity between the groups do not reflect pre-existing bias, which helps the company draw a sound conclusion.
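
Both steps can be sketched in a few lines, assuming hypothetical employee IDs: first random sampling (200 of 500 employees), then random assignment of the sample into two equal groups.

```python
import random

population = [f"employee_{i}" for i in range(1, 501)]

sample = random.sample(population, 200)  # random sampling: 200 of 500

random.shuffle(sample)                   # random assignment within the sample
control_group = sample[:100]             # no access to the technology
experimental_group = sample[100:]        # access to the technology
```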


How to use random assignment

Firstly, give each participant a unique number as an identifier. Then, use a tool that relies on chance (such as a random number generator, a coin flip, or drawing lots) to assign the participants to the sample groups.

Random assignment is a prevailing technique for placing participants in specific groups because each person has a fair opportunity of being put in any group.

Random assignment in block experimental designs

In complex experimental designs , you must group your participants into blocks before using the random assignment technique.

You can create participant blocks depending on demographic variables, working hours, or scores. However, the blocks imply that you will require a bigger sample to attain high statistical power.

After grouping the participants into blocks, you can use random assignment inside each block to allocate the members to a specific treatment condition. Doing this will help you examine whether the blocking variable influences the effect of the treatment.

Depending on their unique characteristics, you can also use blocking in matched experimental designs before matching the participants in each block. Then, you can randomly allot each participant to one of the treatments in the research and examine the results.

When random assignment is not used

As powerful a tool as it is, random assignment does not apply in all situations. Like the following:

Comparing different groups

When the purpose of your study is to assess the differences between the participants, random member assignment may not work.

If you want to compare teens and the elderly with and without specific health conditions, you must ensure that the participants have specific characteristics. Therefore, you cannot pick them randomly.

In such a study, the medical condition (the characteristic of interest) is the independent variable, and the participants are grouped based on their ages (the different levels). All participants are screened in the same way to confirm that they have the medical condition, and their outcomes are then compared across the group levels.

No ethical justifiability

Another situation where you cannot use random assignment is if it is ethically not permitted.

If your study involves unhealthy or dangerous behaviors or subjects, such as drug use, you cannot ethically assign participants to those conditions. Instead of randomly assigning participants to groups, you can conduct quasi-experimental research.

When using a quasi-experimental design , you examine the conclusions of pre-existing groups you have no control over, such as existing drug users. While you cannot randomly assign them to groups, you can use variables like their age, years of drug use, or socioeconomic status to group the participants.

What is the definition of random assignment?

It is an experimental research technique that involves randomly placing participants from your sample into different groups. It ensures that every sample member has the same opportunity of being in any group (control or experimental).

When is random assignment applicable?

You can use this placement technique in experiments featuring an independent measures design. It helps ensure that all your sample groups are comparable.

What is the importance of random assignment?

It can help you enhance your study’s validity . This technique also helps ensure that every sample has an equal opportunity of being assigned to a control or trial group.

When should you NOT use random assignment?

You should not use this technique if your study focuses on comparing pre-existing groups or if random assignment would not be ethical.


Statistical Thinking: A Simulation Approach to Modeling Uncertainty (UM STAT 216 edition)

3.6 Causation and Random Assignment

Medical researchers may be interested in showing that a drug helps improve people’s health (the cause of improvement is the drug), while educational researchers may be interested in showing a curricular innovation improves students’ learning (the curricular innovation causes improved learning).

To attribute a causal relationship, there are three criteria a researcher needs to establish:

  • Association of the Cause and Effect: There needs to be an association between the cause and effect.
  • Timing: The cause needs to happen BEFORE the effect.
  • No Plausible Alternative Explanations: ALL other possible explanations for the effect need to be ruled out.

Please read more about each of these criteria at the Web Center for Social Research Methods .

The third criterion can be quite difficult to meet. To rule out ALL other possible explanations for the effect, we want to compare the world with the cause applied to the world without the cause. In practice, we do this by comparing two different groups: a “treatment” group that gets the cause applied to them, and a “control” group that does not. To rule out alternative explanations, the groups need to be “identical” with respect to every possible characteristic (aside from the treatment) that could explain differences. This way the only characteristic that will be different is that the treatment group gets the treatment and the control group doesn’t. If there are differences in the outcome, then it must be attributable to the treatment, because the other possible explanations are ruled out.

So, the key is to make the control and treatment groups “identical” when you are forming them. One thing that makes this task (slightly) easier is that they don’t have to be exactly identical, only probabilistically equivalent . This means, for example, that if you were matching groups on age, you don’t need the two groups to have identical age distributions; they would only need to have roughly the same AVERAGE age. Here roughly means “the average ages should be the same within what we expect because of sampling error.”

Now we just need to create the groups so that they have, on average, the same characteristics … for EVERY POSSIBLE CHARACTERISTIC that could explain differences in the outcome.

It turns out that creating probabilistically equivalent groups is a really difficult problem. One method that works pretty well for doing this is to randomly assign participants to the groups. This works best when you have large sample sizes, but even with small sample sizes random assignment has the advantage of at least removing the systematic bias between the two groups (any differences are due to chance and will probably even out between the groups). As Wikipedia’s page on random assignment points out,

Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment. … Random assignment does not guarantee that the groups are matched or equivalent. The groups may still differ on some preexisting attribute due to chance. The use of random assignment cannot eliminate this possibility, but it greatly reduces it.
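
A small simulation can illustrate this probabilistic equivalence. In the sketch below (the ages are simulated, not real data), two randomly assigned groups end up with roughly the same average age, differing only within sampling error:

```python
import random
from statistics import mean

ages = [random.randint(18, 65) for _ in range(200)]  # simulated ages

random.shuffle(ages)                 # random assignment to two groups
group_a, group_b = ages[:100], ages[100:]

# The two averages should agree within sampling error.
print(round(mean(group_a), 1), round(mean(group_b), 1))
```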

We use the term internal validity to describe the degree to which cause-and-effect inferences are accurate and meaningful. Causal attribution is the goal for many researchers. Thus, by using random assignment we have a pretty high degree of evidence for internal validity; we have a much higher belief in causal inferences. Much like evidence used in a court of law, it is useful to think about validity evidence on a continuum. For example, a visualization of the internal validity evidence for a study that employed random assignment in the design might be:

[Figure: a continuum of validity evidence from low to high, with the internal validity evidence for a study employing random assignment placed in the upper third]

The degree of internal validity evidence is high (in the upper-third). How high depends on other factors such as sample size.

To learn more about random assignment, you can read the following:

  • The research report, Random Assignment Evaluation Studies: A Guide for Out-of-School Time Program Practitioners

3.6.1 Example: Does sleep deprivation cause a decrease in performance?

Let’s consider the criteria with respect to the sleep deprivation study we explored in class.

3.6.1.1 Association of cause and effect

First, we ask, Is there an association between the cause and the effect? In the sleep deprivation study, we would ask, “Is sleep deprivation associated with a decrease in performance?”

This is what a hypothesis test helps us answer! If the result is statistically significant , then we have an association between the cause and the effect. If the result is not statistically significant, then there is not sufficient evidence for an association between cause and effect.

In the case of the sleep deprivation experiment, the result was statistically significant, so we can say that sleep deprivation is associated with a decrease in performance.
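
To illustrate how a hypothesis test can establish such an association, here is a sketch of a simple permutation test; the numbers are invented for illustration and are not the study’s actual measurements.

```python
import random
from statistics import mean

random.seed(1)

# Invented performance improvements for two groups of 10 (not the study's data)
deprived     = [-9.6, 2.4, 6.5, 12.6, 21.3, 14.4, 5.0, 9.9, 7.2, 10.0]
unrestricted = [25.2, 14.5, -7.0, 12.6, 34.5, 45.6, 11.6, 18.6, 12.1, 30.5]

observed = mean(unrestricted) - mean(deprived)

# Permutation test: under the null hypothesis (no association), the group
# labels are arbitrary, so we re-randomize them many times and ask how often
# chance alone produces a difference at least as large as the observed one.
pooled = deprived + unrestricted
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[10:]) - mean(pooled[:10]) >= observed:
        extreme += 1

print(f"Observed difference in means: {observed:.2f}")
print(f"Approximate p-value: {extreme / trials:.4f}")
```

A small p-value means the observed difference would be rare if there were no association, which is what “statistically significant” captures.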

3.6.1.2 Timing

Second, we ask, Did the cause come before the effect? In the sleep deprivation study, the answer is yes. The participants were sleep deprived before their performance was tested. It may seem like this is a silly question to ask, but as the link above describes, it is not always so clear to establish the timing. Thus, it is important to consider this question any time we are interested in establishing causality.

3.6.1.3 No plausible alternative explanations

Finally, we ask, Are there any plausible alternative explanations for the observed effect? In the sleep deprivation study, we would ask, “Are there plausible alternative explanations for the observed difference between the groups, other than sleep deprivation?” Because this is a question about plausibility, human judgment comes into play. Researchers must make an argument about why there are no plausible alternatives. As described above, a strong study design can help to strengthen the argument.

At first, it may seem like there are a lot of plausible alternative explanations for the difference in performance. There are a lot of things that might affect someone’s performance on a visual task! Sleep deprivation is just one of them! For example, artists may be more adept at visual discrimination than other people. This is an example of a potential confounding variable. A confounding variable is a variable that might affect the results, other than the causal variable that we are interested in.

Here’s the thing though. We are not interested in figuring out why any particular person got the score that they did. Instead, we are interested in determining why one group was different from another group. In the sleep deprivation study, the participants were randomly assigned. This means that there is no systematic difference between the groups with respect to any confounding variables. Yes, artistic experience is a possible confounding variable, and it may be the reason why two people score differently. BUT: There is no systematic difference between the groups with respect to artistic experience, and so artistic experience is not a plausible explanation as to why the groups would be different. The same can be said for any possible confounding variable. Because the groups were randomly assigned, it is not plausible to say that the groups are different with respect to any confounding variable. Random assignment helps us rule out plausible alternatives.

3.6.1.4 Making a causal claim

Now, let’s try to make a causal claim for the sleep deprivation study:

  • Association: There is a statistically significant result, so the cause is associated with the effect
  • Timing: The participants were sleep deprived before their performance was measured, so the cause came before the effect
  • Plausible alternative explanations: The participants were randomly assigned, so the groups are not systematically different on any confounding variable. The only systematic difference between the groups was sleep deprivation. Thus, there are no plausible alternative explanations for the difference between the groups, other than sleep deprivation

Thus, the internal validity evidence for this study is high, and we can make a causal claim. For the participants in this study, we can say that sleep deprivation caused a decrease in performance.

Key points: Causation and internal validity

To make a cause-and-effect inference, you need to consider three criteria:

  • Association of the Cause and Effect: There needs to be an association between the cause and effect. This can be established by a hypothesis test.
  • Timing: The cause needs to happen BEFORE the effect.
  • No Plausible Alternative Explanations: ALL other possible explanations for the effect need to be ruled out.

Random assignment removes any systematic differences between the groups (other than the treatment), and thus helps to rule out plausible alternative explanations.

Internal validity describes the degree to which cause-and-effect inferences are accurate and meaningful.

Confounding variables are variables that might affect the results, other than the causal variable that we are interested in.

Probabilistic equivalence means that there is not a systematic difference between groups. The groups are the same on average.

How can we make "equivalent" experimental groups?


Random Assignment in Experiments | Introduction & Examples

Published on 6 May 2022 by Pritha Bhandari. Revised on 13 February 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomisation.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomised designs.

Random assignment is a key part of experimental design . It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors.

Table of contents

  • Why does random assignment matter?
  • Random sampling vs random assignment
  • How do you use random assignment?
  • When is random assignment not used?
  • Frequently asked questions about random assignment

Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

For example, in a medication trial, you might use three groups of participants, each given a different level of the independent variable (the dosage):

  • A control group that’s given a placebo (no dosage)
  • An experimental group that’s given a low dosage
  • A second experimental group that’s given a high dosage

Random assignment helps you make sure that the treatment groups don’t differ in systematic or biased ways at the start of the experiment.

If you don’t use random assignment, you may not be able to rule out alternative explanations for your results. Consider, for example, a study in which participants are assigned to groups based on where they were recruited:

  • Participants recruited from pubs are placed in the control group
  • Participants recruited from local community centres are placed in the low-dosage experimental group
  • Participants recruited from gyms are placed in the high-dosage group

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym users may tend to engage in more healthy behaviours than people who frequent pubs or community centres, and this would introduce a healthy user bias in your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.


Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.

Random sample vs random assignment

Random sampling enhances the external validity or generalisability of your results, because it helps to ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences .

Suppose your study population is a company’s 8,000 employees, and you use a simple random sample to collect data. Because you have access to the whole population, you can assign all 8,000 employees a number and use a random number generator to select 300 employees. These 300 employees are your full sample.

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable .

  • A control group that receives no intervention
  • An experimental group that has a remote team-building intervention every week for a month

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually into a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you only have two groups, for each number on the list, flip a coin to decide if they’ll be in the control or the experimental group.
  • Roll a die: When you have three groups, for each number on the list, roll a die to decide which of the groups they will be in. For example, assume that rolling 1 or 2 lands them in the control group; 3 or 4 in an experimental group; and 5 or 6 in a second experimental group.

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.
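
As a minimal sketch of two of the methods above (the participant labels and group names are hypothetical):

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

# Random number generator method: shuffle the numbered list, then split it.
# This guarantees equal group sizes.
random.shuffle(participants)
control, experimental = participants[:10], participants[10:]
print("Control:     ", control)
print("Experimental:", experimental)

# Coin-flip method: decide each participant's group independently.
# Simple, but group sizes may end up unequal by chance.
assignment = {p: random.choice(["control", "experimental"]) for p in participants}
print(assignment)
```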

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power .

For example, a randomised block design involves placing participants into blocks based on a shared characteristic (e.g., college students vs graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.

In an experimental matched design , you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing children and adults or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviours, it’s not possible to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can’t assign participants to groups, you can also conduct a quasi-experimental study . In a quasi-experiment, you study the outcomes of pre-existing groups who receive treatments that you may not have any control over (e.g., heavy drinkers and social drinkers).

These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.


Bhandari, P. (2023, February 13). Random Assignment in Experiments | Introduction & Examples. Scribbr. Retrieved 29 April 2024, from https://www.scribbr.co.uk/research-methods/random-assignment-experiments/



RESEARCH RANDOMIZER

Random sampling and random assignment made easy.

Research Randomizer is a free resource for researchers and students in need of a quick way to generate random numbers or assign participants to experimental conditions. This site can be used for a variety of purposes, including psychology experiments, medical trials, and survey research.

GENERATE NUMBERS

In some cases, you may wish to generate more than one set of numbers at a time (e.g., when randomly assigning people to experimental conditions in a "blocked" research design). If you wish to generate multiple sets of random numbers, simply enter the number of sets you want, and Research Randomizer will display all sets in the results.

Specify how many numbers you want Research Randomizer to generate in each set. For example, a request for 5 numbers might yield the following set of random numbers: 2, 17, 23, 42, 50.

Specify the lowest and highest value of the numbers you want to generate. For example, a range of 1 up to 50 would only generate random numbers between 1 and 50 (e.g., 2, 17, 23, 42, 50). Enter the lowest number you want in the "From" field and the highest number you want in the "To" field.

Selecting "Yes" means that any particular number will appear only once in a given set (e.g., 2, 17, 23, 42, 50). Selecting "No" means that numbers may repeat within a given set (e.g., 2, 17, 17, 42, 50). Please note: Numbers will remain unique only within a single set, not across multiple sets. If you request multiple sets, any particular number in Set 1 may still show up again in Set 2.

Sorting your numbers can be helpful if you are performing random sampling, but it is not desirable if you are performing random assignment. To learn more about the difference between random sampling and random assignment, please see the Research Randomizer Quick Tutorial.

Place Markers let you know where in the sequence a particular random number falls (by marking it with a small number immediately to the left).

  • With Place Markers Off, your results will look something like this: Set #1: 2, 17, 23, 42, 50; Set #2: 5, 3, 42, 18, 20. This is the default layout Research Randomizer uses.
  • With Place Markers Within, your results will look something like this: Set #1: p1=2, p2=17, p3=23, p4=42, p5=50; Set #2: p1=5, p2=3, p3=42, p4=18, p5=20. This layout allows you to know instantly that the number 23 is the third number in Set #1, whereas the number 18 is the fourth number in Set #2. Notice that with this option, the Place Markers begin again at p1 in each set.
  • With Place Markers Across, your results will look something like this: Set #1: p1=2, p2=17, p3=23, p4=42, p5=50; Set #2: p6=5, p7=3, p8=42, p9=18, p10=20. This layout allows you to know that 23 is the third number in the sequence, and 18 is the ninth number over both sets. As discussed in the Quick Tutorial, this option is especially helpful for doing random assignment by blocks.
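
For readers who prefer code to a web form, the sketch below mimics the options described above (number of sets, numbers per set, range, uniqueness within a set, and optional sorting); it is an illustrative stand-in, not Research Randomizer's actual implementation.

```python
import random

def generate_sets(num_sets, per_set, low, high, unique=True, sort=False):
    """Generate sets of random numbers with the options described above."""
    sets = []
    for _ in range(num_sets):
        if unique:
            # Numbers are unique within a set, but may recur across sets
            numbers = random.sample(range(low, high + 1), per_set)
        else:
            numbers = [random.randint(low, high) for _ in range(per_set)]
        if sort:
            numbers.sort()  # helpful for sampling, not for assignment
        sets.append(numbers)
    return sets

for i, s in enumerate(generate_sets(num_sets=2, per_set=5, low=1, high=50), 1):
    print(f"Set #{i}:", ", ".join(str(n) for n in s))
```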


J Hum Reprod Sci, v.4(1); Jan-Apr 2011

This article has been retracted.

An overview of randomization techniques: an unbiased assessment of outcome in clinical research.

Department of Biostatistics, National Institute of Animal Nutrition & Physiology (NIANP), Adugodi, Bangalore, India

Randomization as a method of experimental control has been extensively used in human clinical trials and other biological experiments. It prevents selection bias and insures against accidental bias. It produces comparable groups and eliminates sources of bias in treatment assignments. Finally, it permits the use of probability theory to express the likelihood that chance is the source of any difference in the end outcome. This paper discusses the different methods of randomization and the use of online statistical computing tools ( www.graphpad.com/quickcalcs or www.randomization.com ) to generate randomization schedules. Issues related to randomization are also discussed.

INTRODUCTION

A good experiment or trial minimizes the variability of the evaluation and provides unbiased evaluation of the intervention by avoiding confounding from other factors, both known and unknown. Randomization ensures that each patient has an equal chance of receiving any of the treatments under study and generates comparable intervention groups that are alike in all important aspects except for the intervention each group receives. It also provides a basis for the statistical methods used in analyzing the data. The basic benefits of randomization are as follows: it eliminates selection bias, balances the groups with respect to many known and unknown confounding or prognostic variables, and forms the basis for an assumption-free statistical test of the equality of treatments. In general, a randomized experiment is an essential tool for testing the efficacy of a treatment.

In practice, randomization requires generating randomization schedules, which should be reproducible. Generation of a randomization schedule usually includes obtaining random numbers and assigning them to subjects or treatment conditions. Random numbers can be generated by computers or can come from random number tables found in most statistics textbooks. For simple experiments with a small number of subjects, randomization can be performed easily by assigning random numbers from random number tables to the treatment conditions. However, for large sample sizes, or if restricted or stratified randomization is to be performed, or if an unbalanced allocation ratio will be used, it is better to use computer programming, such as SAS or the R environment, to do the randomization.[ 1 – 6 ]

REASON FOR RANDOMIZATION

Researchers in the life sciences demand randomization for several reasons. First, subjects in various groups should not differ in any systematic way. In clinical research, if treatment groups are systematically different, research results will be biased. Suppose that subjects are assigned to control and treatment groups in a study examining the efficacy of a surgical intervention. If a greater proportion of older subjects are assigned to the treatment group, then the outcome of the surgical intervention may be influenced by this imbalance. The effects of the treatment would be indistinguishable from the influence of the imbalance of covariates, thereby requiring the researcher to control for the covariates in the analysis to obtain an unbiased result.[ 7 , 8 ]

Second, proper randomization ensures no a priori knowledge of group assignment (i.e., allocation concealment). That is, researchers, subjects or patients, and others should not know to which group a subject will be assigned. Knowledge of group assignment creates a layer of potential selection bias that may taint the data.[ 9 ] Schulz and Grimes stated that trials with inadequate or unclear randomization tended to overestimate treatment effects by up to 40% compared with those that used proper randomization. The outcome of the research can be negatively influenced by such inadequate randomization.

Statistical techniques such as analysis of covariance (ANCOVA) and multivariate ANCOVA are often used to adjust for covariate imbalance in the analysis stage of clinical research. However, the interpretation of this post-hoc adjustment approach is often difficult because imbalance of covariates frequently leads to unanticipated interaction effects, such as unequal slopes among subgroups of covariates.[ 1 ] One of the critical assumptions in ANCOVA is that the slopes of the regression lines are the same for each group of covariates. The adjustment needed for each covariate group may vary, which is problematic because ANCOVA uses the average slope across the groups to adjust the outcome variable. Thus, the ideal way of balancing covariates among groups is to apply sound randomization in the design stage of a clinical research project (before the adjustment procedure) instead of after data collection. In such instances, random assignment is necessary and guarantees the validity of the statistical tests of significance that are used to compare treatments.

TYPES OF RANDOMIZATION

Many procedures have been proposed for the random assignment of participants to treatment groups in clinical trials. In this article, common randomization techniques, including simple randomization, block randomization, stratified randomization, and covariate adaptive randomization, are reviewed. Each method is described along with its advantages and disadvantages. It is very important to select a method that will produce interpretable and valid results for your study. The use of online software to generate a randomization schedule with the block randomization procedure will also be presented.

Simple randomization

Randomization based on a single sequence of random assignments is known as simple randomization.[ 3 ] This technique maintains complete randomness in the assignment of a subject to a particular group. The most common and basic method of simple randomization is flipping a coin. For example, with two treatment groups (control versus treatment), the side of the coin (i.e., heads - control, tails - treatment) determines the assignment of each subject. Other methods include using a shuffled deck of cards (e.g., even - control, odd - treatment) or rolling a die (e.g., 3 or below - control, above 3 - treatment). A random number table found in a statistics book or computer-generated random numbers can also be used for simple randomization of subjects.

This randomization approach is simple and easy to implement in clinical research. In large trials, simple randomization can be trusted to generate similar numbers of subjects among groups. However, randomization results could be problematic in clinical research with relatively small sample sizes, resulting in an unequal number of participants among groups.

Block randomization

The block randomization method is designed to randomize subjects into groups that result in equal sample sizes. This method is used to ensure a balance in sample size across groups over time. Blocks are small and balanced with predetermined group assignments, which keeps the numbers of subjects in each group similar at all times.[ 1 , 2 ] The block size is determined by the researcher and should be a multiple of the number of groups (i.e., with two treatment groups, block size of either 4, 6, or 8). Blocks are best used in smaller increments as researchers can more easily control balance.[ 10 ]

After block size has been determined, all possible balanced combinations of assignment within the block (i.e., equal number for all groups within the block) must be calculated. Blocks are then randomly chosen to determine the patients’ assignment into the groups.
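
The procedure just described (enumerate all balanced orderings within a block, then draw blocks at random) can be sketched as follows; the group labels and block size are illustrative.

```python
import itertools
import random

def block_randomize(n_subjects, groups=("control", "treatment"), block_size=4):
    """Assign subjects using randomly chosen balanced blocks, so that group
    sizes stay (nearly) equal at every point in time. block_size must be a
    multiple of the number of groups."""
    per_group = block_size // len(groups)
    base = [g for g in groups for _ in range(per_group)]
    # All distinct balanced orderings of one block:
    balanced_blocks = list(set(itertools.permutations(base)))
    assignments = []
    while len(assignments) < n_subjects:
        assignments.extend(random.choice(balanced_blocks))
    return assignments[:n_subjects]

print(block_randomize(10))
```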

Although balance in sample size may be achieved with this method, groups may be generated that are rarely comparable in terms of certain covariates. For example, one group may have more participants with secondary diseases (e.g., diabetes, multiple sclerosis, cancer, hypertension, etc.) that could confound the data and may negatively influence the results of the clinical trial.[ 11 ] Pocock and Simon stressed the importance of controlling for these covariates because of serious consequences to the interpretation of the results. Such an imbalance could introduce bias in the statistical analysis and reduce the power of the study. Hence, sample size and covariates must be balanced in clinical research.

Stratified randomization

The stratified randomization method addresses the need to control and balance the influence of covariates. This method can be used to achieve balance among groups in terms of subjects’ baseline characteristics (covariates). Specific covariates must be identified by the researcher who understands the potential influence each covariate has on the dependent variable. Stratified randomization is achieved by generating a separate block for each combination of covariates, and subjects are assigned to the appropriate block of covariates. After all subjects have been identified and assigned into blocks, simple randomization is performed within each block to assign subjects to one of the groups.

The stratified randomization method controls for the possible influence of covariates that would jeopardize the conclusions of the clinical research. For example, a clinical research of different rehabilitation techniques after a surgical procedure will have a number of covariates. It is well known that the age of the subject affects the rate of prognosis. Thus, age could be a confounding variable and influence the outcome of the clinical research. Stratified randomization can balance the control and treatment groups for age or other identified covariates. Although stratified randomization is a relatively simple and useful technique, especially for smaller clinical trials, it becomes complicated to implement if many covariates must be controlled.[ 12 ] Stratified randomization has another limitation; it works only when all subjects have been identified before group assignment. However, this method is rarely applicable because clinical research subjects are often enrolled one at a time on a continuous basis. When baseline characteristics of all subjects are not available before assignment, using stratified randomization is difficult.[ 10 ]
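
A minimal sketch of stratified randomization, assuming the strata have already been formed from the identified covariates (the strata and subject IDs below are hypothetical):

```python
import random

def stratified_randomize(strata, groups=("control", "treatment")):
    """strata: dict mapping a stratum label (one combination of covariates)
    to a list of subject IDs. Randomization is performed within each stratum."""
    assignment = {}
    for label, subjects in strata.items():
        shuffled = subjects[:]
        random.shuffle(shuffled)
        for i, subject in enumerate(shuffled):
            assignment[subject] = groups[i % len(groups)]  # balanced within stratum
    return assignment

strata = {"age<40": ["s1", "s2", "s3", "s4"],
          "age>=40": ["s5", "s6", "s7", "s8"]}
print(stratified_randomize(strata))
```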

Covariate adaptive randomization

One potential problem with small to moderate size clinical research is that simple randomization (with or without taking stratification of prognostic variables into account) may result in imbalance of important covariates among treatment groups. Imbalance of covariates is important because of its potential to influence the interpretation of a research results. Covariate adaptive randomization has been recommended by many researchers as a valid alternative randomization method for clinical research.[ 8 , 13 ] In covariate adaptive randomization, a new participant is sequentially assigned to a particular treatment group by taking into account the specific covariates and previous assignments of participants.[ 7 ] Covariate adaptive randomization uses the method of minimization by assessing the imbalance of sample size among several covariates.
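
The following sketch illustrates the minimization idea: each new participant is placed in whichever group most reduces covariate imbalance, with ties broken at random. It is a simplified illustration, not a full Pocock and Simon implementation.

```python
import random

def minimization_assign(new_covariates, groups, enrolled):
    """Assign a new subject to the group that minimizes covariate imbalance.
    enrolled: list of (group, covariates) tuples for already-assigned subjects.
    Simplified sketch of minimization, not a complete implementation."""
    def imbalance_if_assigned(group):
        score = 0
        for covariate, value in new_covariates.items():
            counts = {g: 0 for g in groups}
            for g, covs in enrolled:
                if covs.get(covariate) == value:
                    counts[g] += 1
            counts[group] += 1  # tentatively place the new subject here
            score += max(counts.values()) - min(counts.values())
        return score

    scores = {g: imbalance_if_assigned(g) for g in groups}
    best = min(scores.values())
    # Ties are broken at random, preserving an element of chance
    return random.choice([g for g, s in scores.items() if s == best])

enrolled = [("A", {"sex": "F", "age": "young"}), ("B", {"sex": "M", "age": "old"})]
print(minimization_assign({"sex": "F", "age": "old"}, ["A", "B"], enrolled))
```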

Using the online randomizer at http://www.graphpad.com/quickcalcs/index.cfm , a researcher can generate a randomization plan for assigning treatments to patients. This online software is very simple and easy to use. Up to 10 treatments can be allocated to patients, and each treatment can be replicated up to 9 times. The major limitation of this software is that once a randomization plan is generated, the same plan cannot be regenerated, because the seed is taken from the local computer clock and is not displayed for further use. Another limitation is that a maximum of only 10 treatments can be assigned to patients. To use it, enter the web address http://www.graphpad.com/quickcalcs/index.cfm in the address bar of any browser; the GraphPad page appears with a number of options. Select the option “Random Numbers” and press continue; a Random Number Calculator with three options appears. Select the tab “Randomly assign subjects to groups” and press continue. On the next page, enter the number of subjects in each group in the “Assign” tab, select the number of groups from the “Subjects to each group” tab, and keep the number 1 in the repeat tab if there is no replication in the study. For example, if the total number of patients in a three-group experimental study is 30 and each group is to be assigned 10 patients, type 10 in the “Assign” tab, select 3 in the “Subjects to each group” tab, and then press the “do it” button to obtain the randomization plan.

Another online tool that can be used to generate a randomization plan is http://www.randomization.com . The seed for the random number generator[ 14 , 15 ] (Wichmann and Hill, 1982, as modified by McLeod, 1985) is obtained from the clock of the local computer and is printed at the bottom of the randomization plan. If a seed is included in the request, it overrides the value obtained from the clock and can be used to reproduce or verify a particular plan. Up to 20 treatments can be specified. The randomization plan is not affected by the order in which the treatments are entered or by the particular boxes left blank if not all are needed. The program begins by sorting treatment names internally. The sorting is case sensitive, however, so the same capitalization should be used when recreating an earlier plan. As an example of allocating 10 patients to two groups (each with 5 patients): first enter the treatment labels in the boxes, then enter the total number of patients (10) in the “Number of subjects per block” tab, and enter 1 in the “Number of blocks” tab for simple randomization (or more than one for block randomization).

The benefits of randomization are numerous. It insures against accidental bias in the experiment and produces groups that are comparable in all respects except for the intervention each group received. The purpose of this paper was to introduce randomization, including its concept and significance, and to review several randomization techniques to guide researchers and practitioners in better designing their randomized clinical trials. The use of online randomization tools was also demonstrated for the benefit of researchers. Simple randomization works well for large clinical trials ( n >100); for small to moderate clinical trials ( n <100) without covariates, the use of block randomization helps to achieve balance. For small to moderate size clinical trials with several prognostic factors or covariates, the adaptive randomization method can be more useful in providing a means to achieve treatment balance.

Source of Support: Nil

Conflict of Interest: None declared.

Storage Assignment Using Nested Metropolis Sampling and Approximations of Order Batching Travel Costs

  • Original Research
  • Open access
  • Published: 23 April 2024
  • Volume 5, article number 477 (2024)


  • Johan Oxenstierna, ORCID: orcid.org/0000-0002-6608-9621 (1, 2)
  • Jacek Malec, ORCID: orcid.org/0000-0002-2121-1937 (1)
  • Volker Krueger, ORCID: orcid.org/0000-0002-8836-8816 (1)


The Storage Location Assignment Problem (SLAP) is of central importance in warehouse operations. An important research challenge lies in generalizing the SLAP such that it is not tied to certain order-picking methodologies, constraints, or warehouse layouts. We propose the OBP-based SLAP, where the quality of a location assignment is obtained by optimizing an Order Batching Problem (OBP). For the optimization of the OBP-based SLAP, we propose a nested Metropolis algorithm. The algorithm includes an OBP-optimizer to obtain the cost of an assignment, as well as a filter which approximates OBP costs using a model based on the Quadratic Assignment Problem (QAP). In experiments, we tune two key parameters in the QAP model, and test whether its predictive quality warrants its use within the SLAP optimizer. Results show that the QAP model’s per-sample accuracy is only marginally better than a random baseline, but that it delivers predictions much faster than the OBP optimizer, implying that it can be used as an effective filter. We then run the SLAP optimizer with and without using the QAP model on industrial data. We observe a cost improvement of around 23% over 1 h with the QAP model, and 17% without it. We share results for public instances on the TSPLIB format.


Introduction

Charris et al. [ 7 ] give the following definition of a Storage Location Assignment Problem (SLAP): the “allocation of products into a storage space and optimization of the material handling (…) or storage space utilization [costs]”. The relationship between material handling costs, on the one hand, and storage assignment, on the other, can be showcased with an example: if a vehicle needs to pick a set of products, its travel cost clearly depends on where the products are stored in the warehouse. At the same time, the development of an effective storage strategy needs to consider various features of material handling, such as vehicle constraints, traffic conventions and picking methodologies.

In this paper, we work with a version of the SLAP which is particularly generalizable. Kübler et al. [ 18 ] name this version the “joint storage location assignment, order batching and picker routing problem”. The main characteristic of this version is the inclusion of two optimization problems in the SLAP:

The Order Batching Problem (OBP), where vehicles are assigned to carry sets of orders (an order is a set of products) [ 17 ].

The Picker Routing Problem , where a short picking path of a vehicle is found for the products that the vehicle is assigned to pick. The Picker Routing Problem is a Traveling Salesman Problem (TSP) applied in a warehouse environment [ 25 ].

Henceforth, we refer to this version as the OBP-based SLAP. A key advantage of using the OBP within the SLAP is the added flexibility and generality it brings on a conceptual level: for example, optimizing the OBP-based SLAP gives the opportunity to also optimize the TSP-based SLAP [ 23 ]. When it comes to product locations, the sole difference between the OBP and the OBP-based SLAP is that locations for all products are assumed fixed in the former while, in the latter, they are assumed mutable (for a subset of locations in our case).

It is of scientific importance to be able to compare optimization approaches and solutions. For the SLAP, this is made difficult by the many versions of the problem. As the extensive literature review by Charris et al. [ 7 ] shows, there is little consensus regarding which versions are more important, or specifically, which features would represent a standardized version. Examples of such features are dynamicity, warehouse layout, vehicle types, cost functions, reassignment scenarios and picking methodologies. There is also a shortage of benchmark datasets for any version of the SLAP, which prevents the reproducibility of experiments [ 2 , 16 ]. As part of our contribution for a standardized version, we suggest a modified TSPLIB format [ 26 ] (section “ Datasets ”). There are several ways in which to balance between simplicity, reproducibility and industrial applicability when developing SLAP versions and corresponding instances, however. From the generalization perspective, our model is advantageous in two main areas: Order-picking methodology and warehouse layout. But it is weak in two other areas: dynamicity and reassignment scenarios. We describe the meaning of these choices further in the light of prior work (section “ Related Work ”) and in our problem formulation (section “ Problem Formulation ”). We invite the community to debate which features are more or less important for a standardized version.

In section “ Optimization Algorithm ”, we introduce our SLAP optimizer. It is based on the Metropolis algorithm, a type of Markov Chain Monte Carlo (MCMC) method. A core feature of the optimizer is that the quality of a location assignment candidate is retrieved by optimizing an OBP. Due to the OBP’s NP-hardness, it must be optimized in a way that trades off solution quality with CPU-time. For this purpose, we use an OBP optimizer with a high degree of computational efficiency [ 22 ]. Within the SLAP optimizer, the OBP optimizer is still computationally expensive, and we show that it can be assisted by fast cost approximations from a Quadratic Assignment Problem (QAP) model. Finally, we test the performance of the SLAP optimizer with and without inclusion of the QAP approximations. Cost improvements are around 23% over 1 h with the QAP model, and 17% without. In summary, we make three concrete contributions:

Formulation of an OBP-based SLAP optimization model and a corresponding benchmark instance standard.

QAP approximation model to predict OBP travel costs and experiments on generated instances to test whether the use of QAP approximations within a SLAP optimizer can be justified.

An OBP-based SLAP optimizer (QAP-OBP) and experiments on industry instances to test its computational efficiency. Comparison of results with and without usage of QAP approximations.

Related Work

This section goes through general strategies for conducting storage location assignment, as well as ways in which their quality can be evaluated. Various SLAP formulations and proposed optimization algorithms are covered. Our primary focus will be on the standard picker-to-parts arrangement. We specifically refer to the work of Kübler et al. [ 18 ], as their proposed model aligns with ours.

There exist numerous general strategies for conducting storage location assignment [ 7 ]. Three key strategies are Dedicated, Class-based and Random:

Dedicated Each product is assigned to a specific location which never changes. This strategy is suitable if the product collection changes rarely and simplicity is desired. Additionally, human pickers can leverage this strategy by familiarizing themselves with specific products and their corresponding locations, which might speed up their picking [ 35 ].

Random Each product can be assigned any available location in the warehouse. This is suitable whenever the product collection changes frequently.

Class-based (zoning) The warehouse is partitioned into sections, and the products are classified based on their demand. Each class is assigned a zone. The outline of the zone can be regarded as dedicated in that it does not change, whereas the placement of each product in a zone is assumed to be random [ 21 ]. Class-based storage assignment can therefore be regarded as a middle ground between dedicated and random.

The quality of a location assignment is commonly evaluated based on some model of aggregate travel cost. For this purpose, a simplified simulation of order-picking in the warehouse can be used [ 7 , 21 ]. Some proposals include the simulation of order-picking by the Cube per Order Index (COI) [ 15 ]. COI includes the volume of a product and the frequency with which it is picked (historically or future-forecasted). Products with high pick frequency and relatively low volume are subsequently assigned to locations close to the depot. Since orders may contain products which are not located close to each other, COI is only adequate for order-picking scenarios where orders contain one product and vehicles carry one product at a time. This may be sufficient for pallet picking or when certain types of robots are used [ 3 ]. Mantel et al. [ 21 ], introduced Order Oriented Slotting (OOS) where the number of products in an order may be greater than one. A similar model to OOS is used by Fontana and Nepomuceno [ 10 ], Lee et al. [ 20 ] and Žulj et al. [ 37 ]. The picking cost of an order in OOS can in some cases be modeled using a Quadratic Assignment Problem (QAP) [ 21 ]. The QAP computes the sum of element-wise products of weights and frequencies [ 1 ] and for an order this can be translated into distances between products and how often they are picked. Nevertheless, a QAP on its own is often not sufficient to model a SLAP without extensive use of heuristics and constraints for warehouse layouts and picking methodologies [ 21 ]. For a layout-agnostic OBP-based SLAP, graph-based QAP techniques could be attempted, but hitherto they have only been applied on related problems [ 31 , 36 ].

There is only limited research on SLAPs where vehicles are expected to carry multiple orders and where an Order Batching Problem (OBP) is integrated into the SLAP optimization process. One example is Xiang et al. (2018) and [ 33 ], who use this approach in a robotic warehouse where the vehicles are pods or mobile racks, which is not easily comparable to a picker-to-parts system. Another example is Kübler et al. [ 18 ], which we look closer at below.

Travel distance or time are commonly used to evaluate SLAP solution quality in the above mentioned models, but there are several alternatives and extensions. Lee et al. [ 20 ], for example, study the effect of location assignment and traffic congestion in the warehouse. Assigning too many products to locations close to the depot (the goal in common COI) may lead to traffic congestion, which should ideally be considered in an industrial model. Lee et al. [ 20 ], formulate Correlated and Traffic Balanced Storage Assignment (C&TBSA) as a multi-objective problem with travel cost on the one hand, and traffic congestion avoidance on the other. Larco et al. [ 19 ], include worker welfare in their evaluation of solution quality. If picking is conducted by humans who move products from shelves onto a vehicle, the weight and volume, as well as the height of the shelf the product is placed on, can have an impact on worker welfare. Parameters such as "ergonomic loading," "human energy expenditure," or "worker discomfort" [ 7 ] can be used to quantify worker welfare.

The SLAP can be categorized into two main groups based on the number of location assignments required. Either the assignment is a “re-warehousing” operation, which means that a large portion of the warehouse’s products are (re)assigned [ 16 ]. Often, however, only a small subset of products are (re)assigned a location and this is called “healing” [ 16 ]. Solution proposals involving healing often look closely at different types of scenarios for carrying out initial assignments for new products in the warehouse, or reassignments for products already in the warehouse. Kübler et al. [ 18 ], propose four such scenarios.

Empty storage location A product is assigned to a previously unoccupied location.

Direct exchange A product changes location with another product.

Indirect exchange 1 A product is moved to another location which is occupied by another product. The latter product is moved to a third, empty location.

Indirect exchange 2 A product is moved to a new location which is occupied by a second product. The second product is moved to a new location which is occupied by a third product. The third product is moved to the original location of the first product.

The above scenarios are all associated with varying levels of effort, ranging from the lightest in scenario I, to the heaviest in IV. Kübler et al. quantify these efforts by including both physical and administrative times, which are transformed to effort terms by proposed proportionalities.

Concerning SLAP optimizers, proposals include models capable of obtaining optimal solutions, such as Mixed Integer Linear Programming (MILP), dynamic programming and branch and bound algorithms [ 7 ]. The warehouse environment is often simplified to a significant degree when optimal solutions are sought [ 7 , 13 , 16 , 19 ]. The main simplification relates to order-picking using COI or OOS. Other simplifications involve limiting the number of products [ 13 ], the number of locations [ 30 ], or requiring the conventional warehouse rack layout [ 18 ]. The conventional layout assumes Manhattan-style blocks of aisles and cross-aisles, and it is used almost exclusively in the existing literature on the SLAP (we are only aware of two exception cases, using the “fishbone” and “cascade” layouts [ 6 , 7 ]).

Most proposed SLAP optimizers provide non-exact solutions using heuristics or meta-heuristics. One example is multi-phase optimization where the first phase proposes possible locations for products, and the second phase carries out the assignments and evaluates them [ 32 ]. In Kübler et al. [ 18 ], a heuristic zoning optimizer is used to generate location assignments, and a Discrete Evolutionary Particle Swarm Optimizer (DEPSO) is used to optimize an OBP for the evaluation of the assignments. DEPSO is a modification of a standard PSO algorithm that addresses the risk of convergence on local minima and allows for a discrete search space. Other heuristic or meta-heuristic approaches include Genetic and Evolutionary Algorithms [ 9 , 20 ], Ant Colony Optimization [ 34 ] and Simulated Annealing [ 35 ]. If TSP optimization is desired within a SLAP, S-shape or Largest Gap algorithms [ 28 ] are often utilized. For TSP-optimization on unconventional layouts with a pre-computed distance matrix, Google OR-tools or Concorde have been proposed [ 22 , 27 ].

Evaluating the quality of results in prior work is challenging due to the variability of SLAP models. Below are a few examples where result quality is judged based on a percentage saving in travel distance or time: for conventional warehouse layouts, reassignment costs and dynamic picking patterns, Kofler et al. [ 16 ] report best savings around 21%. Kübler et al. (2020) report best savings around 22% in a similar scenario. Zhang et al. [ 35 ] report best savings around 18% on simulated data with thousands of product locations, but without reassignment costs. In a similar setting, for a few hundred products, Trindade et al. (2022) report best savings around 33%.

Nested Metropolis Sampling

The proposed optimizer (section “ Optimization Algorithm ”) is based on a nested Metropolis algorithm first introduced by Christen and Fox [ 8 ]. The Metropolis algorithm is a type of Markov Chain Monte Carlo (MCMC) method, which first draws a sample \({x}_{i+1}\) based on a desired feature distance (excluding costs) to a previous sample \({x}_{i}\) . The distance is given by some probability distribution \(q\left({x}_{i+1}|{x}_{i}\right)\) , and it is usually chosen such that the distance between \({x}_{i+1}\) and \({x}_{i}\) is low with a high probability (Mackay 1998). The accept probability is then computed based on some function that takes the costs of the new and previous samples as input [ 29 ]. Common Metropolis sampling assumes that there is only one cost function, \({f}^{*}\) , and since we wish to include an approximation of this cost, \(f\) , we use a modification [ 8 ]. Nested Metropolis sampling is shown in flowchart form in Fig.  1 .

Figure 1: Nested Metropolis sampling. The inner loop computes a cheap (in terms of CPU-time) approximation of a sample cost and, if the approximation is strong, the sample is promoted to the outer loop, where an expensive ground-truth cost is computed.

After a first sample \({x}_{i}\) has been initialized (i), a new sample \({x}_{i+1}\) is generated (ii) and its cost approximated \(f\left({x}_{i+1}\right)\) (iii). If the approximation is deemed strong enough (probabilistically) relative to \(f\left({x}_{i}\right)\) , the sample is promoted (iv) to the next step where its ground-truth cost \({f}^{*}\left({x}_{i+1}\right)\) is computed (v). The accept filter (vi) is only used for promoted samples.

For a cost minimization problem, the promote probability \(\alpha \left({x}_{i+1}|{x}_{i}\right)\) and the accept probability \({\alpha }^{*}\left({x}_{i+1}|{x}_{i}\right)\) can be computed from the approximate and ground-truth costs of the new and previous samples [ 8 ].
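
To make the flow in Fig. 1 concrete, here is a minimal Python sketch of delayed-acceptance sampling in the spirit of Christen and Fox [ 8 ]; it assumes a symmetric proposal and Boltzmann-style acceptance, and the cost functions and proposal are placeholders rather than the paper's actual OBP and QAP implementations.

```python
import math
import random

def nested_metropolis(x0, propose, f_approx, f_true, iterations=1000):
    """Delayed-acceptance Metropolis for cost minimization (sketch).
    propose(x) draws a new sample from q(.|x); f_approx is the cheap
    cost f; f_true is the expensive ground-truth cost f*."""
    x, fx, fsx = x0, f_approx(x0), f_true(x0)
    best_x, best_cost = x, fsx
    for _ in range(iterations):
        x_new = propose(x)
        f_new = f_approx(x_new)
        # Inner (promote) filter: the cheap approximation screens the sample
        if random.random() >= min(1.0, math.exp(fx - f_new)):
            continue  # rejected cheaply; f* is never computed
        fs_new = f_true(x_new)
        # Outer (accept) filter: corrects for the approximation error
        if random.random() < min(1.0, math.exp((fsx - fs_new) - (fx - f_new))):
            x, fx, fsx = x_new, f_new, fs_new
            if fsx < best_cost:
                best_x, best_cost = x, fsx
    return best_x, best_cost
```

Note that when the approximation is exact (f = f*), the accept filter always fires for promoted samples; the outer loop exists only to correct approximation error.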

Problem Formulation

Objective Function

The objective function in the OBP-based SLAP is based on the ones formulated in Henn and Wäscher [ 14 ] and Oxenstierna et al. [24], i.e., the minimization of cost in an Order Batching Problem (OBP):

\({f}^{*}\left(x\right)=\underset{\mathcal{B}}{\mathrm{min}}\sum_{v\in V}\sum_{b\in \mathcal{B}}{a}_{vb}{D}^{x}\left(b\right)\)    (3)

where \(\mathcal{O}\) denotes orders, where \(\mathcal{B}\) denotes batches and where \({D}^{x}\left(b\right)\) denotes the distance of a TSP solution, i.e., the distance needed to pick batch \(b\) . Batch \(b\) is a set of orders and \(v\in V\) denotes a vehicle. Each vehicle can carry one batch and the number of orders that can fit in the batch is governed by vehicle capacity (such as dimensions, bins, number of orders or products). \({a}_{vb}\) denotes a binary variable set to 1 if vehicle \(v\) is assigned to pick \(b\) and 0 otherwise. Orders consist of products \(\mathcal{O}\in {2}^{\mathcal{P}}\) , where each product \(p\in \mathcal{P}\) is a tuple consisting of a unique key (Stock Keeping Unit), a Cartesian location \(loc\left(p\right)\) , and a positive quantity of how many \(p\) are available at \(loc\left(p\right)\) . The locations of all products are given by location assignment vector \(x\) , where the elements represent products and the indices locations (each index is mapped to a Cartesian coordinate).

The mapping of location keys to coordinates and computation of distances between pairs of locations is based on a digitization pipeline for warehouses on any 2D obstacle layout and usage of the Floyd-Warshall graph algorithm. Details on this digitization pipeline and the OBP (including TSP-optimization for \({D}^{x}(b)\) and usage of vehicle capacity in \({a}_{vb}\) ) are beyond the scope of this paper, so for specifics we refer to Oxenstierna et al. [24] and Rensburg [ 27 ].

The difference between the OBP and the OBP-based SLAP mainly concerns product locations. In Oxenstierna, van Rensburg, et al. (2021) each product \(p\in \mathcal{P}\) “has a [fixed] location”, meaning that \(x\) in \({f}^{*}\left(x\right)\) is immutable. In the OBP-based SLAP, however, a subset of products \({\mathcal{P}}_{s}\subset \mathcal{P}\) do not have fixed locations, which means that some elements in \(x\) can change indices in the vector. The OBP-based SLAP objective consists of finding location assignment \(x\) such that the OBP in Eq.  3 is minimized:

\(\underset{x}{\mathrm{min}}\,{f}^{*}\left(x\right)\)    (4)

This objective lacks reassignment costs and is therefore a version of the “empty storage location” scenario I in Kübler et al. [ 18 ] (section “ Related Work ”). Exclusion of reassignment costs is motivated for this scenario, since the initial location assignment of new products in a warehouse is not optional, but a requirement. Kübler et al.’s other scenarios are all reassignments. Contrary to the initial assignments that we work with, reassignments are optional, and potential gains in travel cost must therefore be weighed against reassignment costs.

Although reassignments should ideally be included in a complete SLAP model, a standardized SLAP needs to be a trade-off between simplicity and complexity. In the TSP-based SLAP [ 23 ] it is shown that the optimization of reassignments is NP-hard and not easily combined with order-picking optimization within a SLAP. The TSP-based SLAP includes reassignments, but uses the TSP instead of the OBP to optimize order-picking. The OBP-based SLAP excludes reassignments, but includes the OBP, a significantly more challenging problem than the TSP. As is often the case in the literature on the SLAP, the choice of optimization model depends on which features are considered more important for the use case at hand.

Fast OBP Cost Approximation

One key difficulty with the OBP-based SLAP is that the OBP poses a highly intractable problem. Even for relatively small OBP instances, a significant amount of CPU-time is needed to obtain substantial cost improvements [ 18 , 22 ]. In the case of the OBP-based SLAP, this means that it would require a large amount of CPU-time to minimize cost for many assignment candidates \(x\) (Eq.  4 ). To resolve this problem, we propose to include an approximation of \({f}^{*}\left(x\right)\) :

\(f\left(x\right)=\sum_{{l}_{1}}\sum_{{l}_{2}}\sum_{{p}_{1}}\sum_{{p}_{2}}{w}_{{p}_{1}{p}_{2}}\,{d}_{{l}_{1}{l}_{2}}^{x}\,a\left({p}_{1},{l}_{1}\right)a\left({p}_{2},{l}_{2}\right)\)    (5)

where \(w\) denotes weight, \({d}_{{l}_{1}{l}_{2}}^{x}\) denotes the distance between two locations \({l}_{1},{l}_{2}\), and \(a\left(p,l\right)\) is a function which returns 1 if product \(p\) is located at location \(l\) and 0 otherwise. \(f\left(x\right)\) is the element-wise summation of weights times distances. The cell values in the weight matrix represent the number of times two products, \({p}_{1},{p}_{2}\), appear in the same order \(o\in \mathcal{O}\). The (shortest) distances between all pairs of product locations are assumed pre-computed and stored in memory. We refer to Eq. 5 as the Quadratic Assignment Problem (QAP) model. Note that we never minimize it. For the \(f\left(x\right)\) approximation to be useful, we next discuss how its ability to predict \({f}^{*}\left(x\right)\) can be evaluated.
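For illustration, a minimal sketch of this approximation follows. The names `W` (order-affinity weight matrix), `D` (pre-computed location distance matrix) and the array layout of `x` are assumptions made for the example, not the implementation used in the paper:

```python
import numpy as np

def qap_cost(x, W, D):
    """Approximate OBP cost f(x): sum over product pairs of the
    order-affinity weight times the distance between their locations.

    x : int array, x[p] = location index of product p
    W : (P, P) array, W[p1, p2] = number of orders containing both products
    D : (L, L) array, pre-computed shortest distances between locations
    """
    locs = np.asarray(x)
    d = D[np.ix_(locs, locs)]     # d[p1, p2] = distance under assignment x
    return float((W * d).sum())   # element-wise weights times distances
```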

Assuming a dataset of finite samples with approximated and ground truth costs, \(\left(x, f\left(x\right), {f}^{*}\left(x\right)\right)\in X\), \(|X|\in {\mathbb{Z}}^{+}\), \(f\left(x\right), {f}^{*}\left(x\right) \in {\mathbb{R}}^{+}\), the predictive quality of \(f\left(X\right)\) versus \({f}^{*}\left(X\right)\) is obtainable through softmax cross-entropy [4, 5]:

$$L = -\sum_{i=1}^{|X|} \mathbb{P}\left({f}^{*}\left({x}_{i}\right)\right) \log \mathbb{P}\left(f\left({x}_{i}\right)\right)$$

where \({\mathbb{P}}\left(f\left({x}_{i}\right)\right)\) and \({\mathbb{P}}\left({f}^{*}\left({x}_{i}\right)\right)\) denote the probabilities of approximate and ground truth costs of sample \({x}_{i}\), respectively, where \(\left({x}_{i}, f\left({x}_{i}\right), {f}^{*}\left({x}_{i}\right)\right)\in X\). \(L\) is the loss, i.e., a distance heuristic between \(f\left(X\right)\) and \({f}^{*}\left(X\right)\). This approach can be extended into Normalized Discounted Cumulative Gain (NDCG) [4]:

$$DCG_{f(X)} = \sum_{i=1}^{|X|} \frac{rel\left({\pi }_{f\left(X\right)}\left(i\right)\right)}{\log_{2}\left(i+1\right)}, \qquad NDCG = \frac{DCG_{f(X)}}{IDCG} \qquad (9)$$

\({\pi }_{f\left(X\right)}\) is a ranking (an ordering of samples \(X\) according to their costs \(f(X)\)) and \(rel({\pi }_{f\left(X\right)}\left(i\right))\) is the relevance at rank \({\pi }_{f\left(X\right)}\left(i\right)\). \(IDCG\) denotes the ideal value, where \(rel({\pi }_{{f}^{*}\left(X\right)}\left(1\right))>rel({\pi }_{{f}^{*}\left(X\right)}\left(2\right))>\dots > rel({\pi }_{{f}^{*}\left(X\right)}\left(|X|\right))\), i.e., the case where the relevance of a sample corresponds with how highly it is ranked. Bruch et al. [4] argue that NDCG is a stronger choice than softmax cross-entropy whenever cost is non-binary, which is the case for \({f}^{*}\left(x\right)\) (Eq. 3). In Fig. 13 (Appendix) an example is shown where NDCG is computed from \(\left|X\right|\) samples.
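The following minimal sketch computes NDCG for a list of samples, using the ordinal relevance values adopted later in section “Overview and Constants” (the function name and array-based interface are assumptions for the example):

```python
import numpy as np

def ndcg(approx_costs, true_costs):
    """NDCG of the ranking induced by approximate costs, measured
    against the ranking induced by ground-truth costs. Relevances are
    ordinal: the sample with the lowest true cost gets N, the highest
    gets 1."""
    n = len(true_costs)
    rel = np.empty(n)
    rel[np.argsort(true_costs)] = np.arange(n, 0, -1)   # best sample -> N
    discounts = 1.0 / np.log2(np.arange(2, n + 2))      # 1 / log2(rank + 1)
    dcg = float(np.sum(rel[np.argsort(approx_costs)] * discounts))
    idcg = float(np.sum(np.arange(n, 0, -1) * discounts))  # ideal ordering
    return dcg / idcg
```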

In summary, we can quantify the predictive quality of the QAP model by its ability to rank a list of samples \(X\) against a ground truth ranking produced by the OBP optimizer. Since the nested Metropolis algorithm in section “Nested Metropolis Sampling” only stores two samples at any iteration, we modify the algorithm to instead work with more samples (section “Optimization Algorithm”). We also want to avoid computing \({f}^{*}\left(X\right)\) in each iteration, so in the optimization algorithm we only compute \({f}^{*}\left(argmi{n}_{x} f\left(X\right)\right)\). In section “Experiments”, we conduct an experiment to test the validity of using the NDCG-based \({f}^{*}\left(argmi{n}_{x} f\left(X\right)\right)\) in SLAP optimization. In section “Datasets” we also discuss the choice of datatype for the relevance values.

Optimization Algorithm

The proposed optimization algorithm includes three main modules: (1) a sample (location assignment) generator, (2) a fast cost approximator based on a model of the Quadratic Assignment Problem (QAP), and (3) an Order Batching Problem (OBP) optimizer. In this paper, we mainly focus on how QAP approximations can be effectively utilized within the nested Metropolis sampler described in section “Nested Metropolis Sampling”. In sections “Sample Generator” and “Promote and Accept Thresholds and Cost Computations”, we therefore describe two main modifications. The final version (QAP-OBP) is shown in flowchart form in Fig. 2 and as pseudocode in Algorithm 1.

Fig. 2 QAP-OBP optimization algorithm

Algorithm 1 QAP-OBP (pseudocode)

Sample \(x\) contains both the assigned products (products already in the warehouse) and the unassigned products \({\mathcal{P}}_{s}\) (section “Problem Formulation”). \({x}_{1}\) is initialized such that the products in \({\mathcal{P}}_{s}\) are assigned locations randomly without replacement. Choices for the number of iterations \(K\), the cost distance function \(\Delta\) and the constant \({c}_{1}\) are discussed in section “Experiments”.
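A minimal sketch of this initialization is given below (all names are hypothetical):

```python
import random

def init_assignment(existing, slot_products, free_locations):
    """Build x1: keep the locations of already-assigned products and
    give each product in P_s a random free location, drawn without
    replacement so that no two new products share a location."""
    x1 = dict(existing)  # product -> location index
    chosen = random.sample(free_locations, k=len(slot_products))
    x1.update(zip(slot_products, chosen))
    return x1
```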

Sample Generator

The input to the sample generator (step ii in Fig. 2) is a single sample \({x}_{i}\) and the output is a list of new samples \({X}_{i+1}\). The sample generator uses two main parameters: \(N\in {\mathbb{Z}}^{+}\) dictates how many new samples are generated, i.e., \(|{X}_{i+1}|\), and \(\uplambda \in {\mathbb{R}}^{+}\) dictates how much each new sample in \({X}_{i+1}\) differs from \({x}_{i}\). The way \(N\) and \(\uplambda\) are utilized to generate new samples is shown in Algorithm 2.

Algorithm 2 Sample generator (pseudocode)

Every time the sample generator is called, an empty list is first initialized. Then, for \(N\) iterations, a new sample \(x\) is generated by first copying \({x}_{i}\) and then computing \(m\), the number of products for which the index in \(x\) can change. For \(m\) we use a truncated Poisson distribution with rate \(\uplambda\) and upper bound \(m\le {|\mathcal{P}}_{s}|\). A uniform random selection of \(m\) products, \({\mathcal{P}}_{m}\), is then removed from \(x\). For each \(p\in {\mathcal{P}}_{m}\), a uniform random free index in \(x\) (either an empty location or an index holding a product in \({\mathcal{P}}_{s}\)) is then selected such that the quantity (\(q\)) of the product does not exceed the location's capacity. After \(x\) has been filled, it is appended to \({X}_{i+1}\).
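A compact sketch of this procedure follows; `free_indices_for` stands in for the capacity and availability checks and, like the other names, is an assumption rather than the paper's code:

```python
import random
import numpy as np

def generate_samples(x_i, slot_products, free_indices_for, N, lam):
    """Algorithm 2 sketch: produce N perturbed copies of x_i.

    x_i           : dict product -> location index
    slot_products : the products P_s whose locations may change
    free_indices_for(p, x) -> feasible free indices for product p,
        i.e., empty locations (or slots of other P_s products) whose
        capacity can hold p's quantity.
    """
    samples = []
    for _ in range(N):
        x = dict(x_i)
        # m ~ Poisson(lam), clipped here (rather than resampled) to
        # 1 <= m <= |P_s| for brevity
        m = min(max(1, np.random.poisson(lam)), len(slot_products))
        moved = random.sample(list(slot_products), m)  # uniform selection P_m
        for p in moved:
            del x[p]                                   # remove from x
        for p in moved:
            x[p] = random.choice(free_indices_for(p, x))  # uniform free index
        samples.append(x)
    return samples
```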

Promote and Accept Thresholds and Cost Computations

After a list of samples \({X}_{i+1}\) has been generated (step ii in Fig. 2), their costs are approximated using the QAP model (iii). The sample with the lowest cost approximation is then always promoted (iv). Steps ii, iii and iv in the nested Metropolis sampler and in QAP-OBP (Figs. 1 and 2, respectively) are equivalent in that the final output is a single promoted sample, but the two versions conduct this selection differently, with advantages and disadvantages on both sides. In the nested Metropolis sampler, the promote probability depends on the ratio of approximated costs between the previous and new single samples. In QAP-OBP, the sample generator is instead set to output \(N=\left|{X}_{i+1}\right|\) candidates, followed by an argmin (compare step iv in Figs. 1 and 2). This modification simplifies evaluation of the QAP model's accuracy, since we can set up an experiment to compute OBP costs on the same samples (Fig. 5). Generating multiple samples could also facilitate parallelization, which, in future work, could reduce the QAP model's CPU-time. The main consideration, however, is that it simplifies the original algorithm for a particularly complex optimization scenario, where it cannot be expected to behave according to Christen and Fox's [8] performance guarantees. The problem with the original algorithm is that it assumes optimal ground-truth costs, but these are not generally available for OBPs [22] (as far as we are aware, no proposal exists for how to obtain optimal results for all but the smallest OBP instances within reasonable CPU-time). A relatively minor drawback of the modification is that it requires tuning of the number of samples (\(N\)) that the sample generator outputs each iteration.

The reason we use a Metropolis algorithm instead of possibly more capable meta-heuristic alternatives is mainly ease of implementation. The Metropolis algorithm has few parameters that must be tuned against the iteration budget \(K\) (such as the temperature in Simulated Annealing), so a time-based condition can be used instead of \(K\) to terminate the algorithm (we will use this in section “SLAP Optimization With and Without QAP Approximation”).
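One iteration of the modified loop can be sketched as follows, where `generate`, `qap_cost` and `obp_cost` are assumed to be callables closing over the instance data, and `delta` is the cost distance function \(\Delta\) interpreted as an acceptance probability (all names are placeholders):

```python
import random

def qap_obp_step(x_i, f_star_i, generate, qap_cost, obp_cost, delta):
    """One QAP-OBP iteration: promote the candidate with the lowest
    QAP approximation, compute the expensive OBP cost only for that
    sample, then accept or reject against the previous sample."""
    candidates = generate(x_i)                 # step ii: N new samples
    promoted = min(candidates, key=qap_cost)   # steps iii-iv: argmin f(X)
    f_star_new = obp_cost(promoted)            # step v: f*(argmin_x f(X))
    if random.random() < delta(f_star_i, f_star_new):
        return promoted, f_star_new            # accept
    return x_i, f_star_i                       # reject, keep previous
```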

For the computation of \({f}^{*}(x)\) we use the Single Batch Iterated (SBI) optimizer, whose main features are high computational efficiency and the ability to handle warehouses with unconventional rack layouts [22]. OBP optimization, and its internal use of TSP optimization, is beyond the scope of this paper, and we here treat SBI as a black box which outputs \({f}^{*}\left(x\right)\) for Eq. 3. The sample \(x\) with the lowest \({f}^{*}\left(x\right)\) found so far is always stored throughout the optimization procedure (sample storage is omitted in Figs. 1, 2 and the pseudocode).

Datasets

For this paper, we have generated and shared instances in L17_533,Footnote 1 which are based on the OBP instances in L6_203Footnote 2 and L09_251.Footnote 3 We also use data from a real warehouse (Aba Skol AB). The generated instances use the TSPLIB format [26] with certain amendments for the SLAP, including 6 types of warehouse obstacle layouts and various depot configurations, vehicle capacities and orders (see Fig. 3 for an example of one of the layouts). L17_533 does not include any unidirectional travel rules, meaning that the distance between any two locations is equal in both directions. The number of orders ranges between 4 and 1000 and the number of products between 10 and 3000. The products that are to be assigned a location, \({\mathcal{P}}_{s}\), are tagged as “SKUsToSlot” in the instance set. The “assignmentOptions” field includes the available empty locations and how cost is to be computed (it is always set to the “empty storage location” scenario). For analysis, instances are categorized according to vehicle capacities, number of orders, number of products and the parameters \(N\) and \(\uplambda\).

Fig. 3 Example storage assignment of four products and subsequent order-picking for the SLAP model used in the paper. Rectangles denote warehouse racks. Red and blue diamonds denote origin/destination for picking paths. Colored dots denote products and the four orders they belong to. Black crosses denote available locations for the new products. Note that products are often more spread out than what is shown in this example

The industrial warehouse dataset (Fig. 4) contains 210,277 products in 37,014 orders collected using batch picking over a 4-month period. There are 1289 pick-locations (in the graph representation) and most batches exist within one of six picking zones, but 24.4% include picks from several zones. As with the generated instances, shortest distances and paths between any two locations are assumed equal in both directions. For a proof of concept, we select product subsets from this data so as to balance relevance to warehouse management and real-world utility, on the one hand, against comparability to the generated instances, on the other. We build 150 subsets from 3-week periods, with between 50 and 1800 products for \(\mathcal{P}\) and between 10 and 225 corresponding products for \({\mathcal{P}}_{s}\). The subset selection is random, apart from the requirement that the products in a subset must exist within the same 3-week period. The number of free locations is given on a per-product basis, since each product has specific constraints regarding where it can be placed; on average it varies between 50 and 481 locations. For parameters \(N\) and \(\uplambda\), we explore suitable values on the generated instances within shorter optimization runs, followed by longer runs with the chosen constants on the real dataset.

Fig. 4 Top-view of the Aba Skol AB warehouse. The picking zones are color-coded. The red circle denotes the most commonly used depot location

Experiments

Overview and Constants

The experiments are divided into two parts. The first part involves tuning the QAP model and comparing its ability to rank SLAP assignment samples against an OBP ground truth model and a random baseline (Fig.  5 ).

Fig. 5 Steps involved to obtain QAP predictive quality on samples generated from an instance

A SLAP test-instance (orders with products) is first loaded (i) and \({x}_{1}\) initialized, with the products \({\mathcal{P}}_{s}\) assigned free locations in \({x}_{1}\) randomly (ii). Then, \(N\) location assignments, \({X}_{i+1}\), are generated according to Algorithm 2 (iii). The cost of the generated assignments is estimated using the QAP model and the OBP optimizer SBI (iv). The samples and costs are used to compute IDCG and DCG (v): IDCG is computed from the ranking of costs according to the OBP optimizer and DCG from the ranking of costs according to the QAP model. A random DCG value is also pre-computed using the average of \({10}^{6}\) random rankings. This random baseline represents the case when \(f\left({X}_{i+1}\right)\) and \({argmin}_{x}\, f\left({X}_{i+1}\right)\) (steps iii and iv in Fig. 5) cannot help produce a lower value of \({f}^{*}\left({x}_{i+1}\right)\) (step v) [11, 12]. Relevance values \(rel({\pi }_{{f}^{*}\left(X\right)})\) and \(rel({\pi }_{f\left(X\right)})\) are chosen to be the ordinal ranks of samples \(x\) according to the respective cost functions. For \(N\) samples, the values are \(rel\left({\pi }_{{f}^{*}\left(X\right)}\right)=({\pi }_{{f}^{*}\left(X\right)}\left(N\right), {\pi }_{{f}^{*}\left(X\right)}\left(N-1\right), \dots , {\pi }_{{f}^{*}\left(X\right)}\left(1\right))\) and \(rel\left({\pi }_{f\left(X\right)}\right)=({\pi }_{f\left(X\right)}\left(N\right), {\pi }_{f\left(X\right)}\left(N-1\right), \dots , {\pi }_{f\left(X\right)}\left(1\right))\) (this corresponds to the setup shown in Fig. 13 in the Appendix). The DCG value obtained from the QAP model is then used to compute NDCG according to Eq. 9 (vi). The predictive quality is finally calculated by subtracting the random NDCG baseline from the achieved NDCG value, with a positive value implying that the QAP model is stronger. We also record the CPU-time needed by the QAP model and the OBP optimizer, respectively.

The tuning of the QAP model concerns the parameters \(N\) (number of samples) and \(\uplambda\) (rate of change for the samples), which are tuned to maximize NDCG. We further investigate whether NDCG is impacted by other factors, including warehouse layout and instance size. Instance size is used as a quantification of instance difficulty, and here we restrict it to the number of orders, the total number of products \(\left|\mathcal{P}\right|\) and the number of products which are to be assigned a location, \({|\mathcal{P}}_{s}|\). The latter number, \({|\mathcal{P}}_{s}|\), is computed as 5–10% of \(\left|\mathcal{P}\right|\) in the instance.
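Under the same naming assumptions as the sketches above, the whole measurement can be summarized as follows; `generate` is assumed to close over the instance data, and the paper averages \(10^{6}\) random rankings where a smaller default is used here for brevity:

```python
import numpy as np

def qap_predictive_quality(x1, generate, qap_cost, obp_cost,
                           N=50, n_random=10_000, seed=0):
    """Steps iii-vi of Fig. 5: NDCG of the QAP ranking minus an
    averaged random-ranking baseline. A positive value means the QAP
    model ranks samples better than chance."""
    X = generate(x1, N)                                  # step iii
    approx = np.array([qap_cost(x) for x in X])          # step iv (QAP)
    truth = np.array([obp_cost(x) for x in X])           # step iv (SBI)
    score = ndcg(approx, truth)                          # steps v-vi
    rng = np.random.default_rng(seed)
    baseline = np.mean([ndcg(rng.permutation(N), truth)  # random rankings
                        for _ in range(n_random)])
    return score - baseline
```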

We proceed with the second part of the experiments, where we run the SLAP optimizer (Algorithm 1) on the industrial instances with and without the QAP model. For the experiments without the QAP model, \(N=1\) and lines 11 and 12 in Algorithm 1 are removed. This second part is carried out after suitable constants for \(N\) and \(\uplambda\) have been found on L17_533. In order to find such constants, we run the steps in Fig. 5 for 10 values of \(N\) ranging between 1 and 200 and 10 values of \(\uplambda\) set between 5 and 50% of \({|\mathcal{P}}_{s}|\). For the experiments testing \(N\), we use \(\uplambda =15\mathrm{\%}\) of \({|\mathcal{P}}_{s}|\); for the experiment testing \(\uplambda\), we use \(N=50\). For the cost distance function \(\Delta\) we use a scaled sigmoid, which is set to approach 1 when the ratio \({f}^{*}\left({x}_{i}\right)/{f}^{*}\left({x}_{i+1}\right)\) exceeds 1.05. This means that sample \({x}_{i+1}\) is unlikely to be accepted if its cost is 5% higher than that of \({x}_{i}\). For each instance, the global best OBP result is tracked and uploaded as the current best result. We refer to the documentation in L17_533 for further details. We use an Intel Core i7-4710MQ 2.5 GHz (4 cores), 32 GB RAM, Python3, Cython and C.
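A scaled sigmoid with these properties could look as follows; the steepness constant is illustrative, not a value from the paper:

```python
import math

def delta(f_star_old, f_star_new, scale=40.0):
    """Acceptance probability Delta as a scaled sigmoid of the cost
    ratio f*(x_i) / f*(x_{i+1}). Approaches 1 when the new sample is
    ~5% better (ratio > 1.05) and 0 when it is ~5% worse."""
    ratio = f_star_old / f_star_new
    return 1.0 / (1.0 + math.exp(-scale * (ratio - 1.0)))
```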

The Impact of Parameters \(N\) and \(\uplambda\) on QAP Predictive Quality

Concerning \(N\) , we first observe that the average predictive quality of the QAP model is equivalent to the random baseline when \(N=1\) (Fig.  6 ). We further observe that mean predictive quality rises steadily until \(N\) is 20, after which it tapers off.

Fig. 6 Boxplot showing number of samples (\(N\)) against QAP predictive quality. The red line denotes the NDCG random baseline. The box edges show the first and third quartiles of the data (Q1, Q3) and the whiskers show (Q1 − 1.5 × IQR, Q3 + 1.5 × IQR), where IQR is the interquartile range

The result clearly shows that the QAP model is able to rank samples better than the random baseline (negative values would imply the opposite). The positive initial trend could be impacted by the choice of ordinal relevance values \(rel({\pi }_{f\left(X\right)})\) for the NDCG computation (section “Overview and Constants”), which could favour the baseline for smaller \(N\).

Concerning the rate of change of new samples, \(\uplambda\), the best results are achieved when it is set toward the lower end of the 5–50% range of \(\left|{\mathcal{P}}_{S}\right|\) (Fig. 7). This provides some validation for the use of a Metropolis algorithm, since it shows that a Markov Chain can be used to nudge samples towards lower costs; otherwise, NDCG would be similar across the x-axis in Fig. 7. This result is in line with Oxenstierna et al. [23], where a slightly stronger pattern is observed on the related TSP-based SLAP.

Fig. 7 How much new samples are changed compared to previous samples (\(\uplambda\)) against QAP predictive power

The Impact of Other Factors on QAP Predictive Quality

Results for all factors are shown in Tables 1, 2 and 3 (Appendix). We find that QAP predictive quality decreases as instance size increases (Fig. 8). This may be because the quality of the \({f}^{*}(x)\) costs provided by the OBP optimizer decreases with instance size (they are sub-optimal, see section “Promote and Accept Thresholds and Cost Computations”), making analysis of results for larger instance classes more difficult in general. We find that the fraction of CPU-time required by the QAP model versus the OBP optimizer is between 0.006 and 0.019, i.e., the QAP model is around 50–150 times faster. The difference is largest for the largest instances and smallest for the smallest instances (Table 2). We do not observe any relationship between QAP predictive quality and warehouse layout.

Fig. 8 Instance size in terms of number of orders, versus the predictive quality of the QAP model and the random baseline

Overall, the results provide evidence that QAP approximations of OBP costs within an OBP-based SLAP optimizer may be justified. The QAP model's predictive quality may decrease with instance size relative to the OBP optimizer (Fig. 8), but its relative usage of CPU-time also decreases. Another way to visualize the performance difference between the QAP model and the random baseline is through a frequency distribution (Fig. 9).

Fig. 9 Frequency distribution of NDCG values (20 bins) from QAP and random ranking of samples when \(N=20\) and \(\lambda =10\%\) (of \(\left|{\mathcal{P}}_{S}\right|\))

SLAP Optimization With and Without QAP Approximation

We report results from running the QAP-OBP SLAP optimizer (section “Optimization Algorithm”) on the industrial dataset with and without the use of QAP approximations. Apart from the general settings (section “Overview and Constants”), \(K\) is set to \({10}^{8}\) and the algorithm is set to terminate after 60 min (which, given maximum OBP and QAP CPU-times, ensures that the number of iterations never exceeds \(K\)). \(\lambda\) is set to \(10\%\) of \(\left|{\mathcal{P}}_{S}\right|\) and \({c}_{1}=1\). \(N\) is set to 20, which means that the QAP model has a relatively small impact on overall CPU-time. \(N\) could theoretically be set to a much larger number, but this would not necessarily yield better results: the QAP model in the form of Eq. 5 likely needs to be developed further before its extended use can be motivated. One risk with setting \(N\) to a large number is that the SLAP optimizer would spend too much time in search regions with a low QAP cost, rather than in regions with a low OBP cost.

In Fig.  10 , we see that Algorithm 1, on average, improves cost by around 23% in 1 h. Without QAP approximations, cost improves by around 17%.

Fig. 10 SLAP optimization cost improvements with and without the QAP model during 1 h. The shaded areas denote 95% confidence intervals

The size of the instances has a significant impact on computational efficiency. In Figs. 11 and 12, we see that instance size, in terms of the number of products that are assigned a location, \(|{\mathcal{P}}_{s}|\), has a similar effect on computational efficiency regardless of whether the QAP model is used. The stronger performance on the smaller instances can largely be attributed to more samples being generated within the 60 min. On average, cost improvement continues throughout the allotted time, which is explained by the large SLAP search space.

Fig. 11 QAP-OBP SLAP cost improvement using QAP approximations for 5 categories of instance sizes (in terms of \(|{\mathcal{P}}_{s}|\)). Shaded areas denote data within 1 standard deviation

Fig. 12 Same as Fig. 11, but without using QAP approximations

Conclusion

In this paper, we:

formulate an optimization model for the Storage Location Assignment Problem (SLAP), where the costs of assignments are evaluated using Order Batching Problem (OBP) optimization.

share generated SLAP test instances, with the goal of standardizing formats and comparability between solution approaches.

propose a Quadratic Assignment Problem (QAP) model to quickly approximate OBP costs in SLAP optimization. The QAP model is tested and tuned on the generated instances.

propose a SLAP optimizer (QAP-OBP), which we test on industrial instances with a 1 h optimization timeout.

Within the QAP-OBP optimizer, the QAP and OBP modules are utilized in a Metropolis algorithm, where samples are modified by a variable amount each iteration. The algorithm is nested such that OBP costs are only computed for samples with a relatively strong QAP cost approximation.

In order to motivate the use of the QAP model within the algorithm, experiments are first conducted to test its predictive quality against costs obtained by the OBP optimizer and a random baseline. Results show that the QAP predictive quality is stronger than the baseline, and that QAP approximations are around 50–150 times faster to compute than costs obtained through OBP optimization.

We then proceed to run the SLAP optimizer with and without the QAP approximations. We find that the optimizer performs better when using the QAP approximations, with cost improvements of around 23% after 1 h. This result is in line with results in related work on SLAPs that are less difficult in some regards (for example concerning warehouse layouts) but more difficult in others (dynamicity or a larger number of products).

For future work, the parameter which controls the number of samples that are approximated by the QAP model for every OBP cost computation, \(N\), could be tuned further. The QAP computations could be significantly sped up by parallelization and Graphics Processing Units (GPUs), extending their utility within the SLAP optimizer for larger \(N\). Alternative optimization approaches could also be explored, including meta-heuristic techniques such as Simulated Annealing or Particle Swarm Optimization. The QAP cost approximator could also be developed into a machine learning model and used in a similar fashion to the weak estimators in boosting or bootstrap aggregation. The factorial search space remains a fundamental problem for learning, however. Finally, we invite discussion on how to best represent SLAP features in public benchmark data and which features to choose for a standardized version of the problem.

Footnote 1: https://github.com/johanoxenstierna/L17_533 , collected 13-02-2023.

Footnote 2: https://github.com/johanoxenstierna/OBP_instances , collected 15-01-2023.

Footnote 3: https://github.com/johanoxenstierna/L09_251 , collected 15-01-2023.

References

1. Abdel-Basset M, Manogaran G, Rashad H, et al. A comprehensive review of quadratic assignment problem: variants, hybrids and applications. J Ambient Intell Human Comput. 2018. https://doi.org/10.1007/s12652-018-0917-x.

2. Aerts B, Cornelissens T, Sörensen K. The joint order batching and picker routing problem: modelled and solved as a clustered vehicle routing problem. Comput Oper Res. 2021;129:105168. https://doi.org/10.1016/j.cor.2020.105168.

3. Azadeh K, De Koster R, Roy D. Robotized warehouse systems: developments and research opportunities. ERIM report series research in management, Erasmus Research Institute of Management. ERS-2017-009-LIS. 2017.

4. Bruch S, Wang X, Bendersky M, Najork M. An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance. In: Proceedings of the 2019 ACM SIGIR international conference on the theory of information retrieval (ICTIR 2019). 2019. pp. 75–8.

5. Cao Z, Qin T, Liu T-Y, Tsai M-F, Li H. Learning to rank: from pairwise approach to listwise approach. In: Proceedings of the 24th international conference on machine learning, vol. 227. 2007. pp. 129–36. https://doi.org/10.1145/1273496.1273513.

6. Cardona LF, Rivera L, Martínez HJ. Analytical study of the fishbone warehouse layout. Int J Log Res Appl. 2012;15(6):365–88.

7. Charris E, Rojas-Reyes J, Montoya-Torres J. The storage location assignment problem: a literature review. Int J Ind Eng Comput. 2018;10.

8. Christen JA, Fox C. Markov Chain Monte Carlo using an approximation. J Comput Graph Stat. 2005;14(4):795–810.

9. Ene S, Öztürk N. Storage location assignment and order picking optimization in the automotive industry. Int J Adv Manuf Technol. 2011;60:1–11. https://doi.org/10.1007/s00170-011-3593-y.

10. Fontana ME, Nepomuceno VS. Multi-criteria approach for products classification and their storage location assignment. Int J Adv Manuf Technol. 2017;88(9):3205–16.

11. Freund Y, Iyer R, Schapire RE, Singer Y. An efficient boosting algorithm for combining preferences. J Mach Learn Res. 2003;4(Nov):933–69.

12. Freund Y, Schapire RE. Experiments with a new boosting algorithm. In: Proceedings of the 13th international conference on machine learning (ICML). 1996.

13. Garfinkel M. Minimizing multi-zone orders in the correlated storage assignment problem. School of Industrial and Systems Engineering, Georgia Institute of Technology. 2005.

14. Henn S, Wäscher G. Tabu search heuristics for the order batching problem in manual order picking systems. Eur J Oper Res. 2012;222(3):484–94.

15. Kallina C, Lynn J. Application of the cube-per-order index rule for stock location in a distribution warehouse. Interfaces. 1976;7(1):37–46.

16. Kofler M, Beham A, Wagner S, Affenzeller M. Affinity based slotting in warehouses with dynamic order patterns. In: Advanced methods and applications in computational intelligence. 2014. pp. 123–43.

17. de Koster R, Le-Duc T, Roodbergen KJ. Design and control of warehouse order picking: a literature review. Eur J Oper Res. 2007;182(2):481–501.

18. Kübler P, Glock CH, Bauernhansl T. A new iterative method for solving the joint dynamic storage location assignment, order batching and picker routing problem in manual picker-to-parts warehouses. Comput Ind Eng. 2020;147:106645.

19. Larco JA, de Koster R, Roodbergen KJ, Dul J. Managing warehouse efficiency and worker discomfort through enhanced storage assignment decisions. Int J Prod Res. 2017;55(21):6407–22. https://doi.org/10.1080/00207543.2016.1165880.

20. Lee IG, Chung SH, Yoon SW. Two-stage storage assignment to minimize travel time and congestion for warehouse order picking operations. Comput Ind Eng. 2020;139:106129. https://doi.org/10.1016/j.cie.2019.106129.

21. Mantel R, Schuur P, Heragu S. Order oriented slotting: a new assignment strategy for warehouses. Eur J Ind Eng. 2007;1:301–16.

22. Oxenstierna J, Malec J, Krueger V. Efficient order batching optimization using seed heuristics and the metropolis algorithm. SN Comput Sci. 2022;4(2):107.

23. Oxenstierna J, Rensburg L, Stuckey P, Krueger V. Storage assignment using nested annealing and hamming distances. In: Proceedings of the 12th international conference on operations research and enterprise systems (ICORES). 2023. pp. 94–105. https://doi.org/10.5220/0011785100003396.

24. Oxenstierna J, van Rensburg LJ, Malec J, Krueger V. Formulation of a layout-agnostic order batching problem. In: Dorronsoro B, Amodeo L, Pavone M, Ruiz P, editors. Optimization and learning. Berlin: Springer International Publishing; 2021. pp. 216–26.

25. Ratliff H, Rosenthal A. Order-picking in a rectangular warehouse: a solvable case of the traveling salesman problem. Oper Res. 1983;31:507–21.

26. Reinelt G. TSPLIB—a traveling salesman problem library. INFORMS J Comput. 1991;3:376–84.

27. Rensburg LJ. Artificial intelligence for warehouse picking optimization—an NP-hard problem [Master's thesis]. Uppsala University. 2019.

28. Roodbergen KJ, Koster R. Routing methods for warehouses with multiple cross aisles. Int J Prod Res. 2001;39(9):1865–83.

29. van Ravenzwaaij D, Cassey P, Brown SD. A simple introduction to Markov Chain Monte-Carlo sampling. Psychon Bull Rev. 2018;25(1):143–54. https://doi.org/10.3758/s13423-016-1015-8.

30. Wu J, Qin T, Chen J, Si H, Lin K. Slotting optimization algorithm of the stereo warehouse. In: Proceedings of the 2012 2nd international conference on computer and information application (ICCIA 2012). 2014. pp. 128–32. https://doi.org/10.2991/iccia.2012.31.

31. Wu X, Lu J, Wu S, Zhou X. Synchronizing time-dependent transportation services: reformulation and solution algorithm using quadratic assignment problem. Transport Res Part B Methodol. 2021;152:140–79. https://doi.org/10.1016/j.trb.2021.08.008.

32. Wutthisirisart P, Noble JS, Chang CA. A two-phased heuristic for relation-based item location. Comput Ind Eng. 2015;82:94–102. https://doi.org/10.1016/j.cie.2015.01.020.

33. Yang N, et al. Evaluation of the joint impact of the storage assignment and order batching in mobile-pod warehouse systems. Math Probl Eng. 2022.

34. Yingde L, Smith JS. Dynamic slotting optimization based on SKUs correlations in a zone-based wave-picking system. In: IMHRC proceedings, vol. 12. 2012.

35. Zhang R-Q, Wang M, Pan X. New model of the storage location assignment problem considering demand correlation pattern. Comput Ind Eng. 2019;129:210–9. https://doi.org/10.1016/j.cie.2019.01.027.

36. Zhou F, De la Torre F. Factorized graph matching. IEEE Trans Pattern Anal Mach Intell. 2016;38(9):1774–89. https://doi.org/10.1109/TPAMI.2015.2501802.

37. Žulj I, Glock CH, Grosse EH, Schneider M. Picker routing and storage-assignment strategies for precedence-constrained order picking. Comput Ind Eng. 2018;123:338–47. https://doi.org/10.1016/j.cie.2018.06.015.


Acknowledgements

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation. We also thank Kairos Logic AB for software.

Funding

Open access funding provided by Lund University. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP).

Author information

Authors and Affiliations

Dept. of Computer Science, Lund University, Lund, Sweden

Johan Oxenstierna, Jacek Malec & Volker Krueger

Kairos Logic AB, Lund, Sweden

Johan Oxenstierna


Corresponding author

Correspondence to Johan Oxenstierna.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest. This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Innovative Intelligent Industrial Production and Logistics 2022” guest edited by Alexander Smirnov, Kurosh Madani, Hervé Panetto and Georg Weichhart.

Appendix

NDCG flowchart: the example below shows how Normalized Discounted Cumulative Gain (NDCG) can be computed from input permutations (products to locations) and approximated (\(f\)) and ground truth (\({f}^{*}\)) values. Note that \(f\left(X\right)\) denotes a sorting of \(X\) according to the cost valuation of elements in the cost step. Also note that relevance values can be formulated in several ways.

Fig. 13 NDCG procedure flowchart

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Oxenstierna, J., Malec, J. & Krueger, V. Storage Assignment Using Nested Metropolis Sampling and Approximations of Order Batching Travel Costs. SN COMPUT. SCI. 5, 477 (2024). https://doi.org/10.1007/s42979-024-02711-w


Received: 04 April 2023

Accepted: 14 February 2024

Published: 23 April 2024

DOI: https://doi.org/10.1007/s42979-024-02711-w


Keywords

  • Storage location assignment problem
  • Order batching problem
  • Quadratic assignment problem
  • Metropolis algorithm
  • Warehousing

