The Evidence-Based Practice of Applied Behavior Analysis

Affiliations.

  • 1 Utah State University, Logan, UT USA.
  • 2 Wing Institute, Oakland, CA USA.
  • 3 Ball State University, Muncie, IN USA.
  • 4 Northern Arizona University, Flagstaff, AZ USA.
  • 5 Oregon State University, Corvallis, OR USA.
  • 6 University of South Carolina, Columbia, SC USA.
  • PMID: 27274958
  • PMCID: PMC4883454
  • DOI: 10.1007/s40614-014-0005-2

Evidence-based practice (EBP) is a model of professional decision-making in which practitioners integrate the best available evidence with client values/context and clinical expertise in order to provide services for their clients. This framework provides behavior analysts with a structure for pervasive use of the best available evidence in the complex settings in which they work. This structure recognizes the need for clear and explicit understanding of the strength of evidence supporting intervention options, the important contextual factors including client values that contribute to decision making, and the key role of clinical expertise in the conceptualization, intervention, and evaluation of cases. Opening the discussion of EBP in this journal, Smith (The Behavior Analyst, 36, 7-33, 2013) raised several key issues related to EBP and applied behavior analysis (ABA). The purpose of this paper is to respond to Smith's arguments and extend the discussion of the relevant issues. Although we support many of Smith's (The Behavior Analyst, 36, 7-33, 2013) points, we contend that Smith's definition of EBP is significantly narrower than definitions that are used in professions with long histories of EBP and that this narrowness conflicts with the principles that drive applied behavior analytic practice. We offer a definition and framework for EBP that aligns with the foundations of ABA and is consistent with well-established definitions of EBP in medicine, psychology, and other professions. In addition to supporting the systematic use of research evidence in behavior analytic decision making, this definition can promote clear communication about treatment decisions across disciplines and with important outside institutions such as insurance companies and granting agencies.

Keywords: Client values; Clinical expertise; Decision making; Empirically supported treatments; Evidence-based practice; Professional judgment.

An Introduction to Applied Behavior Analysis

  • First Online: 15 February 2018

  • Justin B. Leaf,
  • Joseph H. Cihon,
  • Julia L. Ferguson &
  • Sara M. Weinkauf

Part of the book series: Autism and Child Psychopathology Series (ACPS)

Applied behavior analysis (ABA) refers to a systematic approach to understanding behavior. Deeply rooted in the early work of Thorndike, Watson, Pavlov, and Skinner on respondent and operant conditioning, ABA uses scientific observation and principles of behavior to improve and change behaviors of social interest. As a practice, ABA refers to the application of behavior analytic principles to improve socially important behaviors and is especially prominent in the field of developmental disabilities. Each year, more individuals with developmental disabilities, especially those with autism spectrum disorder, have some form of ABA therapy incorporated into their treatment plans. This chapter provides an overview of the history, principles, and applications of applied behavior analysis in the developmental disabilities population.

Allen, K. E., Hart, B. M., Buell, J. S., Harris, F. R., & Wolf, M. M. (1964). Effects of social reinforcement on isolate behavior of a nursery school child. Child Development , 35 , 511–518.

Anderson, C. M., & Long, E. S. (2002). Use of a structured descriptive assessment methodology to identify variables affecting problem behavior. Journal of Applied Behavior Analysis , 35 (2), 137–154.

Austin, J. L., & Bevan, D. (2011). Using differential reinforcement of low rates to reduce children’s requests for teacher attention. Journal of Applied Behavior Analysis , 44 (3), 451–461.

Ayllon, T. (1963). Intensive treatment of psychotic behaviour by stimulus satiation and food reinforcement. Behaviour Research and Therapy , 1 (1), 53–61.

Ayllon, T., & Azrin, N. H. (1965). The measurement and reinforcement of behavior of psychotics. Journal of the Experimental Analysis of Behavior , 8 (6), 357–383.

Ayllon, T., & Azrin, N. H. (1968). Reinforcer sampling: A technique for increasing the behavior of mental patients. Journal of Applied Behavior Analysis , 1 (1), 13–20.

Ayllon, T., & Michael, J. (1959). The psychiatric nurse as a behavioral engineer. Journal of the Experimental Analysis of Behavior , 2 (4), 323–334.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis , 1 (1), 91–97.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1987). Some still-current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis , 20 (4), 313–327.

Bekker, M. J., Cumming, T. D., Osborne, N. K. P., Bruining, A. M., McClean, J. I., & Leland, L. S. (2010). Encouraging electricity savings in a university residential hall through a combination of feedback, visual prompts, and incentives. Journal of Applied Behavior Analysis , 43 (2), 327–331.

Bernal, M. E. (1972). Behavioral treatment of a child’s eating problem. Journal of Behavior Therapy and Experimental Psychiatry , 3 (1), 43–50.

Bloom, S. E., Lambert, J. M., Dayton, E., & Samaha, A. L. (2013). Teacher-conducted trial-based functional analyses as the basis for intervention. Journal of Applied Behavior Analysis , 46 (1), 208–218.

Bostow, D. E., & Bailey, J. B. (1969). Modification of severe disruptive and aggressive behavior using brief timeout and reinforcement procedures. Journal of Applied Behavior Analysis , 2 (1), 31–37.

Carr, E. G., & Durand, V. M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis , 18 (2), 111–126.

Carr, J. E., Howard, J. S., & Martin, N. T. (2015). An update on the Behavior Analyst Certification Board. Panel discussion presented at the Association for Behavior Analysis International 41st annual convention, San Antonio, TX.

Charlop-Christy, M. H., & Haymes, L. K. (1998). Using objects of obsession as token reinforcers for children with autism. Journal of Autism and Developmental Disorders , 28 (3), 189–198.

Chowdhury, M., & Benson, B. A. (2011). Use of differential reinforcement to reduce behavior problems in adults with intellectual disabilities: A methodological review. Research in Developmental Disabilities , 32 (2), 383–394.

Cihon, J. (2015). Yummy starts: A constructional approach to food selectivity with children with autism (Master’s thesis). Retrieved from: http://digital.library.unt.edu/ark:/67531/metadc799526/

Conallen, K., & Reed, P. (2016). A teaching procedure to help children with autistic spectrum disorder to label emotions. Research in Autism Spectrum Disorders , 23 , 63–72.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson.

Cuvo, A. J., Leaf, R. B., & Borakove, L. S. (1978). Teaching janitorial skills to the mentally retarded: Acquisition, generalization, and maintenance. Journal of Applied Behavior Analysis , 11 (3), 345–355.

DiGennaro Reed, F. D., Reed, D. D., Baez, C. N., & Maguire, H. (2011). A parametric analysis of errors of commission during discrete-trial training. Journal of Applied Behavior Analysis , 44 (3), 611–615.

Donaldson, J. M., & Vollmer, T. R. (2011). An evaluation and comparison of time-out procedures with and without release contingencies. Journal of Applied Behavior Analysis , 44 (4), 693–705.

Dorey, N. R., Rosales-Ruiz, J., Smith, R., & Lovelace, B. (2009). Functional analysis and treatment of self-injury in a captive olive baboon. Journal of Applied Behavior Analysis , 42 (4), 785–794.

Dotson, W. H., Richman, D. M., Abby, L., Thompson, S., & Plotner, A. (2013). Teaching skills related to self-employment to adults with developmental disabilities: An analog analysis. Research in Developmental Disabilities , 34 (8), 2336–2350.

Durand, V. M. (1999). Functional communication training using assistive devices: Recruiting natural communities of reinforcement. Journal of Applied Behavior Analysis , 32 (3), 247–267.

Durand, V. M., & Carr, E. G. (1991). Functional communication training to reduce challenging behavior: Maintenance and application in new settings. Journal of Applied Behavior Analysis , 24 (2), 251–264.

Ellis, J., & Glenn, S. S. (1995). Behavior-analytic repertoires: Where will they come from and how can they be maintained? The Behavior Analyst , 18 (2), 285–292.

Etzel, B. C., & Gewirtz, J. L. (1967). Experimental modification of caretaker-maintained high-rate operant crying in a 6- and a 20-week-old infant (Infans tyrannotearus): Extinction of crying with reinforcement of eye contact and smiling. Journal of Experimental Child Psychology , 5 (3), 303–317.

Fabiano, G. A., Pelham, W. E., Jr., Manos, M. J., Gnagy, E. M., Chronis, A. M., Onyango, A. N., et al. (2004). An evaluation of three time-out procedures for children with attention-deficit/hyperactivity disorder. Behavior Therapy , 35 (3), 449–469.

Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement . New York: Appleton-Century-Crofts.

Fisher, W., Piazza, C., Cataldo, M., Harrell, R., Jefferson, G., & Conner, R. (1993). Functional communication training with and without extinction and punishment. Journal of Applied Behavior Analysis , 26 (1), 23–36.

Fyffe, C. E., Kahng, S., Fittro, E., & Russell, D. (2004). Functional analysis and treatment of inappropriate sexual behavior. Journal of Applied Behavior Analysis , 37 (3), 401–404.

Ghezzi, P. M. (2007). Discrete trials teaching. Psychology in the Schools , 44 (7), 667–679.

Green, G. (2001). Behavior analytic instruction for learners with autism advances in stimulus control technology. Focus on Autism and Other Developmental Disabilities , 16 (2), 72–85.

Green, G. R., Linsk, N. L., & Pinkston, E. M. (1986). Modification of verbal behavior of the mentally impaired elderly by their spouses. Journal of Applied Behavior Analysis , 19 (4), 329–336.

Grow, L., & LeBlanc, L. (2013). Teaching receptive language skills: Recommendations for instructors. Behavior Analysis in Practice , 6 (1), 56–75.

Gunby, K. V., & Rapp, J. T. (2014). The use of behavioral skills training and in situ feedback to protect children with autism from abduction lures. Journal of Applied Behavior Analysis , 47 (4), 856–860.

Hagopian, L. P., Farrell, D. A., & Amari, A. (1996). Treating total liquid refusal with backward chaining and fading. Journal of Applied Behavior Analysis , 29 (4), 573–575.

Hagopian, L. P., Fisher, W. W., Sullivan, M. T., Acquisto, J., & LeBlanc, L. A. (1998). Effectiveness of functional communication training with and without extinction and punishment: A summary of 21 inpatient cases. Journal of Applied Behavior Analysis , 31 (2), 211–235.

Hall, R. V., Lund, D., & Jackson, D. (1968). Effects of teacher attention on study behavior. Journal of Applied Behavior Analysis , 1 (r1), 1–12.

Hanley, G. P., Heal, N. A., Tiger, J. H., & Ingvarsson, E. T. (2007). Evaluation of a class wide teaching program for developing preschool life skills. Journal of Applied Behavior Analysis , 40 (2), 277–300.

Hanley, G. P., Jin, C. S., Vanselow, N. R., & Hanratty, L. A. (2014). Producing meaningful improvements in problem behavior of children with autism via synthesized analyses and treatments. Journal of Applied Behavior Analysis , 47 (1), 16–36.

Harchik, A. E., Sherman, J. A., & Sheldon, J. B. (1992). The use of self-management procedures by people with developmental disabilities: A brief review. Research in Developmental Disabilities: A Multidisciplinary Journal , 13 (3), 211–227.

Hart, B., & Risley, T. R. (1975). Incidental teaching of language in the preschool. Journal of Applied Behavior Analysis , 8 (4), 411–420.

Hart, B., & Risley, T. R. (1978). Promoting productive language through incidental teaching. Education and Urban Society , 10 , 407–429.

Hart, B. M., & Risley, T. R. (1968). Establishing use of descriptive adjectives in the spontaneous speech of disadvantaged preschool children. Journal of Applied Behavior Analysis , 1 (2), 109–120.

Heffernan, L., & Lyons, D. (2016). Differential reinforcement of other behaviour for the reduction of severe nail biting. Behavior Analysis in Practice , 9 (3), 253–256.

Hersen, M., Eisler, R. M., Alford, G. S., & Agras, W. S. (1973). Effects of token economy on neurotic depression: An experimental analysis. Behavior Therapy , 4 (3), 392–397.

Ingvarsson, E. T., & Hollobaugh, T. (2010). Acquisition of intraverbal behavior: Teaching children with autism to mand for answers to questions. Journal of Applied Behavior Analysis , 43 (1), 1–17.

Lovaas, O. I., Koegel, R., Simmons, J. Q., & Long, J. S. (1973). Some generalization and follow-up measures on autistic children in behavior therapy. Journal of Applied Behavior Analysis , 6 (1), 131–165.

Iwata, B. A., DeLeon, I. G., & Roscoe, E. M. (2013). Reliability and validity of the functional analysis screening tool. Journal of Applied Behavior Analysis , 46 (1), 271–284.

Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1982). Toward a functional analysis of self-injury. Analysis and Intervention in Developmental Disabilities , 2 (1), 3–20.

Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis , 27 (2), 197–209.

Jessel, J., & Ingvarsson, E. T. (2016). Recent advances in applied research on DRO procedures. Journal of Applied Behavior Analysis , 49 , 991–995.

Johnston, J. M., & Pennypacker, H. S. (1993). Strategies and tactics of human behavioral research (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Jowett Hirst, E. S., Dozier, C. L., & Payne, S. W. (2016). Efficacy of and preference for reinforcement and response cost in token economies. Journal of Applied Behavior Analysis , 49 (2), 329–345.

Keller, F. S., & Schoenfeld, W. N. (1950). Principles of psychology: A systematic text in the science of behavior . New York, NY: Appleton-Century-Crofts.

Koegel, R. L., Bharoocha, A. A., Ribnick, C. B., Ribnick, R. C., Bucio, M. O., Fredeen, R. M., & Koegel, L. K. (2012). Using individualized reinforcers and hierarchical exposure to increase food flexibility in children with autism spectrum disorders. Journal of Autism and Developmental Disorders , 42 (8), 1574–1581.

Leaf, J. B., Cihon, J. H., Leaf, R., McEachin, J., & Taubman, M. (2016). A progressive approach to discrete trial teaching: Some current guidelines. International Electronic Journal of Elementary Education , 9 (2), 261.

Leaf, J. B., Leaf, R., McEachin, J., Taubman, M., Ala’i-Rosales, S., Ross, R. K., et al. (2016). Applied behavior analysis is a science and, therefore, progressive. Journal of Autism and Developmental Disorders , 46 (2), 720–731.

Leaf, J. B., Leaf, R., Taubman, M., McEachin, J., & Delmolino, L. (2014). Comparison of flexible prompt fading to error correction for children with autism spectrum disorder. Journal of Developmental and Physical Disabilities , 26 (2), 203–224.

Leaf, J. B., Sheldon, J. B., & Sherman, J. A. (2010). Comparison of simultaneous prompting and no-no prompting in two-choice discrimination learning with children with autism. Journal of Applied Behavior Analysis , 43 (2), 215–228.

Leaf, J. B., Townley-Cochran, D., Taubman, M., Cihon, J. H., Oppenheim-Leaf, M. L., Kassardjian, A., et al. (2015). The teaching interaction procedure and behavioral skills training for individuals diagnosed with autism spectrum disorder: A review and commentary. Review Journal of Autism and Developmental Disorders , 2 (4), 402–413.

Leaf, R., & McEachin, J. (1999). A work in progress: Behavior management strategies and a curriculum for intensive behavioral treatment of autism . New York, NY: DRL Books.

Leaf, R., McEachin, J., & Taubman, M. (2012). A work in progress: Companion series . New York: DRL.

Leaf, J. B., Leaf, J. A., Alcalay, A., Kassardjian, A., Tsuji, K., Dale, S., … Leaf, R. (2016). Comparison of most-to-least prompting to flexible prompt fading for children with autism spectrum disorder. Exceptionality , 24 (2), 109–122.

Lerman, D. C., & Iwata, B. A. (1993). Descriptive and experimental analyses of variables maintaining self-injurious behavior. Journal of Applied Behavior Analysis , 26 (3), 293–319.

Lerman, D. C., Valentino, A. L., & Leblanc, L. A. (2016). Discrete trial training. In: Early intervention for young children with autism spectrum disorder (pp. 47–83). Cham, Switzerland: Springer International Publishing.

Lichtenstein, E. (1997). Behavioral research contributions and needs in cancer prevention and control: Tobacco use prevention and cessation. Preventive Medicine , 26 (5), S57–S63.

Lovaas, O. I. (1981). Teaching developmentally disabled children: The me book . Austin, TX: PRO-ED Books.

Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology , 55 (1), 3–9.

MacDuff, G. S., Krantz, P. J., & McClannahan, L. E. (1993). Teaching children with autism to use photographic activity schedules: Maintenance and generalization of complex response chains. Journal of Applied Behavior Analysis , 26 (1), 89–97.

MacDuff, G. S., Krantz, P. J., & McClannahan, L. E. (2001). Prompts and prompt-fading strategies for people with autism. In C. Maurice, G. Green, & R. M. Foxx (Eds.), Making a difference behavioral intervention for autism (1st ed., pp. 37–50). Austin, TX: Pro Ed.

Mace, F. C., & Heller, M. (1990). A comparison of exclusion time-out and contingent observation for reducing severe disruptive behavior in a 7-year-old boy. Child and Family Behavior Therapy , 12 (1), 57–68.

McGee, G. G., Almeida, M. C., Sulzer-Azaroff, B., & Feldman, R. S. (1992). Promoting reciprocal interactions via peer incidental teaching. Journal of Applied Behavior Analysis , 25 (1), 117–126.

McGee, G. G., Krantz, P. J., Mason, D., & McClannahan, L. E. (1983). A modified incidental-teaching procedure for autistic youth: Acquisition and generalization of receptive object labels. Journal of Applied Behavior Analysis , 16 (3), 329–338.

McGee, G. G., Krantz, P. J., & McClannahan, L. E. (1985). The facilitative effects of incidental teaching on preposition use by autistic children. Journal of Applied Behavior Analysis , 18 (1), 17–31.

McGee, G. G., Krantz, P. J., & McClannahan, L. E. (1986). An extension of incidental teaching procedures to reading instruction for autistic children. Journal of Applied Behavior Analysis , 19 (2), 147–157.

McGee, G. G., Morrier, M. J., & Daly, T. (1999). An incidental teaching approach to early intervention for toddlers with autism. Journal of the Association for Persons with Severe Handicaps , 24 (3), 133–146.

McGoey, K. E., & Dupaul, G. J. (2000). Token reinforcement and response cost procedures: Reducing the disruptive behavior of preschool children with attention-deficit/hyperactivity disorder. School Psychology Quarterly , 15 (3), 330–343.

Michael, J. (1988). Establishing operations and the mand. The Analysis of Verbal Behavior , 6 , 3–9.

Miller, A. J., & Kratochwill, T. R. (1979). Reduction of frequent stomachache complaints by time out. Behavior Therapy , 10 (2), 211–218.

Miltenberger, R. G. (2012). Behavioral skills training procedures. In Behavior modification: Principles and procedures (pp. 251–269). Belmont, CA: Wadsworth, Cengage Learning.

Myers, D. V. (1975). Extinction, DRO, and response-cost procedures for eliminating self-injurious behavior: A case study. Behaviour Research and Therapy , 13 (2–3), 189–191.

Neufeld, A., & Fantuzzo, J. W. (1987). Treatment of severe self-injurious behavior by the mentally retarded using the bubble helmet and differential reinforcement procedures. Journal of Behavior Therapy and Experimental Psychiatry , 18 (2), 127–136.

Ng, A. H. S., Schulze, K., Rudrud, E., & Leaf, J. B. (2016). Using the teaching interactions procedure to teach social skills to children with autism and intellectual disability. American Journal on Intellectual and Developmental Disabilities , 121 (6), 501–519.

Nuzzolo-Gomez, R., Leonard, M. A., Ortiz, E., Rivera, C. M., & Greer, R. D. (2002). Teaching children with autism to prefer books or toys over stereotypy or passivity. Journal of Positive Behavior Interventions , 4 (2), 80–87.

Pendergrass, V. E. (1971). Effects of length of time-out from positive reinforcement and schedule of application in suppression of aggressive behavior. The Psychological Record , 21 (1), 75–80.

Phillips, E. L. (1968). Achievement place: Token reinforcement procedures in a home-style rehabilitation setting for “pre-delinquent” boys. Journal of Applied Behavior Analysis , 1 (3), 213–223.

Phillips, E. L., Phillips, E. A., Fixsen, D. L., & Wolf, M. M. (1971). Achievement place: Modification of the behaviors of pre-delinquent boys within a token economy. Journal of Applied Behavior Analysis , 4 (1), 45–59.

Phillips, E. L., Phillips, E. A., Fixsen, D. L., & Wolf, M. M. (1974). The teaching-family handbook (2nd ed.). Lawrence, KS: University Press of Kansas.

Piazza, C. C., Fisher, W. W., & Sherer, M. (1997). Treatment of multiple sleep problems in children with developmental disabilities: Faded bedtime with response cost versus bedtime scheduling. Developmental Medicine and Child Neurology , 39 (6), 414–418.

Rayner, C. (2011). Teaching students with autism to tie a shoelace knot using video prompting and backward chaining. Developmental Neurorehabilitation , 14 (6), 339–347.

Rehfeldt, R. A., & Chambers, M. R. (2003). Functional analysis and treatment of verbal perseverations displayed by an adult with autism. Journal of Applied Behavior Analysis , 36 (2), 259–261.

Reynolds, G. S. (1960). Behavioral contrast. Journal of the Experimental Analysis of Behavior , 4 (1), 57–71.

Ricciardi, J. N., Luiselli, J. K., & Camare, M. (2006). Shaping approach responses as intervention for specific phobia in a child with autism. Journal of Applied Behavior Analysis , 39 (4), 445–448.

Ritchie, R. J. (1976). A token economy system for changing controlling behavior in the chronic pain patient. Journal of Behavior Therapy and Experimental Psychiatry , 7 (4), 341–343.

Ritschl, C., Mongrella, J., & Presbie, R. J. (1972). Group time-out from rock and roll music and out-of-seat behavior of handicapped children while riding a school bus. Psychological Reports , 31 (3), 967–973.

Rooker, G. W., Iwata, B. A., Harper, J. M., Fahmie, T. A., & Camp, E. M. (2011). False-positive tangible outcomes of functional analyses. Journal of Applied Behavior Analysis , 44 (4), 737–745.

Sanders, M. R. (1999). Triple p-positive parenting program: Towards an empirically validated multilevel parenting and family support strategy for the prevention of behavior and emotional problems in children. Clinical Child and Family Psychology Review , 2 (2), 71–90.

Sherman, J. A. (1963). Reinstatement of verbal behavior in a psychotic by reinforcement methods. The Journal of Speech and Hearing Disorders , 28 , 398–401.

Shillingsburg, M. A., Bowen, C. N., & Shapiro, S. K. (2014). Increasing social approach and decreasing social avoidance in children with autism spectrum disorder during discrete trial training. Research in Autism Spectrum Disorders , 8 (11), 1443–1453.

Shook, G. L., Ala’i-Rosales, S., & Glenn, S. S. (2002). Training and certifying behavior analysts. Behavior Modification , 26 (1), 27–48.

Sigafoos, J., & Meikle, B. (1996). Functional communication training for the treatment of multiply determined challenging behavior in two boys with autism. Behavior Modification , 20 (1), 60–84.

Silverman, K., Roll, J. M., & Higgins, S. T. (2008). Introduction to the special issue on the behavior analysis and treatment of drug addiction. Journal of Applied Behavior Analysis , 41 (4), 471–480.

Skinner, B. F. (1953). Science and human behavior . New York, NY: Free Press.

Slocum, S. K., & Tiger, J. H. (2011). An assessment of the efficiency of and child preference for forward and backward chaining. Journal of Applied Behavior Analysis , 44 (4), 793–805.

Smith, C. M., Smith, R. G., Dracobly, J. D., & Pace, A. P. (2012). Multiple-respondent anecdotal assessments: An analysis of interrater agreement and correspondence with analogue assessment outcomes. Journal of Applied Behavior Analysis , 45 (4), 779–795.

Smith, T. (2001). Discrete trial training in the treatment of autism. Focus on Autism and Other Developmental Disabilities , 16 (2), 86–92.

Soluaga, D., Leaf, J. B., Taubman, M., McEachin, J., & Leaf, R. (2008). A comparison of flexible prompt fading and constant time delay for five children with autism. Research in Autism Spectrum Disorders , 2 (4), 753–765.

Sprute, K. A., & Williams, R. L. (1990). Effects of a group response cost contingency procedure on the rate of classroom interruptions with emotionally disturbed secondary students. Child and Family Behavior Therapy , 12 (2), 1–12.

Sulzer-Azaroff, B., & Mayer, G. R. (1977). Applying behavior analysis procedures with children and youth . New York, NY: Holt, Rinehart, & Winston.

Tiano, J. D., Fortson, B. L., McNeil, C. B., & Humphreys, L. A. (2005). Managing classroom behavior of head start children using response cost and token economy procedures. Journal of Early and Intensive Behavior Intervention , 2 (1), 28–39.

Tiger, J. H., Hanley, G. P., & Bruzek, J. (2008). Functional communication training: A review and practical guide. Behavior Analysis in Practice , 1 (1), 16–23.

Touchette, P. E., MacDonald, R. F., & Langer, S. N. (1985). A scatter plot for identifying stimulus control of problem behavior. Journal of Applied Behavior Analysis , 18 (4), 343–351.

Wacker, D. P., Lee, J. F., Dalmau, Y. C. P., Kopelman, T. G., Lindgren, S. D., Kuhle, J., et al. (2013). Conducting functional analyses of problem behavior via telehealth. Journal of Applied Behavior Analysis , 46 (1), 31–46.

Wacker, D. P., Steege, M. W., Northup, J., Sasso, G., Berg, W., Reimers, T., et al. (1990). A component analysis of functional communication training across three topographies of severe behavior problems. Journal of Applied Behavior Analysis , 23 (4), 417–429.

Weiher, R. G., & Harman, R. E. (1975). The use of omission training to reduce self-injurious behavior in a retarded child. Behavior Therapy , 6 (2), 261–268.

Werts, M. G., Caldwell, N. K., & Wolery, M. (1996). Peer modeling of response chains: Observational learning by students with disabilities. Journal of Applied Behavior Analysis , 29 (1), 53–66.

White, G. D., Nielsen, G., & Johnson, S. M. (1972). Timeout duration and the suppression of deviant behavior in children. Journal of Applied Behavior Analysis , 5 (2), 111–120.

Wolery, M., Ault, M. J., & Doyle, P. M. (1992). Teaching students with moderate to severe disabilities: Use of response prompting strategies . New York, NY: Longman.

Wolf, M., Risley, T., & Mees, H. (1963). Application of operant conditioning procedures to the behaviour problems of an autistic child. Behaviour Research and Therapy , 1 (2), 305–312.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis , 11 (2), 203–214.

Wong, C. S., Kasari, C., Freeman, S., & Paparella, T. (2007). The acquisition and generalization of joint attention and symbolic play skills in young children with autism. Journal of the Association for Persons with Severe Handicaps , 32 (2), 101–109.

Author information

Authors and Affiliations

Autism Partnership Foundation, Seal Beach, CA, USA

Justin B. Leaf, Joseph H. Cihon & Julia L. Ferguson

Endicott College, Beverly, MA, USA

Joseph H. Cihon

JBA Institute, Aliso Viejo, CA, USA

Sara M. Weinkauf

Corresponding author

Correspondence to Justin B. Leaf .

Editor information

Editors and Affiliations

Department of Psychology, Louisiana State University, Baton Rouge, Louisiana, USA

Johnny L. Matson

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Leaf, J.B., Cihon, J.H., Ferguson, J.L., Weinkauf, S.M. (2017). An Introduction to Applied Behavior Analysis. In: Matson, J. (Ed.) Handbook of Childhood Psychopathology and Developmental Disabilities Treatment. Autism and Child Psychopathology Series. Springer, Cham. https://doi.org/10.1007/978-3-319-71210-9_3

DOI: https://doi.org/10.1007/978-3-319-71210-9_3

Published: 15 February 2018

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-71209-3

Online ISBN: 978-3-319-71210-9

eBook Packages: Behavioral Science and Psychology (R0)

Behavior Analysis in Practice, 16(1), March 2023. PMCID: PMC10050523

Research Ethics for Behavior Analysts in Practice

Matthew P. Normand

Department of Psychology, University of the Pacific, 3601 Pacific Avenue, Stockton, CA 95211 USA

Hailey E. Donohue

Behavior analysts in practice have an advantage over many others in the helping professions—they have at their disposal a robust science of behavior change informed primarily by single-case experimental research designs. This is advantageous because the research literature is focused on individual behavior change and has direct relevance to behavior analysts who need to change the behavior of individuals in need. Also, the same experimental designs used to advance the basic and applied sciences can be used to evaluate and refine specific procedures as they are put into practice. Thus, behavior-analytic research and practice are often intertwined. However, when behavior analysts in practice conduct research and use their own clients as participants, several important ethical issues need to be considered. Research with human participants is subject to careful ethical oversight, but the ethical guidelines that have been developed are usually described in terms of research conducted by nonpractitioners working in universities or institutions. This article focuses on several areas of special concern when conducting research in practice settings, including dual relationships and conflicts of interest, obtaining informed consent, and using ethical review panels.

Ethics Code for Behavior Analysts: 6.01 Conforming with Laws and Regulations in Research. Behavior analysts plan and conduct research in a manner consistent with all applicable laws and regulations, as well as requirements by organizations and institutions governing research activity.

Behavior analysts in practice have an advantage over many others in the helping professions—they have at their disposal a robust science of behavior change informed primarily by single-case experimental research designs. The advantage is really twofold. First, the research literature is largely focused on individual behavior change and thus has direct relevance to behavior analysts who seek to change the behavior of individuals in need. Second, the same experimental designs used to advance the basic and applied sciences can be used to evaluate and refine specific procedures as they are put into practice, and to evaluate the effectiveness of practice in general. Behavior-analytic research and practice are often intertwined.

Outside of behavior analysis, many psychologists in practice say they rely primarily on clinical experience and relatively little on the research literature to inform what they do. Indeed, a fair amount of evidence spanning several decades indicates that psychologists in practice say that they infrequently contact the research literature and seldom change their practice based on research findings (e.g., Cohen et al., 1986 ; Gyani et al., 2014 ; Morrow-Bradley & Elliott, 1986 ; Safran et al., 2011 ; Stewart & Chambless, 2007 ; Stewart et al., 2012 ). Moreover, it seems that practicing psychologists rarely conduct research themselves (Cohen et al., 1986 ; Goldfried & Wolfe, 1996 ; Morrow-Bradley & Elliott, 1986 ; Norcross & Karpiak, 2012 ). Several possible reasons for this have been discussed (Cohen et al., 1986 ; Goldfried & Wolfe, 1996 ; Morrow-Bradley & Elliott, 1986 ), but survey research suggests that one factor is the kinds of experimental designs employed in most psychology research (Goldfried & Wolfe, 1996 ; Morrow-Bradley & Elliott, 1986 ). The reliance on large group designs and aggregate outcome data means that psychologists in practice do not know how any individual will respond to a particular intervention. It also means that most psychologists in practice will not be able to evaluate their own effectiveness using the research designs most common in the literature, because the arrangement of large groups using random assignment is simply not feasible for the psychologist who sees one person at a time.
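
For illustration only, the sketch below (in Python; not part of the original article, with invented data) shows the kind of session-by-session, single-case summary a practitioner could compute for an individual client, the sort of per-person evaluation that aggregate group designs cannot provide.

```python
# Hypothetical illustration only: a simple per-client summary of single-case
# data, comparing a baseline (A) phase with an intervention (B) phase.
# All numbers are invented.

baseline_sessions = [12, 14, 11, 13, 12]      # e.g., problem-behavior count per session
intervention_sessions = [9, 7, 5, 4, 3, 2]    # counts after the intervention begins

def mean(values):
    return sum(values) / len(values)

print(f"Baseline mean:     {mean(baseline_sessions):.1f} responses per session")
print(f"Intervention mean: {mean(intervention_sessions):.1f} responses per session")

# A practitioner would ordinarily inspect the full session-by-session series
# (level, trend, and variability), not just phase means, before drawing any
# conclusion about effectiveness for this individual.
```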

For these and other reasons, some have called for the wider adoption of single-case experimental designs as a research strategy in psychology (e.g., Barlow & Nock, 2009; Morgan & Morgan, 2001; Normand, 2016). It is fortunate that no such call is needed for behavior analysts in practice because single-case designs already fuel the basic and applied sciences that inform what they do. In addition to consuming that research literature and using single-case experimental designs to evaluate day-to-day clinical practices, many behavior analysts in practice also contribute to the research literature by conducting experimental evaluations of behavioral assessments and interventions. A cursory review of the authors listed on research published in applied behavior analysis journals suggests that many practitioners are also researchers, because many of the contributors list clinical-service affiliations in their bylines. This can be a boon to scientists and consumers alike, but it also raises some important ethical concerns regarding clinical research.

All research with human participants is subject to careful ethical oversight, but the ethical guidelines that have been developed are usually described in terms of research conducted by nonpractitioners working in universities or research institutes. When behavior analysts in practice conduct research using their own clients as participants, several important ethical issues need to be considered. Clear ethical guidance about practitioner-led research in the realm of psychology is difficult to find, perhaps because few psychologists in practice conduct research. The same holds true for various allied professions, such as education and special education, where the intersection of research and practice is typically depicted in terms of independent researchers collaborating with practice sites. For example, the British Educational Research Association ( 2018 ) offers the following guidance:

The institutions and settings within which the research is set also have an interest in the research, and ought to be considered in the process of gaining consent. Researchers should think about whether they should approach gatekeepers before directly approaching participants, and about whether they should adopt an institution’s own ethical approval and safeguarding procedures; this is usually a requirement. (p. 9)

It is clear from this statement that the code assumes researchers will not themselves be practitioners working in the institution where the research is being conducted.

The American Educational Research Association (2011) Code of Ethics does specifically advise, "In planning research, education researchers select research participants with whom they have no other relationship (e.g., teacher, supervisor, mentor, or employer)" (p. 152). The code briefly acknowledges that research will sometimes need to involve participants with whom the researcher has another relationship, but such situations, and others related to practice, are not otherwise addressed anywhere else in the ethics code. The medical literature, however, is replete with discussions of the ethical issues arising when a physician serves as both a clinician and a researcher in practice settings. Although the circumstances of physicians are not identical to those of behavior analysts in practice, there are some similarities, including face-to-face interactions, involvement in both the assessment and treatment of various problems, and the supervision of other professionals involved in assessment and treatment. Because of these similarities, and despite the differences, guidance from the medical literature will be discussed where relevant.

Specific to behavior analysis, Section 6 of the BACB Ethics Code for Behavior Analysts contains 11 entries pertaining to research ethics. Most of these entries do not suggest any unique concerns for behavior analysts in practice. That is, they apply similarly to researchers working in a variety of circumstances. For example, Section 6.06 stipulates that behavior analysts must be competent to conduct research before doing so, and this applies no matter the setting in which the research takes place. It is true that many practitioners might have earned degrees from graduate training programs that did not effectively teach research skills, but the same might well be true of any graduate training program. The key issue is whether the behavior analyst has developed the requisite skills, and this is not directly affected by the setting and concomitant activities in which they are engaged. This article focuses on selected areas of Section 6 that do suggest unique concerns for behavior analysts conducting research alongside their practice.

Dual Relationships and Conflicts of Interest

Ethics Code for Behavior Analysts: 6.03 Research in Service Delivery. Behavior analysts conducting research in the context of service delivery must arrange research activities such that client services and client welfare are prioritized. In these situations, behavior analysts must comply with all ethics requirements for both service delivery and research within the Code. When professional services are offered as an incentive for research participation, behavior analysts clarify the nature of the services, and any potential risks, obligations, and limitations for all parties.

Dual Relationships

A dual relationship is problematic because there are similar, salient discriminative stimuli across two different situations that should otherwise evoke different behavior. The more stimuli that are common to the two situations, the more difficult the discrimination. In any dual relationship, the same people are present in two circumstances that differ in other important respects. Because relationships are characterized by social interaction, the individuals involved are arguably the most salient stimuli, and those people are the mediators of other discriminative, motivating, reinforcing, and punishing interactions. When the same people are interacting in similar ways in the same physical environment, such as a treatment room or office, this leaves only more subtle discriminative stimuli to occasion differential responding. Those subtle discriminative stimuli might be insufficient, making necessary the arrangement of supplementary discriminative events across situations.

In clinical research, a dual (role) relationship exists when the researcher also provides clinical services or otherwise supervises or manages the delivery of clinical services to the research participant. Acting in two or more such roles blurs the line between research and practice (Hay-Smith et al., 2016 ). This can be especially problematic when conducting assessment or intervention research because the activities of each role are similar, if not identical. Thus, the participant can have a difficult time discriminating circumstances involving research activities and those involving clinical activities. This can lead to problems, such as when the circumstances involving research activities evoke behavior relevant to the circumstances in which clinical services are provided. For example, requests for subtle alterations to an intervention procedure might be easily accommodated as part of clinical practice but not as easily accommodated when a research protocol demands consistency across time and participants. Being denied the alterations in the research setting could have an impact on subsequent interactions in the clinical setting.

Research in clinical settings raises questions regarding the distinction between research endeavors and clinical interventions for the researcher, as well. Of primary importance is determining when some activity qualifies as research, which might not be straightforward. According to the BACB, “the use of an experimental design does not by itself constitute research” (Behavior Analyst Certification Board, 2020 , p. 8). Instead, research and practice can be distinguished, in part, by asking whether the focus of the activity is on developing generalizable knowledge or on helping an individual client, respectively (Brody et al., 2005 ). That said, it is helpful to ask whether the intent is to publish or present the results of the activity, although this will not always be known ahead of time; plans to publish or present findings might be made only after an innovative intervention has been completed (Brody et al., 2005 ). For behavior analysts, “any data-based activity, including analysis of preexisting data, designed to generate generalizable knowledge for the discipline,” is considered research (Behavior Analyst Certification Board, 2020 , p. 8). Determining whether an activity constitutes research is important because certain ethical considerations (e.g., Section 06 of the BACB Ethics Code) can come into play. If activities are viewed purely as clinical practice, some important issues, such as those discussed in this article, might be overlooked.

Once research activities are identified, it is prudent to state, in writing, which activities are research activities and to provide this information to the participant. Behavior analysts play a dual role when they conduct research alongside clinical activities with the same person or persons, even if the activities do not change appreciably for the sake of the research. One approach to handling such dual relationships is to avoid them by having different people provide clinical services and administer research procedures. However, this does not necessarily remove the dual role in situations where all parties know each other and interact with one another on a regular basis. In small agencies, it might not be possible to keep one or more persons strictly in a research or clinical services role. In addition, in many cases it might not be appropriate for the research procedures to be administered by someone other than clinicians already working with the research participant as a client. If the same person or persons will sometimes be providing clinical services and sometimes be acting as a researcher, care should be taken to explain the differences in those roles and what can be expected from the various people involved during those different situations. Also, if possible, the activities could be conducted in different settings to enhance the discriminative properties of clinical and research activities.

Conflicts of Interest

Ethics Code for Behavior Analysts: 6.07 Conflict of Interest in Research and Publication. When conducting research, behavior analysts identify, disclose, and address conflicts of interest (e.g., personal, financial, organization related, service related). They also identify, disclose, and address conflicts of interest in their publication and editorial activities.

Conflicts of interest are problematic because motivating operations (MOs) and discriminative stimuli (SDs) relevant to two or more conflicting, perhaps incompatible, classes of behavior are present. In such situations, the class of behavior that is mandated or otherwise expected might not be forthcoming because the MOs and SDs relevant to an incompatible or alternative class of behavior are prepotent. When conflicts of interest are unavoidable, arranging sources of countercontrol (Delprato, 2002; Skinner, 1953) might be the best course of action. This can be done, in part, by clearly stating the conflicting circumstances so that they can, to some extent, also influence the behavior of the participant. In addition, arranging supervision from other people for whom the competing MOs and SDs do not occasion conflicting behavior could help.

Financial Conflicts of Interest

The line between clinical research and the promotion of products or services from which a provider stands to gain, financially or otherwise, also can be difficult to see. In an ideal situation, research is a matter of uncovering functional relations to better predict and control behavior. In putting behavior-analytic research into practice, the prediction and control should first and foremost improve the lot of the consumer. But other variables also influence research practices. Publishing noteworthy research findings can lead to prestige, career advancement, and financial gain, among other things (Chivers, 2019). When the conduct of a study and the reporting of the results are strongly influenced by variables other than the prediction and control of the behavior being studied, there is a risk of slipping toward advocacy research (Johnston & Pennypacker, 2009, p. 64). That is, setting out to show that something works rather than evaluating whether something works.

In medicine, for example, there are circumstances in which a physician is also acting as a researcher when they have invented the device or method under study. In such circumstances, it is considered inappropriate for the physician-researcher to be the one to obtain informed consent from participants because the physician’s “conflicting motivations unacceptably compromise their ability to provide recommendations that serve the patient’s best interests” (Morain et al., 2019 , p. 15). Although behavior analysts in practice are more likely to develop specific intervention procedures than, say, marketable medical devices, intervention procedures can be profitable and pose a conflict of interest. For example, practitioners sometimes manualize the procedures they develop and then profit by selling these manuals, giving workshops on related topics in which the manuals are part of the workshop fee, and so on. This introduces a range of extraneous controlling variables that can influence the practitioner’s behavior while engaged in both research and practice. Showing that the procedures one uses, especially those that one develops, are particularly effective can affect the bottom line. From one point of view, the reports of those findings can be seen as advertisements for the quality of services provided and justification for the fees that are charged.

Conflicts of interest can also involve more subtle sources of control. An interesting example receiving attention in recent years is the payment of speaking fees to psychologists of some renown, especially when those speaking fees can be many thousands of dollars per event (Chivers, 2019). Although behavior analysts might not regularly command five- or six-figure speaking fees, some regularly command relatively large consulting or workshop fees. In most cases, the speaking fee is not dependent on any specific research finding, but the research findings an individual has reported are usually important to their reputation and, hence, to the invitation to speak or consult for a fee or the decision someone makes to attend a workshop. When a psychologist or other scientist gains a reputation for a certain line of research with certain kinds of findings, the incentives to defend that research and those findings grow—even more so when fame and finances are tied up in it all.

This is not to say that behavior analysts should not develop and sell manualized procedures, publish clinical research, conduct workshops, and the like. But such activities do create conflicts of interest that need to be clearly identified. Careful steps should be taken in the way research is conducted such that those conflicts of interest are minimized. At minimum, it is important to disclose such conflicts of interest when publishing related research findings and other academic papers (and to disclose them clearly during speaking engagements, workshops, and consulting). At maximum, it might be appropriate to refrain from profiting until independent evidence, free of undue conflicts of interest, is available to support the products or procedures being promoted.

Conducting research that involves the delivery of clinical services raises an additional financial consideration. One assumes that the participants already are paying for clinical services and, if research activities occur alongside those clinical services, they might then be paying to participate in the research, in some sense. When experimental procedures co-occur with standard procedures, the conservative solution would be to waive all fees that would otherwise accrue. If the research is actual research, and not advocacy research (Johnston & Pennypacker, 2009, p. 64), then the investigators are presumably unsure about the effects of the procedures being evaluated. Charging someone to receive services with unknown outcomes would not be ethically defensible. However, things get murky when other necessary clinical services (functional assessments, preference assessments) are provided alongside, but independent of, the experimental procedures. The murkiest situation is probably also a common one, wherein specific clinical services (such as an assessment for a specific problem behavior) are not billed independently and, instead, an hourly (or equivalent) fee is charged that covers all services. Again, in such cases we would suggest waiving all fees during the times the experimental procedures are being delivered.

Other Conflicts of Interest

Financial gain, prestige, and career advancement are not the only conflicts of interest that can arise. Especially pertinent to the conduct of assessment and intervention research is when decisions about treatment for the benefit of the client conflict with decisions about procedures dictated by the research protocol. As clearly stated in section 6.03 of the Ethics Code for Behavior Analysts , “Behavior analysts conducting research in the context of service delivery must arrange research activities such that client services and client welfare are prioritized” (Behavior Analyst Certification Board, 2020 ). Perhaps the most obvious circumstance related to applied behavior analysis research is the use of baseline conditions. Delaying treatment beyond what would be typical in practice or removing a seemingly effective treatment for purposes of demonstrating experimental control pits the best interests of the client against those of the research(er). Much of the published literature discussing ethical issues in clinical research or reporting patient opinions about research participation (e.g., Cho et al., 2015 ; Kelley et al., 2015 ) is focused on studies in which the participants would be randomly assigned to a treatment or control group. This research suggests that participants do not always understand the randomization process and the role it plays, but many participants report being concerned about the degree to which research procedures might undermine their individual care (e.g., Kelley et al., 2015 ).

The degree to which these concerns are true of participants in single-case research, especially behavior-analytic research, is unknown. It is important to note that most behavior-analytic research in practice differs from typical medical studies, especially randomized controlled trials (RCTs). Although treatment might be delayed to some extent or briefly removed, the single-case experimental designs most often used by behavior analysts result in each participant being exposed to the intervention, usually after only a brief waiting period. This is not always the case in group (between-subjects) research designs such as RCTs. In research designs involving control or comparison groups, some number of participants do not receive treatment until a considerable period of time has passed or, in some cases, at all. Still, it is important that potential participants understand that treatment might be delayed or removed, not for clinical benefit, but because of the demands of the experimental design. This is especially important so that participants can truly provide informed consent (see below) for their participation. Examples of situations in which treatment might be delayed or removed should be provided and explained during the informed consent process. When feasible, participants could be reminded of the purpose of the delay or removal at the point such situations arise during the research activities.
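
To make the design logic concrete, here is a minimal, hypothetical sketch (Python; not drawn from the article, with invented numbers) of A-B-A-B reversal data in which a seemingly effective treatment is briefly withdrawn to demonstrate experimental control, which is exactly the step participants should understand before consenting.

```python
# Hypothetical A-B-A-B (reversal) data: the treatment is briefly withdrawn in
# the second "A" phase so that a return toward baseline responding can
# demonstrate experimental control. All values are invented.

phases = [
    ("Baseline (A1)",   [11, 12, 13, 12]),
    ("Treatment (B1)",  [7, 5, 4, 3]),
    ("Withdrawal (A2)", [9, 11, 12]),
    ("Treatment (B2)",  [4, 3, 2, 2]),
]

for label, counts in phases:
    mean = sum(counts) / len(counts)
    print(f"{label:<15} sessions={counts}  mean={mean:.1f} responses/session")

# Visual analysis of the full graphed series (level, trend, variability, and
# replication of effects across phases), not phase means alone, is the usual
# basis for judging experimental control in single-case designs.
```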

Informed Consent

Ethics Code for Behavior Analysts: 6.04 Informed Consent in Research. Behavior analysts are responsible for obtaining informed consent (and assent when relevant) from potential research participants under the conditions required by the research review committee. When behavior analysts become aware that data obtained from past or current clients, stakeholders, supervisees, and/or trainees during typical service delivery might be disseminated to the scientific community, they obtain informed consent for use of the data before dissemination, specify that services will not be impacted by providing or withholding consent, and make available the right to withdraw consent at any time without penalty.

Informed consent, in the ideal, means that a participant consents to take part in an experiment free of the influence of any irrelevant controlling variables supplied by the experimenters. The variables influencing the participant’s behavior should be those that already are part of their personal histories, as well as the relevant SDs related to research events, as described by the experimenter. The experimenter should be cautious about arranging MOs during the consent process, especially those that might be related to treatment procedures and outcomes. It might be impossible to avoid arranging MOs altogether, but it probably is safest to err on the side of creating abolishing rather than establishing relations (Laraway et al., 2003 ), in the sense of decreasing the effectiveness of reinforcement for participation and abating behaviors indicating consent. The discriminative stimuli introduced should be restricted to verbal descriptions of the experimental procedures with relatively few autoclitics. And, to the extent possible, MOs and SDs relevant to clinical activities should be reduced or removed by, for example, using different physical locations and staff during research activities.

Informed consent requires that prospective participants be told about the research activities in which they will be engaged while participating, the various risks and benefits associated with their participation, as well as their right to stop participating at any time or to not participate at all. This information must be explained clearly in language appropriate to the education level of the prospective participants and with appropriate cultural considerations. The prospective participant should have the opportunity to ask questions and be given the time necessary to consider the advantages and disadvantages of participating before agreeing to do so.

Of paramount importance in all research involving human participants is the degree to which the participants can and do provide informed consent freely—that is, without coercion. Prospective participants should be able to decline to participate, or cease participating after agreeing to do so, without any loss of privileges to which they are otherwise entitled. Indeed, one could argue that we feel most free under conditions that are free of conspicuous aversive control (e.g., Skinner, 1971 ), such as the potential loss of privileges for failing to participate in research. It is better to arrange that some advantages will result from participating. However, any favorable inducements to participate should not be so substantial that they create circumstances that make it difficult for the prospective participant to say no. Brody et al. ( 2005 ) warn against the threat of “therapeutic misconception” that occurs “when subjects desperately believe a study offers their last hope,” because this can compromise the voluntariness of their participation (p. 1412). It is important that researchers consider whether the informed consent process truly promotes understanding and voluntary decision making, or if it falls short by serving solely to disclose information to research participants (Brody et al., 2005 ).

The relationship between a client and a service provider conducting research is typically more significant than that between a research participant and, for example, a faculty member or research assistant at a university. From the participant’s perspective, the research is likely assumed to be important to the person conducting that research, such that declining to participate in the research is seen as akin to declining to help the researcher do something important. This would be the case in any sort of research, clinical or otherwise, but in the case of a prospective participant who also is receiving clinical services, an invitation to participate in research conducted by a service provider might suggest that some important losses will occur if they decline. Maybe they will have fewer people attending to their needs or have fewer resources devoted to their services. Maybe they will not be given priority in terms of scheduling treatment sessions or clinical meetings. (Whether a prospective participant will lose any privileges is something different than whether they believe they might lose such privileges.) Under such circumstances, one might ask whether the consumer can freely consent.

Behavior analysts appreciate, of course, that our behavior is not really “free,” but is instead determined by our circumstances, past and present. 3 Still, how relatively free we are might be said to fall along a continuum determined by the amount of conspicuous influence operating on our behavior. Handing over $20 when someone asks for a donation to a charity is a freer act than doing the same with someone holding a gun to your head. In terms of the circumstances being discussed here, the issue is how strong the arranged consequences for participation are, with the degree of perceived freedom being an assessment of how strongly the experimenter-arranged consequences compete with existing reinforcement contingencies outside of the experimenter’s control. For example, offering a large sum of money, especially to an economically disadvantaged person, can lead to a situation in which that person cannot reasonably decline. The same is true of offering intervention services to an individual or family in need of such services. It is important that behavior analysts “clarify the nature of the services, and any potential risks, obligations, and limitations for all parties” when offering intervention services as part of, or as an incentive for, research participation (Behavior Analyst Certification Board, 2020 , 6.03).

There are strategies that can minimize coercion in the informed consent process. To start, it might be advisable to have staff who are not otherwise involved with the client carry out the recruitment and informed consent activities (Persons et al., 2021 ). Although this is not required by federal regulations, the Belmont Report and the World Medical Association Declaration of Helsinki, the two seminal statements on human subjects protections, made this recommendation (National Commission for the Protection of Human Subjects of Biomedical & Behavioral Research, 1978 ; World Medical Association, 2013 ). For example, the Declaration of Helsinki states,

When seeking informed consent for participation in a research study the physician must be particularly cautious if the potential subject is in a dependent relationship with the physician or may consent under duress. In such situations the informed consent must be sought by an appropriately qualified individual who is completely independent of this relationship.

Like most ethical considerations, however, the matter is not at all straightforward. To date, there are no empirical studies of ethical behavior conducted specifically in the context of behavior analysis research. The medical literature, on the other hand, suggests that prospective participants sometimes, or perhaps often, prefer to have the risks and benefits of participating in a study explained by their treating physician (Kelley et al., 2015).

It might also be advisable to allow the prospective participant to wait for a period of time before signing and delivering the consent forms (Loewenstein et al., 2011; Loewenstein et al., 2012; Persons et al., 2021). This gives them an opportunity to weigh the benefits and risks of participating and, if they are so inclined, to seek the opinions of others and to learn more about what is being proposed. Even better, a neutral third party could be made available at a later date to witness the signing and collect the consent form (Loewenstein et al., 2011; Loewenstein et al., 2012). To the extent possible, it is advisable to keep participation status confidential from other individuals in the organization, except for those who must know for legal or methodological reasons or to make decisions about ongoing care. It also is important to use contracts and informed consent language that clearly specify the clinical rights the client retains even in the case of nonparticipation. Finally, it seems reasonable to conduct research activities, including recruitment, at times that are distinct from regularly scheduled treatment or consultation sessions (Persons et al., 2021).

Institutional Review Board

Ethics Code for Behavior Analysts: 6.02 Research Review. Behavior analysts conduct research, whether independent of or in the context of service delivery, only after approval by a formal research review committee.

A formidable hurdle for many behavior analysts in practice who want to conduct clinical research is the establishment of, or partnership with, an Institutional Review Board (IRB) or similar review panel. The purpose of an IRB is to ensure the protection of human subjects by reviewing proposed research activities to determine whether they are ethically defensible and whether the potential benefits of the research findings outweigh the potential risks to participants. In practice, IRB review also can help protect the researcher in the case of lawsuits arising from real or perceived harms that occur during the course of a study. According to the Office for Human Research Protections (OHRP) of the U.S. Department of Health and Human Services (HHS), any research activities with human subjects must be reviewed and approved by a panel of individuals not otherwise invested in the proposed research. The OHRP also stipulates that the IRB should comprise at least five members drawn from diverse backgrounds and include individuals who have conducted human research as well as individuals who have not. In addition, the IRB should include at least one member who is not a researcher and one member who is not affiliated with the institution conducting the research. Importantly, the individuals who propose or will conduct the research cannot vote on the research proposal, although they can provide information about the proposal as requested by the IRB.

The letter of the law requires that all human research supported with federal funds be approved by an accredited IRB, but the spirit of the law applies to all human research, whether federally funded or not (Osborne & Luoma, 2018). In addition, some states have regulations requiring that any research activities with human subjects be approved by a formal review panel, regardless of federal funding status. Moreover, an increasing number of journals require attestation that all research activities were approved by an IRB or similar review panel prior to publication, meaning that the dissemination of research findings often hinges on IRB review and approval.

Most IRBs are affiliated with a university or hospital, but this is not a requirement and even small clinical organizations could conceivably convene such a panel. However, larger institutions are in better positions to assemble review boards that conform to federal expectations, especially in terms of membership. Even when clinical organizations partner with larger research organizations, such as universities, IRB procedures can be burdensome for investigators from multiple sites (Brody et al., 2005 ), given that federal regulations assert that “in the conduct of cooperative research projects, each institution is responsible for safeguarding the rights and welfare of human subjects” (U.S. Department of Health & Human Services Office of Human Research Protections, n.d. ). In other words, independent review is expected from each associated institution’s IRB, unless there is special approval for a joint review arrangement.

LeBlanc et al. (2018) discussed the challenges associated with forming a research review committee, what we are calling here an IRB, and offered suggestions for establishing and maintaining such a committee in a human service setting (also see Persons et al., 2021). Indeed, the standards for establishing a review committee can be prohibitive for a small clinical organization where there are few qualified members to populate an IRB and where most of the otherwise qualified members would be directly involved in any research activities proposed. In lieu of establishing an in-house IRB, an agency could partner with an established IRB at another institution (e.g., a local university) or use the services of a commercial IRB. Partnering with another institution could be difficult, though, as institutional IRBs often are burdened by their own research proposals and have little to gain by taking on the additional workload of assisting outside agencies. Moreover, accepting outside research submissions could expose the institution to additional legal liability. Commercial IRBs, on the other hand, exist for the express purpose of serving entities that cannot otherwise access an IRB. The major downside to commercial IRBs is that review carries a fee, though it is worth noting that there are also costs associated with establishing and maintaining an in-house IRB. Still, the fees charged by commercial IRBs can be high, running into the thousands of dollars per review.

One recent alternative to commercial IRBs is the Behavioral Health Research Collective (BHRC), a nonprofit group of behavioral health-care organizations that collectively manage a federally registered IRB (https://bhrcirb.org). Although the BHRC was not accepting new members at the time of this writing, the general model could be applied in the establishment of a new collective. That is, several clinical agencies could collaborate to form an interagency IRB such that costs and workload are fairly distributed and conflicts of interest in the review of individual protocols are minimized. Other resources also have emerged recently, primarily within the medical field, with the goal of supplementing the current IRB framework of human-subjects research review (de Melo-Martín et al., 2007; Emanuel et al., 2000; Porter et al., 2018; Sugarman et al., 2003). For example, research ethics consultation (REC) is a forum that has emerged within the medical field in the past decade for discussing new or persisting ethical issues that arise during the research process; more than three dozen U.S. institutions have developed REC services (Porter et al., 2018). Likewise, the Clinical Research Ethics Collaborative was established as a nationwide group of medical research ethics consultants working to support clinicians through clinical research ethics consultation; as of 2018, the Collaborative had 54 members spanning 35 institutions (Porter et al., 2018). Porter et al. (2018) suggest that research ethics consultation could serve as a resource for researchers facing challenging or novel ethical questions, as assistance with the increasingly complex informed consent process and the weighing of risks and benefits in research activities, and as a support system when investigators encounter unforeseen hurdles and conflicts during their research.

To further complicate matters, concerns have been raised in the medical community about whether an independent IRB, consisting of a diverse group of researchers, nonspecialists, and community members, is the right fit to review specialized research (Brody et al., 2005). Some have suggested that subject matter experts might be better suited to review specialized research, and it has also been suggested that commercial, even national, IRBs could review protocols more efficiently and be designated for entire subject matters, such as a national IRB for cancer-related research (Brody et al., 2005). Collaboration between researchers and ethicists is essential as problems arise during the research process, and ethics consultation services could increase researchers' awareness of the ethical implications of their work beyond the review process, result in better research policies over time, and facilitate an organizational culture that is receptive to the recognition and resolution of ethical conflicts (de Melo-Martín et al., 2007). However, these potential benefits have not been investigated empirically, and it remains to be seen how useful or widespread ethics consultation will be.

In many ways, behavior analysts in practice are uniquely situated to fulfill the promise of the scientist-practitioner model (Hayes et al., 1999 ) that is so commonly espoused in psychology. Behavior analysts do not face the barriers that prevent many psychologists from consuming and contributing to the research literature. The single-case experimental designs that are most common in the basic and applied research literature provide information about the way specific independent variables affect the behavior of individual participants. Those same designs permit the demonstration of functional relations on an individual-by-individual basis, meaning practitioners can use them to evaluate their practice on a case-by-case basis. This is a good thing, in our opinion. Without ongoing evaluation of practice, rigorous science can inch toward convenient pseudoscience (see Normand, 2008 , for a discussion).

Still, the advantages of conducting research in practice are not reasons to ignore some of the potential disadvantages concerning the protection of human subjects. We stress that these are potential disadvantages insofar as they can be overcome, so long as researchers can identify them in their own practices and take steps to minimize any adverse influences (see summary in Table 1). In our estimation, the most critical steps researchers in practice can take are the establishment of (or partnership with) a conscientious and rigorous review board, the separation of clinical services from research activities (in terms of finances, personnel, and settings) whenever possible, and the liberal disclosure of potential conflicts of interest in published papers and during conference and workshop presentations. In addition, it is worth considering whether the journals that publish research conducted in practice settings should require more details about recruitment and consent procedures when the participants are also receiving other clinical services delivered by the same persons or agency. These actions would serve to better protect human subjects while continuing to support behavior analysts in practice who are conducting important behavioral research.

Table 1 Suggestions for Mitigating Ethical Concerns

Declarations

This is a discussion paper, not an empirical investigation, so there are no disclosures relevant to IRB approval or informed consent.

We have no conflicts of interest to disclose.

1 Throughout the article, Ethics Code for Behavior Analysts refers to Behavior Analyst Certification Board (2020).

2 For example, since 2017, more than 40% of research articles published in the Journal of Applied Behavior Analysis include at least one author listing a clinical affiliation.

3 There is no doubt that some would grant that behavior is controlled in most circumstances, but not all. Agreement on this point is not necessary for the sake of the argument that follows.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • American Educational Research Association. Code of ethics. Educational Researcher. 2011;40(3):145–156. doi: 10.3102/0013189X11410403.
  • Barlow DH, Nock MK. Why can't we be more idiographic in our research? Perspectives on Psychological Science. 2009;4(1):19–21. doi: 10.1111/j.1745-6924.2009.01088.x.
  • Behavior Analyst Certification Board. (2020). Ethics code for behavior analysts.
  • British Educational Research Association. (2018). Ethical guidelines for educational research. https://www.bera.ac.uk/researchers-resources/publications/ethical-guidelines-for-educational-research-2018. Accessed 7 Jan 2022.
  • Brody BA, McCullough LB, Sharp RR. Consensus and controversy in clinical research ethics. JAMA. 2005;294(11):1411–1414. doi: 10.1001/jama.294.11.1411.
  • Chivers T. Does psychology have a conflict-of-interest problem? Nature. 2019;571:20–23. doi: 10.1038/d41586-019-02041-5.
  • Cho MK, Magnus D, Constantine M, Lee SS-J, Kelley M, Alessi S, Korngiebel D, James C, Kuwana E, Gallagher TH. Attitudes toward risk and informed consent for research on medical practices: A cross-sectional survey. Annals of Internal Medicine. 2015;162(10):690–696. doi: 10.7326/M15-0166.
  • Cohen LH, Sargent MM, Sechrest LB. Use of psychotherapy research by professional psychologists. American Psychologist. 1986;41(2):198. doi: 10.1037/0003-066X.41.2.198.
  • de Melo-Martín I, Palmer LI, Fins JJ. Viewpoint: Developing a research ethics consultation service to foster responsive and responsible clinical research. Academic Medicine. 2007;82(9):900–904. doi: 10.1097/ACM.0b013e318132f0ee.
  • Delprato DJ. Countercontrol in behavior analysis. The Behavior Analyst. 2002;25(2):191–200. doi: 10.1007/BF03392057.
  • Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283(20):2701–2711. doi: 10.1001/jama.283.20.2701.
  • Goldfried MR, Wolfe BE. Psychotherapy practice and research: Repairing a strained relationship. American Psychologist. 1996;51(10):1007. doi: 10.1037/0003-066X.51.10.1007.
  • Gyani A, Shafran R, Myles P, Rose S. The gap between science and practice: How therapists make their clinical decisions. Behavior Therapy. 2014;45(2):199–211. doi: 10.1016/j.beth.2013.10.004.
  • Hay-Smith EJC, Brown M, Anderson L, Treharne GJ. Once a clinician, always a clinician: A systematic review to develop a typology of clinician–researcher dual-role experiences in health research with patient-participants. BMC Medical Research Methodology. 2016;16:95. doi: 10.1186/s12874-016-0203-6.
  • Hayes SC, Barlow DH, Nelson-Gray RO. The scientist practitioner: Research and accountability in the age of managed care (2nd ed.). Allyn & Bacon; 1999.
  • Johnston JM, Pennypacker HS. Strategies and tactics of behavioral research (3rd ed.). Routledge; 2009.
  • Kelley M, James C, Alessi Kraft S, Korngiebel D, Wijangco I, Rosenthal E, Joffe S, Cho MK, Wilfond B, Lee SS-J. Patient perspectives on the learning health system: The importance of trust and shared decision making. American Journal of Bioethics. 2015;15(9):4–17. doi: 10.1080/15265161.2015.1062163.
  • Laraway S, Snycerski S, Michael J, Poling A. Motivating operations and terms to describe them: Some further refinements. Journal of Applied Behavior Analysis. 2003;36:407–414. doi: 10.1901/jaba.2003.36-407.
  • LeBlanc LA, Nosik MR, Petursdottir A. Establishing consumer protections for research in human service agencies. Behavior Analysis in Practice. 2018;11(4):445–455. doi: 10.1007/s40617-018-0206-3.
  • Loewenstein G, Cain DM, Sah S. The limits of transparency: Pitfalls and potential of disclosing conflicts of interest. American Economic Review. 2011;101(3):423–428. doi: 10.1257/aer.101.3.423.
  • Loewenstein G, Sah S, Cain DM. The unintended consequences of conflict of interest disclosure. JAMA. 2012;307(7):669–670. doi: 10.1001/jama.2012.154.
  • Morain SR, Joffe S, Largent EA. When is it ethical for physician-investigators to seek consent from their own patients? American Journal of Bioethics. 2019;19(4):11–18. doi: 10.1080/15265161.2019.1572811.
  • Morgan DL, Morgan RK. Single-participant research design: Bringing science to managed care. American Psychologist. 2001;56(2):119–127. doi: 10.1037/0003-066X.56.2.119.
  • Morrow-Bradley C, Elliott R. Utilization of psychotherapy research by practicing psychotherapists. American Psychologist. 1986;41(2):188. doi: 10.1037/0003-066X.41.2.188.
  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1978). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html. Accessed 7 Jan 2022.
  • Norcross JC, Karpiak CP. Clinical psychologists in the 2010s: 50 years of the APA Division of Clinical Psychology. Clinical Psychology: Science & Practice. 2012;19(1):1–12. doi: 10.1111/j.1468-2850.2012.01269.x.
  • Normand MP. Science, skepticism, and applied behavior analysis. Behavior Analysis in Practice. 2008;1(2):42–49. doi: 10.1007/BF03391727.
  • Normand MP. Less is more: Psychologists can learn more by studying fewer people [Opinion]. Frontiers in Psychology. 2016;7:934. doi: 10.3389/fpsyg.2016.00934.
  • Osborne TL, Luoma JB. Overcoming a primary barrier to practice-based research: Access to an institutional review board (IRB) for independent ethics review. Psychotherapy. 2018;55(3):255–262. doi: 10.1037/pst0000166.
  • Persons JB, Osborne TL, Codd RT. Ethical and legal guidance for mental health practitioners who wish to conduct research in a private practice setting. Behavior Therapy. 2021;52(2):313–323. doi: 10.1016/j.beth.2020.04.012.
  • Porter KM, Danis M, Taylor HA, Cho MK, Wilfond BS. The emergence of clinical research ethics consultation: Insights from a national collaborative. American Journal of Bioethics. 2018;18(1):39–45. doi: 10.1080/15265161.2017.1401156.
  • Safran JD, Abreu I, Ogilvie J, DeMaria A. Does psychotherapy research influence the clinical practice of researcher–clinicians? Clinical Psychology: Science & Practice. 2011;18(4):357–371. doi: 10.1111/j.1468-2850.2011.01267.x.
  • Skinner BF. Science and human behavior. Macmillan; 1953.
  • Skinner BF. Beyond freedom and dignity. Alfred A. Knopf; 1971.
  • Stewart RE, Chambless DL. Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology. 2007;63(3):267–281. doi: 10.1002/jclp.20347.
  • Stewart RE, Stirman SW, Chambless DL. A qualitative investigation of practicing psychologists' attitudes toward research-informed practice: Implications for dissemination strategies. Professional Psychology: Research & Practice. 2012;43(2):100. doi: 10.1037/a0025694.
  • Sugarman J, Eckenwiler LA, Emanuel EJ. Research oversight through new lenses: The consortium to examine clinical research ethics. IRB: Ethics & Human Research. 2003;25(1):9–10. doi: 10.2307/3564407.
  • U.S. Department of Health and Human Services Office of Human Research Protections. (n.d.). Federal policy for the protection of human subjects ('Common Rule'). http://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/. Accessed 7 Jan 2022.
  • World Medical Association. World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. JAMA. 2013;310(20):2191–2194. doi: 10.1001/jama.2013.281053.
