Quality of Single-Case Designs Targeting Adults’ Exercise and Physical Activity

Kelley Strohacker

2019, Translational Journal of the ACSM



Application of a Single-Case Research Design to Present the Effectiveness of Rehabilitation in the Clinic

Submitted: 04 September 2019 Reviewed: 27 November 2019 Published: 30 January 2020

DOI: 10.5772/intechopen.90665


From the Edited Volume

Physical Therapy Effectiveness

Edited by Mario Bernardo-Filho, Danúbia da Cunha de Sá-Caputo and Redha Taiar



Clinical benefits of rehabilitation are difficult to demonstrate because of factors such as very small sample sizes, lack of a control comparison, or short intervention periods. However, clinical improvement can be presented with a single-case research design, which is an interesting and challenging technique. Basic and advanced single-case research designs can be performed in various patterns, for example, baseline (A) and intervention (B) phases [A-B], A-B-withdrawal (A′) phases [A-B-A′], A-B-A′-new intervention (B′) phases [A-B-A′-B′], etc. In each phase, a line graph must be presented showing changes with a trend line or the split-middle method, or with the mean and standard deviation. A trend or celeration line of the data in the baseline phase (A) should be drawn through the intervention phase (B). Then, the serial dependence or autocorrelation coefficient in each phase must be checked with the Bartlett test, and the data should be transformed when serial dependence occurs. Finally, clinically and statistically significant improvement during intervention can be analyzed with the Bloom table, the two-standard-deviation band, the paired t-test, the binomial test, or the C statistic. Therefore, a single-case research design can be used to present the effectiveness of an intervention in the clinic.

  • single-case research design
  • rehabilitation

Author Information

Jirakrit Leelarungrayub*

  • Department of Physical Therapy, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand

Yothin Pothasak

Jynwara Kaju, Rungtiwa Kanthain

*Address all correspondence to: [email protected]

1. Introduction

At present, clinical research is very important to clinical practice. However, clinical research involving human patients or subjects carries more varied bias and confounding factors than experimental research [ 1 ]. A previous report noted that common designs in clinical research include case–control studies, cohort studies, randomized controlled trials, reviews, and meta-analyses, as well as case reports [ 2 ]. Moreover, the single-subject design is also used in practice-based primary care research, because of limitations of condition, heterogeneity, and strict criteria, and it can yield interesting results [ 3 ]. Previous evidence divides clinical research broadly into two types: observational and experimental [ 4 ]. Cross-sectional, case–control, cohort, and ecological studies are analytical, whereas prevalence surveys, case series, surveillance data, and analyses of routinely collected data are descriptive. Although the best clinical research is designed with a control group, low selection bias, and a sound statistical analysis protocol, limitations arise from varied pathological conditions and inconsistency among patients, whereas the value of inspiring new ideas or explaining new results from rehabilitation techniques is of clinical concern in multiple or single cases [ 5 , 6 , 7 ]. Therefore, the effectiveness of rehabilitation in a rare or single case can be presented with descriptive data alone, as in a previous report on the effect of combined thoracic and backward lifting exercise on the thoracic kyphosis angle and intercostal muscle pain, which used no graphs, tables, or statistical analysis [ 8 ].
However, the results of a single-case research design are better represented by visual graphs after data collection in the baseline and intervention phases, and the efficiency of an intervention can also be evaluated from the mean level and trend changes by comparison with the baseline or prior interventions [ 9 , 10 ]. Finally, significant effectiveness of an intervention can be analyzed simply with the Bloom table, paired t-test, or one-way ANOVA if serial dependence (autocorrelation) of the data is not found ( Figure 1 ).

Figure 1. Flowchart of the procedure for a single-case research design.

The flowchart in Figure 1 shows a simple serial procedure for performing single-case research: (1) collect data in both baseline and intervention phases, (2) draw the trend line and mean level in each phase, (3) check the serial dependency of the data in each phase, and (4) draw the celeration line from the prior phase into the next one, and then analyze statistically.

2. Research design

There is a variety of single-case research designs, such as (1) AB, (2) reversal, (3) multiple baseline, and (4) alternating treatment [ 11 ]. A single-case research design is composed of a series of interventions and a frequency of assessment across multiple phases and various conditions. For instance, the effectiveness of a 7-day chest wall-stretching exercise in patients with chronic obstructive pulmonary disease (COPD) was compared with a 7-day baseline phase [ 12 ]. The same applies to 7-day pulmonary rehabilitation in patients with chronic scleroderma [ 13 ]. Therefore, various designs with a control period or baseline (A), intervention (B), withdrawal (A′), or new intervention (B′) phase can be performed. Thus, a simple visual graph has been recommended with the following three designs ( Figure 2 ).

Figure 2. Visual graph presentation of a three-pattern design: baseline and intervention phases (1), baseline-intervention-withdrawal phases (2), and intervention-withdrawal-intervention phases (3).

3. Data collection and presentation

In this design, collecting data at least three times per phase, in either the baseline or intervention phase, has been suggested [ 14 ] in order to present the change in trend in the baseline phase, whereas a previous report preferred 6-8 evaluations [ 15 ], as in Table 1, for example. Visual analysis involves determining the level, trend, and stability of the data within each phase, as well as the immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases [ 16 ].

       Baseline phase                        Intervention phase
Case   Day 1  Day 2  Day 3  Day 4  Day 5    Day 6  Day 7  Day 8  Day 9  Day 10
1      2      3      2      2      3        3      5      5      5      6
2      2      2      2      3      3        4      4      5      7      7
3      2      2      2      2      2        3      5      4      8      8

Table 1. Chest wall excursion (cm) between baseline and intervention phases.

The data or scores can be presented as scattered points plotted on a simple line graph, for either single or multiple cases with a multiple-baseline design, as in Figure 3. Presentation with a line graph has several advantages, such as easy interpretation and understanding. Moreover, a bar graph of the mean in each phase can be used.

Figure 3. Line graph of chest wall excursion (cm) in two cases, showing the baseline and intervention phases with different scores in the baseline phase.

Appropriate graphic methods can be applied to present changes in level, variability, trend, and slope [ 17 ] within each phase, following the immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases [ 16 ]. For example, the mean levels of maximal inspiratory pressure (PImax) in the baseline (A), intervention (B), and withdrawal (A′) phases are shown in Figure 4(1), and trend lines in Figure 4(2). Trend lines within a phase can demonstrate the stability of scores in the baseline phase with a constant trend, as well as acceleration and deceleration trend patterns in the intervention and withdrawal phases, respectively.

Figure 4. Mean levels (1) and trend lines (2) within the baseline (A), intervention (B), and withdrawal (A′) phases.

Interpretation of the mean and trend line in each phase conveys the clinical meaning, for example, of chest wall excursion ( Figure 5(1) ), dyspnea scale ( Figure 5(2) ), vital capacity, and diffusing capacity of the lung for carbon monoxide (DLCO). Figure 5(1) shows constant chest wall excursion before treatment, which then increases to a mean level (3.83 cm) higher than that at baseline (2.19 cm). Therefore, that intervention can increase chest wall excursion in the clinic. In addition, the dyspnea scale in Figure 5(2) decreases from a mean of 5.6 in the baseline phase to 4.2 in the intervention phase.

Figure 5. Line graphs of chest wall excursion (1), dyspnea scale (2), vital capacity (3), and diffusing capacity of the lung for carbon monoxide (DLCO) (4) between baseline and intervention phases.

A deleterious effect of an intervention can also be presented, as in Figure 5(3): vital capacity decreases with a decelerating trend during treatment; therefore, that treatment does not benefit vital capacity. Finally, the last example, the change in DLCO between baseline and intervention phases, shows a slight increase in the mean DLCO. Thus, that intervention can increase the DLCO slightly ( Figure 5(4) ).

The trend line in each phase, especially between the baseline and intervention phases, can show the effectiveness of rehabilitation techniques, for instance, the decelerating trend line of dyspnea ( Figure 5(2) ) and the accelerating trend line of chest wall excursion ( Figure 5(1) ). Therefore, the effectiveness of rehabilitation in the intervention phase can be compared with the line extended from the baseline phase, which is called the celeration line, as in Figure 6.

Figure 6. Line graphs showing the celeration line from the baseline phase for chest wall excursion (1) and vital capacity (2).

The celeration line can be found by the freehand method [ 19 ], the semi-average method [ 18 ], or the least-squares method, including computation in software such as SPSS or SigmaPlot.

A simple way for therapists to draw the celeration line is the semi-average [ 18 ] or "split-middle" method. For example, take 10 pain scores in the baseline phase: 4, 5, 4, 5, 6, 7, 6, 8, 7, and 6; 8–10 data points are recommended in order to fit the celeration line more accurately [ 19 , 20 ]. The celeration line can then be drawn in the following steps:

Step 1: Plot the data ( Figure 7(1) ).

Figure 7. Plotting data in the baseline phase (1). The baseline phase is divided in half with a solid line, and each half is divided by dashed vertical lines with median horizontal lines (2). The celeration line is extended into the intervention phase (3).

Step 2: Divide the data in half by drawing a solid vertical line, and divide each half in half again by drawing dashed vertical lines before plotting the median level of each half phase, so that the median of the first half is 5 and that of the second half is 7 ( Figure 7(2) ).

Step 3: Finally, extend the celeration line from the baseline into the intervention phase ( Figure 7(3) ).

Alternatively, the mean value of each half can be used instead of the median value [ 18 , 21 ].
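The split-middle steps above can be sketched in a few lines. This is a minimal sketch, assuming days are indexed from 0 and each half-phase median is plotted at the midpoint of its half; the function name is illustrative.

```python
import statistics

# Split-middle (semi-average) celeration line from Steps 1-3, using the
# 10 baseline pain scores listed in the text. Day indices are 0-based.
def celeration_line(baseline):
    half = len(baseline) // 2
    first, second = baseline[:half], baseline[half:]
    # Median level of each half phase (5 and 7 for the example data)
    y1, y2 = statistics.median(first), statistics.median(second)
    # Plot each median at the midpoint of its half phase
    x1 = (half - 1) / 2
    x2 = half + (len(second) - 1) / 2
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept  # y = slope * x + intercept

scores = [4, 5, 4, 5, 6, 7, 6, 8, 7, 6]
slope, intercept = celeration_line(scores)
# Extend the line into the intervention phase, e.g., day index 10:
projected = slope * 10 + intercept
```

For the example scores, the two medians are 5 and 7, so the line rises by 0.4 units per day and projects to 8.2 at day index 10.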

Therefore, data presentation in a single-case research design is simple to develop and understand using the mean level and trend changes in the baseline phase. In addition, the effectiveness of rehabilitation can be presented as the difference of the mean level and trend line from the baseline celeration line. However, the effectiveness of rehabilitation in the intervention phase can also be compared using statistical procedures.

4. Statistical analysis

The statistical analysis must be performed as a serial procedure. First of all, the data must be evaluated for autocorrelation or serial dependency, because serial dependency violates the independence-of-errors assumption of statistical tests [ 22 ]. The data should have no serial dependency in either the baseline or intervention phase. Serial dependency basically means that temporally adjacent scores tend to be related to one another: an individual's score for day 1 tends to predict that for day 2, the day-2 score predicts the day-3 score, and so on. A previous study ranked lag-1 autocorrelations into three sets, ranging from 0.15 to 0.50 (low), 0.51 to 0.75 (moderate), and 0.76 to 0.94 (high) [ 23 ], and found an interaction between serial dependency and significance level, meaning that high serial dependency corresponds to distorted significance. Therefore, the data examined within each phase should have no autocorrelation (r), that is, no serial dependency.

4.1. Serial dependency analysis

Step 1: Calculate the mean of the scores in each phase as the sum of all data divided by the total number of data points.

Baseline phase: [2 + 3 + 2 + 3 + 3]/5 = 2.6

Intervention phase: [4 + 5 + 5 + 8 + 8]/5 = 6

Step 2: Find the difference values by subtracting the phase mean from each score ( Table 1 ).

Baseline phase                        Intervention phase
Score − mean = difference value       Score − mean = difference value
2.0 − 2.6 = −0.6                      4.0 − 6.0 = −2.0
3.0 − 2.6 = 0.4                       5.0 − 6.0 = −1.0
2.0 − 2.6 = −0.6                      5.0 − 6.0 = −1.0
3.0 − 2.6 = 0.4                       8.0 − 6.0 = 2.0
3.0 − 2.6 = 0.4                       8.0 − 6.0 = 2.0

Step 3: Calculate the sum of the products of each pair of adjacent difference values from Step 2.

Baseline phase                Intervention phase
(−0.6)(0.4) = −0.24           (−2)(−1) = 2
(0.4)(−0.6) = −0.24           (−1)(−1) = 1
(−0.6)(0.4) = −0.24           (−1)(2) = −2
(0.4)(0.4) = 0.16             (2)(2) = 4
Sum = −0.56                   Sum = 5

Step 4: Calculate the sum of the squared difference values from Step 2.

Baseline phase                Intervention phase
(−0.6)² = 0.36                (−2)² = 4
(0.4)² = 0.16                 (−1)² = 1
(−0.6)² = 0.36                (−1)² = 1
(0.4)² = 0.16                 (2)² = 4
(0.4)² = 0.16                 (2)² = 4
Sum = 1.20                    Sum = 14

Step 5: Calculate the autocorrelation coefficient (r) in each phase by dividing the sum of products from Step 3 by the sum of squared differences from Step 4 (the sign can be ignored when comparing the magnitude with the criterion in Step 6).

Baseline phase                Intervention phase
0.56/1.20 = 0.47              5/14 = 0.36

Step 6: Analyze whether the autocorrelation coefficient is statistically significant; a simple procedure called the Bartlett test can be used. If the absolute autocorrelation value (r) from Step 5 is less than 2/√n (n = number of data points), nonsignificant autocorrelation within the phase is confirmed. In this example, 2/√5 = 0.894.

Baseline phase                Intervention phase
0.47 < 0.894                  0.36 < 0.894

The final results of the serial dependency calculation show that the data in both phases do not demonstrate a significant degree of autocorrelation or serial dependency. Therefore, therapists may be confident that the results reflect disease stability in the baseline phase and the intervention program. However, if the autocorrelation analysis shows significant serial dependency, the data should be transformed and the analysis performed again.
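Steps 1–6 can be collected into a short routine. This is a minimal sketch under the conventions above: the lag-1 autocorrelation is the sum of products of adjacent mean-deviations divided by the sum of squared deviations, checked against the Bartlett criterion 2/√n; the function names are illustrative.

```python
# Lag-1 autocorrelation (Steps 1-5) and the Bartlett criterion (Step 6)
def lag1_autocorrelation(scores):
    mean = sum(scores) / len(scores)
    d = [x - mean for x in scores]                              # Step 2
    products = sum(d[i] * d[i + 1] for i in range(len(d) - 1))  # Step 3
    squares = sum(x * x for x in d)                             # Step 4
    return products / squares                                   # Step 5

def serially_dependent(scores):
    # Step 6: |r| >= 2/sqrt(n) indicates significant serial dependency
    return abs(lag1_autocorrelation(scores)) >= 2 / len(scores) ** 0.5

baseline = [2, 3, 2, 3, 3]       # |r| ≈ 0.47 < 0.894
intervention = [4, 5, 5, 8, 8]   # |r| ≈ 0.36 < 0.894
```

Recomputing from the listed scores gives r ≈ −0.47 for the baseline and r ≈ 0.36 for the intervention phase, both below the criterion of 0.894 for n = 5.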

4.2. Data transformation

Step 1: The mean score is [3 + 5 + 4 + 5 + 6 + 6 + 7 + 8 + 8 + 11 + 10 + 12]/12 = 7.08.

Step 2 : Difference values.

Score − mean = difference value
3.0 − 7.08 = −4.08
5.0 − 7.08 = −2.08
4.0 − 7.08 = −3.08
5.0 − 7.08 = −2.08
6.0 − 7.08 = −1.08
6.0 − 7.08 = −1.08
7.0 − 7.08 = −0.08
8.0 − 7.08 = 0.92
8.0 − 7.08 = 0.92
11.0 − 7.08 = 3.92
10.0 − 7.08 = 2.92
12.0 − 7.08 = 4.92

Step 3: Calculate the sum of the products of each pair of adjacent difference values.

(−4.08)(−2.08) = 8.48
(−2.08)(−3.08) = 6.41
(−3.08)(−2.08) = 6.41
(−2.08)(−1.08) = 2.24
(−1.08)(−1.08) = 1.17
(−1.08)(−0.08) = 0.08
(−0.08)(0.92) = −0.07
(0.92)(0.92) = 0.85
(0.92)(3.92) = 3.61
(3.92)(2.92) = 11.45
(2.92)(4.92) = 14.37
Sum = 54.99

Step 4: Calculate the sum of the squared difference values.

(−4.08)² = 16.65
(−2.08)² = 4.33
(−3.08)² = 9.49
(−2.08)² = 4.33
(−1.08)² = 1.17
(−1.08)² = 1.17
(−0.08)² = 0.01
(0.92)² = 0.85
(0.92)² = 0.85
(3.92)² = 15.37
(2.92)² = 8.53
(4.92)² = 24.21
Sum = 86.92

Step 5: Calculate the autocorrelation coefficient: 54.99/86.92 = 0.632. Then analyze whether it is statistically significant: 2/√12 = 0.577, because √12 is 3.464 (n = 12). Therefore, this series has significant serial dependency, because 0.632 is more than 0.577, and all data must be transformed, first by the first-difference transformation below.

Step 6: Each successive score is subtracted from the one before it, the second from the first, the third from the second, and so on, until all scores are used: 3, 5, 4, 5, 6, 6, 7, 8, 8, 11, 10, and 12.

3 − 5 = −2, 5 − 4 = 1, 4 − 5 = −1, 5 − 6 = −1, 6 − 6 = 0, 6 − 7 = −1, 7 − 8 = −1, 8 − 8 = 0, 8 − 11 = −3, 11 − 10 = 1, and 10 − 12 = −2.

Thus, the new data are −2, 1, −1, −1, 0, −1, −1, 0, −3, 1, and −2. When the serial dependency is recalculated as in Steps 1–5, the autocorrelation coefficient is −0.69, whose magnitude still exceeds the criterion (2/√11 = 0.60 for the 11 transformed values). Finally, a constant of any value, such as 2, can be added to all transformed data in order to avoid negative values.

Because the first-difference transformation still failed to remove the serial dependency, the moving-average transformation is applied as a second procedure in Step 7.

Step 7: The moving-average method reduces the serial dependency by simply plotting the mean values of adjacent pairs of data points over the entire series.

Scores                    Transformed score
3 + 5 = 8                 8/2 = 4.0
4 + 5 = 9                 9/2 = 4.5
6 + 6 = 12                12/2 = 6.0
7 + 8 = 15                15/2 = 7.5
8 + 11 = 19               19/2 = 9.5
10 + 12 = 22              22/2 = 11.0

Then, the new series, reduced from 12 to 6 data points (4.0, 4.5, 6.0, 7.5, 9.5, and 11.0), is returned to Steps 1–5 for calculating the autocorrelation coefficient (r). The new autocorrelation coefficient is 0.54, which is less than 0.82 (2/√6). Therefore, the data no longer present serial dependency.
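Both transformations can be sketched in a few lines; the function names are illustrative, and the series is the 12-point example above.

```python
def first_difference(scores):
    # Step 6: each successor is subtracted from the preceding score,
    # shortening the series by one point.
    return [scores[i] - scores[i + 1] for i in range(len(scores) - 1)]

def adjacent_pair_means(scores):
    # Step 7: means of adjacent non-overlapping pairs, halving the series.
    return [(scores[i] + scores[i + 1]) / 2
            for i in range(0, len(scores) - 1, 2)]

series = [3, 5, 4, 5, 6, 6, 7, 8, 8, 11, 10, 12]
diffs = first_difference(series)      # first-difference transformation
pairs = adjacent_pair_means(series)   # moving-average transformation
```

Either transformed series can then be fed back into the Steps 1–5 autocorrelation calculation.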

4.3. Statistical evaluation

4.3.1. Standardized statistical methods

When no serial dependency (autocorrelation) is present in either phase, confident results of rehabilitation can be inferred without confounding from the disease or spontaneous recovery. Finally, the statistical difference between the intervention and baseline or control phases can be evaluated by various methods. Analysis of variance (ANOVA), the F-test, and the t-test between phases are widely familiar to researchers and clinicians, for example, when comparing the expiratory tidal volume (VTE) during 10 days of chest physical therapy on a mechanical ventilator with the previous 10 days of non-treatment. The visual graph shows the mean, autocorrelation, and trend line ( Figure 8(1) ), whereas bar graphs with a significance level calculated by the Wilcoxon signed-rank test, the nonparametric counterpart of the paired t-test, are presented ( Figure 8(2) ).

Figure 8. Line graphs of expiratory tidal volume (mL) in the baseline and chest physical therapy (CPT) phases (1), and t-test analysis by the Wilcoxon signed-rank test in the SPSS program (2).

Before the CPT phase: 352, 350, 323, 349, 345, 350, 352, 348, 360, 354 (mean = 348.3 mL).

During the CPT phase: 364, 371, 378, 380, 410, 425, 429, 485, 501, 515 (mean = 425.8 mL).
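The signed-rank statistic behind such an analysis can also be computed by hand. The sketch below is an illustrative implementation, not the SPSS routine, and it assumes each baseline day is paired with the corresponding CPT day.

```python
# Hand-rolled Wilcoxon signed-rank statistic W for paired samples.
def signed_rank_W(before, after):
    # Drop zero differences, as the Wilcoxon procedure does
    diffs = [a - b for b, a in zip(before, after) if a != b]
    ordered = sorted(abs(d) for d in diffs)
    # Assign average ranks to tied absolute differences
    rank_of = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and ordered[j] == ordered[i]:
            j += 1
        rank_of[ordered[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    w_plus = sum(rank_of[abs(d)] for d in diffs if d > 0)
    w_minus = sum(rank_of[abs(d)] for d in diffs if d < 0)
    return min(w_plus, w_minus)  # compare with the critical value for n pairs

before_cpt = [352, 350, 323, 349, 345, 350, 352, 348, 360, 354]
during_cpt = [364, 371, 378, 380, 410, 425, 429, 485, 501, 515]
W = signed_rank_W(before_cpt, during_cpt)
```

Every CPT day exceeds its baseline day, so W = 0, which falls below the standard critical value of 8 for n = 10 pairs (two-tailed p < 0.05), a significant increase.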

4.3.2. Statistical analysis with the celeration line

When the celeration line is extended from the baseline into the intervention phase, the projection should be stationary if the pathology is stable, although some baseline celeration lines may decelerate (e.g., muscle mass during prolonged bed rest) or accelerate (e.g., dyspnea from secretion obstruction). A previous report stated that if the celeration line during the intervention phase is the same as that in the baseline phase, there is no beneficial effect from treatment [ 24 ], whereas if more than 50% of the data points in the intervention phase fall above (or, for outcomes that should decrease, below) the celeration line projected from the baseline phase, this reflects a beneficial clinical effect of treatment.

Two methods can be used to determine statistically significant changes. One method is using the probability table presented by Bloom [ 25 ] ( Table 2 ), and the other is computing a simple binomial test.

Proportion during baseline    Number of treatment observations
                              4     6     8     10    12
0.10                          3     3     3     4     4
0.15                          3     3     4     4     5
0.20                          3     4     5     5     6
0.25                          4     4     5     6     7
0.30                          4     5     6     6     7
0.35                          4     5     6     7     8
0.40                          4     5     6     8     9
0.45                          4     6     7     8     9
0.50                          4     6     7     9     10

Table 2. Number of treatment observations above the celeration line that are required to demonstrate a statistically significant effect.

Modified from Bloom [ 25 ], p. 203. Determined by use of a one-tailed test; p < 0.05.

4.3.3. Statistical analysis using the Bloom table

Step 1: Calculate the proportion of baseline points above the trend line, for example, in Figure 9: of a total of six data points, three lie above the trend line, so the proportion is 0.5.

Step 2: Draw the extending or celeration line to the treatment phase.

Step 3: In the Bloom table, find the row for a proportion of 0.50; with five treatment observations (between the columns for 4 and 6), the required number of points above the line is approximately 5.

Step 4: Figure 9(1) has only four points above the celeration line, whereas Figure 9(2) has all five points above it. Therefore, the mobilization exercise shows a significant change in the clinic, but the breathing exercise does not affect chest wall excursion.

Figure 9. Comparative data of chest wall excursion (cm) between interventions: breathing exercise (1) and mobilization exercise (2).

4.3.4. Statistical analysis with the two-standard deviation band method

Baseline phase: 120, 123, 121, 122, 125, 122, 123, 122, 121, 122 (mean = 122.1, SD = 1.45)

Exercise period: 120, 119, 116, 115, 117, 114, 100, 101, 100, 102 (mean = 110.4, SD = 8.45)

Thus, the two-standard-deviation (SD) band in the baseline phase is calculated as two times the SD (1.45 × 2 = 2.9). The upper band is the mean plus 2.9 (122.1 + 2.9), and the lower band is the mean minus 2.9 (122.1 − 2.9). Therefore, the final levels of the 2-SD band are 125 and 119.2, as drawn in Figure 10(1) and (2).

Figure 10. Line graphs depicting the 2-SD band method for identifying statistical significance in two subjects. The exercise produced different results in the two subjects: the first shows most points (8 of 10) below the 2-SD band, so it can be concluded that exercise reduces systolic blood pressure significantly (1), whereas all 10 points of the second subject lie within the 2-SD band, indicating a nonsignificant change with exercise (2).

4.3.5. Statistical analysis with the binomial test

This method was demonstrated by Kazdin [ 15 ] and White [ 20 ]. The computation is based on the binomial formula P(x) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x), where n is the number of data points in the intervention phase, x is the number of data points above (or below) the celeration line, and p is the probability of a data point falling above (or below) the projected celeration line [ 27 ]. Therefore, the binomial test for Figure 11 can be computed with n = 5, x = 4, and p = 0.5, and evaluated at a significance level of p < 0.05.

Figure 11. Line graph showing the data in the baseline and breathing exercise phases.

Null hypothesis (H0): the probability of an intervention data point falling above the celeration line is 0.5 (no treatment effect).

Alternative hypothesis (H1): the probability of an intervention data point falling above the celeration line is greater than 0.5.

The result of the binomial test in Table 3 shows a one-tailed significance value (0.375) of more than 0.05, which means that the null hypothesis is accepted.

Category                N    Observed Prop.    Test Prop.    Asymp. Sig. (1-tailed)
Data   Group 1 (1.00)   4    0.80              0.50          0.375
       Group 2 (0.00)   1    0.20
       Total            5    1.00

Result of the binomial test from SPSS analysis.

Note: Group 1 is the number of data points above the celeration line, whereas Group 2 is the number of data points below the celeration line from the baseline phase.
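The binomial tail probability can also be computed directly. This is a minimal sketch; note that the direct one-tailed value for x = 4 of n = 5 points (0.1875) is half of the 0.375 reported in Table 3, so the tail convention of the software in use should be checked.

```python
from math import comb

# One-tailed binomial probability of observing x or more of n intervention
# points above the celeration line, when each lands above it with
# probability p under the null hypothesis.
def binomial_tail(n, x, p=0.5):
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(x, n + 1))

# Figure 11 example: n = 5 intervention points, x = 4 above the line
tail = binomial_tail(5, 4)   # 6/32 = 0.1875
```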

4.3.6. Statistical analysis with C statistic

The C statistic method was proposed by Bloom and Fisher (1975) [ 21 ] and Tryon (1982) [ 28 ] for single-case research designs and was also reported by Caetano et al. (2018) [ 29 ]. There are several steps in using the C statistic, with example data shown below.

Baseline phase: 82, 82, 84, 82, 84, 82, 84, 82, 83, 84

Breathing exercise: 83, 85, 85, 85, 87, 88, 90, 93, 94, 94

Step 1: Take the difference between each data point in the baseline phase and its adjacent successor, the first and second, the second and third, etc., until all of the scores in the baseline phase are used.

Step 2 : Each value from Step 1 is squared and the sum repeated.

Step 1                  Step 2
82 − 82 = 0             (0)² = 0
84 − 82 = 2             (2)² = 4
82 − 84 = −2            (−2)² = 4
84 − 82 = 2             (2)² = 4
82 − 84 = −2            (−2)² = 4
84 − 82 = 2             (2)² = 4
82 − 84 = −2            (−2)² = 4
83 − 82 = 1             (1)² = 1
84 − 83 = 1             (1)² = 1
                        Sum = 26

Step 3 : The mean value of the baseline points is calculated.

(82 + 82 + 84 + 82 + 84 + 82 + 84 + 82 + 83 + 84) = 829; 829/10 = 82.9

Step 4: The mean-difference for each data point is calculated by subtracting the mean from each raw score, then squaring each result and summing the squared mean differences.

82 − 82.9 = −0.9; (−0.9)² = 0.81
82 − 82.9 = −0.9; (−0.9)² = 0.81
84 − 82.9 = 1.1; (1.1)² = 1.21
82 − 82.9 = −0.9; (−0.9)² = 0.81
84 − 82.9 = 1.1; (1.1)² = 1.21
82 − 82.9 = −0.9; (−0.9)² = 0.81
84 − 82.9 = 1.1; (1.1)² = 1.21
82 − 82.9 = −0.9; (−0.9)² = 0.81
83 − 82.9 = 0.1; (0.1)² = 0.01
84 − 82.9 = 1.1; (1.1)² = 1.21
Sum = 8.9

Step 5 : The value from Step 4 is multiplied by 2: 8.9 × 2 = 17.8.

Step 6: The C score is computed using the formula C = 1 − [Σ(x_i − x_{i+1})²] / [2Σ(x_i − x̄)²] = 1 − (26/17.8) = −0.46.

Step 7: The standard error for the C statistic is computed using the formula

Standard error = √[(n − 2) / ((n − 1)(n + 1))], in which n is the number of data points in the series from which the C statistic is computed.

Standard error = √[(10 − 2) / ((10 − 1)(10 + 1))] = √(8/99) = √0.0808 = 0.284

Step 8: To determine whether the C statistic is significant, a Z score is computed by dividing the C statistic from Step 6 by the standard error from Step 7: Z = −0.46/0.284 = −1.62.

Step 9 : Statistical decision: a Z of 1.64 or more is significant at a level of p < 0.05 (one-tailed). The Z calculated for the baseline phase is −1.62, which does not reach the required 1.64, indicating that no significant trend exists in the baseline data.
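Steps 1–9 can be sketched in Python. This is a minimal sketch, assuming the standard form of Tryon's C, C = 1 − Σd²/(2ΣD²), where Σd² is the sum of squared successive differences (Step 2) and ΣD² is the sum of squared deviations from the mean (Step 4); the function name is illustrative. It reproduces the baseline values of the worked example:

```python
import math

def c_statistic(series):
    """Compute Tryon's C statistic, its standard error, and the Z score."""
    n = len(series)
    # Steps 1-2: sum of squared successive differences
    sum_sq_diff = sum((b - a) ** 2 for a, b in zip(series, series[1:]))
    # Steps 3-4: sum of squared deviations from the mean
    mean = sum(series) / n
    sum_sq_dev = sum((x - mean) ** 2 for x in series)
    # Steps 5-6: C = 1 - sum_sq_diff / (2 * sum_sq_dev)
    c = 1 - sum_sq_diff / (2 * sum_sq_dev)
    # Step 7: standard error of C
    se = math.sqrt((n - 2) / ((n - 1) * (n + 1)))
    # Step 8: Z score
    return c, se, c / se

baseline = [82, 82, 84, 82, 84, 82, 84, 82, 83, 84]
c, se, z = c_statistic(baseline)
print(round(c, 3), round(se, 3), round(z, 2))  # -0.461 0.284 -1.62
```

Since |Z| = 1.62 is below 1.64, the baseline shows no significant trend, matching the hand calculation.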

Step 10 : The data in the intervention phase are appended to the baseline data and the calculation of Steps 1–2 is repeated on the combined series.

82 − 82 = 0 (0)² = 0
84 − 82 = 2 (2)² = 4
84 − 82 = 2 (2)² = 4
84 − 82 = 2 (2)² = 4
83 − 84 = −1 (−1)² = 1
83 − 85 = −2 (−2)² = 4
85 − 85 = 0 (0)² = 0
87 − 88 = −1 (−1)² = 1
90 − 93 = −3 (−3)² = 9
94 − 94 = 0 (0)² = 0
Sum = 27

Step 11 : The mean of all data points (baseline plus intervention) is calculated: 82 + 82 + 84 + 82 + 84 + 82 + 84 + 82 + 83 + 84 + 83 + 85 + 85 + 85 + 87 + 88 + 90 + 93 + 94 + 94 = 1713; 1713/20 = 85.65.

Step 12 : The deviation of each raw score from the combined mean is calculated, each deviation is squared, and the squared deviations are summed.

82 − 85.65 = −3.65 (−3.65)² = 13.32
82 − 85.65 = −3.65 (−3.65)² = 13.32
84 − 85.65 = −1.65 (−1.65)² = 2.72
82 − 85.65 = −3.65 (−3.65)² = 13.32
84 − 85.65 = −1.65 (−1.65)² = 2.72
82 − 85.65 = −3.65 (−3.65)² = 13.32
84 − 85.65 = −1.65 (−1.65)² = 2.72
82 − 85.65 = −3.65 (−3.65)² = 13.32
83 − 85.65 = −2.65 (−2.65)² = 7.02
84 − 85.65 = −1.65 (−1.65)² = 2.72
83 − 85.65 = −2.65 (−2.65)² = 7.02
85 − 85.65 = −0.65 (−0.65)² = 0.42
85 − 85.65 = −0.65 (−0.65)² = 0.42
85 − 85.65 = −0.65 (−0.65)² = 0.42
87 − 85.65 = 1.35 (1.35)² = 1.82
88 − 85.65 = 2.35 (2.35)² = 5.52
90 − 85.65 = 4.35 (4.35)² = 18.92
93 − 85.65 = 7.35 (7.35)² = 54.02
94 − 85.65 = 8.35 (8.35)² = 69.72
94 − 85.65 = 8.35 (8.35)² = 69.72
Sum = 312.55 (using unrounded squares)

Step 13 : The value from Step 12 is multiplied by 2: 312.55 × 2 = 625.1.

Step 14 : The C score is computed using the formula from Step 6: C = 1 − 27/625.1 = 0.957.

Step 15 : The standard error for the C statistic is computed using the formula from Step 7:

Standard error = √[(20 − 2)/((20 − 1)(20 + 1))] = √(18/399) = √0.0451 = 0.212

Step 16 : To determine whether the C statistic is significant, a Z score is computed by dividing the C statistic from Step 14 by the standard error from Step 15: Z = 0.957/0.212 = 4.514.

Step 17 : Statistical decision: a Z of 1.64 or more is significant at a level of p < 0.05 (one-tailed). The Z calculated across the baseline and intervention phases is 4.514, which exceeds the required 1.64. This indicates a statistically significant trend across the baseline and intervention phases; that is, in this case the breathing exercise improved oxygen saturation relative to the baseline phase.
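Applying the same C-statistic formula to the combined 20-point series with all 19 overlapping successive differences gives C ≈ 0.92 and Z ≈ 4.33, somewhat below the hand-calculated 4.514 (the tabulation in Step 10 squares 10 of the differences and rounds intermediate values); either value comfortably exceeds 1.64, so the conclusion is unchanged. A minimal standalone sketch:

```python
import math

# Combined baseline + intervention series (20 points from the worked example).
data = [82, 82, 84, 82, 84, 82, 84, 82, 83, 84,
        83, 85, 85, 85, 87, 88, 90, 93, 94, 94]

n = len(data)
mean = sum(data) / n                                             # 85.65
sum_sq_diff = sum((b - a) ** 2 for a, b in zip(data, data[1:]))  # all 19 successive differences
sum_sq_dev = sum((x - mean) ** 2 for x in data)                  # 312.55
c = 1 - sum_sq_diff / (2 * sum_sq_dev)                           # C statistic
se = math.sqrt((n - 2) / ((n - 1) * (n + 1)))                    # sqrt(18/399), about 0.212
z = c / se
print(round(c, 2), round(z, 2))  # 0.92 4.33 -> z > 1.64, significant trend
```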

5. Discussion and critical points for a single-case research study

The worked examples above show the full procedure for presenting a single-case research study, which suits situations with limiting conditions, rare cases, or treatments of particular interest [ 3 ]. A basic single-case design can be presented with visual graphs of 6–8 data points in each phase [ 15 ], including the mean level and the celeration line in the baseline phase [ 8 ]. When the data within each phase, baseline or intervention, show no serial dependency, a clinically significant treatment effect can be evaluated against the baseline or pre-treatment phase using the Bloom table, the C statistic, or a paired t-test [ 22 , 25 , 28 , 29 ]. Moreover, other types of single-case experimental designs, such as alternating-treatment, introduction/withdrawal, or multiple-baseline designs, are of considerable interest in physical and rehabilitation medicine [ 30 ]. However, a single-case design risk-of-bias tool (SCD RoB) has recently been developed, based on current conceptualizations of bias as well as the Cochrane risk-of-bias criteria; contemporary quality standards for single-case designs should therefore be applied when demonstrating and initially validating research using such a design [ 31 ].

In reporting a single-case research study, the clinical details or characteristics of the subject must be presented, as in previous work: illness history, medical diagnosis, laboratory results, and prior medical treatments during 7 days of pre-treatment, during treatment, and/or post-treatment in an A-B-A design. It is important that the stability or constancy of the patient's condition be documented for at least 7 days, as in a previous study of a COPD patient [ 12 ], in which arterial blood gas (ABG), complete blood count (CBC), blood chemistry, liver function tests, chest X-ray, and sputum culture were also reported. In that study, the autocorrelation value in each phase was less than 2/√7, or 0.756, so it could be assumed that the data contained no autocorrelation and that changes in the data did not reflect time dependence or the pathological condition [ 22 ]. In addition, a previous study of a scleroderma patient receiving short-term chest physical therapy (CPT) for 7 days, compared with pre-treatment in an A-B design, showed by Bloom table analysis that CPT significantly reduced the dyspnea score and respiratory rate and increased chest expansion and maximal inspiratory mouth pressure (PImax) [ 13 ]. Both of these studies thus demonstrate the effectiveness and benefits of rehabilitation in specific patients. To establish the effectiveness of a therapy or rehabilitation program, all details of the treatment should likewise be made explicit, including techniques, frequency, intensity, and timing. When a specific treatment is performed, the outcomes should be recorded daily before statistical analysis. For clear data presentation, graphs with scattered data points, the mean level, and the autocorrelation should be shown for each phase. Finally, the clinical effectiveness of treatment should be demonstrated with a statistical protocol, such as projecting the celeration line from the baseline into the treatment phase and applying Bloom table analysis. Nonparametric tests, the paired t-test, one-way ANOVA, or the C statistic can be used for confirmation, as suggested in a previous meta-analytic review [ 32 ].
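The autocorrelation screen described above, comparing the lag-1 autocorrelation against the 2/√n criterion used in the text, can be sketched as follows (the function name is illustrative; the baseline series from the worked example is reused as demonstration data):

```python
import math

def lag1_autocorrelation(series):
    """Lag-1 autocorrelation: sum of products of adjacent mean-deviations
    divided by the sum of squared mean-deviations."""
    mean = sum(series) / len(series)
    dev = [x - mean for x in series]
    return sum(a * b for a, b in zip(dev, dev[1:])) / sum(d * d for d in dev)

# Baseline series from the worked example above.
baseline = [82, 82, 84, 82, 84, 82, 84, 82, 83, 84]
r1 = lag1_autocorrelation(baseline)
threshold = 2 / math.sqrt(len(baseline))  # the 2/sqrt(n) criterion from the text
print(round(r1, 3), round(threshold, 3), abs(r1) < threshold)  # -0.574 0.632 True
```

Because |r₁| is below the threshold, the series can be treated as free of serial dependency before phase comparisons are made.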

6. Conclusion

The effectiveness of rehabilitation can be demonstrated under a single-case research design, as is also done in psychology research [ 33 ]. First, data should be collected for at least 5 days before starting the intervention or treatment in order to establish the stability of the patient's condition. Second, the data from the baseline and intervention phases must be plotted as scattered points with visual graph lines. Third, trend lines and the mean value in each phase should be calculated and plotted. Fourth, serial dependency (the autocorrelation coefficient) must be calculated and indicated on the graph. Finally, the effectiveness of treatment can be analyzed statistically by comparison with the celeration line, drawn manually using the split-middle method or fitted from the baseline phase, together with Bloom table analysis; the nonparametric binomial test or the paired t-test can also be used. Although this method has limitations, owing to the small amount of data and the serial dependency of data within each phase, its visuals are easy to understand and show the trend clearly. A single-case research design is therefore one of the statistical protocols that clinical scientists or therapists can apply to demonstrate the effectiveness of treatment in rare cases and to present interesting results. Furthermore, a well-designed single-case study can be developed in the future into a standardized statistical model with a larger sample size, control subjects, and parametric statistical protocols.
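The binomial test mentioned here can be sketched as follows. By construction, the split-middle celeration line leaves roughly half of the baseline points on each side, so under the null hypothesis each intervention point falls above the projected line with probability 0.5. The counts in this example (9 of 10 intervention points above the line) are hypothetical:

```python
from math import comb

def binomial_tail(k, n, p=0.5):
    """One-tailed binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: 9 of 10 intervention points fall above the projected
# celeration line; under the null each falls above with probability 0.5.
p_value = binomial_tail(9, 10)
print(round(p_value, 4))  # 0.0107 -> significant at p < 0.05
```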

Conflict of interest

The authors declare no conflict of interest.

  • 1. Hartung DM, Touchette D. Overview of clinical research design. American Journal of Health-System Pharmacy. 2009; 66 (4):398-408. DOI: 10.2146/ajhp080300
  • 2. Callas PW. Searching the biomedical literature: Research study designs and critical appraisal. Clinical Laboratory Science. 2008; 21 (1):42-48
  • 3. Janosky JE. Use of the single subject design for practice based primary care research. Postgraduate Medicine. 2005; 81 (959):549-551
  • 4. Parab S, Bhalerao S. Study designs. International Journal of Ayurveda Research. 2010; 1 (2):128-131. DOI: 10.4103/0974-7788.64406
  • 5. Gonnella C. Single-subject experimental paradigm as a clinical decision tool. Physical Therapy. 1989; 69 :601-609
  • 6. Siggelkow N. What’s in a name? The Academy of Management Journal. 2007; 50 (1):30-34
  • 7. Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qualitative Inquiry. 2011; 17 (6):511-521. DOI: 10.1177/1077800411409884
  • 8. Yoo WG. Effect of a combined thoracic and backward lifting exercise on the thoracic kyphosis angle and intercostal muscle pain. Journal of Physical Therapy Science. 2017; 29 (8):1481-1482. DOI: 10.1589/jpts.29.1481
  • 9. Portney LG, Watkins MP. Foundations of Clinical Research: Applications to Practice. Philadelphia, PA: F.A. Davis Company; 2015
  • 10. Horner RH, Swaminathan H, Sugai G, Smolkowski K. Considerations for the systematic analysis and use of single-case research. Education &amp; Treatment of Children. 2012; 35 (2):269-290
  • 11. Lobo MA, Moeyaert M, Baraldi Cunha A, Babik I. Single-case design, analysis, and quality assessment for intervention research. Journal of Neurologic Physical Therapy. 2017; 41 (3):187-197. DOI: 10.1097/NPT.0000000000000187
  • 12. Leelarungrayub D, Pothongsunun P, Yankai A, Pratanaphon S. Acute clinical benefits of chest wall-stretching exercise on expired tidal volume, dyspnea and chest expansion in a patient with chronic obstructive pulmonary disease: A single case study. Journal of Bodywork and Movement Therapies. 2009; 13 (4):338-343. DOI: 10.1016/j.jbmt.2008.11.004
  • 13. Leelarungrayub J, Pinkaew D, Wonglangka K, Eungpinichpong W, Klaphajone J. Short-term pulmonary rehabilitation for a female patient with chronic scleroderma under a single-case research design. Clinical Medicine Insights: Circulatory, Respiratory and Pulmonary Medicine. 2016; 10 :11-17 eCollection 2016
  • 14. Barlow DH, Hersen M. Single-case experimental designs. Uses in applied clinical research. Archives of General Psychiatry. 1973; 29 (3):319-325
  • 15. Kazdin AE. Single Case Research Designs: Methods for Clinical and Applied Setting. New York: Oxford University Press; 1982
  • 16. Busse RT, Kratochwill TR, Elliott SN. Meta-analysis for single-case consultation outcomes: Applications to research and practice. Journal of School Psychology. 1995; 33 (4):269-285. DOI: 10.1016/0022-4405(95)00014-D
  • 17. Wolery M, Harris SR. Interpreting results of single-subject research designs. Physical Therapy. 1982; 62 (4):445-452
  • 18. White OR, Haring NG. Exceptional Teaching. 2nd ed. Columbus, OH: Charles; 1980
  • 19. Kazdin AE. Unobtrusive measures in behavioral assessment. Journal of Applied Behavior Analysis. 1979; 12 (4):713-724
  • 20. White OR. Data-based instruction: Evaluating educational progress. In: Cone JD, Hawkins RP, editors. Behavioral Assessments: New Directions in Clinical Psychology. New York: Brunner/Mazel; 1977
  • 21. Bloom M, Fischer J. Evaluating Practice: Guidelines for the Accountable Professional. Englewood Cliffs, NJ: Prentice-Hall; 1982
  • 22. Matyas TA, Greenwood KM. Serial dependency in single-case time series. In: Franklin RD, Allison DB, Gorman BS, editors. Design and Analysis of Single-Case Research. Mahwah, NJ: Lawrence Erlbaum Associates; 1997. pp. 215-243
  • 23. Jones RR, Weinrott MR, Vaught RS. Effects of serial dependency on the agreement between visual and statistical inference. Journal of Applied Behavior Analysis. 1978; 11 (2):277-283
  • 24. Ottenbacher KJ. Evaluating Clinical Change: Strategies for Occupational and Physical Therapists. Sydney: Williams &amp; Wilkins; 1986
  • 25. Bloom M. The Paradox of Helping: Introduction in the Philosophy of Scientific Practice. New York: Macmillan Publishing; 1975
  • 26. Gottman JM, Leiblum SR. How to Do Psychotherapy and how to Evaluate it. New York: Holt, Rinehart & Winston; 1974
  • 27. Siegel S. Nonparametric Statistics. New York: McGraw-Hill; 1956
  • 28. Tryon WW. A simplified time-series analysis for evaluating treatment inter-ventions. Journal of Applied Behavior Analysis. 1982; 15 (3):423-429
  • 29. Caetano SJ, Sonpavde G, Pond GR. C-statistic: A brief explanation of its construction, interpretation and limitations. European Journal of Cancer. 2018; 90 :130-132. DOI: 10.1016/j.ejca.2017.10.027 Epub 2017 Dec 5
  • 30. Krasny-Pacini A, Evans J. Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Annals of Physical and Rehabilitation Medicine. 2018; 61 (3):164-179. DOI: 10.1016/j.rehab.2017.12.002 Epub 2017 Dec 15
  • 31. Reichow B, Barton EE, Maggin DM. Development and applications of the single-case design risk of bias tool for evaluating single-case design research study reports. Research in Developmental Disabilities. 2018; 79 :53-64. DOI: 10.1016/j.ridd.2018.05.008 Epub 2018 Jun 27
  • 32. Tincani M, De Mers M. Meta-analysis of single-case research design studies on instructional pacing. Behavior Modification. 2016; 40 (6):799-824 Epub 2016 Apr 11
  • 33. Sexton-Radek K. Single case designs in psychology practice. Health Psychology Research. 2014; 2 (3):1551. DOI: 10.4081/hpr.2014.1551. eCollection 2014 Nov 6

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Edited by Mario Bernardo-Filho

Published: 01 April 2020



1st Edition

Single-Case Research Methods in Sport and Exercise Psychology


Description

What is single-case research? How can single-case methods be used within sport and exercise? Single-case research is a powerful method for examining change in outcome variables such as behaviour, performance and psychological constructs, and for assessing the efficacy of interventions. It has innumerable uses within the context of sport and exercise science, such as in the development of more effective performance techniques for athletes and sportspeople and in helping us to better understand exercise behaviours in clinical populations. However, the fundamental principles and techniques of single-case research have not always been clearly understood by students and researchers working in these fields. Single-Case Research Methods in Sport and Exercise Psychology is the first book to fully explain single-case research in the context of sport and exercise. Starting with first principles, the book offers a comprehensive introduction to the single-case research process, from study design to data analysis and presentation. Including case studies and examples from across sport and exercise psychology, the book provides practical guidance for students and researchers and demonstrates the advantages and common pitfalls of single-case research for anybody working in applied or behavioural science in a sport or exercise setting.

About the Authors

Jamie Barker is Senior Lecturer in Sport and Exercise Psychology in the Department of Sport and Exercise, Staffordshire University, UK. Paul McCarthy is a Lecturer in Psychology in the Department of Psychology, Glasgow Caledonian University, UK. Marc Jones is Reader in Sport and Exercise Psychology in the Department of Sport and Exercise, Staffordshire University, UK. Aidan Moran is Professor of Cognitive Psychology and Director of the Psychology Research Laboratory at University College, Dublin, Republic of Ireland.


The Advantages and Limitations of Single Case Study Analysis


As Andrew Bennett and Colin Elman have recently noted, qualitative research methods presently enjoy “an almost unprecedented popularity and vitality… in the international relations sub-field”, such that they are now “indisputably prominent, if not pre-eminent” (2010: 499). This is, they suggest, due in no small part to the considerable advantages that case study methods in particular have to offer in studying the “complex and relatively unstructured and infrequent phenomena that lie at the heart of the subfield” (Bennett and Elman, 2007: 171). Using selected examples from within the International Relations literature[1], this paper aims to provide a brief overview of the main principles and distinctive advantages and limitations of single case study analysis. Divided into three inter-related sections, the paper therefore begins by first identifying the underlying principles that serve to constitute the case study as a particular research strategy, noting the somewhat contested nature of the approach in ontological, epistemological, and methodological terms. The second part then looks to the principal single case study types and their associated advantages, including those from within the recent ‘third generation’ of qualitative International Relations (IR) research. The final section of the paper then discusses the most commonly articulated limitations of single case studies; while accepting their susceptibility to criticism, it is however suggested that such weaknesses are somewhat exaggerated. The paper concludes that single case study analysis has a great deal to offer as a means of both understanding and explaining contemporary international relations.

The term ‘case study’, John Gerring has suggested, is “a definitional morass… Evidently, researchers have many different things in mind when they talk about case study research” (2006a: 17). It is possible, however, to distil some of the more commonly-agreed principles. One of the most prominent advocates of case study research, Robert Yin (2009: 14) defines it as “an empirical enquiry that investigates a contemporary phenomenon in depth and within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident”. What this definition usefully captures is that case studies are intended – unlike more superficial and generalising methods – to provide a level of detail and understanding, similar to the ethnographer Clifford Geertz’s (1973) notion of ‘thick description’, that allows for the thorough analysis of the complex and particularistic nature of distinct phenomena. Another frequently cited proponent of the approach, Robert Stake, notes that as a form of research the case study “is defined by interest in an individual case, not by the methods of inquiry used”, and that “the object of study is a specific, unique, bounded system” (2008: 443, 445). As such, three key points can be derived from this – respectively concerning issues of ontology, epistemology, and methodology – that are central to the principles of single case study research.

First, the vital notion of ‘boundedness’ when it comes to the particular unit of analysis means that defining principles should incorporate both the synchronic (spatial) and diachronic (temporal) elements of any so-called ‘case’. As Gerring puts it, a case study should be “an intensive study of a single unit… a spatially bounded phenomenon – e.g. a nation-state, revolution, political party, election, or person – observed at a single point in time or over some delimited period of time” (2004: 342). It is important to note, however, that – whereas Gerring refers to a single unit of analysis – it may be that attention also necessarily be given to particular sub-units. This points to the important difference between what Yin refers to as an ‘holistic’ case design, with a single unit of analysis, and an ’embedded’ case design with multiple units of analysis (Yin, 2009: 50-52). The former, for example, would examine only the overall nature of an international organization, whereas the latter would also look to specific departments, programmes, or policies etc.

Secondly, as Tim May notes of the case study approach, “even the most fervent advocates acknowledge that the term has entered into understandings with little specification or discussion of purpose and process” (2011: 220). One of the principal reasons for this, he argues, is the relationship between the use of case studies in social research and the differing epistemological traditions – positivist, interpretivist, and others – within which it has been utilised. Philosophy of science concerns are obviously a complex issue, and beyond the scope of much of this paper. That said, the issue of how it is that we know what we know – of whether or not a single independent reality exists of which we as researchers can seek to provide explanation – does lead us to an important distinction to be made between so-called idiographic and nomothetic case studies (Gerring, 2006b). The former refers to those which purport to explain only a single case, are concerned with particularisation, and hence are typically (although not exclusively) associated with more interpretivist approaches. The latter are those focused studies that reflect upon a larger population and are more concerned with generalisation, as is often so with more positivist approaches[2]. The importance of this distinction, and its relation to the advantages and limitations of single case study analysis, is returned to below.

Thirdly, in methodological terms, given that the case study has often been seen as more of an interpretivist and idiographic tool, it has also been associated with a distinctly qualitative approach (Bryman, 2009: 67-68). However, as Yin notes, case studies can – like all forms of social science research – be exploratory, descriptive, and/or explanatory in nature. It is “a common misconception”, he notes, “that the various research methods should be arrayed hierarchically… many social scientists still deeply believe that case studies are only appropriate for the exploratory phase of an investigation” (Yin, 2009: 6). If case studies can reliably perform any or all three of these roles – and given that their in-depth approach may also require multiple sources of data and the within-case triangulation of methods – then it becomes readily apparent that they should not be limited to only one research paradigm. Exploratory and descriptive studies usually tend toward the qualitative and inductive, whereas explanatory studies are more often quantitative and deductive (David and Sutton, 2011: 165-166). As such, the association of case study analysis with a qualitative approach is a “methodological affinity, not a definitional requirement” (Gerring, 2006a: 36). It is perhaps better to think of case studies as transparadigmatic; it is mistaken to assume single case study analysis to adhere exclusively to a qualitative methodology (or an interpretivist epistemology) even if it – or rather, practitioners of it – may be so inclined. By extension, this also implies that single case study analysis therefore remains an option for a multitude of IR theories and issue areas; it is how this can be put to researchers’ advantage that is the subject of the next section.

Having elucidated the defining principles of the single case study approach, the paper now turns to an overview of its main benefits. As noted above, a lack of consensus still exists within the wider social science literature on the principles and purposes – and by extension the advantages and limitations – of case study research. Given that this paper is directed towards the particular sub-field of International Relations, it suggests Bennett and Elman’s (2010) more discipline-specific understanding of contemporary case study methods as an analytical framework. It begins however, by discussing Harry Eckstein’s seminal (1975) contribution to the potential advantages of the case study approach within the wider social sciences.

Eckstein proposed a taxonomy which usefully identified what he considered to be the five most relevant types of case study. Firstly were so-called configurative-idiographic studies, distinctly interpretivist in orientation and predicated on the assumption that “one cannot attain prediction and control in the natural science sense, but only understanding (verstehen)… subjective values and modes of cognition are crucial” (1975: 132). Eckstein’s own sceptical view was that any interpreter ‘simply’ considers a body of observations that are not self-explanatory and “without hard rules of interpretation, may discern in them any number of patterns that are more or less equally plausible” (1975: 134). Those of a more post-modernist bent, of course – sharing an “incredulity towards meta-narratives”, in Lyotard’s (1994: xxiv) evocative phrase – would instead suggest that this more free-form approach may actually be advantageous in delving into the subtleties and particularities of individual cases.

Eckstein’s four other types of case study, meanwhile, promote a more nomothetic (and positivist) usage. As described, disciplined-configurative studies were essentially about the use of pre-existing general theories, with a case acting “passively, in the main, as a receptacle for putting theories to work” (Eckstein, 1975: 136). As opposed to the opportunity this presented primarily for theory application, Eckstein identified heuristic case studies as explicit theoretical stimulants – thus having instead the intended advantage of theory-building. So-called plausibility probes entailed preliminary attempts to determine whether initial hypotheses should be considered sound enough to warrant more rigorous and extensive testing. Finally, and perhaps most notably, Eckstein then outlined the idea of crucial case studies, within which he also included the idea of ‘most-likely’ and ‘least-likely’ cases; the essential characteristic of crucial cases being their specific theory-testing function.

Whilst Eckstein’s was an early contribution to refining the case study approach, Yin’s (2009: 47-52) more recent delineation of possible single case designs similarly assigns them roles in the applying, testing, or building of theory, as well as in the study of unique cases[3]. As a subset of the latter, however, Jack Levy (2008) notes that the advantages of idiographic cases are actually twofold. Firstly, as inductive/descriptive cases – akin to Eckstein’s configurative-idiographic cases – whereby they are highly descriptive, lacking in an explicit theoretical framework and therefore taking the form of “total history”. Secondly, they can operate as theory-guided case studies, but ones that seek only to explain or interpret a single historical episode rather than generalise beyond the case. Not only does this therefore incorporate ‘single-outcome’ studies concerned with establishing causal inference (Gerring, 2006b), it also provides room for the more postmodern approaches within IR theory, such as discourse analysis, that may have developed a distinct methodology but do not seek traditional social scientific forms of explanation.

Applying specifically to the state of the field in contemporary IR, Bennett and Elman identify a ‘third generation’ of mainstream qualitative scholars – rooted in a pragmatic scientific realist epistemology and advocating a pluralistic approach to methodology – that have, over the last fifteen years, “revised or added to essentially every aspect of traditional case study research methods” (2010: 502). They identify ‘process tracing’ as having emerged from this as a central method of within-case analysis. As Bennett and Checkel observe, this carries the advantage of offering a methodologically rigorous “analysis of evidence on processes, sequences, and conjunctures of events within a case, for the purposes of either developing or testing hypotheses about causal mechanisms that might causally explain the case” (2012: 10).

Harnessing various methods, process tracing may entail the inductive use of evidence from within a case to develop explanatory hypotheses, and deductive examination of the observable implications of hypothesised causal mechanisms to test their explanatory capability[4]. It involves providing not only a coherent explanation of the key sequential steps in a hypothesised process, but also sensitivity to alternative explanations as well as potential biases in the available evidence (Bennett and Elman 2010: 503-504). John Owen (1994), for example, demonstrates the advantages of process tracing in analysing whether the causal factors underpinning democratic peace theory are – as liberalism suggests – not epiphenomenal, but variously normative, institutional, or some given combination of the two or other unexplained mechanism inherent to liberal states. Within-case process tracing has also been identified as advantageous in addressing the complexity of path-dependent explanations and critical junctures – as for example with the development of political regime types – and their constituent elements of causal possibility, contingency, closure, and constraint (Bennett and Elman, 2006b).

Bennett and Elman (2010: 505-506) also identify the advantages of single case studies that are implicitly comparative: deviant, most-likely, least-likely, and crucial cases. Of these, so-called deviant cases are those whose outcome does not fit with prior theoretical expectations or wider empirical patterns – again, the use of inductive process tracing has the advantage of potentially generating new hypotheses from these, either particular to that individual case or potentially generalisable to a broader population. A classic example here is that of post-independence India as an outlier to the standard modernisation theory of democratisation, which holds that higher levels of socio-economic development are typically required for the transition to, and consolidation of, democratic rule (Lipset, 1959; Diamond, 1992). Absent these factors, MacMillan’s single case study analysis (2008) suggests the particularistic importance of the British colonial heritage, the ideology and leadership of the Indian National Congress, and the size and heterogeneity of the federal state.

Most-likely cases, as per Eckstein above, are those in which a theory is to be considered likely to provide a good explanation if it is to have any application at all, whereas least-likely cases are ‘tough test’ ones in which the posited theory is unlikely to provide good explanation (Bennett and Elman, 2010: 505). Levy (2008) neatly refers to the inferential logic of the least-likely case as the ‘Sinatra inference’ – if a theory can make it here, it can make it anywhere. Conversely, if a theory cannot pass a most-likely case, it is seriously impugned. Single case analysis can therefore be valuable for the testing of theoretical propositions, provided that predictions are relatively precise and measurement error is low (Levy, 2008: 12-13). As Gerring rightly observes of this potential for falsification:

“a positivist orientation toward the work of social science militates toward a greater appreciation of the case study format, not a denigration of that format, as is usually supposed” (Gerring, 2007: 247, emphasis added).

In summary, the various forms of single case study analysis can – through the application of multiple qualitative and/or quantitative research methods – provide a nuanced, empirically-rich, holistic account of specific phenomena. This may be particularly appropriate for those phenomena that are simply less amenable to more superficial measures and tests (or indeed any substantive form of quantification) as well as those for which our reasons for understanding and/or explaining them are irreducibly subjective – as, for example, with many of the normative and ethical issues associated with the practice of international relations. From various epistemological and analytical standpoints, single case study analysis can incorporate both idiographic sui generis cases and, where the potential for generalisation may exist, nomothetic case studies suitable for the testing and building of causal hypotheses. Finally, it should not be ignored that a signal advantage of the case study – with particular relevance to international relations – also exists at a more practical rather than theoretical level. This is, as Eckstein noted, “that it is economical for all resources: money, manpower, time, effort… especially important, of course, if studies are inherently costly, as they are if units are complex collective individuals ” (1975: 149-150, emphasis added).

Limitations

Single case study analysis has, however, been subject to a number of criticisms, the most common of which concern the inter-related issues of methodological rigour, researcher subjectivity, and external validity. With regard to the first point, the prototypical view here is that of Zeev Maoz (2002: 164-165), who suggests that “the use of the case study absolves the author from any kind of methodological considerations. Case studies have become in many cases a synonym for freeform research where anything goes”. The absence of systematic procedures for case study research is something that Yin (2009: 14-15) sees as traditionally the greatest concern, given the relative absence of methodological guidelines. As the previous section suggests, this critique seems somewhat unfair; many contemporary case study practitioners – representing various strands of IR theory – have increasingly sought to clarify and develop their methodological techniques and epistemological grounding (Bennett and Elman, 2010: 499-500).

A second issue, again incorporating issues of construct validity, concerns the reliability and replicability of various forms of single case study analysis. This is usually tied to a broader critique of qualitative research methods as a whole. However, whereas the latter obviously tend toward an explicitly-acknowledged interpretive basis for meanings, reasons, and understandings:

“quantitative measures appear objective, but only so long as we don’t ask questions about where and how the data were produced… pure objectivity is not a meaningful concept if the goal is to measure intangibles [as] these concepts only exist because we can interpret them” (Berg and Lune, 2010: 340).

The question of researcher subjectivity is a valid one, and it may be intended only as a methodological critique of what are obviously less formalised and researcher-independent methods (Verschuren, 2003). Owen (1994) and Layne’s (1994) contradictory process tracing results regarding interdemocratic war-avoidance during the Anglo-American crisis of 1861 to 1863 – from liberal and realist standpoints respectively – are a useful example. However, the critique also rests on certain assumptions that can raise deeper and potentially irreconcilable ontological and epistemological issues. There are, regardless, plenty of scholars, such as Bent Flyvbjerg (2006: 237), who suggest that the case study contains no greater bias toward verification than other methods of inquiry, and that “on the contrary, experience indicates that the case study contains a greater bias toward falsification of preconceived notions than toward verification”.

The third and arguably most prominent critique of single case study analysis is the issue of external validity or generalisability. How is it that one case can reliably offer anything beyond the particular? “We always do better (or, in the extreme, no worse) with more observation as the basis of our generalization”, as King et al. write; “in all social science research and all prediction, it is important that we be as explicit as possible about the degree of uncertainty that accompanies our prediction” (1994: 212). This is an unavoidably valid criticism. It may be that theories which pass a single crucial case study test, for example, require rare antecedent conditions and therefore actually have little explanatory range. These conditions may emerge more clearly, as Van Evera (1997: 51-54) notes, from large-N studies in which cases that lack them present themselves as outliers exhibiting a theory’s cause but without its predicted outcome. As with the case of Indian democratisation above, it would logically be preferable to conduct large-N analysis beforehand to identify that state’s non-representative nature in relation to the broader population.

There are, however, three important qualifiers to the argument about generalisation that deserve particular mention here. The first is that with regard to an idiographic single-outcome case study, as Eckstein notes, the criticism is “mitigated by the fact that its capability to do so [is] never claimed by its exponents; in fact it is often explicitly repudiated” (1975: 134). Criticism of generalisability is of little relevance when the intention is one of particularisation. A second qualifier relates to the difference between statistical and analytical generalisation; single case studies are clearly less appropriate for the former but arguably retain significant utility for the latter – the difference also between explanatory and exploratory, or theory-testing and theory-building, as discussed above. As Gerring puts it, “theory confirmation/disconfirmation is not the case study’s strong suit” (2004: 350). A third qualification relates to the issue of case selection. As Seawright and Gerring (2008) note, the generalisability of case studies can be increased by the strategic selection of cases. Representative or random samples may not be the most appropriate, given that they may not provide the richest insight (or indeed, that a random and unknown deviant case may appear). Instead, and properly used, atypical or extreme cases “often reveal more information because they activate more actors… and more basic mechanisms in the situation studied” (Flyvbjerg, 2006). Of course, this also points to the very serious limitation, as hinted at with the case of India above, that poor case selection may alternatively lead to overgeneralisation and/or grievous misunderstandings of the relationship between variables or processes (Bennett and Elman, 2006a: 460-463).

As Tim May (2011: 226) notes, “the goal for many proponents of case studies […] is to overcome dichotomies between generalizing and particularizing, quantitative and qualitative, deductive and inductive techniques”. Research aims should drive methodological choices, rather than narrow and dogmatic preconceived approaches. As demonstrated above, there are various advantages to both idiographic and nomothetic single case study analyses – notably the empirically-rich, context-specific, holistic accounts that they have to offer, and their contribution to theory-building and, to a lesser extent, that of theory-testing. Furthermore, while they do possess clear limitations, any research method involves necessary trade-offs; the inherent weaknesses of any one method, however, can potentially be offset by situating them within a broader, pluralistic mixed-method research strategy. Whether or not single case studies are used in this fashion, they clearly have a great deal to offer.

References 

Bennett, A. and Checkel, J. T. (2012) ‘Process Tracing: From Philosophical Roots to Best Practice’, Simons Papers in Security and Development, No. 21/2012, School for International Studies, Simon Fraser University: Vancouver.

Bennett, A. and Elman, C. (2006a) ‘Qualitative Research: Recent Developments in Case Study Methods’, Annual Review of Political Science , 9, 455-476.

Bennett, A. and Elman, C. (2006b) ‘Complex Causal Relations and Case Study Methods: The Example of Path Dependence’, Political Analysis , 14, 3, 250-267.

Bennett, A. and Elman, C. (2007) ‘Case Study Methods in the International Relations Subfield’, Comparative Political Studies , 40, 2, 170-195.

Bennett, A. and Elman, C. (2010) Case Study Methods. In C. Reus-Smit and D. Snidal (eds) The Oxford Handbook of International Relations . Oxford University Press: Oxford. Ch. 29.

Berg, B. and Lune, H. (2012) Qualitative Research Methods for the Social Sciences . Pearson: London.

Bryman, A. (2012) Social Research Methods . Oxford University Press: Oxford.

David, M. and Sutton, C. D. (2011) Social Research: An Introduction . SAGE Publications Ltd: London.

Diamond, J. (1992) ‘Economic development and democracy reconsidered’, American Behavioral Scientist , 35, 4/5, 450-499.

Eckstein, H. (1975) Case Study and Theory in Political Science. In R. Gomm, M. Hammersley, and P. Foster (eds) Case Study Method . SAGE Publications Ltd: London.

Flyvbjerg, B. (2006) ‘Five Misunderstandings About Case-Study Research’, Qualitative Inquiry , 12, 2, 219-245.

Geertz, C. (1973) The Interpretation of Cultures: Selected Essays by Clifford Geertz . Basic Books Inc: New York.

Gerring, J. (2004) ‘What is a Case Study and What Is It Good for?’, American Political Science Review , 98, 2, 341-354.

Gerring, J. (2006a) Case Study Research: Principles and Practices . Cambridge University Press: Cambridge.

Gerring, J. (2006b) ‘Single-Outcome Studies: A Methodological Primer’, International Sociology , 21, 5, 707-734.

Gerring, J. (2007) ‘Is There a (Viable) Crucial-Case Method?’, Comparative Political Studies , 40, 3, 231-253.

King, G., Keohane, R. O. and Verba, S. (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research . Princeton University Press: Chichester.

Layne, C. (1994) ‘Kant or Cant: The Myth of the Democratic Peace’, International Security , 19, 2, 5-49.

Levy, J. S. (2008) ‘Case Studies: Types, Designs, and Logics of Inference’, Conflict Management and Peace Science , 25, 1-18.

Lipset, S. M. (1959) ‘Some Social Requisites of Democracy: Economic Development and Political Legitimacy’, The American Political Science Review , 53, 1, 69-105.

Lyotard, J-F. (1984) The Postmodern Condition: A Report on Knowledge . University of Minnesota Press: Minneapolis.

MacMillan, A. (2008) ‘Deviant Democratization in India’, Democratization , 15, 4, 733-749.

Maoz, Z. (2002) Case study methodology in international studies: from storytelling to hypothesis testing. In F. P. Harvey and M. Brecher (eds) Evaluating Methodology in International Studies . University of Michigan Press: Ann Arbor.

May, T. (2011) Social Research: Issues, Methods and Process . Open University Press: Maidenhead.

Owen, J. M. (1994) ‘How Liberalism Produces Democratic Peace’, International Security , 19, 2, 87-125.

Seawright, J. and Gerring, J. (2008) ‘Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options’, Political Research Quarterly , 61, 2, 294-308.

Stake, R. E. (2008) Qualitative Case Studies. In N. K. Denzin and Y. S. Lincoln (eds) Strategies of Qualitative Inquiry . Sage Publications: Los Angeles. Ch. 17.

Van Evera, S. (1997) Guide to Methods for Students of Political Science . Cornell University Press: Ithaca.

Verschuren, P. J. M. (2003) ‘Case study as a research strategy: some ambiguities and opportunities’, International Journal of Social Research Methodology , 6, 2, 121-139.

Yin, R. K. (2009) Case Study Research: Design and Methods . SAGE Publications Ltd: London.

[1] The paper follows convention by differentiating between ‘International Relations’ as the academic discipline and ‘international relations’ as the subject of study.

[2] There is some similarity here with Stake’s (2008: 445-447) notion of intrinsic cases, those undertaken for a better understanding of the particular case, and instrumental ones that provide insight for the purposes of a wider external interest.

[3] These may be unique in the idiographic sense, or in nomothetic terms as an exception to the generalising suppositions of either probabilistic or deterministic theories (as per deviant cases, below).

[4] Although there are “philosophical hurdles to mount”, according to Bennett and Checkel, there exists no a priori reason as to why process tracing (as typically grounded in scientific realism) is fundamentally incompatible with various strands of positivism or interpretivism (2012: 18-19). By extension, it can therefore be incorporated by a range of contemporary mainstream IR theories.

Written by: Ben Willis | Written at: University of Plymouth | Written for: David Brockington | Date written: January 2013


Single-Case Research Methods in Sport and Exercise Psychology


  • What is single-case research?
  • How can single-case methods be used within sport and exercise?

Single-case research is a powerful method for examining change in outcome variables such as behaviour, performance and psychological constructs, and for assessing the efficacy of interventions. It has innumerable uses within the context of sport and exercise science, such as in the development of more effective performance techniques for athletes and sportspeople and in helping us to better understand exercise behaviours in clinical populations. However, the fundamental principles and techniques of single-case research have not always been clearly understood by students and researchers working in these fields.

Single-Case Research Methods in Sport and Exercise Psychology is the first book to fully explain single-case research in the context of sport and exercise. Starting with first principles, the book offers a comprehensive introduction to the single-case research process, from study design to data analysis and presentation. Including case studies and examples from across sport and exercise psychology, the book provides practical guidance for students and researchers and demonstrates the advantages and common pitfalls of single-case research for anybody working in applied or behavioural science in a sport or exercise setting.

TABLE OF CONTENTS

  • Chapter 1 (13 pages): Introduction to Single-Case Research
  • Chapter 2 (11 pages): History and Philosophy of Single-Case Research in Sport and Exercise
  • Chapter 3 (21 pages): General Procedures in Single-Case Research
  • Chapter 4 (17 pages): Assessing Behaviour in Sport and Exercise
  • Chapter 5 (22 pages): The Withdrawal Design in Sport and Exercise
  • Chapter 6 (23 pages): Multiple-Baseline Designs in Sport and Exercise
  • Chapter 7 (17 pages): The Changing-Criterion Design in Sport and Exercise
  • Chapter 8 (20 pages): The Alternating-Treatments Design in Sport and Exercise
  • Chapter 9 (22 pages): Analysing Data in Single-Case Research
  • Chapter 10 (12 pages): Single-Case Research in Sport and Exercise



Single-Case Design, Analysis, and Quality Assessment for Intervention Research

Lobo, Michele A. PT, PhD; Moeyaert, Mariola PhD; Baraldi Cunha, Andrea PT, PhD; Babik, Iryna PhD

Biomechanics & Movement Science Program, Department of Physical Therapy, University of Delaware, Newark, Delaware (M.A.L., A.B.C., I.B.); and Division of Educational Psychology & Methodology, State University of New York at Albany, Albany, New York (M.M.).

Correspondence: Michele A. Lobo, PT, PhD, Biomechanics & Movement Science Program, Department of Physical Therapy, University of Delaware, Newark, DE 19713 ( [email protected] ).

This research was supported by the National Institutes of Health, Eunice Kennedy Shriver National Institute of Child Health & Human Development (1R21HD076092-01A1, Lobo PI), and the Delaware Economic Development Office (Grant #109). Some of the information in this article was presented at the IV Step Meeting in Columbus, Ohio, June 2016. The authors declare no conflict of interest.

Background and Purpose: 

The purpose of this article is to describe single-case studies and contrast them with case studies and randomized clinical trials. We highlight current research designs, analysis techniques, and quality appraisal tools relevant for single-case rehabilitation research.

Summary of Key Points: 

Single-case studies can provide a viable alternative to large group studies such as randomized clinical trials. Single-case studies involve repeated measures and manipulation of an independent variable. They can be designed to have strong internal validity for assessing causal relationships between interventions and outcomes, as well as external validity for generalizability of results, particularly when the study designs incorporate replication, randomization, and multiple participants. Single-case studies should not be confused with case studies/series (ie, case reports), which are reports of clinical management of a patient or a small series of patients.

Recommendations for Clinical Practice: 

When rigorously designed, single-case studies can be particularly useful experimental designs in a variety of situations, such as when research resources are limited, studied conditions have low incidences, or when examining effects of novel or expensive interventions. Readers will be directed to examples from the published literature in which these techniques have been discussed, evaluated for quality, and implemented.

INTRODUCTION

In this special interest article we present current tools and techniques relevant for single-case rehabilitation research. Single-case (SC) studies have been identified by a variety of names, including “n of 1 studies” and “single-subject” studies. The term “single-case study” is preferred here because the other terms suggest these studies include only 1 participant. In fact, as discussed later, for purposes of replication and improved generalizability, the strongest SC studies commonly include more than 1 participant.

An SC study should not be confused with a “case study/series” (also called “case report”). In a typical case study/series, a single patient or small series of patients is involved, but there is no purposeful manipulation of an independent variable, nor are there necessarily repeated measures. Most case studies/series are reported in a narrative way, whereas results of SC studies are presented numerically or graphically.1,2 This article defines SC studies, contrasts them with randomized clinical trials, discusses how they can be used to scientifically test hypotheses, and highlights current research designs, analysis techniques, and quality appraisal tools that may be useful for rehabilitation researchers.

In SC studies, measurements of outcome (dependent variables) are recorded repeatedly for individual participants across time and varying levels of an intervention (independent variables).1–5 These varying levels of intervention are referred to as “phases,” with 1 phase serving as a baseline or comparison, so each participant serves as his/her own control.2 In contrast to case studies and case series, in which participants are observed across time without experimental manipulation of the independent variable, SC studies employ systematic manipulation of the independent variable to allow for hypothesis testing.1,6 As a result, SC studies allow for rigorous experimental evaluation of intervention effects and provide a strong basis for establishing causal inferences. Advances in design and analysis techniques for SC studies in recent decades have made them increasingly popular in educational and psychological research. Yet, the authors believe SC studies have been undervalued in rehabilitation research, where randomized clinical trials (RCTs) are typically recommended as the optimal research design to answer questions related to interventions.7 In reality, there are advantages and disadvantages to both SC studies and RCTs that should be carefully considered to select the best design to answer individual research questions. Although there are a variety of other research designs that could be utilized in rehabilitation research, only SC studies and RCTs are discussed here because SC studies are the focus of this article and RCTs are the most highly recommended design for intervention studies.7

When designed and conducted properly, RCTs offer strong evidence that changes in outcomes may be related to provision of an intervention. However, RCTs require monetary, time, and personnel resources that many researchers, especially those in clinical settings, may not have available.8 RCTs also require access to large numbers of consenting participants who meet strict inclusion and exclusion criteria, which can limit variability of the sample and generalizability of results.9 The requirement for large participant numbers may make RCTs difficult to perform in many settings, such as rural and suburban settings, and for many populations, such as those with diagnoses marked by lower prevalence.8 Relying exclusively on RCTs has the potential to produce bodies of research that are skewed to address the needs of some individuals while neglecting the needs of others. RCTs aim to include a large number of participants and to use random group assignment to create study groups that are similar to one another in terms of all potential confounding variables, but it is challenging to identify all confounding variables. Finally, the results of RCTs are typically presented in terms of group means and standard deviations that may not represent the true performance of any one participant.10 This presents a challenge for clinicians aiming to translate and implement group findings at the level of the individual.

SC studies can provide a scientifically rigorous alternative to RCTs for experimentally determining the effectiveness of interventions.1,2 SC studies can assess a variety of research questions, settings, cases, independent variables, and outcomes.11 There are many benefits to SC studies that make them appealing for intervention research. SC studies may require fewer resources than RCTs and can be performed in settings and with populations that do not allow for large numbers of participants.1,2 In SC studies, each participant serves as his/her own comparison, thus controlling for many confounding variables that can impact outcome in rehabilitation research, such as gender, age, socioeconomic level, cognition, home environment, and concurrent interventions.2,11 Results can be analyzed and presented to determine whether interventions resulted in changes at the level of the individual, the level at which rehabilitation professionals intervene.2,12 When properly designed and executed, SC studies can demonstrate strong internal validity to determine the likelihood of a causal relationship between the intervention and outcomes and external validity to generalize the findings to broader settings and populations.2,12,13

SINGLE-CASE RESEARCH DESIGNS FOR INTERVENTION RESEARCH

There are a variety of SC designs that can be used to study the effectiveness of interventions. Here we discuss (1) AB designs, (2) reversal designs, (3) multiple baseline designs, and (4) alternating treatment designs, as well as ways replication and randomization techniques can be used to improve the internal validity of all of these designs.1–3,12–14

The simplest of these designs is the AB design15 (Figure 1). This design involves repeated measurement of outcome variables throughout a baseline control/comparison phase (A) and then throughout an intervention phase (B). When possible, it is recommended that a stable level and/or rate of change in performance be observed within the baseline phase before transitioning into the intervention phase.2 As with all SC designs, it is also recommended that there be a minimum of 5 data points in each phase.1,2 There is no randomization or replication of the baseline or intervention phases in the basic AB design.2 Therefore, AB designs have problems with internal validity and generalizability of results.12 They are weak in establishing causality because changes in outcome variables could be related to a variety of other factors, including maturation, experience, learning, and practice effects.2,12 Sample data from a single-case AB study performed to assess the impact of Floor Play intervention on social interaction and communication skills for a child with autism15 are shown in Figure 1.
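As a rough illustration of how the repeated measures from an AB study might be summarized, the sketch below computes the Percentage of Non-overlapping Data (PND), a common single-case effect metric. The data and function name are hypothetical; the article itself does not prescribe this particular statistic.

```python
import statistics

def percentage_nonoverlap(baseline, intervention):
    """Percentage of Non-overlapping Data (PND): the share of
    intervention-phase (B) points that exceed the highest
    baseline-phase (A) point. Values near 100% suggest a clear
    level change; low values suggest a weak or absent effect."""
    ceiling = max(baseline)
    exceeding = sum(1 for y in intervention if y > ceiling)
    return 100.0 * exceeding / len(intervention)

# Hypothetical outcome scores, 5 points per phase as recommended
baseline = [12, 14, 13, 12, 15]       # phase A
intervention = [18, 20, 19, 22, 21]   # phase B

print(f"A mean: {statistics.mean(baseline):.1f}")   # 13.2
print(f"B mean: {statistics.mean(intervention):.1f}")  # 20.0
print(f"PND: {percentage_nonoverlap(baseline, intervention):.0f}%")  # 100%
```

Because the basic AB design lacks replication, even a PND of 100% cannot by itself establish causality, for the reasons given above.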

[Figure 1]

If an intervention does not have carryover effects, it is recommended to use a reversal design.2 For example, a reversal A1BA2 design16 (Figure 2) includes alternation of the baseline and intervention phases, whereas a reversal A1B1A2B2 design17 (Figure 3) consists of alternation of 2 baseline (A1, A2) and 2 intervention (B1, B2) phases. Incorporating at least 4 phases in the reversal design (ie, A1B1A2B2 or A1B1A2B2A3B3...) allows for a stronger determination of a causal relationship between the intervention and outcome variables because the relationship can be demonstrated across at least 3 different points in time: change in outcome from A1 to B1, from B1 to A2, and from A2 to B2.18 Before using this design, however, researchers must determine that it is safe and ethical to withdraw the intervention, especially in cases where the intervention is effective and necessary.12
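The logic of demonstrating change at 3 or more points in time can be sketched as follows; the data and function name are hypothetical and serve only to illustrate the A1B1A2B2 reasoning.

```python
def transition_changes(phases):
    """Change in mean outcome at each phase transition (A1->B1,
    B1->A2, ...). The causal case is strongest when the outcome
    shifts in the expected direction at every transition."""
    names = [name for name, _ in phases]
    means = [sum(vals) / len(vals) for _, vals in phases]
    return [(f"{a}->{b}", round(later - earlier, 2))
            for a, b, earlier, later in zip(names, names[1:], means, means[1:])]

# Hypothetical A1-B1-A2-B2 reversal data, 5 points per phase
data = [
    ("A1", [10, 11, 10, 12, 11]),
    ("B1", [16, 17, 18, 17, 18]),
    ("A2", [12, 11, 12, 13, 12]),
    ("B2", [18, 19, 18, 20, 19]),
]
for label, delta in transition_changes(data):
    print(label, delta)  # rises in B phases, falls back in A2
```

Here the outcome rises at each A-to-B transition and partially returns toward baseline when the intervention is withdrawn, which is the pattern a reversal design looks for.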

[Figure 2]

A recent study used an ABA reversal SC study to determine the effectiveness of core stability training in 8 participants with multiple sclerosis.16 During the first 4 weekly data collections, the researchers ensured a stable baseline, which was followed by 8 weekly intervention data points, and concluded with 4 weekly withdrawal data points. Intervention significantly improved participants' walking and reaching performance (Figure 2).16 This A1BA2 design could have been strengthened by the addition of a second intervention phase for replication (A1B1A2B2). For instance, a single-case A1B1A2B2 withdrawal design aimed to assess the efficacy of rehabilitation using visuo-spatio-motor cueing for 2 participants with severe unilateral neglect after a severe right hemisphere stroke.17 Each phase included 8 data points. Statistically significant intervention-related improvement was observed, suggesting that visuo-spatio-motor cueing might be promising for treating individuals with very severe neglect (Figure 3).17

The reversal design can also incorporate a cross-over design where each participant experiences more than 1 type of intervention. For instance, a B1C1B2C2 design could be used to study the effects of 2 different interventions (B and C) on outcome measures. Challenges with including more than 1 intervention involve potential carryover effects from earlier interventions and order effects that may impact the measured effectiveness of the interventions.2,12 Including multiple participants and randomizing the order of intervention phase presentations are tools to help control for these types of effects.19

When an intervention permanently changes an individual's ability, a return-to-baseline performance is not feasible and reversal designs are not appropriate. Multiple baseline designs (MBDs) are useful in these situations (Figure 4).20 Multiple baseline designs feature staggered introduction of the intervention across time: each participant is randomly assigned to 1 of at least 3 experimental conditions characterized by the length of the baseline phase.21 These studies involve more than 1 participant, thus functioning as SC studies with replication across participants. Staggered introduction of the intervention allows for separation of intervention effects from those of maturation, experience, learning, and practice. For example, a multiple baseline SC study was used to investigate the effect of an antispasticity baclofen medication on stiffness in 5 adult males with spinal cord injury.20 The subjects were randomly assigned to receive 5 to 9 baseline data points with a placebo treatment before the initiation of the intervention phase with the medication. Both participants and assessors were blind to the experimental condition. The results suggested that baclofen might not be a universal treatment choice for all individuals with spasticity resulting from a traumatic spinal cord injury (Figure 4).20
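The random assignment of participants to staggered baseline lengths might be implemented as in the sketch below; the participant labels, session counts, and function name are hypothetical.

```python
import random

def assign_staggered_baselines(participants, baseline_lengths, seed=None):
    """Randomly assign each participant one of the staggered baseline
    lengths used in a multiple-baseline design, so the intervention
    starts at a different session for each participant."""
    if len(participants) != len(baseline_lengths):
        raise ValueError("need one baseline length per participant")
    rng = random.Random(seed)
    lengths = baseline_lengths[:]
    rng.shuffle(lengths)
    return dict(zip(participants, lengths))

schedule = assign_staggered_baselines(["P1", "P2", "P3"], [5, 7, 9], seed=42)
for p, n in schedule.items():
    print(f"{p}: {n} baseline sessions, intervention starts at session {n + 1}")
```

Because each participant starts the intervention at a different session, an outcome change that tracks each participant's own start point is hard to attribute to maturation or practice alone.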

[Figure 4]

The impact of 2 or more interventions can also be assessed via alternating treatment designs (ATDs). In ATDs, after establishing the baseline, the experimenter exposes subjects to different intervention conditions administered in close proximity for equal intervals (Figure 5).22 ATDs are prone to “carryover effects” when the effects of 1 intervention influence the observed outcomes of another intervention.1 As a result, such designs introduce unique challenges when attempting to determine the effects of any 1 intervention and have been less commonly utilized in rehabilitation. An ATD was used to monitor disruptive behaviors in the school setting throughout a baseline followed by an alternating treatment phase with randomized presentation of a control condition or an exercise condition.23 Results showed that 30 minutes of moderate to intense physical activity decreased behavioral disruptions through 90 minutes after the intervention.23 An ATD was also used to compare the effects of commercially available and custom-made video prompts on the performance of multistep cooking tasks in 4 participants with autism.22 Results showed that participants independently performed more steps with the custom-made video prompts (Figure 5).22

F5

Regardless of the SC study design, replication and randomization should be incorporated when possible to improve internal and external validity. 11 The reversal design is an example of replication across study phases. The minimum number of phase replications needed to meet quality standards is 3 (A1B1A2B2), but having 4 or more replications is highly recommended (A1B1A2B2A3...). 11 , 14 In cases when interventions aim to produce lasting changes in participants' abilities, replication of findings may be demonstrated by replicating intervention effects across multiple participants (as in multiple-participant AB designs), or across multiple settings, tasks, or service providers. When the results of an intervention are replicated across multiple reversals, participants, and/or contexts, there is an increased likelihood that a causal relationship exists between the intervention and the outcome. 2 , 12

Randomization should be incorporated in SC studies to improve internal validity and the ability to assess for causal relationships among interventions and outcomes. 11 In contrast to traditional group designs, SC studies often do not have multiple participants or units that can be randomly assigned to different intervention conditions. Instead, in randomized phase-order designs , the sequence of phases is randomized. Simple or block randomization is possible. For example, with simple randomization for an A1B1A2B2 design, the A and B conditions are treated as separate units and are randomly assigned to be administered for each of the predefined data collection points. As a result, any combination of A-B sequences is possible without restrictions on the number of times each condition is administered or regard for repetitions of conditions (eg, A1B1B2A2B3B4B5A3B6A4A5A6). With block randomization for an A1B1A2B2 design, the 2 conditions (eg, A and B) would be blocked into a single unit (AB or BA), randomization of which to different periods would ensure that each condition appears in the resulting sequence no more than 2 times in a row (eg, A1B1B2A2A3B3A4B4). Note that AB and reversal designs require that the baseline (A) always precedes the first intervention (B), which should be accounted for in the randomization scheme. 2 , 11
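To make phase-order randomization concrete, here is a minimal Python sketch (function names and defaults are ours, not from the article). Simple randomization draws a condition for each measurement occasion, here forcing the first occasion to be baseline A, as reversal designs require; block randomization shuffles AB/BA blocks, which keeps any run of a single condition to at most 2 occasions.

```python
import random

def simple_phase_order(n_points, conditions=("A", "B"), seed=None):
    """Randomly assign a condition to each measurement occasion.
    The first occasion is fixed to baseline 'A' because AB and
    reversal designs require that A precede the first B."""
    rng = random.Random(seed)
    return ["A"] + [rng.choice(conditions) for _ in range(n_points - 1)]

def block_phase_order(n_blocks, seed=None):
    """Block randomization: randomize AB vs BA within each block so
    each condition appears exactly once per block, limiting runs of
    the same condition to at most 2 in a row."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        sequence.extend(rng.choice([("A", "B"), ("B", "A")]))
    return sequence
```

A usage note: with 4 blocks, `block_phase_order` always yields 8 occasions with 4 A's and 4 B's, whereas `simple_phase_order` places no such constraint.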

In randomized phase start-point designs , the lengths of the A and B phases can be randomized. 2 , 11 , 24–26 For example, for an AB design, researchers could specify the number of time points at which outcome data will be collected (eg, 20), define the minimum number of data points desired in each phase (eg, 4 for A, 3 for B), and then randomize the initiation of the intervention so that it occurs anywhere between the remaining time points (points 5 and 17 in the current example). 27 , 28 For multiple baseline designs, a dual-randomization or “regulated randomization” procedure has been recommended. 29 If multiple baseline randomization depends solely on chance, it could be the case that all units are assigned to begin intervention at points not really separated in time. 30 Such randomly selected initiation of the intervention would result in the drastic reduction of the discriminant and internal validity of the study. 29 To eliminate this issue, investigators should first specify appropriate intervals between the start points for different units, then randomly select from those intervals, and finally randomly assign each unit to a start point. 29
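The start-point randomization and the "regulated randomization" for multiple baseline designs can be sketched in a few lines of Python (an illustration only; the function names are ours, and the exact admissible range of start points depends on how the minimum phase lengths are counted).

```python
import random

def random_start_point(n_points=20, min_a=4, min_b=3, seed=None):
    """Randomly choose the first intervention occasion for an AB design,
    leaving at least `min_a` baseline points before the start and at
    least `min_b` intervention points from the start onward."""
    rng = random.Random(seed)
    # 1-based occasions: valid starts run from min_a + 1 to n_points - min_b + 1
    return rng.randint(min_a + 1, n_points - min_b + 1)

def regulated_mbd_start_points(intervals, seed=None):
    """'Regulated randomization' sketch for multiple baseline designs:
    `intervals` holds one (earliest, latest) start-point window per unit,
    pre-specified so the windows are separated in time; a start point is
    drawn from each window, then units are randomly assigned to points."""
    rng = random.Random(seed)
    starts = [rng.randint(lo, hi) for lo, hi in intervals]
    rng.shuffle(starts)  # random unit-to-start-point assignment
    return starts
```

Pre-specifying separated windows is what prevents the degenerate case, described above, in which all units happen to start the intervention at nearly the same time.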

SINGLE-CASE ANALYSIS TECHNIQUES FOR INTERVENTION RESEARCH

The What Works Clearinghouse (WWC) single-case design technical documentation provides an excellent overview of appropriate SC study analysis techniques to evaluate the effectiveness of intervention effects. 1 , 18 First, visual analyses are recommended to determine whether there is a functional relationship between the intervention and the outcome. Second, if evidence for a functional effect is present, the visual analysis is supplemented with quantitative analysis methods evaluating the magnitude of the intervention effect. Third, effect sizes are combined across cases to estimate overall average intervention effects, which contribute to evidence-based practice, theory, and future applications. 2 , 18

Visual Analysis

Traditionally, SC study data are presented graphically. When more than 1 participant engages in a study, a spaghetti plot showing all of their data in the same figure can be helpful for visualization. Visual analysis of graphed data has been the traditional method for evaluating treatment effects in SC research. 1 , 12 , 31 , 32 The visual analysis involves evaluating level, trend, and stability of the data within each phase (ie, within-phase data examination) followed by examination of the immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases (ie, between-phase comparisons). When the changes (and/or variability) in level are in the desired direction, are immediate, readily discernible, and maintained over time, it is concluded that the changes in behavior across phases result from the implemented treatment and are indicative of improvement. 33 Three demonstrations of an intervention effect are necessary for establishing a functional relationship. 1

Within-Phase Examination

Level, trend, and stability of the data within each phase are evaluated. Mean and/or median can be used to report the level, and trend can be evaluated by determining whether the data points are monotonically increasing or decreasing. Within-phase stability can be evaluated by calculating the percentage of data points within 15% of the phase median (or mean). The stability criterion is satisfied if about 85% (80%–90%) of the data in a phase fall within a 15% range of the median (or average) of all data points for that phase. 34
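The stability criterion just described can be checked with a short Python function (a sketch assuming positive-valued outcome data; the 85% criterion and 15% envelope are parameters, and the helper name is ours):

```python
from statistics import median

def is_stable(phase_data, envelope=0.15, criterion=0.85):
    """Within-phase stability check: the phase is considered stable if
    at least `criterion` (default 85%) of its data points fall within
    `envelope` (default 15%) of the phase median. Assumes the outcome
    takes positive values, so the envelope bounds are ordered."""
    med = median(phase_data)
    lo, hi = med * (1 - envelope), med * (1 + envelope)
    inside = sum(lo <= x <= hi for x in phase_data)
    return inside / len(phase_data) >= criterion
```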

Between-Phase Examination

Immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases are evaluated next. For this, several nonoverlap indices have been proposed that all quantify the proportion of measurements in the intervention phase not overlapping with the baseline measurements. 35 Nonoverlap statistics are typically scaled as a percent from 0 to 100 or as a proportion from 0 to 1. Here, we briefly discuss the nonoverlap of all pairs (NAP), 36 the extended celeration line (ECL), the improvement rate difference (IRD), 37 and the TauU and its baseline trend-adjusted variant (TauUadj), 35 as these are the most recent and complete techniques. We also examine the percentage of nonoverlapping data (PND) 38 and the 2-standard deviation band method, as these are frequently used techniques. In addition, we include the percentage of nonoverlapping corrected data (PNCD), an index applying the PND after controlling for baseline trend. 39

Nonoverlap of All Pairs

The NAP quantifies the proportion of all baseline-intervention data-point pairs in which the intervention-phase point shows improvement over the baseline-phase point, with ties counted as half an overlap; a value of 0.5 (50%) indicates no effect, and 1.0 (100%) indicates complete nonoverlap. 36
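NAP can be computed by brute force over all baseline-intervention pairs, counting a pair as improved when the intervention point beats the baseline point and counting ties as half (a sketch; the helper name and the `increase_desired` flag are ours):

```python
def nap(baseline, intervention, increase_desired=True):
    """Nonoverlap of all pairs: proportion of baseline-intervention
    pairs in which the intervention point shows improvement over the
    baseline point; ties count as half an improvement."""
    if not increase_desired:
        # For outcomes where lower is better, negate both phases
        baseline = [-x for x in baseline]
        intervention = [-x for x in intervention]
    better = ties = 0
    for a in baseline:
        for b in intervention:
            if b > a:
                better += 1
            elif b == a:
                ties += 1
    return (better + 0.5 * ties) / (len(baseline) * len(intervention))
```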

Extended Celeration Line


In the ECL approach, a celeration (trend) line is fit to the baseline data and extended into the intervention phase, and intervention-phase data points are compared with this extended line. As a consequence, this method depends on a straight line and makes an assumption of linearity in the baseline. 2 , 12

Improvement Rate Difference

This analysis is conceptualized as the difference in improvement rates (IR) between the baseline (IRB) and intervention (IRT) phases. 38 The IR for each phase is defined as the number of "improved data points" divided by the total data points in that phase. Improvement rate difference, commonly employed in medical group research under the name of "risk reduction" or "risk difference," attempts to provide an intuitive interpretation for nonoverlap and to make use of an established, respected effect size: the difference between 2 proportions, IRT − IRB. 37
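The canonical IRD computation removes the minimum number of overlapping data points before computing the improvement rates; the sketch below instead uses a simpler overlap rule (a baseline point counts as "improved" if it reaches the intervention range; an intervention point counts as "improved" if it exceeds all baseline points) and is meant only as an illustration of the IRT − IRB idea.

```python
def improvement_rate_difference(baseline, intervention, increase_desired=True):
    """Simplified IRD sketch: IR of each phase is the fraction of
    'improved' points; IRD = IR_intervention - IR_baseline."""
    if not increase_desired:
        baseline = [-x for x in baseline]
        intervention = [-x for x in intervention]
    # Baseline point 'improved' if it reaches the intervention range
    ir_b = sum(x >= min(intervention) for x in baseline) / len(baseline)
    # Intervention point 'improved' if it exceeds every baseline point
    ir_t = sum(x > max(baseline) for x in intervention) / len(intervention)
    return ir_t - ir_b
```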

TauU and TauU adj

TauU is a nonoverlap index based on Kendall's rank correlation that compares all baseline-intervention data pairs; the adjusted variant, TauUadj, additionally corrects this comparison for an undesired improving trend within the baseline. 35

Online calculators might assist researchers in obtaining the TauU and TauU adjusted coefficients ( http://www.singlecaseresearch.org/calculators/tau-u ).
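One common formulation of TauU, (S_AB - S_A) / (nA x nB), where S_AB is the Kendall S statistic over baseline-versus-intervention pairs and S_A is the S statistic for baseline trend, can be sketched as follows (an illustration only; published implementations differ in details such as the denominator and tie handling):

```python
def tau_u(baseline, intervention, adjust_baseline_trend=False):
    """Tau (nonoverlap) between phases; optionally subtract the
    baseline-trend Kendall S (one common TauU formulation)."""
    n_a, n_b = len(baseline), len(intervention)
    # Kendall S for A-vs-B pairs: improving minus deteriorating pairs
    s_ab = sum((b > a) - (b < a) for a in baseline for b in intervention)
    s_a = 0
    if adjust_baseline_trend:
        # Kendall S for baseline trend: all ordered within-phase pairs
        s_a = sum((baseline[j] > baseline[i]) - (baseline[j] < baseline[i])
                  for i in range(n_a) for j in range(i + 1, n_a))
    return (s_ab - s_a) / (n_a * n_b)
```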

Percentage of Nonoverlapping Data

The PND is the percentage of intervention-phase data points that exceed the single most extreme (best) baseline data point. Because it depends on a single baseline value, the PND is sensitive to baseline outliers and does not account for baseline trend. 38
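The PND, the percentage of intervention-phase points exceeding the most extreme baseline point, has a direct implementation (a sketch; the helper name is ours):

```python
def pnd(baseline, intervention, increase_desired=True):
    """Percentage of nonoverlapping data: share of intervention points
    beyond the most extreme baseline point, as a percentage."""
    if increase_desired:
        exceed = sum(x > max(baseline) for x in intervention)
    else:
        exceed = sum(x < min(baseline) for x in intervention)
    return 100.0 * exceed / len(intervention)
```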

Two Standard Deviation Band Method

When the stability criterion described earlier is met within phases, it is possible to apply the 2-standard deviation band method. 12 , 41 First, the mean of the data for a specific condition is calculated and represented with a solid line. In the next step, the standard deviation of the same data is computed, and 2 dashed lines are represented: one located 2 standard deviations above the mean and the other 2 standard deviations below. For normally distributed data, few points (<5%) are expected to be outside the 2-standard deviation bands if there is no change in the outcome score because of the intervention. However, this method is not considered a formal statistical procedure, as the data cannot typically be assumed to be normal, continuous, or independent. 41
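The band computation itself is straightforward (a sketch; here the band is built from the baseline data and intervention points are counted against it, which is a typical usage):

```python
from statistics import mean, pstdev

def two_sd_band_check(baseline, intervention):
    """Two-standard-deviation band method: build mean +/- 2 SD bands
    from the baseline data and count intervention points outside."""
    m, sd = mean(baseline), pstdev(baseline)
    lo, hi = m - 2 * sd, m + 2 * sd
    outside = sum(not (lo <= x <= hi) for x in intervention)
    return {"band": (lo, hi), "n_outside": outside}
```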

Statistical Analysis

If the visual analysis indicates a functional relationship (ie, 3 demonstrations of the intervention effect), it is recommended to proceed with quantitative analyses reflecting the magnitude of the intervention effect. First, effect sizes are calculated for each participant (individual-level analysis). Then, if the research interest lies in the generalizability of the effect size across participants, effect sizes can be combined across cases to achieve an overall average effect size estimate (across-case effect size).
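As a simple illustration of combining per-case effect sizes: the multilevel modeling discussed later in this section is the recommended approach, but a basic fixed-effect (inverse-variance) weighted average conveys the idea (a sketch; not the article's method):

```python
def combine_effect_sizes(effects, variances):
    """Fixed-effect (inverse-variance) weighted average of per-case
    effect sizes; returns the pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var
```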

Note that quantitative analysis methods are still being developed in the domain of SC research 1 and statistical challenges of producing an acceptable measure of treatment effect remain. 14 , 42 , 43 Therefore, the WWC standards strongly recommend conducting sensitivity analysis and reporting multiple effect size estimators. If consistency across different effect size estimators is identified, there is stronger evidence for the effectiveness of the treatment. 1 , 18

Individual-Level Effect Size Analysis

Individual-level effect sizes quantify the magnitude of the intervention effect for a single participant. Options include the nonoverlap indices described earlier, standardized mean difference measures, and regression-based estimates of changes in level and trend between phases.

Across-Case Effect Sizes

Two-level modeling to estimate the intervention effects across cases can be used to evaluate across-case effect sizes. 44 , 45 , 50 Multilevel modeling is recommended by the WWC standards because it takes the hierarchical nature of SC studies into account: measurements are nested within cases and cases, in turn, are nested within studies. By conducting a multilevel analysis, important research questions can be addressed (which cannot be answered by single-level analysis of SC study data), such as (1) What is the magnitude of the average treatment effect across cases? (2) What is the magnitude and direction of the case-specific intervention effect? (3) How much does the treatment effect vary within cases and across cases? (4) Does a case and/or study-level predictor influence the treatment's effect? The 2-level model has been validated in previous research using extensive simulation studies. 45 , 46 , 51 The 2-level model appears to have sufficient power (>0.80) to detect large treatment effects in at least 6 participants with 6 measurements. 21

Furthermore, to estimate across-case effect sizes, the HPS (Hedges, Pustejovsky, and Shadish) index, a single-case educational design (SCEdD)-specific standardized mean difference, can be calculated. 52 This index was designed for SCEdD data with the aim of making it comparable to Cohen's d from group-comparison designs. Its standard deviation takes into account both within-participant and between-participant variability, and it is typically used to obtain an across-case estimate of a standardized change in level. The advantage of the HPS across-case effect size estimator is that it is directly comparable with Cohen's d for group-comparison research, thus enabling the use of Cohen's (1988) benchmarks. 53

Valuable recommendations on SC data analyses have recently been provided. 54 , 55 They suggest that a specific SC data analytic technique be chosen on the basis of (1) the study aims and the desired quantification (eg, overall quantification, between-phase quantifications, and randomization), (2) the data characteristics as assessed by visual inspection and the assumptions one is willing to make about the data, and (3) the knowledge and computational resources available. 54 , 55 Table 1 lists recommended readings and some commonly used resources related to the design and analysis of single-case studies.

Table 1. Recommended readings and resources related to the design and analysis of single-case studies.

Books:
- 3rd ed. Needham Heights, MA: Allyn & Bacon; 2008.
- New York, NY: Oxford University Press; 2010.
- Hillsdale, NJ: Lawrence Erlbaum Associates; 1992.
- Washington, DC: American Psychological Association; 2014.
- Philadelphia, PA: F. A. Davis Company; 2015.

Topic-specific readings:
- Reversal design: 2008;10(2):115-128; 2014;35:1963-1969; 2000;10(4):385-399.
- Multiple baseline design: 1990;69(6):311-317; 2010;25(6):459-469.
- Alternating treatment design: 2014;52(5):447-462; 2013;34(6):371-383.
- Randomization: 2010;15(2):124-144.
- Visual analysis: 2000;17(1):20-39; 2012;33(4):202-219.
- Percentage of nonoverlapping data: 2010;4(4):619-625; 2010;47(8):842-858.
- Nonoverlap of all pairs: 2009;40:357-367; 2012;21(3):203-216.
- Improvement rate difference: 2016;121(3):169-193; 2016;86:104-113.
- Tau-U/piecewise regression: in press; 2017;38(2).
- Hierarchical linear modeling: 2013;43(12):2943-2952; 2007;29(3):23-55.

QUALITY APPRAISAL TOOLS FOR SINGLE-CASE DESIGN RESEARCH

Quality appraisal tools are important to guide researchers in designing strong experiments and conducting high-quality systematic reviews of the literature. Unfortunately, quality assessment tools for SC studies are relatively novel, ratings across tools demonstrate variability, and there is currently no "gold standard" tool. 56 Table 2 lists important SC study quality appraisal criteria compiled from the most common scales; when planning studies or reviewing the literature, we recommend that readers consider these criteria. Table 3 lists some commonly used SC quality assessment and reporting tools and references to resources where the tools can be located.

Table 2. Quality appraisal criteria and requirements for single-case design research.
1. Design: The design is appropriate for evaluating the intervention.
2. Method details: Participants' characteristics, selection method, and testing setting specifics are adequately detailed to allow future replication.
3. Independent variable: The independent variable (ie, the intervention) is thoroughly described to allow replication; fidelity of the intervention is thoroughly documented; the independent variable is systematically manipulated under the control of the experimenter.
4. Dependent variable: Each dependent/outcome variable is quantifiable. Each outcome variable is measured systematically and repeatedly across time to ensure acceptable interassessor agreement (0.80-0.90 percent agreement, or Cohen's kappa ≥0.60) on at least 20% of sessions.
5. Internal validity: The study includes at least 3 attempts to demonstrate an intervention effect at 3 different points in time or with 3 different phase replications. Design-specific recommendations: (1) for reversal designs, ≥4 phases with ≥5 points per phase; (2) for alternating intervention designs, ≥5 points per condition with ≤2 points per phase; and (3) for multiple baseline designs, ≥6 phases with ≥5 points per phase to meet the What Works Clearinghouse standards without reservations. Assessors are independent and blind to experimental conditions.
6. External validity: Experimental effects should be replicated across participants, settings, tasks, and/or service providers.
7. Face validity: The outcome measure should be clearly operationally defined, have a direct unambiguous interpretation, and measure the construct it is designed to measure.
8. Social validity: Both the outcome variable and the magnitude of change in outcome because of the intervention should be socially important; the intervention should be practical and cost-effective.
9. Sample attrition: Sample attrition should be low and unsystematic, because loss of data in SC designs due to overall or differential attrition can produce biased estimates of the intervention's effectiveness if that loss is systematically related to the experimental conditions.
10. Randomization: If randomization is used, the experimenter should ensure that (1) equivalence is established at baseline and (2) group membership is determined through a random process.
What Works Clearinghouse Standards (WWC) Kratochwill TR, Hitchcock J, Horner RH, et al. Institute of Education Sciences: What Works Clearinghouse: Procedures and standards handbook. . Published 2010. Accessed November 20, 2016.
Quality indicators from Horner et al Horner RH, Carr EG, Halle J, McGee G, Odom S, Wolery M. The use of single-subject research to identify evidence-based practice in special education. . 2005;71(2):165-179.
Evaluative method Reichow B, Volkmar F, Cicchetti D. Development of the evaluative method for evaluating and determining evidence-based practices in autism. . 2008;38(7):1311-1319.
Certainty framework Simeonsson R, Bailey D. Evaluating programme impact: levels of certainty. In: Mitchell D, Brown R, eds. London, England: Chapman & Hall; 1991:280-296.
Evidence in Augmentative and Alternative Communication Scales (EVIDAAC) Schlosser RW, Sigafoos J, Belfiore P. EVIDAAC comparative single-subject experimental design scale (CSSEDARS). . Published 2009. Accessed November 20, 2016.
Single-Case Experimental Design (SCED) Tate RL, McDonald S, Perdices M, Togher L, Schulz R, Savage S. Rating the methodological quality of single-subject designs and n-of-1 trials: Introducing the Single-Case Experimental Design (SCED) Scale. . 2008;18(4):385-401.
Logan et al scales Logan LR, Hickman RR, Harris SR, Heriza CB. Single-subject research design: Recommendations for levels of evidence and quality rating. . 2008;50:99-103.
Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) Tate RL, Perdices M, Rosenkoetter U, et al. The Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 statement. 2016;56:133-142.
Theory, examples, and tools related to multilevel data analysis Van den Noortgate W, Ferron J, Beretvas SN, Moeyaert M. Multilevel synthesis of single-case experimental data. Katholieke Universiteit Leuven web site. .
Tools for computing between-cases standardized mean difference ( -statistic) Pustejovsky JE. scdhlm: a web-based calculator for between-case standardized mean differences (Version 0.2) [Web application]. .
Tools for computing NAP, IRD, Tau, and other statistics Vannest KJ, Parker RI, Gonen O. Single case research: web based calculators for SCR analysis (Version 1.0) [Web-based application]. College Station, TX: Texas A&M University. Published 2011. Accessed November 20, 2016. .
Tools for obtaining graphical representations, means, trend lines, PND Wright J. Intervention central. Accessed November 20, 2016. .
Access to free Simulation Modeling Analysis (SMA) Software Borckardt JJ. SMA Simulation modeling analysis: time series analysis program for short time series data streams. Published 2006. .

When an established tool is required for systematic review, we recommend use of the WWC tool because it has well-defined criteria and is developed and supported by leading experts in the SC research field in association with the Institute of Education Sciences. 18 The WWC documentation provides clear standards and procedures to evaluate the quality of SC research; it assesses the internal validity of SC studies, classifying them as “meeting standards,” “meeting standards with reservations,” or “not meeting standards.” 1 , 18 Only studies classified in the first 2 categories are recommended for further visual analysis. Also, WWC evaluates the evidence of effect, classifying studies into “strong evidence of a causal relation,” “moderate evidence of a causal relation,” or “no evidence of a causal relation.” Effect size should only be calculated for studies providing strong or moderate evidence of a causal relation.

The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 is another useful SC research tool developed recently to improve the quality of single-case designs. 57 SCRIBE consists of a 26-item checklist that researchers need to address while reporting the results of SC studies. This practical checklist allows for critical evaluation of SC studies during study planning, manuscript preparation, and review.

Single-case studies can be designed and analyzed in a rigorous manner that gives researchers confidence in assessing causal relationships among interventions and outcomes and in generalizing their results. 2 , 12 These studies can be strengthened by incorporating replication of findings across multiple study phases, participants, settings, or contexts, and by using randomization of conditions or phase lengths. 11 There are a variety of tools that can allow researchers to objectively analyze findings from SC studies. 56 Although a variety of quality assessment tools exist for SC studies, they can be difficult to locate and utilize without experience, and different tools can provide variable results. The WWC quality assessment tool is recommended for those aiming to systematically review SC studies. 1 , 18

SC studies, like all types of study designs, have a variety of limitations. First, it can be challenging to collect at least 5 data points in a given study phase. This may be especially true when traveling for data collection is difficult for participants, or during the baseline phase when delaying intervention may not be safe or ethical. Power in SC studies is related to the number of data points gathered for each participant, so it is important to avoid having a limited number of data points. 12 , 58 Second, SC studies are not always designed in a rigorous manner and, thus, may have poor internal validity. This limitation can be overcome by addressing key characteristics that strengthen SC designs ( Table 2 ). 1 , 14 , 18 Third, SC studies may have poor generalizability. This limitation can be overcome by including a greater number of participants, or units. Fourth, SC studies may require consultation from expert methodologists and statisticians to ensure proper study design and data analysis, especially to manage issues like autocorrelation and variability of data. 2 Fifth, although it is recommended to achieve a stable level and rate of performance throughout the baseline, human performance is quite variable and can make this requirement challenging. Finally, the most important validity threat to SC studies is maturation. This challenge must be considered during the design process to strengthen SC studies. 1 , 2 , 12 , 58

SC studies can be particularly useful for rehabilitation research. They allow researchers to closely track and report change at the level of the individual. They may require fewer resources and, thus, can allow for high-quality experimental research, even in clinical settings. Furthermore, they provide a tool for assessing causal relationships in populations and settings where large numbers of participants are not accessible. For all of these reasons, SC studies can serve as an effective method for assessing the impact of interventions.


n-of-1 studies; quality assessment; research design; single-subject research





Physical exercise as a treatment for non-suicidal self-injury: evidence from a single-case study

  • PMID: 17267807
  • DOI: 10.1176/ajp.2007.164.2.350a



Int J Qual Stud Health Well-being

“I tried so many diets, now I want to do it differently”—A single case study on coaching for weight loss

In this single case study, the author presented an in-depth description and analysis of a coaching intervention focused on weight loss, conducted over 10 sessions in the course of 17 months. The client was a well-educated woman in her late 30s who had tried many different forms of dieting over the years, with little or no lasting effect. In his coaching approach, the author went beyond a pure behavioural change model (eg, one based on the Health Belief Model) and took a whole-life perspective, in which the client learned to link specific events and habits in her work life and everyday life with specific eating habits. In their collaborative practice, coach and coachee initiated changes in diet, physical activity, and healthy lifestyle in general. In a theoretical section, the changing understanding of overeating was presented. Finally, an intra-active model, viewing the client as a self-reflective individual, was used as the theoretical basis. A narrative analysis of the first session and a cross-session examination were presented to show, analyse, and understand the procedure of the coaching approach. Finally, the voice of the coachee was heard in regard to her personal experiences during the process. The data material was based on audio recordings of selected sessions, notes written by the coach after every session, and final written reflections by the coachee.

By now, we all know that overweight, defined by a body mass index (BMI, the weight in kilograms divided by the square of the height in meters) above 25, and especially obesity, defined by a BMI of 30 or more, are threats to people's health. On the basis of a large cohort study, Adams et al. ( 2006 ) concluded that excess body weight during midlife, including overweight, is associated with an increased risk of death. The number of people who are overweight or obese is growing in many countries, led by the following top 10: (1) USA, (2) China, (3) India, (4) Russia, (5) Brazil, (6) Mexico, (7) Egypt, (8) Germany, (9) Pakistan, and (10) Indonesia. Worldwide, the number of overweight and obese people increased from 857 million in 1980 to 2.1 billion in 2013, an increase of more than 145% ( www.health.usnews.com/health-news/health-wellness/articles/2014/05/28/america-tops-list-of-10-most-obese-countries ; retrieved 7 August 2014). The reasons for this development are manifold: lack of energy balance, an inactive lifestyle, environmental factors, and lack of sleep, to mention only some of the central causes ( www.nhlbi.nih.gov/health/health-topics/topics/obe/causes.html ; retrieved 7 August 2014). The relationship between socio-economic status and obesity must be nuanced according to a country's level of development. McLaren ( 2007 ) documented the following overall pattern for this relationship: for both men and women, an increasing proportion of positive associations and a decreasing proportion of negative associations were found as one moved from countries with high levels of socio-economic development to countries with medium and low levels of development.

On the basis of these epidemic dimensions, the issue of overweight and obesity cannot be taken lightly. It is probably one of the most fundamental threats that governmental health authorities are forced to deal with. Compared to the size of the challenge, the topic and objective of this article may seem to be a drop in the ocean. The attempt here was to present a concrete case study with two central objectives: first, to document and evaluate the client's experiences of the change process, based on the intentions and dialogical strategies of the coach, and second, to document and analyse the process and impact of the coaching for weight loss dialogues. The approach presented here used a broader perspective than a sole focus on the perception of threat as the basis for behavioural change (e.g., the Health Belief Model [HBM]), because of the author's firm belief that change in behaviour is based on new ways of sense- or meaning making, of understanding oneself, and of attitudes towards life. This shift in mindset slowly prepares the client and makes her ready to initiate the first step towards a change of lifestyle.

Approaches and procedures for behaviour change

During the twentieth century, the understanding of the individual changed. On the basis of a better understanding of psychological reasons for obesity (Swencionis & Rendell, 2012), this change of understanding also influenced the approaches to helping people with weight loss and lifestyle change (Ogden, 2002). (1) During the first half of the century, the main approach assumed that the individual was passive, and behaviour could be predicted and influenced through specific stimuli (Pavlov, 1927; Skinner, 1953). (2) From the 1960s onwards, the behavioural law of conditioning was replaced by an emphasis on “an interactive alignment of the individual and their environment” (Ogden, 2002, p. 21). Individuals were understood as being able to process information, and on the basis of their cognitive understanding they could make changes in their behaviour. This understanding can be seen in the HBM (Daddario, 2007; Rosenstock, 1974). Very briefly described, the model stated the following: Perceived seriousness and perceived susceptibility with regard to specific health threats might lead to growing awareness of possible health threats, which might enhance the likelihood of engaging in health-promoting behaviour. In a further development of the model, self-efficacy (Bandura, 1977, 1997)—the confidence that a desired behaviour can be carried out—was included as an important factor for change (Becker & Rosenstock, 1987; Rosenstock, Strecher, & Becker, 1988). (3) In a way, the further development of the HBM was the first indication of the third phase in the understanding of the individual; a phase that was inspired by poststructuralism and which has become more and more prominent during the last two decades. Ogden (2002) described the shift by presenting the reflexive, intra-active individual, a term fairly new in the literature, which Ogden (1995) had already described as follows:

The contemporary intra-active individual is characterized by an agency and an intentionality which is directed internally towards their inner self. The late twentieth century object of psychological thought has become a subjective entity whose subject is the self. (p. 412)

The neologism “intra-active enactment,” introduced by Barad (2007), who was inspired by Michel Foucault and Niels Bohr, was related to the idea of a growing subjectification in today's society. Højgaard and Søndergaard (2011) wrote:

The concept of subjectification builds on the work of Michel Foucault, his conceptualisation of discursive power as productive, and his point about subjective submission under discursive power as a process which involves a simultaneous production of subjective existence and agency. The humanist idea of a “core self” is radically transgressed in this line of thought and replaced by the idea of an emerging subject. The basic figure of simultaneity between submission and agency in the formation of the subject has had an enormous impact on poststructuralist thinking over the past decades. (p. 340)

In that sense, subjectification is a concept that describes individuals as agents who simultaneously shape whatever they are shaped by. In a similar way, subjectification describes the growing impact on the individual's handling of specific social and personal challenges in regard to health issues such as smoking, diet, weight loss, and involvement in physical activities and sports. In regard to these challenges, the subject tends towards various forms of surveillance , a phenomenon that Foucault described as constant and pervasive forms of observation that finally lead to growing individual self-control. Individuals become their own “worst enemy.” From this perspective, there are indications in the literature of the critical influence of monitoring weight on psychological states (Dionne & Yeudall, 2005 ; Ogden & Whyman, 1997 ).

The intra-active overeater

On the basis of this intra-active model of the self, Ogden (2002) described the change in the understanding of dieting as follows. In the 1960s and 1970s, the psychological understanding of overeating was influenced by the idea that overweight and obesity were highly and sometimes uncontrollably responsive to external and environmental cues such as time of day, sight, number and salience of food cues, and taste (Schachter, 1968; Schachter & Gross, 1968). In the last quarter of the twentieth century, new models emerged. Ogden (2002, p. 87) referred to studies conducted with normal weight individuals, studies which “suggested that trying to eat less (i.e., restrained eating) was a better predictor of food intake than weight per se.” A theory of restrained eating behaviour evolved. Weight, a biological construct, was no longer seen as the main determinant of eating behaviour. Instead, restrained eating, a psychological construct, was introduced as a means to evaluate food intake (Herman & Mack, 1975; Hibscher & Herman, 1977). As a consequence of this approach, overeating could be characterized as a failure of self-control (Ogden, 1997, 2002); the individual does not manage to keep him- or herself under “proper” surveillance. This led to the development of an escape theory to explain disinhibition with the consequence of overeating (Heatherton & Baumeister, 1991): Binge eating and overeating arise as part of a motivated attempt to escape from self-awareness and self-control in situations where one's own high standards and demanding ideals are put under pressure. Eating turns into pure relief from surveillance and becomes self-satisfying. In the study by Heatherton and Baumeister (1991, p. 101), binge eating was also “associated with decreased negative affect.” On the basis of this understanding, Ogden (2002, p. 89) concluded: “Therefore, a shift to lower self-awareness resulted in reduced self-control”—with the final possible consequence: overeating. Her definition of the individual “as a reflexive, intra-active and self-controlling self, who is not biomedical and not social” (Ogden, 2002, p. 97) was an attempt to overcome the dualism or disintegration between social and environmental dimensions on the one side, and individual and psychological dimensions on the other side. With this model, the intention was to merge the individual with social and environmental factors. Individuals become reflexive and intra-active subjects and objects of their own self-reflection (Figure 1). Indubitably, environmental factors have an impact on the individual, but not in a simple causal relationship.

Figure 1. The intra-active individual.

Højgaard and Søndergaard (2011, p. 347) saw the concept of intra-action as “premised on a conception of radical co-constitution of subjects as part of material-discursive enactments.” Intra-active theorists see individuals embedded in a specific social discourse of, for example, being slim, looking good, controlling one's weight, being physically active, etc. as well as in discourses such as “when I am unhappy or under pressure, eating helps me to feel satisfied.” All these discourses generate intra-active movements in the individual as a form of material-discursive practice, for example: “I eat fast food or sweets to calm down when I come home after a stressful day at work.” This practice becomes meaningful for the individual. Repetition underlines this meaningfulness and becomes the motor for the performativity of, for example, binge eating. The concept of intra-action takes the complexity of the individual's world into account by embracing the diverse factors of the material, social, and subject world.

This theoretical understanding was assumed to be helpful for the further understanding of coaching as a dialogical tool for self-reflection around eating and striving for weight loss. Coaching should be seen as an integrative part of the individual's intra-active process where new material-discursive practices are initiated; practices that simultaneously express and enact new meaning and understanding in regard to the complexity of the client's life. The intention of coaching for weight loss should be to replace self-surveillance with a self-understanding that helps clients manage their own lives in a way that enables new dimensions in regard to handling specific challenges.

Coaching as the basis for changing eating habits

If we see the lack of self-control as the greatest hindrance to weight loss, then individuals who wish to change their eating habits have to get involved in a self-reflective process , in our case offered through coaching, which may be seen as the basis for a new material-discursive practice. Based on the presented intra-active approach, it would not work simply to focus on the goal of weight loss and pursue the direct path towards a change of behaviour. A goal focus on weight would only keep the person's attention on self-surveillance and self-control. It is not enough to understand the perceived seriousness and the perceived susceptibility to personal health issues, for example, in regard to overweight (as suggested by the HBM). Instead, individuals striving for weight loss are invited by the coach to see themselves from a new perspective and somehow learn to grasp the complexity of their world in a new way. They have to take a closer look at themselves as being co-constituted as both a subject and object of their lived practice.

In the following, a definition of health coaching is presented on the basis of a current review of the literature. Further on, the author will unfold the fundamentals of coaching for weight loss that takes the approach of the intra-active overeater into account, and that is seen as more effective than the traditional, narrowly goal-oriented approaches.

Wolever et al. (2013) published a systematic review of the literature on health and wellness coaching, observing an emerging consensus about what the term refers to. On the basis of their literature study, they define health and wellness coaching as

a patient-centered approach wherein patients at least partially determine their goals, use self-discovery or active learning processes together with content education to work toward their goals, and self-monitor behaviors to increase accountability, all within the context of an interpersonal relationship with a coach. (p. 52)

They continue by presenting patients or clients “as lifelong learners whose individual personal values and innate internal resources can be cultivated in the context of a supportive relationship to guide them toward their own desired vision of health” (see also Smith et al., 2013, p. 52). Based on this understanding, an approach to coaching for weight loss will be unfolded by emphasizing the self-reflective dimension of the dialogue, by highlighting personal values and meaning making and finally, as a result of this reflective process, by preparing for change of behaviour (see also Stelter, 2007, 2014). The use of this approach will be illustrated in the later description and analysis of the case study with Anna. In the following, a number of central criteria for this self-reflective coaching for weight loss approach will be presented:

  • Focusing on social meaning making : Based on social constructionist thinking, social meaning making evolves by making sense of and reflecting on the impact of specific social relations, specific others, and the social context in general on one's identity and one's way of being in the world (Gergen, 2009). We co-construct the world through and in the relationships with others.

  • Focusing on values : Values embrace the most central and fundamental issues in our life and are—often implicitly—guiding markers for our way of acting in the world. Too often, coaching takes its point of departure in a specific goal [e.g., in the GROW model (Goal-Reality-Options-Wrap-up; Whitmore, 2009) or in motivational interviewing], but goals are at the lowest level in the hierarchy of intentionality (Stelter, 2009, 2014). On the next higher level, “purpose” is the dimension that goals are influenced by, and at the top of the hierarchy of intentionality we find the dimension of meaning making, which inevitably leads to a reflection on values (Stelter, 2007, 2014). For coaching interventions to be successful, it is important to build bridges between clients' ambitions to change their way of living and their specific individual values, which they can investigate on their journey, and which in the end can lead to a change of behaviour.
  • The narrative-collaborative dimension of coaching : Meaning and values are often embedded in specific episodes and events. As part of a transformative learning process (Illeris, 2004 ; Mezirow & Associates, 1990 ), where learning always implies an impact on identity and self-understanding, coaching becomes a collaborative enterprise between coach and coachee. The narrative element is highlighted by connecting the coachee's actions with identity issues and vice versa. Narratives are the vehicles which link specific events in a timeline and which have a special impact on the client. If these narratives are a strain on the client, the aim is to deconstruct them in the collaborative process between coach and client. Deconstruction implies the potential for change. By reflecting on the narrative and presenting additional possible interpretations, the dialogical partner applies procedures that undermine the taken-for-granted understanding of the client's life and identity (White, 2004 ).

The author was aware of the close connection between the self-reflective dimension of the dialogue and the focus on the final goal: weight loss. The paths of the GROW model and of motivational interviewing (Rubak, Sandboek, Lauritzen, & Christensen, 2005) appeared too goal-focused, with too little attention paid to possible barriers and the complexity of life, which are dealt with more thoroughly in an in-depth, self-reflective coaching process based on the three key areas presented above.

Methodological reflections in regard to the single case study

In this case study, the intention is to present the whole picture of a single case of coaching for weight loss (for more on case study design, see Yin, 2009). The research documentation in the form of this article was not planned beforehand at all; the decision was made only a few weeks before the last session in the coaching process. 1 As mentioned earlier, the objective of this case study is twofold: First, to document and analyse the process and impact of coaching for weight loss dialogues, and second, to document and analyse the client's experiences of the change process, based on the intentions and dialogical strategies of the coach. To meet these objectives, the following data material was included:

  • Notes taken by the coach during each session (this is a standard procedure).
  • Audio files of the first six sessions (it was actually the client who asked for permission to record the sessions).
  • Reflective notes, produced by the client before the final session.

The material is used in different ways. To provide a basic understanding of the coaching process, the first session is presented as a narrative that is developed on the basis of an analysis of the coach's notes and an audio-recording of the session. The rest of the process is examined in a critical analysis of the coach's notes and the audio-recordings. Finally, the client's reflections are presented in their original form as a written narrative submitted to the author. In the concluding section, the author's and the client's respective analyses are compared, and some final conclusions are presented.

Narrative analysis of the first session

The narrative presentation of the first session unfolds on the basis of an analysis of the researcher's notes and of the audio-recording of the session. Specific events and situations were chosen and highlighted to make the narrative coherent while remaining faithful to the data. This process of shaping the storyline was part of the interpretation and analysis of the qualitative material. As in any qualitative analysis, a major effort was made to achieve credibility. The intention was to use Richardson's (2000, p. 923) idea of writing and storytelling as a way of analysing interview material: “Writing is also a way of knowing—a method of discovery and analysis. By writing in different ways, we discover new aspects of our topic and our relationship to it. Form and content are inseparable.” The narrative-analytical perspective used was that the researcher had to develop meaning out of the material.

Cross-session analysis

Subsequently, the entire sequence of sessions was analysed by coding the notes and the speech from the audio-recordings of the sessions, thus extracting the central meaning and forming central themes that represent the core issues of the series of coaching sessions. The idea of writing the narrative down was also applied in this section by combining the presentation of themes with a reflective analysis that includes the theories presented earlier.

The case study as practitioner research

The case study should also be understood as the work of a practitioner researcher (Jarvis, 1999) or a scientist practitioner (Lane & Corrie, 2006), meaning that the author sees himself both as a researcher and a reflective practitioner (Schön, 1983; in regard to coaching, see especially Stelter, 2014). Lane and Corrie (2006) defined the ability to formulate the actual case based on its inherent challenges as one of the most essential qualifications of a psychologist or a coach. They saw this process as a form of psychological sense-making, which can be understood as a central criterion for reflective practice and thereby for the development of expertise as a professional.

Case study: Anna's path towards weight loss

In this single case study, the author presents an in-depth description based on an analysis of a coaching intervention with focus on weight loss, conducted over 10 sessions in the course of 17 months. The last session was in September 2014. The client, whom we will call Anna, a well-educated woman in her late thirties, had tried many different forms of dieting during her adult years—with little or no lasting effect. Only in the very final phase of the coaching process did the author decide to publish this work—with the full approval of Anna, who even agreed to participate in this publication with her own reflections, written just before the last coaching session; reflections that were also included in the session to help her plan her future path.

Anna contacted me after she had participated in a coaching workshop of mine, where she became very interested in my work and my approach. Through email, we agreed on having at least five meetings, but we ended up having 10. Her decision to start coaching for weight loss with me was supported by her conviction that my coaching approach might help her more than all her former attempts and strategies towards weight loss, which had not really satisfied her and which had not lived up to her expectations in regard to losing weight and maintaining the success. She also stated a fundamental principle of hers: She does not want to lead her life in a state of permanent dieting. Food intake should be a pleasurable part of her life. Her ambition was to lose up to 25 kg. Here is the narrative of the first session, a narrative that shall also give the reader a basic impression of how the coaching was conducted:

The first session

On a Friday afternoon in April 2013, Anna enters my office. She quickly starts talking about her life. She feels that since her childhood she has eaten too much candy, especially when spending time with her grandparents, which she has done quite a lot. When her grandmother died, she was quite depressed for a long period of time. “But I ‘woke up’ again and actually lost weight. I overcame it, and life was more ‘bubbly’ and good. But after writing my master's thesis I have gained weight again. I lost control. I must get it stopped! But I have a hard time doing it myself,” she states with a hint of despair.

She begins to talk about her work: She says that she should actually be very pleased with her job. She uses her university degree and has a good position in the organization. “But I feel sapped. I don't use myself in the right way. I am not on the right shelf. My job holds no meaning for me.” In spite of this slightly discouraging assessment, she at the same time assures me that her boss is very pleased with her. She considers herself to be a very valuable and well-liked employee. “Even though I am very much a perfectionist when it comes to my work and often work both in the evenings and the weekends, I don't experience my job as being meaningful! I just can't do my best.”

She tells me that she has gone through a short coach training in 2010 and would like to develop a concept concerning coaching the 60 plus age group, where she would like to help people in their transition from work life to retirement, “the third age.” She feels that she has developed many competencies within the field of coaching, and she does work with coaching on a small scale. “I began to see the world in a different way,” she says. “But the coach training has made me feel that it's wrong for me to be doing my current job.”

We talk about what it is about coaching that attracts her, and how coaching helps her experience meaning. She tells me that she has always been interested in performance art, relations, and other people. It is within this field that she wants her future to be.

She talks about the challenges she faces in regard to her weight. “I strive for the perfect image. I either lose weight for real or else I might as well not bother!”—“But I have the feeling that I don't take myself seriously.”

I ask her to tell me more about her job. She says that she often works up to 60 h a week (and here we are talking about a regular 37-hours-a-week job, more or less). She explains that she often works more than her boss, who usually goes home around 4 p.m. and doesn't read emails during the weekend. In a humorous way—with the hope she will be slightly provoked in a good way—I say to her: “So sometimes, it is as if you have IDIOT stamped on your forehead when you see your boss go home before you.” “Yes, actually, you are right; I can see that sometimes I do have IDIOT stamped on my forehead.”

“To me it seems like you don't take proper care of yourself,” I say with a tone of understanding. “Yes, those words are very fitting: I need to take proper care of myself,” Anna confirms my statement, clearly impressed by how well I capture the essence of her life situation. I talk a little more about taking care of oneself: “It is also manifested in the way you take care of yourself, also in regard to cooking, exercise, and a healthy lifestyle. It is important that you enjoy things and allow yourself the time for it.” “Yes, you are absolutely right, I don't take myself seriously.” To get a sense of how seriously she takes herself, I ask Anna the following question: “On a scale from 0 to 10, where 0 is ‘I don't take myself seriously’ and 10 is ‘I take myself very seriously’, where are you in regard to work on the one hand, and in private on the other?” Anna answers with conviction: “At work it is a 9–10 and in private 3–4.”

We end up agreeing on the following assumption: The attitude I have towards eating is a reflection of the attitude I have towards myself.

I ask about her eating habits on an ordinary day, which she describes as follows:

  • Breakfast: two pieces of crisp bread and a glass of milk
  • Lunch: salad and a little bit of fish
  • Dinner: When I get home around 6:30, I don't have energy for much else than a pizza or a pita with chicken and a large Pepsi Max. The “healthy version” is that I eat rye bread.

A bag of candy with a movie is her idea of peace, happiness, and relaxation.

We are getting close to the end of the session, and I am very interested in getting concrete with suggestions for behavioural change: “How about if you substituted your Pepsi Max with water?”—“Ah, then I would like it to be sparkling water because of the fizz,” Anna answers in a supportive and motivated manner. “But could it be okay with Pepsi Max on the weekends?” she asks. I try to describe how this hope of getting Pepsi on the weekends impairs her ability to fully enjoy the taste of sparkling water—“So you can make it to the point where you almost don't even like Pepsi anymore.” “Actually, Pepsi sometimes has a weird taste of iron—so you're right actually. I don't always like it!”

Anna suddenly seems very motivated to establish several more changes in her life. We agree on the following list:

  • Only sparkling water instead of Pepsi
  • Dark chocolate instead of candy
  • Go home earlier from work
  • Go for a walk—in the time when I used to stay late at work

Finally, I ask her about how she has experienced the session and our collaboration. “The most important thing for me was the part about taking care of myself. I really liked when you said it says IDIOT on my forehead. I need you to push me a little bit,” she states. I am almost a little surprised that she appreciates my slightly provoking comment in such a prominent way. I had not expected that.

We agree that we will arrange the next meeting via email, because Anna does not have her calendar with her. For this first meeting, she actually came on her day off.

Reflection on the session

In this section, I see myself as a practitioner researcher/scientist practitioner, and thus I reflect upon some of my main theoretically based positions and strategies; I also include some statements from Anna's feedback, which I received during the very last session:

In the beginning, I tried to get a picture of Anna's life and biography, especially the parts that were relevant for her ambition to lose weight. The close interconnectedness between specific life events and eating habits became obvious. As the coach, I became the intra-active partner by helping her to understand herself in a new way. We focused on specific values, on what was important for her, and on how she sometimes struggled with being herself. In this collaborative dialogue, the intake of sweets and candy could be connected to a state of mind where she felt safe, secure, and trusting. Sweets and food in general had a certain significance in her life, which in the end resulted in an unhealthy, uncontrollable, and unreflective food intake. As mentioned earlier, overeating can be “associated with decreased negative affect” (Heatherton & Baumeister, 1991, p. 101) and with avoiding negative states of mind—the escape theory that explains the avoidance of self-awareness in situations where one's own high standards and demanding ideals are under pressure.

Anna's life situation and material-discursive practices around food intake were strongly influenced by her working life. During our coaching conversations, we reached the conclusion that a long working day had a negative influence on the way Anna would “take care of herself”—a fact that also had an impact on her food intake. In our final session, Anna concluded that regular working hours without overtime helped her very much to keep up a healthy lifestyle and appropriate eating habits. In that sense, Anna's decision to work less was a helpful strategy to prevent overeating that arose as part of a motivated attempt to escape from self-awareness in situations where her own high standards and demanding ideals were put under pressure (see the earlier exposition of the escape theory).

In the last phase of the session, our dialogue changed from investigative and reflective, with a focus on developmental issues, towards concrete, collaboratively developed guidelines that aimed to help Anna reduce her food intake in a way that corresponded with her wish that lifestyle change should not be associated with a permanent state of suffering. My intention was to negotiate suggestions that Anna could accept as substitutes for high-calorie products (e.g., mineral water instead of cola). Anna and I made collaboratively developed recommendations a part of most of the following sessions. What we arrived at was a number of guidelines that were not based on a position of pure self-control; these guidelines were the result of a longer reflective process, in which Anna became clear about how her practice in regard to food intake was influenced especially by her work situation. In that sense, the guidelines were the result of a process in which they appeared meaningful to Anna, and in which they were much more than just a diet instruction. In our very last session, Anna said that it had been very important to her to realize that not everything was negotiable. From the beginning, she felt that I was rather determined in regard to substituting cola with mineral water. But my invitation to self-reflection and her growing self-understanding were the most important aspects of our coaching that made her ready for change.

From a theoretical perspective, the main focus of the first and longer part of the session was to give Anna new opportunities for being an intra-active individual; thus, Anna became aware of (1) the way her life situation (e.g., long working hours) had shaped her and, ultimately, her habits, and (2) how specific material-discursive practices (e.g., buying food, not cooking, food intake, reading while eating) had shaped her in a specific way and put her on a path towards overeating, which eventually resulted in overweight.

Milestones of the further process—a reflective analysis

In the following, I present some key themes that emerged from an analysis of the collected material (notes by the coach, audio recordings) and that could thus be identified as key issues and milestones that helped Anna understand herself differently and make progress in regard to lifestyle changes. The presentation of these milestones is combined with theoretical reflections, including reflections informed by the intra-active model.

Searching for a new job

In the second session, 1 month after the first, Anna told me that she had applied for two jobs and had been invited for interviews at both places. She experienced this situation as a huge dilemma, especially in regard to having to say no to possibly one of the two. “I fell back and drank cola, just to please myself and to care for myself. I am quite afraid to disappoint people, and drinking cola is a way for me to cope with the situation,” is how she described her feelings at the beginning of the session. I got the impression that she was already quite aware of specific mechanisms in regard to her unhealthy eating habits, and we had a longer conversation about how she could take care of herself in other ways.

She also told me with pride that she had actually turned down one of the two job offers, because in the end she did not like the job, and she was happy to accept the offer from the other employer. Remembering her feelings about her work situation from the first session, I congratulated her on her great initiative and on the big success of taking responsibility for her own life. “Now I just have to stick to my goal that I do not want to work overtime. I talked about this in the interview, and they accepted me on these terms anyway. So, now I can only hope that they keep this in mind when the situation comes,” were her final thoughts on this issue.

To conclude this topic: Anna had already taken a big step forward by changing her job situation and by being serious about specific issues, for example, how working overtime in her current job had had a negative impact on her lifestyle. From the perspective of the intra-active model, Anna had a hard time in the first phase of the process living up to her high personal standards (e.g., not disappointing others). Food intake became a way for her to calm down and achieve a sense of safety. Coaching helped her to abandon some of her old intra-active dialogues: she became aware of how drinking cola was a surrogate for taking care of herself.

Change of attitude towards work

During several sessions, her work situation was the focus. Already in the first session, Anna became aware that her perfectionism, or, seen from another angle, her anxiety of not being good enough, of not living up to her ideals, of losing control, was a central factor in keeping bad eating habits alive. On many occasions, the pressure she felt at work led to forms of binge eating or overeating, where she used food to calm herself down and only afterwards noticed what had happened to her. As Heatherton and Baumeister (1991, p. 10) argued, eating was “associated with decreased negative affect.” During several sessions, we reflected on how Anna could handle job pressure in a different way. She came to the conclusion that working overtime had to be banned, and we also reflected a lot on how she could learn to accept herself as a good enough worker by appreciating her own work effort and by fighting a bad conscience (5th session). I introduced her to writing a “scrapbook” based on the following task: in the evening, write down three things that you succeeded with or were happy about during the day. Through coaching I invited Anna to develop a “new” intra-active individual by helping her to understand herself in a new way, thus supporting her in developing new material-discursive practices that could help her to view and ultimately handle specific work situations in a new way. Furthermore, by introducing the “scrapbook,” I helped her appreciate herself more and become aware that she actually was not far from living up to her ideals.

Being mindful

We worked quite a bit on being mindful. During the first session, Anna told me about her habits. Preparing food had no high status in her life. Often, when coming home late from work, she just bought a pizza or other fast food from a corner shop, plus a big bottle of cola, and while eating she used to watch TV or do further work or reading on her computer. Even when she did her own cooking, she did not seriously value her own effort. Here too, she combined eating with reading or watching TV. I made an “irreverent” 2 statement: “So, the dining Anna does not give much respect to the cooking Anna!” We talked about the possible consequences (e.g., the amount of food she might eat) of her not thinking about or, to put it even better, not fully sensing her dining. I introduced her to the mindfulness exercise where you eat a raisin very slowly and mindfully ( www.staroversky.com/blog/mindfulness-exercise-eating-a-raisin-mindfully , retrieved 20 November 2014). The exercise was an eye-opener.

We developed an agenda for good habits: No TV while eating, be attentive when eating, enjoy the nuances of your food, be in the here and now of the situation, enjoy the pleasure of your own company—and slowly Anna changed her attitude and habits on the basis of new and positive experiences. She felt that she was taking herself much more seriously and taking better care of herself.

Inspired by the intra-active model, developing these new habits became the basis for shaping a new understanding about her material-discursive practices around eating. These new practices were also the basis for understanding and shaping herself in a new way—somehow becoming a different person, because the meaning associated with eating appeared to unfold in a new way.

Making plans—establishing alternatives

As at the end of the first session, we frequently made plans that helped Anna focus on specific aspects of lifestyle change, for example: Monday to Friday, no warm food in the evening, only one to two slices of bread with salad and one piece of fruit; walk to and from work twice a week; practice the raisin exercise every evening; and take a long walk during the weekend. She decided to work on a painting on the topic “my wishful state, my best possible future,” which we talked about in the following session.

These plans were like a contract between us. They helped keep her on track. We celebrated her successes. And if there were objectives she struggled with, we reflected about her challenges.

We developed alternatives together, for example, breathing as in yoga, especially when she experienced a situation as stressful; baking and eating stone-age bread (only with nuts, sesame, etc.—no flour); and inviting her sister to be a buddy to help keep her on track. After 4 months, Anna started to jog or walk every day. At this time, she also decided to begin the 5:2 diet (Mosley & Spencer, 2013), which her sister (who has no weight problems) already practiced.

Here, the focus was on her material-discursive practices, which also shape the individual's development. By changing these practices, the individual gets a chance to act and, somehow, to be different. A change of practices in regard to how to eat and live one's life has an impact on the way a person is able to see herself. As our sessions progressed, Anna felt increasingly ready for the changes that were necessary for her to succeed in “reaching her goals,” which basically revolved around developing new practices that were established as meaningful in regard to her life and the life she wished to construct. Gradually, we developed a working relationship, where I as the coach became a fellow human being (Stelter, 2014 ) and an active supporter of the change process.

It is not a cure—it is my life!

This sentence was presented by Anna during the last session, where we looked back and evaluated the course of our working relationship. Changes are not just a cognitive decision; they have to make sense in the actual world of the client. Changes have to grow out of the reflective process about the lived challenges of the participant. Anna learned that her binge eating or overeating was actually closely related to insecurity, loss of control, lack of self-confidence, or stressful life events. During one of our later sessions, she remembered a situation when she was little and her father—who was divorced from her mother—asked her to cook for both of them. Although her father did not have bad intentions, Anna felt terribly pressured by this responsibility. She wanted to do her best but felt that she could hardly handle the pressure. Remembering (Stelter, 2014; White, 2007) this early event and articulating her feelings suddenly made her ready to understand many later experiences in her life, events that she had reflected on since the first of our sessions. This story really made her ready to see herself in a new light. From a whole-life perspective, weight loss is not just a matter of diet, physical activity, and eating habits. Bad habits have their story, their good reason. When Anna learned to understand her story, the path of development and change was made possible.

In conclusion, this episode and the whole course of sessions are to be examined in light of the intra-active model that was presented as the theoretical framework. Meaning was generated in a new way through the collaborative process between Anna and the coach, and regenerated in and through the intra-active dialogue, in which Anna was able to reflect on herself and her practices in a new way. In collaboration with the coach, Anna was the narrator of specific events and life situations. In this course of coaching sessions, the intra-active dialogue was supported by a collaborative partner, the coach. Meaning was shaped through reflecting on, re-telling, and remembering different life situations and specific practices that seemed understandable but inappropriate. This process of achieving a new understanding of oneself and of having a partner in one's intra-active dialogue can be seen as the very foundation of change. The goals that were formulated in specific situations needed to be anchored in a new understanding of what was seen as meaningful and, at the same time, in specific values that were important to the individual and that could be translated into action through specific material-discursive practices.

Anna's final reflection on the coaching process

When I, as the author, decided—with Anna's informed consent—to write this article, I also asked Anna to write an honest final reflection on our course of coaching sessions and the impact they had on her. This is what I received from her.

Goal and results

After around a year and a half of coaching, I have become 12 kg lighter. It is a great relief and that's why I choose to use the term “becoming lighter” instead of “losing weight.”

My original goal when I first started was—as far as I remember—to lose 20 kg or a little more than that. I haven't reached that goal yet, but my life has changed in so many other ways, it feels like the goal has been reached. The goal has changed during the coaching process. It quickly became clear to me that it wasn't going to be a diet where I, after my goal was reached, put all the weight on again and maybe even more. That has happened before. This time it was going to be a lasting change in lifestyle.

That point has been quite central in the process, as the goal is more about having a normal relationship with food than losing weight. But that also means that the weight loss hasn't gone as quickly as if I had been on a diet. A challenge in the process has been—and will most certainly continue to be—finding the balance between cutting back on junk food and sweets and also finding a level that is possible to adhere to for the rest of my life. An important point that became clear during the coaching was that it is not about losing something or restricting myself from the things I like. It is more about the amount and trying to come up with good and delicious alternatives, so I don't feel like life is tough and unfair, because I always have to say no to the things I want. The coach made it clear to me that it is all about doing something good for myself.

Added bonuses

Other goals I have reached in the process are: becoming healthier, also in the long run, more mindfulness in my daily life, not only when I eat. One of my underlying goals was to start exercising again—running and playing tennis. I have started running again and besides, I have found joy in walking to and from work—a couple of hours every day.

The most important thing has been the feeling of coming out of a deep dark hole. I have gone from having an inner voice telling me every day that I must eat sweets or the like, to feeling like I have a choice—I can choose to eat it or I can choose not to and do it another day. That is the greatest relief.

Process and theme

How has this happened then? Conversations with the coach about habits and automatisms, presence and being mindful, my childhood, and about my sensing, when I eat—those have been the key ingredients to the success. The fact that we have spoken on the basis of my entire life—especially my work life—and the fact that I have had the opportunity to express myself regarding the pressure I feel in relation to my work life has made a great difference. Along the way, between the sessions, I also gained insights from the past, which I had repressed, which have had a say in why I react as I do to demands, expectations, and perfectionism. This development has led to me being more attentive and not taking everything so seriously. Actually, you could say that I am now in a process, where I am practicing making mistakes and not complaining about it.

The coach has been inspiring me, listening to me, and challenging me—challenging me especially by questioning my understanding of how things are tied together—for example, that it is not a must to overeat because you feel that you are under pressure. The coach has also brought concrete suggestions to the table—for example, the good tip about trying the 5:2 diet, which turned out to fit well with my temperament, and is in line with the idea of not restricting yourself from the things you enjoy, but enjoying less of them.

An important quality of the coach has been setting firm requirements for what I could and couldn't do. For example, I wasn't allowed to drink cola. I suggested that it could be okay on weekends, but that was a no go. And it turned out that I quickly found joy in drinking sparkling water instead. Considering who I am as a person, it has been a plus to have pretty firm orders. Of course I could do as I pleased, but it was clear to me that if I was serious about the coaching I had to “obey orders.” When I first began with the coach, I drank 1½ L of cola every day. Today I drink it on special occasions, and weeks can go by where I don't drink it.

The coach as a role model

It may sound as if it has been easy—and it both has and hasn't been. At times I have been surprised that it isn't harder, but there have also been times, where it has taken a lot of will power not to slip into “today it's okay to eat some sweets.” When having those thoughts, it has surprised me what role a coach plays. In these situations, the coach pops up in my head as a reminder. It is, of course, the feeling of having a responsibility toward the coach (and myself of course, but for me it is played out through the coach). The coach as a person has thus been a point of attention and been an inspiration in regard to keeping the deals we have made. The feeling that the coach has also given a part of himself—he can be under a similar pressure, but that doesn't mean he eats unhealthy things—has also been an inspiration.

In regard to the relief I have achieved—“getting out of the dark hole”—I have afterwards thought about the timing of the process. Would I have achieved the same results if I had gone to see the coach sooner? Could I have avoided gaining the last five kilos? My answer is no, because the process has made it clear to me that it is also very much about being ready for it. Without my commitment and without me being ready to take action it won't succeed—even a motivating coach can't change that.

Naturally, I wish for it to pass quickly—meaning that I quickly will get even lighter and get rid of the last 10–12 kilos. But if I can do it the way I have just done it—lose it over the course of a year and a half, then that would definitely be preferred. Because it feels good that it is happening on the basis of me being aware of having a normal relationship with my food and what I consume.

Conclusions

In this short section, I highlight a couple of conclusions that stand out from both my analysis of the sessions and Anna's evaluative reflections:

  • Health belief is a poor basis for change : Perceived seriousness and perceived susceptibility to specific health and lifestyle issues might lead to a growing awareness of health issues, but they did not help Anna understand herself and the complexity of her life situation as the basis for changing her lifestyle.
  • Being ready for change : Anna stated that the time was ripe for her to change. Bohart and Tallman (2010) highlighted the client (in psychotherapy) as the central agent in the change process. With reference to different studies and authors, they see the client as the most important determinant of outcome. This can be assumed to apply to coaching as well. Clients are not passive recipients in the dialogue; their willingness, their active involvement, and their way of making meaning of their lives are pivotal for the progress and the effect of the intervention. In our last session, Anna emphasized that it is hard work to keep her ambitions alive and to continue to strive towards her goal.
  • Perceived support and encouragement from the social environment and from the coach are fundamental for the client's willingness and ability to develop further and continue on the desired path. From this perspective, Anna's sister, as a supportive buddy, had a major influence on keeping Anna on the right track. They shared something important in their lives—Anna could phone her sister when things got hard (see also Weiner, 1998). The change in Anna's working environment (i.e., no overtime work) has been another key element which supported her weight loss ambitions.
  • Anna mentioned the commitment of the coach (i.e., the determination to get her to drink mineral water instead of cola) as an important element for her development. This coach commitment was acknowledged by her as one of the central elements which kept her going. The collaborative perspective, also in regard to what worked well for her during the sessions (Miller, Duncan, Sorrell, Brown, & Chalk, 2006 ), was the key aspect of a fruitful and generative dialogue.

Acknowledgements

I thank Anna for her cooperation, not only during our sessions, but also in regard to supporting the article. I wish her all the best for the future.

Conflict of interest and funding

The author has not received any funding or benefits from industry or elsewhere to conduct this study.

1 The decision was made on the basis of an invitation by the guest editors of this issue, and after a first review of the topic and abstract suggested by the author. The client of the study has given full consent to be the case of this study. Under Danish law, ethical approval is not required for a social science study like this; ethics committees simply do not review social science research applications.

2 This term is used in systemic therapy and coaching (Cecchin, Lane, & Ray, 1993 ).

  • Adams K. F, Schatzkin A, Harris T. B, Kipnis V, Mouw T, Ballard-Barbash R, et al. Overweight, obesity, and mortality in a large prospective cohort of persons 50 to 71 years old. New England Journal of Medicine. 2006;355:763–778. doi: 10.1056/NEJMoa055643.
  • Bandura A. Social learning theory. Englewood Cliffs, NJ: Prentice Hall; 1977.
  • Bandura A. Self-efficacy: The exercise of control. New York: Freeman; 1997.
  • Barad K. Meeting the universe halfway. London, UK: Duke University Press; 2007.
  • Becker M. H, Rosenstock I. M. Comparing social learning theory and the health belief model. In: Ward W. B, editor. Advances in health education and promotion. Greenwich, CT: JAI Press; 1987. pp. 235–249.
  • Bohart A. C, Tallman K. Clients: The neglected common factor in psychotherapy. In: Duncan B. L, Miller S. D, Wampold B. E, Hubble M. A, editors. The heart & soul of change. 2nd ed. Washington, DC: American Psychological Association; 2010. pp. 83–111.
  • Cecchin G, Lane G, Ray W. A, editors. Irreverence: A strategy for therapists’ survival. London: Karnac; 1993.
  • Daddario D. A review of the use of the health belief model for weight management. MEDSURG Nursing. 2007;16(6):363–366.
  • Dionne M. M, Yeudall F. Monitoring of weight in weight loss programs: A double-edged sword? Journal of Nutrition Education and Behavior. 2005;37(6):315–318.
  • Gergen K. J. Relational being: Beyond self and community. Oxford: Oxford University Press; 2009.
  • Heatherton T. F, Baumeister R. F. Binge eating as an escape from self-awareness. Psychological Bulletin. 1991;110:86–108.
  • Herman C. P, Mack J. Restrained and unrestrained eating. Journal of Personality. 1975;43:646–660.
  • Hibscher J. A, Herman C. P. Obesity, dieting, and the expression of ‘obese’ characteristics. Journal of Comparative Physiological Psychology. 1977;91:374–380.
  • Højgaard L, Søndergaard D. M. Theorizing the complexities of discursive and material subjectivity: Agential realism and poststructural analyses. Theory & Psychology. 2011;21(3):338–354.
  • Illeris K. Transformative learning in the perspective of a comprehensive learning theory. Journal of Transformative Education. 2004;2:79–89.
  • Jarvis P. The practitioner-researcher: Developing theory from practice. San Francisco, CA: Jossey-Bass; 1999.
  • Lane D. A, Corrie S. The modern scientist-practitioner: A guide to practice in psychology. London: Routledge; 2006.
  • McLaren L. Socioeconomic status and obesity. Epidemiologic Reviews. 2007;29:29–48.
  • Mezirow J, Associates. Fostering critical reflection in adulthood: A guide to transformative and emancipatory learning. San Francisco, CA: Jossey-Bass; 1990.
  • Miller S. D, Duncan B. L, Sorrell R, Brown G. S, Chalk M. B. Using outcome to inform therapy practice. Journal of Brief Therapy. 2006;5(1):5–22.
  • Mosley M, Spencer M. Fast diet: The secret of intermittent fasting—Lose weight, stay healthy, live longer. London: Short Books; 2013.
  • Ogden J. Psychosocial theory and the creation of the risky self. Social Science & Medicine. 1995;40(3):409–415.
  • Ogden J. Diet as a vehicle of self-control. In: Yardley L, editor. Material discourse of health and illness. New York: Taylor & Francis; 1997. pp. 199–216.
  • Ogden J. Health and the construction of the individual. New York: Taylor & Francis; 2002.
  • Ogden J, Whyman C. The effect of repeated weighing on psychological state. European Eating Disorders Review. 1997;5(2):121–130.
  • Pavlov I. P. Conditioned reflexes. Oxford: Oxford University Press; 1927.
  • Richardson L. Writing: A method of inquiry. In: Denzin N, Lincoln Y, editors. The handbook of qualitative research. 2nd ed. Thousand Oaks, CA: Sage; 2000. pp. 923–948.
  • Rosenstock I. M. The health belief model and preventive health behavior. Health Education Monographs. 1974;2(4):355–387.
  • Rosenstock I. M, Strecher V. J, Becker M. H. Social learning theory and the health belief model. Health Education Quarterly. 1988;15:175–183.
  • Rubak S, Sandboek A, Lauritzen T, Christensen B. Motivational interviewing: A systematic review and meta-analysis. British Journal of General Practice. 2005;55(513):305–312.
  • Schachter S. Obesity and eating. Science. 1968;161:751–756.
  • Schachter S, Gross L. Manipulated time and eating behaviour. Journal of Personality and Social Psychology. 1968;10:98–106.
  • Schön D. A. The reflective practitioner: How professionals think in action. New York: Basic Books; 1983.
  • Skinner B. F. Science and human behaviour. New York: Macmillan; 1953.
  • Smith L. L, Lake N. H, Simmons L. A, Perlman A. I, Wroth S, Wolever R. Q. Integrative health coach training: A model for shifting the paradigm toward patient-centricity and meeting new national prevention goals. Global Advances in Health and Medicine. 2013;2(3):66–74.
  • Stelter R. The transformation of body experience into language. Journal of Phenomenological Psychology. 2000;31(1):63–77.
  • Stelter R. Coaching: A process of personal and social meaning making. International Coaching Psychology Review. 2007;2(2):191–201.
  • Stelter R. Learning in the light of the first-person approach. In: Schilhab T. S. S, Juelskjær M, Moser T, editors. The learning body. Copenhagen: Danish University School of Education Press; 2008. pp. 45–65.
  • Stelter R. Coaching as a reflective space in a society of growing diversity—Towards a narrative, postmodern paradigm. International Coaching Psychology Review. 2009;4(2):207–217.
  • Stelter R. A guide to third generation coaching: Narrative-collaborative theory and practice. Dordrecht: Springer Science + Business Media (also available as e-book); 2014.
  • Stelter R, Law H. Coaching: Narrative-collaborative practice. International Coaching Psychology Review. 2010;5(2):152–164.
  • Swencionis C, Rendell S. L. The psychology of obesity. Abdominal Imaging. 2012;37:733–737. doi: 10.1007/s00261-012-9863-9.
  • Weiner S. The addiction of overeating: Self-help groups as treatment models. Journal of Clinical Psychology. 1998;54(2):163–167.
  • White M. Narrative practice and the unpacking of identity conclusions. In: White M, editor. Narrative practice and exotic lives: Resurrecting diversity in everyday life. Adelaide: Dulwich Centre Publications; 2004. pp. 119–148.
  • White M. Maps of narrative practice. New York: Norton; 2007.
  • Whitmore J. Coaching for performance: GROWing human potential and purpose. 4th ed. London: Nicholas Brealey; 2009.
  • Wolever R. Q, Simmons L. A, Sforzo G. A, Dill D, Kaye M, Bechard E. M, et al. A systematic review of the literature on health and wellness coaching: Defining a key behavioral intervention in healthcare. Global Advances in Health and Medicine. 2013;2(4):38–57.
  • Yin R. K. Case study research: Design and methods. 4th ed. Thousand Oaks, CA: Sage; 2009.


GTM Strategist


Can you get to your first million with a single case study?

Rob Snyder, a fellow at Harvard Innovation Labs, thinks you can.


Dear GTM Strategist!

Are we overthinking GTM and wasting our precious resources and time by worrying about the next hot "hack"? 


GTM should be a straightforward, proprietary strategy you and your team can quickly grasp and implement. It should be crystal clear to everyone and distinctive enough to catch your competition by surprise. 

Here is a simple example: 

I sold all my consulting services in Q2 by showcasing a single case study. The case study described how my client built a $4 million sales pipeline for a B2B SaaS solution by targeting large companies using outbound. The (not so) secret ingredients: 

Differentiated positioning 

Kick-ass sales deck 

ICP-first landing page for the offer (not a home page!) 

Well-targeted outbound campaign with 300–400 relevant prospects focusing on a no-brainer offer 

The service consisted of 4 workshops and co-creation of deliverables (contact bases, segmentation, outreach sequences, and refining the business development process) with a client. 

Before I stumbled upon Rob Snyder's Harvard Innovation Labs: The Path to PMF presentation, I thought this success was a stroke of luck. 

But it is actually a proven system that you can replicate. 

I took 56 screenshots and studied his presentation for 6 hours (did all exercises). 

I STRONGLY encourage you to dive deeper into it yourself. 


I've also distilled the most crucial lessons and action points to a shorter version and asked Rob to share more examples. 

In this Substack, you will learn: 

There is no such thing as demand creation, according to Rob and a giant poodle named Silky 🐩 I met in Budapest 

You will be invited to consider a bold idea: that true PMF is just ♻️ replicating a single case study, made with a single person you can help in mind 

Your most essential sales material can be a simple case study, not a 36-slide sales deck or an 87-page white paper

You will become more confident in communicating results to clients and getting the right job done for them again and again. 

Let’s dive right in! 

You cannot create demand. You can only find and harness it. 

"I chuckled the first few times I heard the job title “Demand Generation Manager.” Most businesses can’t generate demand any more than they can generate the sun rising. Your aim is to tap into demand rather than trying to generate it. You want to be like a solar panel absorbing the sun’s rays and turning them into usable energy as efficiently as possible."   - Allan Dib, 2x best selling author,  The 1-Page Marketing Plan (2016) & Lean Marketing (2024)

You cannot convince people to buy your product. Instead, you should focus on identifying those who already have a need for it and are willing to pay for it. 

Rob thinks of demand differently than most others. For him, demand is “supply-agnostic”. Don’t think of demand FOR your product. Instead, think of demand as: “What’s the project on someone’s Asana board that I can tie my product to?”

People are searching for solutions and jobs to be done, not products.

Meet Silky, a lovely poodle 🐩 parading in Budapest that our Australian Shepherd Juno enjoyed playing with. 

He sports a continental groom, a classic style for poodles but not for other breeds.

Would I ever groom my dogs to look like Silky? - Nope. 

Is it convenient for a dog guide to do it? - Nope. 

Is Silky a great-looking doggo? - Juno approves. 🐾


But when I saw Silky for the very first time, I whispered to my husband: “Oh boy, I would never cut Juno like this.”

No fashion trend or expert recommendation would ever make me consider replicating “Silky style” on an Australian Shepherd because it is impractical. She would probably get sunburned and too hot, and TBH, she would look like an overweight lion. 🦁

Silky’s owners and I are not the same ICP. For instance, Silky’s owners' job is to have a great-looking show dog, while my job is to have a well-trained (and tired) working dog. 

Understanding these 'jobs to be done' is crucial for creating products and services that truly meet your customers' needs. 

There is no such market as “all dog owners”. But among dog owners around the globe, there are niche markets for nearly anything. You just have to find your niche and serve it well. 


Your job is to “find your people,” but Rob Snyder added a brilliant component to the mix: 

*Selling* is the only way to figure this out *for real*

This doesn’t mean “saying that they will buy”, nor “clicking on some fake gateways on the website to learn that a non-existing product is out of stock”. 

They have to pay for it, use it, and keep using it. Hopefully, you can even upsell them, too, so your businesses will grow together. 

Back to Michael Skok’s definition of Product-Market fit, which includes securing willingness to pay.

Rob argues that what’s most important is the post-sale “hell yes” - which for B2B software startups often means renewals and upgrades, but for consultants, it often means testimonials and referrals. 

In simple words: start selling as soon as possible.

Figure out your ONE replicable case study that has intense demand! 

Rob makes 3 arguments about case studies that are mind-bending.

First, he argues that the case study is *the* most upstream part of your business - it’s what defines your ICP, persona, market segment, differentiation, and everything else that matters. When you feel like a million things matter… you’re often focusing on the things that emerge based on the case study, not the case study itself. In this way, the case study model provides a powerful focusing device.

Second, he argues that a business is just a system that replicates case studies . It’s not more complicated than that. And you can look at all the things you do (marketing, sales, success, etc.) as components of the “case study replication factory” - and ask yourself, “where’s the factory bottleneck right now?” With this mental model, you always know exactly where to focus in your business.

Third, case studies are your strongest sales and learning materials. Shockingly, just showing the case study to potential customers is a highly effective sales approach, which also helps you FIGURE OUT what case study you're delivering.

How to figure out your case study and find product-market fit

Startups should talk to 5-10 potential customers every week to get enough volume to earn an intuition about what the real replicable case study should be. This helps avoid getting “pigeonholed” - and then, as you serve customers, you can see the patterns emerge across customer conversations. 

All this is done on the path to product-market fit. Here are 5 levels of PMF:


These levels represent different stages of product development and market acceptance, with the ultimate goal being to reach PMF Level 5, where your product is a perfect fit for the market and demand is high. 

Case study: Parallel

Parallel is a growing startup that offers an AI-powered financial modeling tool. The market for such tools is competitive, with tons of options available for startups.

What Parallel found is that startup founders had often tried & failed to implement other financial modeling tools because those tools were meant for finance teams, not founders (who rarely come from a finance background). But founders felt like they needed ways to run hiring and fundraising scenarios, and wanted to be in the driver's seat for this vs. relying on an analyst or fractional CFO.

You can see their case study emerging based on this:

A founder looking towards hiring or fundraising, wanting to run scenarios and see cash implications.

This founder considers a DIY Excel model, learning a financial-modeling-for-finance-people tool, or a finance assistant for founders (which is what Parallel offers).

A founder explores Parallel, and is quickly able to model scenarios and get confidence about when to raise, how much to raise, when to hire... and can keep iterating on their scenarios for every big & small decision that impacts burn and runway.

Parallel is uniquely suited to deliver a case study in an area of intense demand. They can focus on replicating this case study and getting  every  customer to say "hell yes" post-sale.

Why case studies work

Unlike a typical problem-solution-result structure of a case study, Rob’s framework acknowledges more realities: 

<5% of the market is ready to buy now (you can see that in your conversion rates). A prospect is not always in a position, or willing, to buy. You must recognize buying/sales triggers and nail your marketing timing.

Good positioning = a promise to a client + why you . We are competing against alternatives and should therefore be laser-focused on DIFFERENTIATED VALUE . Why is your solution a better fit for the client than the alternatives they are considering? A buying decision is never made in isolation. You compete against other options, including “do nothing.” 

Focus on results - clients do not buy how (input) but what (output). Simply put, they do not necessarily care how many hours of research/development you put into your product; they care about what results they will get from using it and how fast. Double down on the promise and provide evidence (case studies) that you will deliver on this promise for them. Never neglect the emotional and personal benefits attached to the promise. Not everything is rational. Remember Silky. 

Again, talking to customers is crucial. You just can't know what will work in advance and from a whiteboard. You figure out all of this - the case study, the product, the segment, the "when" -- all through sales and delivery conversations. Why do people say "hell yes" pre-sale? Why do people say "hell yes" post-sale? What does that tell us about the case study?

After you find your winning formula, you can repurpose this case study for your sales deck, and you are ready to win some new business. Now, let’s book some meetings!    

Do not overthink - start doing ASAP

___________

Learning by doing and staying focused is the answer to our Product-Market fit prayers, my friend. Distractions and FOMO tend to be the founder’s worst enemies. Here is how Rob describes the path to PMF: 


You will likely get your first ten customers from your personal and second-degree network. If that is not available immediately, you can go on platforms where target customers are actively searching for solutions or send some personalized outbound. 

How to book at least 5-10 relevant meetings a week: 

LinkedIn + email is usually more than enough to get to 5-10 per week. 

You can use automation tools (e.g., Sales Nav + Dripify; Apollo), but don’t sound like you’re using automation tools.

Get to 5-10 per week, then debug your case study, sales + delivery process and targeting

Eventually, your *scalable* pipeline source will emerge based on your case study

It does not hurt to publish your case study on social media, too, but it will take months to build a relevant audience if you have not intentionally done so before. 

Later on your GTM journey, you will have to find 1-3 replicable growth motions and create processes so that they work repeatedly well for you, but for now, your GTM motion is: 

Inbound → Have a decent social media presence and communicate results there. 

Outbound → Build a list of 200ish ICP-like decision makers, present a case study, and invite them for a meeting. 

Paid digital  → Do not do it before you have validated your case study. 

ABM → Start building relationships with 5-10 dream clients by value commenting on their posts. 

Community → Create a value post about the case study in communities relevant to your ICP. 

Partners → Do not do it before you have validated your case study. 

Product-led growth → Sell a result of the case study. Do not code yet - show a prototype, do it manually, or use some tweaked tools/automation to prove that you can deliver the desired result. 

Action time! 

Read through Rob’s epic presentation to get more insights and ideas. 

Follow him for more PMF wisdom on Substack and LinkedIn .


Work on your case study using this framework - I did it, and it is great.

No excuse. Let’s go to market! 

One case study at a time.

If you are new here, check out GTM Strategist solutions, which will help you get profitable traction much faster. They were battle-tested with over 6,500 entrepreneurs, and they work.

Get the e-book



  • Open access
  • Published: 25 August 2024

Accurate long-read transcript discovery and quantification at single-cell, pseudo-bulk and bulk resolution with Isosceles

  • Michal Kabza 1 ,
  • Alexander Ritter   ORCID: orcid.org/0000-0002-1998-7357 2 ,
  • Ashley Byrne   ORCID: orcid.org/0000-0002-2177-924X 3 ,
  • Kostianna Sereti 4 ,
  • Daniel Le 3 ,
  • William Stephenson   ORCID: orcid.org/0000-0002-3779-417X 3 &
  • Timothy Sterne-Weiler   ORCID: orcid.org/0000-0003-2023-0383 2 , 4  

Nature Communications, volume 15, Article number: 7316 (2024)


  • Computational biology and bioinformatics
  • Computational models
  • RNA sequencing
  • Transcriptomics

Accurate detection and quantification of mRNA isoforms from nanopore long-read sequencing remain challenged by technical noise, particularly in single cells. To address this, we introduce Isosceles, a computational toolkit that outperforms other methods in isoform detection sensitivity and quantification accuracy across single-cell, pseudo-bulk and bulk resolution levels, as demonstrated using synthetic and biologically derived datasets. Here we show that Isosceles improves the fidelity of single-cell transcriptome quantification at the isoform level and enables flexible downstream analysis. As a case study, we apply Isosceles to uncover coordinated splicing within and between neuronal differentiation lineages. Isosceles can be applied in diverse biological systems, facilitating studies of cellular heterogeneity across biomedical research applications.

Introduction

Alternative splicing (AS) contributes to the generation of multiple isoforms from nearly all human multi-exon genes, vastly expanding transcriptome and proteome complexity across healthy and disease tissues 1 . However, current short-read RNA-seq technology is restricted in its ability to cover most exon-exon junctions in isoforms. Consequently, the detection and quantification of alternative isoforms is limited by expansive combinatorial possibilities inherent in short-read data 2 . Short read lengths can impose additional challenges at the single-cell level. For example, nearly all isoform information is lost with UMI-compatible high-throughput droplet-based protocols which utilize short-read sequencing at the 3′ or 5′ ends 3 . Recent advances in long-read sequencing technologies provide an opportunity to overcome these limitations and study full-length transcripts and complex splicing events at both bulk and single-cell levels, yet downstream analysis must overcome low read depth, high base-wise error, pervasive truncation rates, and frequent alignment artifacts 4 . To approach this task, computational tools have been developed for error prone spliced alignment 5 and isoform detection/quantification 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 . However, these tools vary widely in accuracy for detection and quantification 14 , their applicability to bulk or single-cell resolutions, and in their capabilities for downstream analysis.

Here we present Isosceles (the Isoforms from single-cell, long-read expression suite), a computational toolkit for reference-guided de novo detection, accurate quantification, and downstream analysis of full-length isoforms at single-cell, pseudo-bulk, or bulk resolution levels ( https://github.com/Genentech/Isosceles ). To achieve a flexible balance between identifying de novo transcripts and filtering misalignment-induced splicing artifacts, the method utilizes acyclic splice-graphs to represent gene structure 15 . In the graph, nodes represent exons, edges denote introns, and paths through the graph correspond to whole transcripts (Fig. 1a). The splice-graph and transcript set can be augmented from observed reads containing novel nodes and edges that surpass reproducibility thresholds through a de novo discovery mode, enhancing the adaptability of the analysis. In the process, sequencing reads are classified relative to the reference splice-graphs as either node-compatible (utilizing known splice sites) or edge-compatible (utilizing known introns), and further categorized as truncated or full-length (Fig. 1a). Full-length reads can be directly assigned to known transcripts, while those representing novel transcript paths are assigned stable hash identifiers. These identifiers make it easy to match de novo transcripts across data from the same genome build, irrespective of sequencing run, biological sample, or independent study. In contrast, truncated reads may be ambiguous as to their transcript of origin, a challenge commonly encountered in short-read data analysis. To address this, we utilize a concept developed for short-read methods, Transcript Compatibility Counts (TCC) 16 , as the intermediate quantification of all reads. TCCs are used to obtain the maximum likelihood estimate of transcript expression through the expectation-maximization (EM) algorithm (refs. 17 , 18 ; see Methods). 
This approach tackles another challenge: accurately quantifying transcripts at multiple single-cell resolution levels. First, transcripts can be quantified through EM within single-cells, which can be subsequently used to obtain a neighbor graph and low dimensional embedding (eg. with common tools like Seurat 19 ). Second, transcripts can be quantified at the pseudo-bulk level through EM on the TCCs summed within cell groupings (Fig.  1b ). This configuration enables versatility of quantification; pseudo-bulk can be defined by the user in numerous ways, such as through marker labeling, clustering, windows along pseudotime, or for each cell based on its k-nearest neighbors (kNN). Downstream statistical analysis and visualization for percent-spliced-in and alternative start and end sites is seamlessly integrated to facilitate biological interpretation of isoforms. Our performance evaluations demonstrate that these features act together to enhance the accuracy of isoform detection and quantification, particularly at lower expression levels. These findings support Isosceles as a robust and performant tool for long-read transcriptome analysis across resolution levels.
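The TCC-plus-EM idea can be illustrated with a minimal sketch (invented toy data, not Isosceles' actual implementation): reads in each compatibility class are fractionally assigned to transcripts in proportion to the current abundance estimates, which are then renormalized until convergence.

```python
import numpy as np

def em_abundance(class_counts, compat, n_iter=200):
    """Estimate transcript proportions from Transcript Compatibility Counts.

    class_counts: (C,) reads observed per equivalence class.
    compat:       (C, T) 0/1 matrix; compat[c, t] = 1 if transcript t is
                  compatible with class c. Assumes every class has at least
                  one compatible transcript.
    """
    C, T = compat.shape
    theta = np.full(T, 1.0 / T)               # uniform starting abundances
    for _ in range(n_iter):
        # E-step: split each class's reads among its compatible transcripts
        # in proportion to the current abundance estimates.
        weights = compat * theta              # (C, T)
        weights /= weights.sum(axis=1, keepdims=True)
        expected = class_counts @ weights     # expected reads per transcript
        # M-step: renormalize expected counts into proportions.
        theta = expected / expected.sum()
    return theta

# Two transcripts: class 0 is unique to t0, class 1 unique to t1,
# and class 2 (e.g. a truncated read) is compatible with both.
counts = np.array([60.0, 20.0, 20.0])
compat = np.array([[1, 0],
                   [0, 1],
                   [1, 1]], dtype=float)
theta = em_abundance(counts, compat)          # converges to [0.75, 0.25]
```

The ambiguous class's 20 reads end up split 3:1, matching the ratio implied by the uniquely assigned reads; this is the sense in which EM "rescues" truncated reads rather than discarding them.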

figure 1

a Splice-graph building and path representation of transcripts (colored lines). Augmentation with de novo nodes and edges (dashed). Ambiguous reads are assigned to Transcript Compatibility Counts (TCCs) to be quantified using the expectation-maximization (EM) algorithm (bottom; panel b ). b The Isosceles approach to multi-resolution quantification using the EM algorithm. Transcripts quantified from single-cell TCCs using EM (gray cell, right) can be used for dimensionality reduction (DimRed) with UMAP or to derive a k-nearest neighbors graph (kNN). The original single-cell TCCs can be grouped based on user-defined pseudo-bulk definition and transcripts re-quantified, either for clusters/markers or for each cell based on its neighborhood from kNN. Figure  1 /panel b, created with BioRender.com, released under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International license.

Isosceles is accurate for transcript discovery and quantification

To robustly assess Isosceles performance against a wide array of currently available software 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , we simulated ground-truth nanopore reads from reference transcripts proportional to the bulk expression profile of an ovarian cell line, IGROV-1, using NanoSim 20 (see Methods). In the evaluation of annotated transcript quantification against the ground truth, Isosceles outperforms other programs, achieving a Spearman correlation of 0.96 (Supplementary Fig. 1b). Bambu was the next best method at 0.92, while both IsoQuant and ESPRESSO were lower at 0.88. Assessing quantification error through absolute relative difference, Isosceles decreases median and mean error by 21% compared to the next most accurate method, Bambu (0.23 vs. 0.29 and 0.41 vs. 0.52; Fig. 2a and Supplementary Fig. 1a). The reduction in error over other methods is even more pronounced elsewhere: ~45% lower error than the median performer ESPRESSO, and 67-85% lower error than the worst performer NanoCount, which failed to detect many simulated transcripts (Fig. 2a and Supplementary Fig. 1a).
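The error metric itself is simple; a helper matching the Fig. 2a definition, abs(ground_truth − predicted) / ((ground_truth + predicted)/2), applied to invented toy TPM values:

```python
import numpy as np

def median_relative_difference(ground_truth, predicted):
    """Median symmetric relative difference between two TPM vectors,
    per the Fig. 2a definition. Assumes no transcript has both values
    equal to zero (which would make the denominator zero)."""
    gt = np.asarray(ground_truth, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    rel = np.abs(gt - pred) / ((gt + pred) / 2.0)
    return float(np.median(rel))

# Toy TPM values: per-transcript errors are 2/11, 2/19 and 0.
m = median_relative_difference([10.0, 100.0, 5.0], [12.0, 90.0, 5.0])  # 2/19
```

Because the denominator is the mean of the two values, the metric is symmetric in over- and under-estimation and bounded at 2, which makes medians comparable across methods with very different dynamic ranges.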

figure 2

a Median relative difference of transcripts per million [TPM] values as defined by abs(ground_truth - predicted) / ((ground_truth + predicted)/2) for each method on reference transcripts. b Downsampling benchmarks for 30% transcripts withheld. Transcript detection defined as TPM > 0, the TPR, FDR and F1 score metrics as a function of the expression percentile (primary x-axis) and TPM values (secondary x-axis) of the simulated transcripts. For each program, the better of either single-program or pre-detection combination is plotted (see Supplementary Fig.  2 for all combinations), with TPR stratified by annotated, withheld and all transcripts alongside FDR and F1 score, with overall values plotted as bars below the graphs. c Median relative difference of annotated and withheld transcripts (30% downsampling) as a function of the simulated expression level, as defined for panel b . Source data are provided as Source Data files.

Since detection of both known and novel transcripts is a major attraction of long-read sequencing, we investigated the ability of various methods to detect 10%, 20% or 30% of transcripts when they are withheld from the annotation file (3269, 6537 and 9801 transcripts respectively; 30% in Fig.  2b , 10-30% in Supplementary Fig.  2a-c ). Here, detection is defined as output of a transcript annotation with a splicing structure correctly matching a simulated transcript (irrespective of transcript start/end positions) and a quantification value greater than zero in transcripts per million (TPM > 0). We calculate the true-positive rate (TPR) as the number of correct transcripts detected from the total number with reads simulated. The false-discovery rate (FDR) is defined as the percentage of incorrect transcripts out of the total detected. The overall F1-score is computed as the harmonic mean of sensitivity (TPR) and precision (1-FDR). Notably, most methods output low TPR even for transcripts that are not withheld from the annotation file, as we illustrate by separating the TPR calculations for annotated and withheld transcripts (Fig.  2b left, Isosceles=98.9% vs. median other=69.7%). Methods such as NanoCount and LIQA do not have a de novo detection mode, so we benchmark them with a pre-detection step using StringTie2 21 , adding this step to other tools for consistency (eg. Bambu, ESPRESSO, and also include IsoQuant alongside single-method detection for Isosceles; Fig.  2b , Supplementary Fig.  2 , dashed lines). While ESPRESSO and IsoQuant alone have modestly higher single-method TPR for withheld transcripts than Isosceles (1.0 and 6.0 percentage points respectively; Supplementary Fig.  2a ), Isosceles demonstrates the highest single-program TPR across all transcripts, achieving 78.2%, compared to the next best method, IsoQuant, which has a TPR of 74.2% (Supplementary Fig.  2a , middle). 
Moreover, combining Isosceles with pre-detection by IsoQuant outperforms all other methods and combinations for withheld and annotated transcripts, achieving an 84.5% TPR overall (Fig.  2b ). Importantly, Isosceles exhibits this relative gain in sensitivity at lower expression levels than other methods (<10 TPM), and at a reasonable FDR of 4.3%, which is comparable to other programs (Fig.  2b ; median FDR of 3.0%). Taken together, Isosceles presents the highest F1-score overall both independently (86.2%; Supplementary Fig.  2a ) and with pre-detection using IsoQuant (89.7%; Fig.  2b ). When considering the relative difference of quantification for annotated and withheld transcripts, Isosceles performs at 16.7% to 76.9% decrease in median error compared to other methods on annotated transcripts and 23.4% to 82.0% when including de novo (withheld) transcripts across the range of expression levels (Fig.  2c left & right; Supplementary Fig.  3a ). Similar to detection sensitivity, the most pronounced improvement in quantification accuracy occurs for the lowest half of expressed transcripts.
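The detection metrics above follow directly from the stated definitions; a sketch with hypothetical transcript identifiers standing in for matched splicing structures:

```python
def detection_metrics(truth_ids, detected_ids):
    """TPR, FDR and F1 per the paper's definitions. Assumes both sets are
    non-empty. 'Detection' in the benchmark means a correctly matching
    splicing structure with TPM > 0; here IDs stand in for structures."""
    truth, detected = set(truth_ids), set(detected_ids)
    tp = len(truth & detected)
    tpr = tp / len(truth)                        # sensitivity
    fdr = (len(detected) - tp) / len(detected)   # spurious / all detected
    precision = 1.0 - fdr
    f1 = 2 * tpr * precision / (tpr + precision)
    return tpr, fdr, f1

# Hypothetical example: 4 simulated transcripts, 3 detections, 1 spurious.
tpr, fdr, f1 = detection_metrics({"t1", "t2", "t3", "t4"}, {"t1", "t2", "t5"})
```

Note that stratifying TPR by annotated versus withheld transcripts, as in Fig. 2b, only changes which IDs go into `truth_ids`; the arithmetic is unchanged.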

In addition to read simulations, we benchmarked performance in the context of nanopore sequencing of synthetic molecule spike-ins. To investigate quantification accuracy, two mixtures containing Sequins 22 were analyzed and compared to ground-truth values. Isosceles, Bambu, and IsoQuant achieved equally high Spearman correlations for both mixtures (0.97 and 0.98; Supplementary Fig.  3b ), with Isosceles slightly outperforming the others with a lower mean relative difference for the first mixture (0.71 vs. 0.74). To evaluate transcript detection in a synthetic setting that does not resemble a model organism, we utilized SIRV spike-in samples 23 alongside annotated and withheld transcript sets (Supplementary Fig.  3c-e ). Isosceles, Bambu, and IsoQuant all showed reasonably high precision (96-99%) for read assignment to correct annotations vs. over-annotation decoys (see Methods, Supplementary Fig.  3e ). Similar performance was also achieved for transcript detection with all three methods F1-scores falling into the range of 76-78% (Supplementary Fig.  3c ). Despite identifying fewer withheld transcripts, Isosceles, with zero false positives, outperformed IsoQuant, which had four false positives and missed five annotated structures (Supplementary Fig.  3d ). Taken together, these data suggest Isosceles is able to perform in both well-annotated and non-model organism contexts.

Isosceles also shows favorable performance with the PacBio long-read sequencing platform. We compared nanopore reads from the Nanopore WGS Consortium and PacBio reads from the ENCODE Consortium 24 to short-read Illumina quantifications for the same cell line (GM12878; see Methods). We find that Isosceles, IsoQuant and Bambu all perform well in the ONT vs. Illumina comparison, however Isosceles displays slightly higher Spearman correlation for PacBio vs. Illumina (Supplementary Fig.  4a ). Lastly, computational speed and RAM usage are important metrics that impact the overall usability and feasibility of large-scale analysis efforts. Benchmarking a 5 M read PromethION IGROV-1 sample, Isosceles emerged as one of the more efficient tools, finishing approximately two hours sooner than the median performer IsoQuant (93.0% of total CPU time) and outperforming the slowest software, ESPRESSO, by two days and two hours (33.4% of total CPU time; Supplementary Fig.  4b ).

Isosceles outperforms other methods at single-cell resolution

While known ground-truth values are effective for benchmarking performance, the analysis of true biological data introduces additional complexities that synthetic and simulated data may not fully capture. To address this, we benchmark each method’s fidelity of quantification for the same biological sample and ability to differentiate decoy samples across bulk and single-cell resolutions. We perform nanopore sequencing on 10X Genomics single-cell libraries from the pooling of three ovarian cancer cell lines, IGROV-1, SK-OV-3, and COV504, noting that the cells separate into three clusters by transcript expression and that each cluster corresponds to a separate genetic identity according to Souporcell 25 (Fig.  3a ; see Methods). Conducting bulk nanopore sequencing in parallel on MinION and PromethION platforms, we investigate the consistency of those same cell lines as well as the ability to distinguish against four additional ovarian cancer cell lines sequenced as decoys, namely COV362, OVTOKO, OVKATE, and OVMANA. We find that Isosceles consistently maintains the lowest mean relative difference (24-43% less than other methods) and the highest Spearman correlation (0.87 for Isosceles vs. 0.75 for the next highest, Sicelore) amongst methods quantified on the same cell line in bulk and pseudo-bulk (Fig.  3b and Supplementary Fig.  5a ). We further find that this performance is recapitulated when comparing across technical runs, between platforms, and independent of the number of cells included or transcripts compared for IGROV-1 (Supplementary Fig.  4c-d ). Isosceles’ application of the EM algorithm is designed to result in greater usage of ambiguous reads, which may ultimately provide higher apparent read depths, and influence quantification accuracy of both matched and decoy comparisons. Therefore, to ensure the observed results reflect accuracy and not merely precision, we stringently consider the consistency of difference between matched and decoy comparisons. 
To enhance discriminatory power we investigate highly variable transcripts (HVT) between cell lines as determined by each program 26 . While all methods perform better using fewer HVTs, Isosceles exhibits a 1.3- to 1.4-fold greater absolute difference in Spearman correlation than the next best method, IsoQuant, using between 500 and 4,000 HVTs (Fig.  3c ). This outperformance is also observed for mean relative difference as compared to the next best method, FLAMES, and is statistically significant across HVT numbers for both ( p value < 5.3×10 −5 for Spearman and p value < 3.4 × 10 −3 for mean relative diff. vs. next best methods; Wilcoxon paired signed-rank test, see Methods; Fig.  3c ). To provide orthogonal support for this conclusion, we simulated 100 cells at approximately 50k reads per cell and 5 M bulk reads for each sample using NanoSim with single-cell and bulk error models respectively (see Methods). We repeated the same benchmark, comparing matched and decoy metrics derived either from each method’s estimates based on simulated reads or from the ground-truth expression profiles used for the simulations. Isosceles outperforms other methods by 1.4 to 2.4-fold across metrics, with the exception of IsoQuant, which is equivalent to Isosceles for Spearman correlation only ( p value = 0.5; Supplementary Fig.  5c ). Last, we compare the simulated single-cell and pseudo-bulk quantifications for IGROV-1 to ground-truth. While all methods show inflated error for single-cells compared to pseudo-bulk, Isosceles harbors lower average error than other methods for both, demonstrating quantification accuracy even in a data-sparse context (Fig.  3d ).
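The matched-versus-decoy comparisons above hinge on rank correlation; a bare-bones Spearman sketch (no tie handling, so it is exact only for untied data; the TPM vectors are invented):

```python
import numpy as np

def spearman(x, y):
    """Pearson correlation of ranks; equals Spearman's rho for untied data."""
    def rank(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(1, len(v) + 1)
        return r
    rx, ry = rank(np.asarray(x, float)), rank(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Matched sample: same rank order -> rho = 1. Decoy: scrambled order.
matched = spearman([5.0, 1.0, 3.0, 9.0], [50.0, 10.0, 30.0, 90.0])
decoy = spearman([5.0, 1.0, 3.0, 9.0], [10.0, 90.0, 50.0, 30.0])
```

Restricting the comparison to highly variable transcripts, as in Fig. 3c, simply subsets both vectors before ranking; production analyses would use a library routine with proper tie correction instead of this sketch.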

figure 3

a 2D UMAP embedding of transcript-expression level quantifications from nanopore data of pooled IGROV-1, SK-OV-3, and COV504 ovarian cell lines, subsequently colored by genetic identity (according to Souporcell). b Mean relative difference (color scale) of each program’s quantifications across resolutions (pseudo-bulk vs. bulk data) for the top 4,000 highly variable transcripts. c Absolute difference between matched and decoy cell lines across a range of 500-10,000 highly variable transcripts (HVT) comparing mean relative difference and Spearman correlation metrics (shaded ribbons provide the upper and lower bounds of std. error). d Mean relative difference (as defined for Fig.  2a ) between ground-truth and estimated TPM values from simulated reads at pseudo-bulk (solid lines) and single-cell level (dashed lines). Source data are provided as Source Data files.

Isosceles enables biological discovery with single-cell data

Isosceles’ capabilities for accurate and flexible quantification also enhance downstream analysis and biological discovery. To demonstrate, we reanalyzed 951 single-cell nanopore transcriptomes from a mouse E18 brain. Investigating transcriptional markers (Supplementary Fig.  6 ), we observe the major cell types identified in the original study using Sicelore 9 . Isosceles quantifications provide greater resolution however, separating differentiating glutamatergic neurons into two distinct trajectories instead of one (annotated here as T1 and T2), in addition to the single GABAergic trajectory using Slingshot 27 (Fig.  4a ). We also observe separation of radial glia and glutamatergic progenitor cells, which were connected in the original study. Isosceles’ versatility of pseudo-bulk quantification coupled to generalized linear models (GLM), further distinguishes downstream experimental design capabilities for biological discovery. For example, to investigate transcriptional dynamics within trajectories we apply the EM algorithm to pseudo-bulk windows, quantifying transcript expression as a function of pseudotime. To summarize individual transcript-features, Isosceles provides the inclusion levels of alternative splicing (AS) events, such as alternative exons and splice sites quantified as percent-spliced-in 2 , 28 [PSI] or counts-spliced-in [CSI] (see Methods). In order to test for differential inclusion versus exclusion as a function of pseudotime (or any other condition), Isosceles seamlessly integrates with the DEXseq package 29 to utilize GLMs in the context of splicing (see Methods). Applying the method identifies 25 AS events changing within trajectories as well as 21 changing between trajectories respectively (Supplementary Data  1 ). Isosceles also implements the ‘isoform switching’ approach utilized in the original study (see Methods). 
However, we note that applying this method only identifies transcripts changing between major clusters, and none within glutamatergic or GABAergic neurogenesis trajectories (including the exemplar genes Clta and Myl6 presented in the original study; eg. Supplementary Fig.  7a ).
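At its core, a percent-spliced-in value is a ratio of inclusion-supporting reads to all informative reads for an event; the sketch below omits the length normalization and read weighting that a real pipeline (including Isosceles; see its Methods) applies:

```python
def percent_spliced_in(inclusion_reads, exclusion_reads):
    """PSI for an alternative-splicing event: inclusion-supporting reads
    over all informative reads, as a percentage. Counts-spliced-in (CSI)
    would keep the raw inclusion count instead of the ratio."""
    total = inclusion_reads + exclusion_reads
    if total == 0:
        return float("nan")   # event not covered in this cell or window
    return 100.0 * inclusion_reads / total

psi = percent_spliced_in(30, 10)   # 30 of 40 informative reads -> 75.0
```

The NaN case matters in single cells, where most events are uncovered in most cells; pseudo-bulk windows along pseudotime pool reads precisely to shrink that missing fraction.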

figure 4

a 2D UMAP embedding from PCA performed jointly on variable gene and transcript features. Gradient coloring by pseudotime according to each trajectory. Neural Progenitor Cells are abbreviated Pro., Immature neurons Imm. and Mature neurons Mat. T1 and T2 describe the two trajectories of Glutamatergic neurogenesis observed. b Heatmap of significant AS events colored by the ratio of observed CSI vs. permuted CSI for permutations within (top) or across all (bottom) trajectories. c UMAP density column from top to bottom: Celf2 gene expression, Celf2 alternative 5′ splice site (A5) in intron 12 ( Celf2:i12:A5 , chr2:6560659-6560670; row highlighted in panel b), and juxtaposed alternative 3′ splice site (A3) for intron 12 ( Celf2:i12:A3 , chr2:6553965-6553982). d AS event diagram on the top of Celf2 gene intron 12 where exons are shown as boxes and introns as lines, with the A5 event in red, and the A3 event in blue, with reads from cells in the beginning and the end of the glutamatergic T1 trajectory shown below respectively (boxed regions from the bottom panel). In the bottom panel are plots of CSI for windows along pseudotime for the observed data (A5, red) and (A3, blue) plotted over the background permutations in gray. e Mean PSI values of sample group quantifications from the human (Hsa38, left) and mouse (Mmu10, right) VastDB splicing databases 30 . Standard error is provided as bars (for sample groups with n  > 1 samples), with source accession identifiers for each sample provided in Supplementary Data  2 and raw values in Source Data.

One major challenge in the interpretation of single-cell data at the transcript-level (or event-level) is that fluctuations in detection or quantification may be attributable to gene expression changes alone. To decouple splicing dynamics and visualize them independently, we utilize a permutation-based approach. We estimate a background distribution by shuffling each gene’s splicing quantification among cells expressing that gene (within and between trajectories). We then visualize log ratios of the observed CSI values versus the mean expected CSI from these permutations (Fig.  4b and Supplementary Fig.  7b ; see Methods). Here, we observe AS events that exhibit precise changes within specific neuronal differentiation trajectories (such as only T1 or T2), including several RNA binding proteins (eg. Celf2 , Hnrnpa2b1 , Luc7l3 , Ythdc1 ). Exemplifying a unique mode of alternative splicing in the gene Celf2 , we observe a coordinated switch from one alternative donor splice site to an alternative acceptor splice site in the same intron as cells differentiate from glutamatergic progenitor to mature neurons (T1 trajectory, Fig.  4c-d ). To validate the statistical significance of this event, we compare observed to permuted values using a stringent empirical test (see Methods). Here, we find the splicing-change is robustly independent of the overall changes in Celf2 expression that simultaneously occur (Fig.  4c-d and Supplementary Fig.  8c ; p value < 3.8 × 10 −4 ). Underscoring biological significance, we note the two alternative splice sites have orthologs in other mammalian species (as annotated in VastDB 30 ) and high sequence conservation in the intronic region surrounding both splice sites (Supplementary Fig.  8a-b ). We show the mutual exclusivity and switch-like splicing change are similarly conserved in human and mouse, recapitulating the longitudinal observation across embryonic brain samples from bulk short-read datasets 30 (Fig.  
4e ), including an in vitro study of mouse neuronal differentiation 31 (Supplementary Fig.  8d ).
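The decoupling described above (shuffling a gene's per-cell splicing quantification among expressing cells, then taking log ratios of observed versus mean permuted CSI) can be sketched in a few lines of Python. This is an illustrative sketch, not the Isosceles implementation; the function name and the pseudocount value are assumptions:

```python
import math
import random

def csi_log_ratio(psi, gene_counts, n_perm=100, pseudocount=0.1, seed=0):
    """Log2 ratio of observed CSI vs. mean permuted CSI for one AS event.

    psi and gene_counts hold per-cell values for cells expressing the gene.
    Shuffling PSI among expressing cells preserves the gene-expression
    profile, so the ratio isolates splicing dynamics from expression changes.
    """
    rng = random.Random(seed)
    observed = [p * g for p, g in zip(psi, gene_counts)]
    expected = [0.0] * len(psi)
    for _ in range(n_perm):
        shuffled = psi[:]
        rng.shuffle(shuffled)  # permute splicing, keep expression fixed
        for i, (p, g) in enumerate(zip(shuffled, gene_counts)):
            expected[i] += p * g / n_perm
    return [math.log2((o + pseudocount) / (e + pseudocount))
            for o, e in zip(observed, expected)]
```

If PSI is constant across cells, every permutation reproduces the observed CSI and the log ratio is zero, which is the intended null behavior.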

In summary, Isosceles is a computational toolkit with favorable performance compared to other methods, as demonstrated through rigorous benchmarks on simulated, synthetic spike-in, and biological data from nanopore sequencing across ovarian cell lines. In these benchmarks, Isosceles performs transcript detection and quantification accurately, with improvements over existing methods that are most pronounced at lower expression levels. Notably, transcription factors and other regulatory proteins typically exhibit low gene expression levels, accompanied by rapid, fine-tuned regulation of mRNA and protein turnover rates 32 . Such regulatory genes are frequently the focus of single-cell biological investigations, underscoring the importance of precision in this range. Through multi-resolution sequencing of ovarian cancer cell lines, we benchmark fidelity of quantification, demonstrating Isosceles' capacity to consistently reproduce results for the same sample and to differentiate among related yet distinct samples. Such demultiplexing of pooled samples is both a practical task in single-cell analysis 33 and analogous to the identification of distinct cell types or lineages in single-cell studies, where technical noise and data sparsity are common challenges. For example, intrinsic differences between cell lines, even those of the same tissue origin, may be more substantial than many biological changes typically investigated in biomedical research.

We observe that some methods exhibit variability in performance between simulated and biological benchmarks (e.g., Fig.  3c vs. Supplementary Fig.  5c ), likely attributable to inherent differences between real and simulated long-read data. However, Isosceles remains consistently performant, illustrating accurate quantification in multiple contexts. As a reference-guided method, it is designed to excel in the setting of well-annotated model organisms, such as human and mouse. However, we note that Isosceles also handles SIRV synthetic molecule spike-ins effectively, which feature non-canonical splice sites and artificial sequence content. These findings underscore Isosceles' methodological robustness and support its utility in multiple settings.

We further illustrate that these performant capabilities are enabling in the context of biological discovery. In our case study, we utilize Isosceles to uncover the dynamics of alternative splicing in differentiating neurons. Here, Isosceles provides enhanced resolution and reveals numerous AS events not reported in the original study. Importantly, these results reveal fine-tuned regulation within fate-determined trajectories and not only between major clusters (e.g., radial glia vs. mature neurons). Among these events are genes encoding disease-relevant RNA binding proteins that are themselves implicated in the regulation of neuronal differentiation. The Celf2 gene, for instance, plays a central role in neurogenesis, as it modulates the translation of target mRNAs through its shuttling activity 34 . The example in Celf2 (presented in Fig.  4 ) highlights a switch-like splicing event that results in a conserved substitution of five to seven amino acids within the protein's disordered region. This is akin to peptide changes introduced by microexons, which have been attributed functional roles in neurogenesis, including translational control of mRNAs through recruitment to membrane-less condensates, and dysregulation in disease 35 , 36 , 37 . These results demonstrate that Isosceles is an effective method for hypothesis generation and biological discovery, offering insight into the splicing dynamics of a key regulator of differentiation in our case study.

Taken together, Isosceles is a flexible toolkit for the analysis of long-read bulk and single-cell sequencing that outperforms existing methods in detection and quantification across biological resolution levels. Based on its accuracy and flexibility for experimental designs, Isosceles will significantly aid researchers in transcriptomic studies across diverse biological systems.

Isosceles Splice-graphs

Splice-graph compatibility is defined for reads at various stringency levels matching their concordance with existing knowledge. Reads are classified based on compatibility as Annotated Paths (AP), Path Compatible (PC), Edge Compatible (EC), Node Compatible (NC), De-novo Node (DN), Artifact Fusion (AF), Artifact Splice (AS), and Artifact Other (AX). AP refers to full-length transcript paths that perfectly match a reference transcript from the input gene annotation and are quantified by default. PC reads follow transcript paths that are a traversal of an AP; they may be truncated or full-length, with differing transcript start or end positions. EC reads traverse annotated splice-graph edges (introns) and may be truncated or full-length. NC reads follow paths that traverse only annotated splice-graph nodes (splice sites) but contain at least one novel edge. DN reads have paths that traverse a de novo node (splice site). AF reads traverse paths connecting at least two splice-graphs for annotated genes that do not share introns with each other. AS reads are assigned to genes but traverse an unknown and irreproducible node (splice site), while AX reads lack compatibility due to ambiguous strand or lack of gene assignment.
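The non-artifact tiers of this hierarchy can be sketched as a cascade of set checks on a read's path through the splice-graph. This is a simplified illustration under assumed data structures (paths as tuples of splice-site nodes), not the package's implementation, and the artifact classes (AF, AS, AX) are omitted:

```python
def classify_read(path, annotated_paths, annotated_edges, annotated_nodes):
    """Simplified compatibility hierarchy for a read's splice-graph path.

    path: tuple of splice-site nodes traversed by the read; edges are
    consecutive node pairs. Returns one of 'AP', 'PC', 'EC', 'NC', 'DN'.
    """
    edges = set(zip(path, path[1:]))
    nodes = set(path)
    if path in annotated_paths:
        return 'AP'  # exact match to a reference transcript path
    if any(is_subpath(path, ap) for ap in annotated_paths):
        return 'PC'  # (possibly truncated) traversal of an annotated path
    if nodes <= annotated_nodes and edges <= annotated_edges:
        return 'EC'  # only annotated edges, but not along a single path
    if nodes <= annotated_nodes:
        return 'NC'  # annotated nodes joined by at least one novel edge
    return 'DN'      # traverses at least one de novo splice site

def is_subpath(sub, full):
    """True if `sub` occurs as a contiguous run within `full`."""
    n = len(sub)
    return any(full[i:i + n] == sub for i in range(len(full) - n + 1))
```

For example, with annotated paths (1, 2, 3) and (3, 4, 5), the path (2, 3, 4) uses only annotated edges yet matches no single reference path, so it lands in EC.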

Reads are also classified based on their truncation status, which includes Full-Length (FL), 5′ Truncation (5 T), 3′ Truncation (3 T), Full-Truncation (FT), and Not Applicable (NA). AP transcripts are automatically annotated as FL, and truncation status is checked only for PC, EC, NC, and DN transcripts. AF, AS, and AX transcripts are automatically labeled NA. Reference transcripts used for truncation status classification are recommended to be filtered to only the GENCODE 'basic' dataset (tag='basic'), but could also be all transcripts in the provided annotations, as decided by the user. Full-length reads are those whose paths splice from a first exon (sharing a reference transcript's first 5′ splice site) and whose paths splice to a last exon (sharing a reference transcript's final 3′ splice site).

To add nodes with one or more de novo splice sites to the splice-graph, each splice site must meet two conditions: it is observed in at least the minimum number of reads (default: 2), and it is connected to a known splice site in the splice-graph with at least a minimum fraction (default: 0.1) of that known splice site's connectivity. Additionally, annotations for known transcripts and genes are merged and extended based on specific criteria. For example, any annotated genes sharing introns with each other are merged into one gene and given a new gene_id and gene_symbol (a comma-separated list of the original Ensembl IDs and gene symbols). Annotated spliced (and unspliced) transcripts sharing the same intron structure, as well as transcript start and end bins (default bin size: 50 bp), are merged together and given a unique transcript identifier.
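The two acceptance criteria for a de novo splice site can be expressed as a small predicate. A sketch with illustrative argument names (the defaults mirror those stated above):

```python
def accept_de_novo_site(read_support, known_site_connectivity,
                        connection_count, min_reads=2, min_fraction=0.1):
    """Decide whether a candidate de novo splice site enters the splice-graph.

    read_support: reads observing the candidate site.
    known_site_connectivity: total connectivity of the linked known site.
    connection_count: reads connecting the candidate to that known site.
    """
    if read_support < min_reads:
        return False  # criterion 1: minimum read support
    if known_site_connectivity == 0:
        return False  # no linked known site to compare against
    # criterion 2: minimum fraction of the known site's connectivity
    return connection_count / known_site_connectivity >= min_fraction
```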

The method offers three modes of extending annotations to include de novo transcripts: strict , de_novo_strict , and de_novo_loose . In the strict mode, only AP transcripts are detected/quantified. In the de_novo_strict mode, AP transcripts and filtered FL transcripts of the EC and NC classes are included in quantification. In the de_novo_loose mode, AP transcripts and filtered FL transcripts of the EC, NC, and DN classes can be included.

For downstream analysis of individual transcript features, AS events are defined as the set of non-overlapping exonic intervals that differ between transcripts of the same gene. These are quantified as percent-spliced-in or counts-spliced-in according to the sum of the relative expression or the raw counts of the transcripts that include the exonic interval respectively. AS events are classified into different types similar to previous methods analyzing splicing from short-read data 2 , including core exon intervals (CE), alternative donor splice sites (A5), alternative acceptor splice sites (A3), and retained introns (RI). Isosceles can also quantify tandem untranslated regions in the first or last exons including transcription start sites (TSS) and alternative polyadenylation sites (TES).
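Given per-transcript quantifications for a gene, an AS event's percent-spliced-in and counts-spliced-in reduce to a ratio and a sum over the transcripts containing the event's exonic interval. A minimal sketch with hypothetical names:

```python
def psi_and_csi(tx_expr, inclusion_txs):
    """PSI and CSI for one AS event from per-transcript quantifications.

    tx_expr: dict transcript_id -> expression (relative abundance or raw
    counts) for all transcripts of one gene. inclusion_txs: the transcripts
    containing the event's exonic interval. With raw counts as input, the
    second return value is the counts-spliced-in (CSI).
    """
    total = sum(tx_expr.values())
    included = sum(v for t, v in tx_expr.items() if t in inclusion_txs)
    psi = included / total if total > 0 else float('nan')
    return psi, included
```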

Isosceles quantification

We use the Expectation-Maximization (EM) algorithm to obtain the maximum likelihood estimate (MLE) of transcript abundances, as used previously in transcript quantification methods for short-read data such as our prior software Whippet 2 and the approach's conceptual precursors RSEM 17 and Kallisto 18 . Specifically, we quantify transcript compatibility counts (TCCs) based on fully contained overlap of reads with the spliced transcript genomic intervals (including an extension [default: 100 bp] for transcript starts/ends), with strand ignored by default for unspliced reads. For computational efficiency, TCCs matching more than one gene are disallowed in the current version. The likelihood function for transcript estimation is defined as for short-read data with Whippet 2 : L (α) is proportional to the product, over all reads, of the sum of the probabilities α(t) of selecting a read from each compatible transcript t, divided by the effective length of t. However, due to the long length of nanopore reads, we define the effective transcript length here as the maximum of the mean read length and the transcript's actual length, divided by the mean read length. This directly accommodates shorter transcripts, which would be fully spanned by the average read and are thus assigned an effective length of 1.0, whereas longer transcripts are represented proportionally to that value. In contrast, when the user-defined parameter specifies single-cell data, length normalization is not used, because the anchoring of reads to the 5′ or 3′ ends of transcripts yields read coverage irrespective of transcript length. The EM algorithm iteratively optimizes the transcript abundance estimates derived from TCCs, continuing until the absolute difference between transcript fractions is less than a given threshold (default: 0.01) between iterations, or until the maximum number of iterations is reached (default: 250).
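The update described above can be sketched as a compact EM over TCCs. This is an illustrative reimplementation under the stated effective-length convention, not the Isosceles code; the names and the dict-based TCC representation are assumptions:

```python
def effective_length(tx_length, mean_read_length):
    """Nanopore effective length: transcripts fully spanned by an average
    read get length 1.0; longer transcripts scale proportionally."""
    return max(tx_length, mean_read_length) / mean_read_length

def em_quantify(tcc, eff_len, max_iter=250, tol=0.01):
    """EM for transcript fractions from transcript compatibility counts.

    tcc: dict mapping a frozenset of compatible transcript ids -> read count.
    eff_len: dict transcript id -> effective length.
    Returns transcript fractions summing to 1.
    """
    txs = sorted(eff_len)
    alpha = {t: 1.0 / len(txs) for t in txs}
    for _ in range(max_iter):
        counts = {t: 0.0 for t in txs}
        for compat, n in tcc.items():
            denom = sum(alpha[t] / eff_len[t] for t in compat)
            if denom == 0:
                continue
            for t in compat:  # E-step: split reads by relative likelihood
                counts[t] += n * (alpha[t] / eff_len[t]) / denom
        total = sum(counts.values())
        new_alpha = {t: counts[t] / total for t in txs}  # M-step
        converged = max(abs(new_alpha[t] - alpha[t]) for t in txs) < tol
        alpha = new_alpha
        if converged:
            break
    return alpha
```

With only uniquely assigned reads the estimates reduce to the raw read fractions after one update, which is a useful sanity check.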

Simulating ONT data

In this study, the Ensembl 90 genome annotation (only transcripts with the GENCODE 'basic' tag) was used for all simulations, focusing specifically on spliced transcripts of protein-coding genes to exclude single-isoform non-coding genes. In order to simulate data with realistic transcriptional profiles, we quantified the expression of reference annotations in IGROV-1 cells using publicly available short-read data ([sample, project] accession IDs: [SRR8615844, PRJNA523380]; https://www.ebi.ac.uk/ena/browser/view/SRR8615844 ) and Whippet v1.7.3 using default settings. Only transcripts with non-zero expression in IGROV-1 were retained for simulations. For detection benchmarks, the Ensembl 90 annotation file (in Gene Transfer Format [GTF]) was randomly downsampled (by 10%, 20%, and 30%, where 99.8-100.0% of downsampled transcripts had unique exon-intron architectures) such that the longest transcript of each gene was always retained, ensuring at least one full-length major isoform for each gene. To assess performance in de novo transcript detection, each program was run individually (if de novo detection was supported) and in tandem with StringTie2 for a pre-detection step (also including IsoQuant for pre-detection with Isosceles). IsoQuant was executed in a similar manner to other programs, but in a consecutive two-step process (where the first IsoQuant run identifies de novo transcripts, which are concatenated to the original annotations in a second run) instead of a single run, due to the significant performance improvement observed (Fig.  2b-c and Supplementary Fig.  2 for IsoQuant two-step results; IsoQuant single-run results in Isosceles_Paper: reports_static/simulated_bulk_benchmarks_isoquant.ipynb ).

In order to simulate Oxford Nanopore Technologies (ONT) reads using NanoSim, we trained error models on bulk nanopore RNA-Seq FASTQ files concatenated from sequencing three cell lines: SK-OV-3 (SRR26865806), COV504 (SRR26865804), and IGROV-1 (SRR26865803). Nanopore single-cell RNA-Seq (nanopore scRNA-Seq) read models were also generated from the pooled set of the aforementioned cell lines (SRR26865982). A total of 100 million reads were simulated from each error model and then the first 12 million reads deemed alignable by NanoSim were extracted.

Read model error rates (values rounded to four decimal places):

| Error type | Bulk RNA-Seq | scRNA-Seq |
| --- | --- | --- |
| Mismatch rate | 0.0298 | 0.0287 |
| Insertion rate | 0.0241 | 0.0244 |
| Deletion rate | 0.0465 | 0.0302 |
| Total error rate | 0.1004 | 0.0833 |

Simulated reads were aligned with Minimap2 and provided in BAM format to all benchmark programs, using Ensembl 90 introns given in a BED file and applying a junction bonus parameter of 15 (with the exception of NanoCount, which required read alignment directly to the transcriptome). For the scRNA-Seq ONT dataset used to create the read model, various tools detected a similar number of cells (~2460), but the median number of unique molecular identifiers (UMIs) per cell differed. Sicelore preprocessing of ONT scRNA-Seq identified between 3,000 and 6,000 UMIs per cell, which were provided in BAM format, with cell barcode and UMI tags annotated, for biologically derived data benchmarks with Sicelore, IsoQuant, and Isosceles (Fig.  3a-b ). In contrast, FLAMES, with its own UMI detection and deduplication processes, detected around 13,500 UMIs per cell. To strike a balance between the varying results from different tools, a compromise of 10,000 reads per cell was chosen for this study.

To simulate scRNA-Seq ONT data, a BAM file containing aligned simulated reads from the scRNA-Seq read model was randomly downsampled 100 times using samtools, with a subsampling proportion of 0.000833. This resulted in approximately 10,000 reads out of the original 12 million for each BAM file. A custom Python script (see supplemental Benchmark commands) was used to assign unique cell barcode sequences and UMI sequences for each read within the 100 BAM files. These subsampled BAM files were then merged and sorted using samtools.
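The barcode/UMI assignment step can be illustrated with a short generator of tag pairs. This is a hypothetical sketch of such a script (the actual one is linked in the supplemental Benchmark commands); the barcode and UMI lengths follow the 10x 3′ convention and are assumptions here:

```python
import random

def make_tags(n_cells=100, umis_per_cell=100, cbc_len=16, umi_len=12, seed=0):
    """Generate unique (cell barcode, UMI) pairs for simulated reads.

    Each simulated cell gets one unique cell barcode (CBC) and a set of
    unique UMIs, mirroring the tags attached to the 100 downsampled BAM
    files before merging.
    """
    rng = random.Random(seed)

    def seq(k):
        return ''.join(rng.choice('ACGT') for _ in range(k))

    cbcs = set()
    while len(cbcs) < n_cells:  # unique barcode per simulated cell
        cbcs.add(seq(cbc_len))
    tags = []
    for cbc in sorted(cbcs):
        umis = set()
        while len(umis) < umis_per_cell:  # unique UMIs within a cell
            umis.add(seq(umi_len))
        tags.extend((cbc, u) for u in sorted(umis))
    return tags
```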

Synthetic molecules and platform-comparison data processing

The data used for comparative analysis of results from different sequencing platforms included FASTQ files for PacBio (ENCODE: ENCFF450VAU) and ONT (cDNA Pass basecalls from the Nanopore WGS Consortium GitHub repository: https://github.com/nanopore-wgs-consortium/NA12878/blob/master/RNA.md 23 ), as well as Illumina short read transcript quantifications (ENCODE: ENCFF485OUK) for the GM12878 cell line. Long reads were aligned to the reference genome using Minimap2 as discussed previously for simulated data (although in the PacBio dataset, the ‘-ax splice:hq’ parameter was used instead of ‘-ax splice’). Transcripts with >1 TPM in Illumina quantifications (intersected with the Ensembl 90 transcript IDs utilized in this study to account for annotation discrepancies with Ensembl 95 annotation from ENCFF485OUK) were selected, and for those with one-to-many matches of Ensembl IDs, ground-truth values were aggregated.

The alignment file in BAM format for 'Nanopore cDNA Pass' reads aligned to the SIRV sequences (SIRV set 3, Lot No. 001485) was downloaded from the Nanopore WGS Consortium GitHub repository (see above). The three top-performing tools (Isosceles, Bambu, and IsoQuant) were benchmarked on both insufficient annotations (44 annotated SIRV isoforms [24 withheld], compared to 68 isoforms in the correct annotations) and over-annotations (68 annotated SIRV isoforms with an additional 32 decoy isoforms) obtained from Lexogen's website ( https://www.lexogen.com/sirvs/download ). For the over-annotations, the fraction of reads assigned to correct transcripts (read assignment precision) was calculated for each tool (utilizing both SIRV transcripts and 92 unspliced ERCC sequences). In the case of insufficient annotations, transcript detection (comprising both annotated and withheld transcripts) was measured with the precision, recall, and F1 score metrics on spliced (SIRV) data only, with the metrics calculated at the level of unique transcript splicing structures.
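Computed over sets of unique splicing structures, these detection metrics reduce to standard set arithmetic. A minimal sketch (the function name is illustrative):

```python
def detection_metrics(detected, truth):
    """Precision, recall, and F1 over unique transcript splicing structures.

    detected and truth are sets of hashable intron-chain representations
    (e.g. tuples of intron coordinates); true positives are the structures
    found in both.
    """
    tp = len(detected & truth)
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```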

Nanopore raw read files in FASTQ format were obtained from SRA for Sequin mix A data (SRR14286054) and mix B data (SRR14286063), then aligned using Minimap2 and processed using individual tools. Sequin reference sequences and annotations used for the analysis were downloaded from https://github.com/XueyiDong/LongReadRNA/tree/master/sequins/annotations as described previously 22 , 38 . Quantifications from each tool were compared to ground-truth Sequin expression values for mix A and mix B in order to calculate Spearman correlations and mean relative differences for each mix, as well as for concatenated expression values from both mixes.

Biological data processing

The bulk RNA-Seq data (GSE248114) included Promethion data, featuring eight sequencing libraries for seven ovarian cancer cell lines (OVMANA, OVKATE, OVTOKO, SK-OV-3, COV362, COV504, and IGROV-1), including two technical replicates for IGROV-1. For MinION platform data, two technical replicates for IGROV-1 were sequenced. Factors such as RAM performance and program speed determined the number of reads simulated in bulk simulations and downsampled in bulk data. For example, for performing cross-platform correlations, the Promethion data was downsampled to 5 million reads to make it more comparable to MinION (~6-7 million raw reads) and pseudo-bulk scRNA-Seq (3.5-4.5 million UMIs per cluster, as detected by Isosceles) in terms of total read depth. This decision was also influenced by an issue with IsoQuant ( https://github.com/ablab/IsoQuant/issues/69 ), which limited its ability to process large read files in our hands. Notably, this issue persisted on a cluster node with 20 2.4-GHz CPUs and 230 GB of allocated RAM.

The scRNA-Seq data (GSE248115) consisted of a mix of three cell lines (SK-OV-3, COV504, and IGROV-1). The Illumina sequencing (SRR26865983) was preprocessed using CellRanger (Version 6.0.1). For ONT sequencing data (SRR26865982), we considered two barcode preprocessing methods (Sicelore and wf-single-cell) for cell barcode (CBC) and unique molecular identifier (UMI) detection. We observe similar average Spearman correlations (0.85 vs. 0.88) and mean relative differences (0.57 vs. 0.60) between the same cell lines in pseudo-bulk and bulk for the two preprocessing methods. However, better performance was achieved with Sicelore preprocessing for matched vs. decoy comparisons (0.26 vs. 0.16 for Spearman correlation, 0.22 vs. 0.14 for mean relative difference). Therefore, we used Sicelore preprocessing to annotate the CBC and UMI tags in the ONT sequencing BAM files for Isosceles, Sicelore, and IsoQuant (Supplementary Fig.  5d ; see below).

Mitochondrial transcripts common to all methods' output were removed, as they were strong outliers across methods. Additionally, three specific transcript outliers across methods were removed: ENST00000445125 (18 S ribosomal pseudogene), ENST00000536684 (MT-RNR2 like 8), and ENST00000600213 (MT-RNR2 like 12).

Benchmarks using biological data

The correlation and relative difference analyses (Supplementary Fig.  4c ) compared annotated transcripts between bulk RNA-Seq data from two Promethion and two MinION sequencing replicates of IGROV-1, both within each platform (using replicates) and between platforms (using averaged data for each platform). For each comparison, only transcripts with a mean expression of at least 1 TPM were used. In Supplementary Fig.  4d , scRNA-Seq and bulk RNA-Seq data were also compared, again considering only annotated transcripts. For each program, the IGROV-1 scRNA-Seq pseudo-bulk cluster (according to genetic identity from Souporcell) was compared with the averaged bulk RNA-Seq IGROV-1 expression values from two replicates for each platform. Analyses were also restricted to transcripts with an expression of at least 1 TPM in the single-cell RNA-Seq results. Comparisons were made for each platform using the top k cells (highest UMI count) with the top 5000 transcripts (highest mean expression), to ensure a comparable number of transcripts across software packages, and using the top N transcripts (highest mean expression) for the top 64 cells (highest UMI count) (Supplementary Fig.  4d ).

For Fig.  3a-c , scRNA-Seq and bulk RNA-Seq data analysis was conducted using Bioconductor packages (e.g., scran, scater) on the transcript level for cells with at least 500 genes, for a range of top highly variable transcript numbers (500, 1,000, 2,000, 4,000, 6,000 and 10,000), as determined by the scran::getTopHVGs function 39 . Heatmaps were generated to show correlations and mean relative difference between scRNA-Seq pseudo-bulk results for three cell line clusters and Promethion bulk RNA-Seq results for seven ovarian cancer cell lines, similarly only including annotated transcripts. IGROV-1 expression was averaged from two replicates. To compare the difference between matched and decoy metrics (Spearman correlation and mean relative difference), we calculated the absolute difference and computed the upper and lower bounds of the standard error using error propagation (as sqrt(se(x)^2 + se(y)^2)). To assess the overall significance of Isosceles results compared to each program in matched versus decoy metrics, we computed the differences between each matched cell line and the mean of decoys across a range of 500-10,000 HVTs. The set of differences is then compared against the matched results from Isosceles using a Wilcoxon matched-pairs signed-rank test.
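The propagated standard error for the matched-versus-decoy difference is a one-liner; a sketch with illustrative names:

```python
import math

def diff_with_propagated_se(x, se_x, y, se_y):
    """Absolute difference of two metrics with propagated standard error.

    Returns the difference and its lower/upper bounds, using the error
    propagation formula sqrt(se(x)^2 + se(y)^2) from the text.
    """
    diff = abs(x - y)
    se = math.sqrt(se_x ** 2 + se_y ** 2)
    return diff, diff - se, diff + se
```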

For the simulated data version of Fig.  3c presented in Supplementary Fig.  5c , nanopore reads were simulated for SK-OV-3, IGROV-1, OVMANA, OVKATE, OVTOKO, and COV362. These were based on short-read TPM values obtained from Whippet v1.7.3 and Ensembl 90 transcripts with the GENCODE 'basic' tag (excluding mitochondrial transcripts) and a mean expression of at least 0.1 TPM across all analyzed cell lines. 5 million reads were produced by NanoSim for both bulk RNA-Seq and scRNA-Seq read models, which were aligned to the genome using Minimap2. For the latter, cell barcodes randomly drawn from a pool of 100 sequences, along with unique UMI sequences, were added to the BAM files. Simulated bulk RNA-Seq and scRNA-Seq samples were analyzed as described for the biological data presented in Fig.  3c .

We also perform this benchmark for Isosceles on the BAM files obtained from Sicelore and wf-single-cell (for the latter, a Minimap2 alignment junction bonus of 15 was specified using the 'resources_mm2_flags' flag and the expected number of cells was set to 2,000). As wf-single-cell does not produce a deduplicated BAM file, UMI deduplication with UMI-Tools was applied. Isosceles results for both BAM files were compared for the top 4,000 highly variable transcripts, motivating the choice of Sicelore for single-cell barcode preprocessing used in the manuscript (see Supplementary Fig.  5d ).

Case-study analysis of biological data

For the case-study in Fig.  4 , the raw reads were pre-processed to identify cell barcodes (CBC) and unique molecular identifiers (UMI) according to the Sicelore workflow. The reads were subsequently aligned to the reference genome mm10/GRCm38 (with annotations derived from GENCODE M25), using Minimap2 with a junction bonus of 15, targeting both annotated introns from GENCODE M25 and those extracted from the VastDB mm10 GTF file 30 . The aligned reads with CBC and UMI annotations were subsequently quantified with Isosceles. The 951-cell dataset was filtered to exclude cells that expressed fewer than 100 genes. For dimensionality reduction, we combine Isosceles gene and transcript counts, yielding 3760 variable features (with a target of 4000), comprising 1735 genes and 2025 transcripts. We applied Principal Component Analysis (PCA), calculating 30 components using the scaled expression of the variable features. Cells were clustered using Louvain clustering (with a resolution parameter of 2) on the Shared Nearest Neighbor (SNN) graph (setting a k-value of 10). The clusters' identities were determined through gene set scores, particularly the mean TPM values of markers delineated in the original study (see Supplementary Fig.  6 ). Additional marker genes were identified via the scran::findMarkers function, requiring the t-test FDR to be significant ( q value < 0.05) in at least half of the comparisons to other clusters (selecting the top 5 markers of each cluster).

Pseudotime analysis was performed using Slingshot for differentiating glutamatergic neurons (identifying two trajectories, T1 and T2), differentiating GABAergic neurons, radial glia, cycling radial glia, and Cajal-Retzius cells (with one trajectory each). To implement the original 'isoform switching' analysis, pairs of clusters were compared, detecting marker transcripts through the scran::findMarkers function (Wilcoxon test). We filter for transcripts of the same gene showing statistically significant differences in opposite directions (i.e., one upregulated in one cluster and the other in another cluster). To analyze splicing changes within each trajectory, we used Isosceles to calculate aggregated TCC values for windows along pseudotime, defining the window size as 30 cells and the step size as 15 cells. AS events from variable transcripts meeting further criteria were selected for downstream analysis. First, mean PSI values across all cells from the trajectory had to fall between 0.025 and 0.975, to exclude constitutively included/excluded events. Second, at least 30 cells had to have values not equal to 0, 1, or 0.5, and 30 cells had to have a value above 0.1, to select against events with only low counts. Redundant PSI events, identical in read count profiles within a trajectory, were excluded, and those with >0.99 Spearman correlation were excluded from visualization in Fig.  4b and Supplementary Fig.  7b . For comparative analysis, percent-spliced-in (PSI) count values are denoted counts-spliced-in (CSI) and defined as PSI * gene counts. These are juxtaposed with exclusion PSI counts, calculated as [(1 - PSI value) * gene counts], and the inclusion/exclusion pair is input into DEXSeq 29 . For each intra-trajectory comparison, our experimental design encompassed ‘~sample + exon + pseudotime:exon‘. 
Meanwhile, the inter-trajectory analysis included all trajectories with a design of ‘~sample + exon + pseudotime:exon + trajectory:exon‘, compared against a null model of ‘~sample + exon‘ using the LRT test.
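The windowed aggregation along pseudotime (30-cell windows advancing by 15 cells) can be sketched as follows; the handling of the trailing partial window is an assumption of this illustration:

```python
def pseudotime_windows(cells_by_pseudotime, window=30, step=15):
    """Sliding windows of cells ordered by pseudotime.

    Returns a list of cell-id lists for aggregating TCCs per window
    (window size 30, step 15, matching the values in the text). The loop
    stops once a window reaches the end of the ordered cells.
    """
    windows = []
    n = len(cells_by_pseudotime)
    for start in range(0, n, step):
        windows.append(cells_by_pseudotime[start:start + window])
        if start + window >= n:  # last window has reached the end
            break
    return windows
```

For 75 cells this yields overlapping windows starting at cells 0, 15, 30, and 45, each covering 30 cells.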

To determine ratios of observed vs. expected CSI, we shuffle TCCs across cells with non-zero counts and apply the EM algorithm, calculating PSI for each window. To obtain the expected CSI, we multiply the shuffled PSI values by the observed gene counts. The permutations are conducted for each AS event across 100 bootstraps. For empirical statistical validation of changes between the first and last windows of a trajectory (e.g., for Celf2 ), we fit a negative binomial distribution to each window using maximum likelihood estimation (‘fitdistrplus‘ package) on the permuted CSI, and calculate high and low one-tailed p values for the observed CSI. Combining the (high, low) and (low, high) p values of the first and last windows, respectively, using Fisher's method, we defined an overall p value as two times the minimum combined p value. Specifically for heatmap visualization, a broad window size of 100 cells for glutamatergic and GABAergic neurons, and 50 cells for glia and CR cells, with a consistent step size of 3 cells for smoothing, was utilized. The heatmap values were given as the log2 ratio of observed to expected, with a pseudocount of 0.1, defining the ratio between PSI counts and the average of the corresponding permuted PSI counts.
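The tail-combination step can be sketched without external dependencies: Fisher's statistic is chi-square distributed with 2k degrees of freedom, and for even degrees of freedom the chi-square survival function has a closed form. The function names are illustrative, and the negative binomial fitting step that produces the one-tailed p values is omitted:

```python
import math

def fisher_combine(p_values):
    """Fisher's method: X = -2 * sum(log p) ~ chi-square with 2k df.

    For even df the survival function is
    exp(-x/2) * sum_{i<k} (x/2)^i / i!, so no stats library is needed.
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

def two_sided_trajectory_p(p_high_first, p_low_first, p_low_last, p_high_last):
    """Overall p value for a splicing switch between the first and last
    pseudotime windows: combine the (high, low) and (low, high) tails with
    Fisher's method and take two times the minimum, capped at 1."""
    p_up = fisher_combine([p_high_first, p_low_last])
    p_down = fisher_combine([p_low_first, p_high_last])
    return min(1.0, 2.0 * min(p_up, p_down))
```

As a sanity check, combining a single p value returns it unchanged, since the df=2 survival function inverts the statistic exactly.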

Benchmark command summary:

https://github.com/Genentech/Isosceles_Paper/blob/devel/Benchmark_commands.md

Software versions:

| Software | Version |
| --- | --- |
| Isosceles | v0.2.0 |
| Flair | v1.7.0 |
| StringTie2 | v2.2.1 |
| IsoQuant | v3.0.3 |
| NanoCount | v1.0.0.post6 |
| Sicelore | v2.0 |
| Bambu | v3.2.5 (R 4.3.0, Bioconductor 3.17) |
| FLAMES | v0.1 |
| ESPRESSO | beta1.3.0 |
| NanoSim | v3.1.0 |
| Minimap2 | v2.24-r1122 |
| wf-single-cell | v1.1.0 |
| UMI-tools | v1.1.5 |

Cell culture

All cell lines used in this study were validated by STR analysis and verified mycoplasma negative by PCR. No commonly misidentified cell lines were used in this study. IGROV-1, SK-OV-3, OVTOKO, OVKATE, and OVMANA cell lines were cultured in RPMI-1640 supplemented with 10% heat-inactivated fetal bovine serum (FBS) and 2 mM L-glutamine. COV362 and COV504 cells were cultured in DMEM supplemented with 10% FBS and 2 mM L-glutamine. Cells were cultured at 37 °C and 5% CO 2 in a humidified incubator. Cell line sources and catalog numbers are provided in the table below. Cells were cultured in 10 cm² plates until they reached ~60-80% confluency. For bulk analysis, RNA was purified using Qiagen's RNeasy Plus Mini kit (Cat. #74134) according to the manufacturer's instructions. For single-cell analysis, IGROV-1, SK-OV-3, and COV504 cells were trypsinized, pooled together at a 1:1:1 ratio at a concentration of 1,000 cells/μl, and submitted for single-cell long-read sequencing.

| Cell line | Provider | Catalog number |
| --- | --- | --- |
| IGROV-1 | NCI DCTD | — |
| SK-OV-3 | ATCC | HTB-77 |
| OVTOKO | JCRB Cell Bank | JCRB1048 |
| OVKATE | JCRB Cell Bank | JCRB1044 |
| OVMANA | JCRB Cell Bank | JCRB1045 |
| COV362 | ECACC | 07071910 Lot# 07G029 |
| COV504 | ECACC | 07071902 Lot# 07I007 |

Reference: 40 .

Single-cell, long-read library preparation and nanopore sequencing

Approximately 10 ng of cDNA generated with the Next GEM Single Cell 3′ Gene Expression kit (10X Genomics, Cat # PN-100268) was amplified using 10 μM of the biotinylated version of the forward primer and a reverse primer from the single-cell 3′ transcriptomics protocol (ONT, SQK-LSK114): [Btn]_Fwd_3580_partial_read1_defined_for_3′_cDNA, 5′-/Biosg/CAGCACTTGCCTGTCGCTCTATCTTC CTACACGACGCTCTTCCGATCT-3′ and Rev_PR2_partial_TSO_defined_for_3′_cDNA, 5′-CAGCTTTCTGTTGGTGCTGATATTGCAAGCAGTGGTA TCAACGCAGAG-3′. To ensure enough cDNA was generated for the pull-down reaction (200 ng), two PCR reactions were carried out using 2X LongAmp Taq (NEB, Cat # M0287S) with the following PCR parameters: 94 °C for 3 minutes; 5 cycles of 94 °C for 30 seconds, 60 °C for 15 seconds, and 65 °C for 3 minutes; and a final extension at 65 °C for 5 minutes. The cDNA was pooled, cleaned up at a 0.8X SPRI bead ratio, and eluted in 40 μL RNase-free H 2 O. Concentration was evaluated using the Qubit HS dsDNA assay (Thermo Fisher, Cat No. Q32851). The amplified cDNA was then captured using 15 μL M270 streptavidin beads (Thermo Fisher, Cat # 65305). Beads were washed three times with 1 mL of 1X SSPE buffer (150 mM NaCl, 10 mM NaH 2 PO 4 , and 1 mM EDTA) and then resuspended in 10 μL of 5X SSPE buffer (750 mM NaCl, 50 mM NaH 2 PO 4 , and 5 mM EDTA). Approximately 200 ng of the cDNA in 40 μL was added to the 10 μL of M270 beads and incubated at room temperature for 15 minutes. After incubation, the sample and beads were washed twice with 1 mL of 1X SSPE. A final wash was performed with 200 μL of 10 mM Tris-HCl (pH 8.0), and the beads bound to the sample were resuspended in 10 μL of RNase-free H 2 O. PCR was performed on-bead for 5 cycles using the unbiotinylated version of the primers from the ONT single-cell 3′ transcriptomics protocol discussed earlier, according to the same PCR program shown above. A 0.8X SPRI cleanup was performed. 
The cDNA was eluted in 50 μL of RNase-free H 2 O, and the concentration was evaluated with the Qubit HS dsDNA assay and the Tapestation D5000 DNA kit (Agilent Technologies, Cat # 5067-5589).

Library preparation for nanopore sequencing was performed according to the SQK-LSK110 protocol, except that the end-repair incubation was increased to 30 min. A total of 125 fmol of the final library was loaded on the PromethION using FLO-PRO002 flow cells (R9.4.1 chemistry) and sequenced for 72 h. Reads were basecalled using Guppy v5.0.11.
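The 125 fmol loading target above corresponds to a mass that depends on the mean library length. The sketch below is a back-of-the-envelope conversion only (not part of the authors' protocol), assuming the common approximation of ~650 g/mol per base pair of double-stranded DNA and a hypothetical 1 kb mean library size:

```python
def ng_for_fmol(fmol: float, mean_bp: int) -> float:
    """Convert a molar loading target (fmol) to mass (ng) for a dsDNA
    library, assuming ~650 g/mol per base pair on average."""
    grams_per_mole = mean_bp * 650          # approximate MW of the library
    moles = fmol * 1e-15                    # fmol -> mol
    return moles * grams_per_mole * 1e9     # g -> ng

# Hypothetical 1 kb mean library size: 125 fmol is roughly 81 ng.
print(ng_for_fmol(125, 1000))
```

The same function can be inverted mentally to check a Qubit reading against the molar target before loading a flow cell.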

Statistics and reproducibility

No statistical methods were used to predetermine sample size. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

All biological sequencing data generated in this study are deposited in the NCBI Gene Expression Omnibus (GEO) under accession GSE248118. Mouse E18 brain long-read single-cell sequencing data are available under GSE130708, and sequin spike-in ONT data under GSE172421. SIRV spike-in ONT data and GM12878 ONT data are available from the nanopore-wgs-consortium/NA12878 GitHub repository [https://github.com/nanopore-wgs-consortium/NA12878]. PacBio data for GM12878 are available from the ENCODE Consortium as ENCFF450VAU, with the corresponding transcript quantification file at ENCFF485OUK. Accession identifiers for source data in Fig. 4e and Supplementary Fig. 8d are listed in Supplementary Data 2. Source data are provided with this paper.
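The GEO accessions listed above can be retrieved programmatically: GEO's public FTP site groups series into directories in which the last three digits of the accession are masked with `nnn`. The helper below is a convenience sketch of that naming convention, not part of the authors' pipeline:

```python
def geo_series_ftp(accession: str) -> str:
    """Build the NCBI GEO FTP directory URL for a series accession,
    following GEO's convention of masking the last three digits."""
    stub = accession[:-3] + "nnn"  # e.g. GSE248118 -> GSE248nnn
    return f"https://ftp.ncbi.nlm.nih.gov/geo/series/{stub}/{accession}/"

print(geo_series_ftp("GSE248118"))
# -> https://ftp.ncbi.nlm.nih.gov/geo/series/GSE248nnn/GSE248118/
```

Supplementary files for a series live under the `suppl/` subdirectory of that path.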

Code availability

Isosceles R package code, documentation, and vignettes are released on GitHub (https://github.com/Genentech/Isosceles)41 under the open-source GPL-3 license. All benchmarking code, virtual environments, and quantification data necessary to reproduce the figures and analyses in the manuscript are similarly released (analysis code: https://github.com/Genentech/Isosceles_paper42; singularity containers: https://doi.org/10.5281/zenodo.8180648; benchmark quantifications: https://doi.org/10.5281/zenodo.8180604; raw simulated data: https://doi.org/10.5281/zenodo.8180695; simulated ovarian cell line bulk RNA-Seq data: https://doi.org/10.5281/zenodo.10895721; simulated ovarian cell line scRNA-Seq data: https://doi.org/10.5281/zenodo.10895894; mouse E18 brain scRNA-Seq data: https://doi.org/10.5281/zenodo.10028908).
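The Zenodo DOIs above resolve deterministically to record pages: the DOI suffix `zenodo.<id>` maps to `zenodo.org/records/<id>`. A minimal sketch of that mapping, assuming Zenodo's current URL scheme:

```python
def zenodo_record_url(doi: str) -> str:
    """Map a Zenodo DOI of the form 10.5281/zenodo.<id> to its record
    page URL (files are then listed on that page or via the Zenodo API)."""
    record_id = doi.rsplit("zenodo.", 1)[1]
    return f"https://zenodo.org/records/{record_id}"

print(zenodo_record_url("10.5281/zenodo.8180648"))
# -> https://zenodo.org/records/8180648
```

Following the DOI itself (via doi.org) is equivalent and is the more robust option if Zenodo's URL scheme changes.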

Pan, Q., Shai, O., Lee, L. J., Frey, B. J. & Blencowe, B. J. Deep surveying of alternative splicing complexity in the human transcriptome by high-throughput sequencing. Nat. Genet 40 , 1413–1415 (2008).

Sterne-Weiler, T., Weatheritt, R. J., Best, A. J., Ha, K. C. H. & Blencowe, B. J. Efficient and Accurate Quantitative Profiling of Alternative Splicing Patterns of Any Complexity on a Laptop. Mol. Cell 72 , 187–200.e6 (2018).

Ziegenhain, C. et al. Comparative Analysis of Single- Cell RNA Sequencing Methods. Mol. Cell 65 , 631–643.e4 (2017).

Wang, Y., Zhao, Y., Bollas, A., Wang, Y. & Au, K. F. Nanopore sequencing technology, bioinformatics and applications. Nat. Biotechnol. 39 , 1348–1365 (2021).

Li, H. New strategies to improve minimap2 alignment accuracy. Bioinformatics 37 , 4572–4574 (2021).

Gao, Y. et al. ESPRESSO: Robust discovery and quantification of transcript isoforms from error-prone long-read RNA-seq data. Sci. Adv. 9 , eabq5072 (2023).

Tang, A. D. et al. Full-length transcript characterization of SF3B1 mutation in chronic lymphocytic leukemia reveals downregulation of retained introns. Nat. Commun. 11 , 1438 (2020).

Tian, L. et al. Comprehensive characterization of single-cell full-length isoforms in human and mouse with long-read sequencing. Genome Biol. 22 , 310 (2021).

Lebrigand, K., Magnone, V., Barbry, P. & Waldmann, R. High throughput error corrected Nanopore single cell transcriptome sequencing. Nat. Commun. 11 , 4025 (2020).

Prjibelski, A. D. et al. Accurate isoform discovery with IsoQuant using long reads. Nat. Biotechnol. 41, 915–918 (2023).

Hu, Y. et al. LIQA: long-read isoform quantification and analysis. Genome Biol. 22 , 182 (2021).

Gleeson, J. et al. Accurate expression quantification from nanopore direct RNA sequencing with NanoCount. Nucleic Acids Res. 50, e19 (2021).

Chen, Y. et al. Context-Aware Transcript Quantification from Long Read RNA-Seq data with Bambu. Nat Methods 20 , 1187–1195 (2023).

Pardo-Palacios, F. J. et al. Systematic assessment of long-read RNA-seq methods for transcript identification and quantification. Nat. Methods 21 , 1349–1363 (2024).

Heber, S., Alekseyev, M., Sze, S.-H., Tang, H. & Pevzner, P. A. Splicing graphs and EST assembly problem. Bioinformatics 18 , S181–S188 (2002).

Ntranos, V., Kamath, G. M., Zhang, J. M., Pachter, L. & Tse, D. N. Fast and accurate single-cell RNA-seq analysis by clustering of transcript-compatibility counts. Genome Biol. 17 , 112 (2016).

Li, B. & Dewey, C. N. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome. BMC Bioinformatics 12, 323 (2011).

Bray, N. L., Pimentel, H., Melsted, P. & Pachter, L. Near-optimal probabilistic RNA-seq quantification. Nat. Biotechnol. 34 , 525–527 (2016).

Satija, R., Farrell, J. A., Gennert, D., Schier, A. F. & Regev, A. Spatial reconstruction of single-cell gene expression. Nat. Biotechnol. 33 , 495–502 (2015).

Yang, C., Chu, J., Warren, R. L. & Birol, I. NanoSim: nanopore sequence read simulator based on statistical characterization. Gigascience 6 , gix010 (2017).

Kovaka, S. et al. Transcriptome assembly from long-read RNA-seq alignments with StringTie2. Genome Biol. 20 , 278 (2019).

Dong, X. et al. Benchmarking long-read RNA-sequencing analysis tools using in silico mixtures. Nat. Methods 20 , 1810–1821 (2023).

Workman, R. E. et al. Nanopore native RNA sequencing of a human poly(A) transcriptome. Nat. Methods 16 , 1297–1305 (2019).

Dunham, I. et al. An integrated encyclopedia of DNA elements in the human genome. Nature 489 , 57–74 (2012).

Heaton, H. et al. Souporcell: robust clustering of single-cell RNA-seq data by genotype without reference genotypes. Nat. Methods 17 , 615–620 (2020).

Luecken, M. D. & Theis, F. J. Current best practices in single‐cell RNA‐seq analysis: a tutorial. Mol. Syst. Biol. 15 , e8746 (2019).

Street, K. et al. Slingshot: cell lineage and pseudotime inference for single-cell transcriptomics. BMC Genom. 19 , 477 (2018).

Katz, Y., Wang, E. T., Airoldi, E. M. & Burge, C. B. Analysis and design of RNA sequencing experiments for identifying isoform regulation. Nat. Methods 7 , 1009–1015 (2010).

Anders, S., Reyes, A. & Huber, W. Detecting differential usage of exons from RNA-seq data. Genome Res. 22 , 2008–2017 (2012).

Tapial, J. et al. An atlas of alternative splicing profiles and functional associations reveals new regulatory programs and genes that simultaneously express multiple major isoforms. Genome Res. 27 , 1759–1768 (2017).

Hubbard, K. S., Gut, I. M., Lyman, M. E. & McNutt, P. M. Longitudinal RNA sequencing of the deep transcriptome during neurogenesis of cortical glutamatergic neurons from murine ESCs. F1000Research 2 , 35 (2013).

Buccitelli, C. & Selbach, M. mRNAs, proteins and the emerging principles of gene expression control. Nat. Rev. Genet. 21 , 630–644 (2020).

McFarland, J. M. et al. Multiplexed single-cell transcriptional response profiling to define cancer vulnerabilities and therapeutic mechanism of action. Nat. Commun. 11 , 4296 (2020).

MacPherson, M. J. et al. Nucleocytoplasmic transport of the RNA-binding protein CELF2 regulates neural stem cell fates. Cell Rep. 35 , 109226 (2021).

Irimia, M. et al. A highly conserved program of neuronal microexons is misregulated in autistic brains. Cell 159 , 1511–1523 (2014).

Garcia-Cabau, C. et al. Kinetic stabilization of translation-repression condensates by a neuron-specific microexon. bioRxiv 2023.03.19.532587 https://doi.org/10.1101/2023.03.19.532587 (2023).

Gonatopoulos-Pournatzis, T. et al. Autism-Misregulated eIF4G Microexons Control Synaptic Translation and Higher Order Cognitive Functions. Mol. Cell 77 , 1176–1192.e16 (2020).

Dong, X. et al. The long and the short of it: unlocking nanopore long-read RNA sequencing data with short-read differential expression analysis tools. NAR Genom. Bioinform. 3 , lqab028 (2021).

Lun, A. T. L., McCarthy, D. J. & Marioni, J. C. A step-by-step workflow for low-level analysis of single-cell RNA-seq data with Bioconductor. F1000Research 5 , 2122 (2016).

Yu, M. et al. A resource for cell line authentication, annotation and quality control. Nature 520 , 307–311 (2015).

Kabza, M. & Sterne-Weiler, T. Accurate long-read transcript discovery and quantification at single-cell, pseudo-bulk and bulk resolution with Isosceles, http://github.com/Genentech/Isosceles, https://doi.org/10.5281/zenodo.12702401 (2024).

Kabza, M., Ritter, A. & Sterne-Weiler, T. Accurate long-read transcript discovery and quantification at single-cell, pseudo-bulk and bulk resolution with Isosceles, http://github.com/Genentech/Isosceles_Paper , https://doi.org/10.5281/zenodo.12702743 (2024).

Acknowledgements

We would like to thank Bo Li, Hector Corrada-Bravo, William Forrest, John Marioni, Luca Gerosa, Marc Hafner, and Robert Piskol for helpful suggestions and feedback.

Author information

Authors and Affiliations

Roche Informatics, F. Hoffmann-La Roche Ltd, Poznań, Poland

Michal Kabza

Computational Biology & Translation, Genentech Inc., South San Francisco, CA, USA

Alexander Ritter & Timothy Sterne-Weiler

Department of Next Generation Sequencing and Microchemistry, Proteomics and Lipidomics, Genentech Inc., South San Francisco, CA, USA

Ashley Byrne, Daniel Le & William Stephenson

Department of Discovery Oncology, Genentech Inc., South San Francisco, CA, USA

Kostianna Sereti & Timothy Sterne-Weiler

Contributions

T.S.W. and M.K. conceived of and designed the software methodology and computational experiments with contributions from the other authors. M.K. implemented the Isosceles package and both M.K. and A.R. performed benchmarking analyses. M.K. and T.S.W. designed and performed the case study. K.S. performed the cell culture and A.B. and W.S. performed the sequencing protocols with preliminary analyses from D.L. T.S.W. wrote the manuscript with contributions from M.K. and all other authors.

Corresponding author

Correspondence to Timothy Sterne-Weiler.

Ethics declarations

Competing interests

The authors are employees and shareholders of Genentech/Roche.

Peer review

Peer review information

Nature Communications thanks Andrey Prjibelski and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

  • Supplementary Information
  • Peer Review File
  • Description of Additional Supplementary Files
  • Supplementary Data 1
  • Supplementary Data 2
  • Reporting Summary
  • Source Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article

Kabza, M., Ritter, A., Byrne, A. et al. Accurate long-read transcript discovery and quantification at single-cell, pseudo-bulk and bulk resolution with Isosceles. Nat. Commun. 15, 7316 (2024). https://doi.org/10.1038/s41467-024-51584-3

Received: 23 November 2023

Accepted: 07 August 2024

Published: 25 August 2024

DOI: https://doi.org/10.1038/s41467-024-51584-3

