
Research Zeroes In on a Barrier to Reading (Plus, Tips for Teachers)

How much background knowledge is needed to understand a piece of text? New research appears to pinpoint the tipping point.


By now, you’ve probably heard of the baseball experiment. It’s several decades old but has experienced a resurgence in popularity since Natalie Wexler highlighted it in her best-selling book, The Knowledge Gap.

In the 1980s, researchers Donna Recht and Lauren Leslie asked middle school students to read a passage describing a baseball game, then reenact it with wooden figures on a miniature baseball field. They were surprised by the results: even strong readers with little knowledge of baseball struggled to re-create the events described in the passage.

“Prior knowledge creates a scaffolding for information in memory,” they explained after seeing the results. “Students with high reading ability but low knowledge of baseball were no more capable of recall or summarization than were students with low reading ability and low knowledge of baseball.”

That modest experiment kicked off 30 years of research into reading comprehension, and study after study confirmed Recht and Leslie’s findings: Without background knowledge, even skilled readers labor to make sense of a topic. But those studies left a lot of questions unanswered: How much background knowledge is needed for better comprehension? Is there a way to quantify and measure prior knowledge?

A 2019 study published in Psychological Science is finally shedding light on those mysteries. The researchers identified a “knowledge threshold” for reading comprehension: when students were familiar with fewer than roughly 59 percent of the terms related to a topic, their ability to understand the text was “compromised.”

In the study, 3,534 high school students were presented with a list of 44 terms and asked to identify whether each was related to the topic of ecology. Researchers then analyzed the student responses to generate a background-knowledge score, which represented their familiarity with the topic. 

Without any interventions, students then read about ecosystems and took a test measuring how well they understood what they had read.

Students who scored less than 59 percent on the background-knowledge test also performed relatively poorly on the subsequent test of reading comprehension. But researchers noted a steep improvement in comprehension above the 59 percent threshold—suggesting both that a lack of background knowledge can be an obstacle to reading comprehension, and that there is a baseline of knowledge that rapidly accelerates comprehension.

Why does background knowledge matter? Reading is more than just knowing the words on the page, the researchers point out. It’s also about making inferences about what’s left off the page—and the more background knowledge a reader has, the better able he or she is to make those inferences.

“Collectively, these results may help identify who is likely to have a problem comprehending information on a specific topic and, to some extent, what knowledge is likely required to comprehend information on that topic,” conclude Tenaha O'Reilly, the lead author of the study, and his colleagues.

5 Ways Teachers Can Build Background Knowledge 

Spending a few minutes making sure that students meet the knowledge threshold for a topic can yield outsized results. Here’s what teachers can do:

  • Mind the gap: You may be an expert in civil war history, but be mindful that your students will represent a wide range of existing background knowledge on the topic. Similarly, take note of the cultural, social, economic, and racial diversity in your classroom. You may think it’s cool to teach physics using a trebuchet, but not all students have been exposed to the same ideas that you have.
  • Identify common terms in the topic. Ask yourself, “What are the main ideas in this topic? Can I connect what we’re learning to other big ideas for students?” If students are learning about earthquakes, for example, take a step back and look at what else they should know about—perhaps Pangaea, the ancient supercontinent, or what tectonic plates are. Understanding these concepts can anchor more complex ideas like P and S waves. And don’t forget to go over some broad-stroke ideas—such as history’s biggest earthquakes—so that students are more familiar with the topic.
  • Incorporate low-stakes quizzes. Before starting a lesson, use formative assessment strategies such as entry slips or participation cards to quickly identify gaps in knowledge.
  • Build concept maps. Consider leading students in the creation of visual models that map out a topic’s big ideas—and connect related ideas that can provide greater context and address knowledge gaps. Visual models provide another way for students to process and encode information, before they dive into reading.
  • Sequence and scaffold lessons. When introducing a new topic, try to connect it to previous lessons: Reactivating knowledge the students already possess will serve as a strong foundation for new lessons. Also, consider your sequencing carefully before you start the year to take maximum advantage of this effect.  

Brief Research Report

The Use of New Technologies for Improving Reading Comprehension

Agnese Capodieci*

  • 1 Department of General Psychology, University of Padova, Padua, Italy
  • 2 Azienda Sociosanitaria Ligure 5 Spezzino, La Spezia, Italy

Since the introduction of writing systems, reading comprehension has been a foundation for achievement in several areas of the educational system, as well as a prerequisite for successful participation in most areas of adult life. The increased availability of technologies and web-based resources can provide valuable support, in both educational and clinical settings, for devising training activities that can also be carried out remotely. Studies in the current literature have examined the efficacy of internet-based programs for children with reading comprehension difficulties, but almost none have considered distance rehabilitation programs. The present paper reports data on a distance program, Cloze, developed in Italy for improving language and reading comprehension. Twenty-eight children from 3rd to 6th grade with comprehension difficulties were involved. These children completed the distance program for 15–20 min at least three times a week for about 4 months. The program was presented separately to each child, with a degree of difficulty adapted to his/her characteristics. Text reading comprehension (assessed distinguishing between narrative and informative texts) increased after the intervention. These findings have clinical and educational implications, as they suggest that it is possible to promote reading comprehension with an individualized distance program, avoiding the need for the child to travel to a rehabilitation center.

Introduction

Reading comprehension is a fundamental cognitive ability for children that supports school achievement and, subsequently, participation in most areas of adult life (Hulme and Snowling, 2011). Therefore, children with learning disabilities (LD) and special educational needs who show difficulties in text comprehension, sometimes in association with other problems, may be at increased risk of school and life failure (Woolley, 2011). Reading comprehension is, indeed, a complex cognitive ability that involves not only linguistic skills (e.g., vocabulary, grammatical knowledge), but also cognitive skills (such as working memory, De Beni and Palladino, 2000), metacognitive skills (both knowledge and control aspects, Channa et al., 2015), and, more specifically, higher-order comprehension skills such as the generation of inferences (Oakhill et al., 2003).

Recently, due to the diffusion of technology in many fields of daily life, text comprehension at school, at home during homework, and at work is based on an increasing number of digital reading devices (computers and laptops, e-books, and tablet devices) that can become a fundamental support to improve traditional reading comprehension and learning skills (e.g., inference generation).

Some authors have compared, in children with typical development, the effects of digital interfaces versus printed texts on reading comprehension (Kerr and Symons, 2006; Rideout et al., 2010; Mangen et al., 2013; Singer and Alexander, 2017; Delgado et al., 2018). Results were consistent, showing worse comprehension for on-screen texts than for printed texts in children (Mangen et al., 2013; Delgado et al., 2018) and in adolescents, who nonetheless preferred digital texts over printed texts (Singer and Alexander, 2017). Regarding children with learning problems, only a few studies have compared printed texts with digital devices (Chen, 2009; Gonzalez, 2014; Krieger, 2017), finding no significant differences and suggesting that compensatory digital tools could be a valid alternative to traditional printed texts for children with learning difficulties, facilitating their academic and work performance. This conclusion is also supported by the results of a meta-analysis (Moran et al., 2008) on the use of digital tools and learning environments for enhancing literacy acquisition in middle school students, which demonstrated that technology can improve reading comprehension.

Different procedures and abilities are targeted in the international literature on computerized training programs for reading comprehension. In particular, various studies include activities promoting cognitive (e.g., vocabulary, inference making) and metacognitive (e.g., the use of strategies, comprehension monitoring, and identification of relevant parts in a text) components of reading comprehension. Table 1 lists the papers proposing computerized training programs, together with a summary of their findings. Participants cover different ages and school grades, the majority in middle school and high school. The general outcome of the studies is positive: comprehension skills improved significantly after the training programs, with long-lasting effects at follow-up; indeed, the majority of participants in training programs outperformed peers assigned to comparison groups and maintained their improvements. Specifically, several studies (O’Reilly et al., 2004; Magliano et al., 2005; McNamara et al., 2006) used the iSTART program with adolescents and young adults. This program promotes self-explanation, prior knowledge, and reading strategies to enhance understanding of descriptive scientific texts. Results demonstrated that students who followed the iSTART program benefited more than their peers, improving self-explanation and summarization. Additionally, strategic knowledge was a relevant factor for outcomes on comprehension tasks including multiple-choice questions: students who already possessed good strategic knowledge improved their accuracy when answering bridging-inference questions, whereas students with low strategic knowledge became more accurate on text-based questions. Another program, ITSS, was used with younger students (Meyer et al., 2011; Wijekumar et al., 2012, 2013, 2017), with the objective of supporting activities based on identifying main parts and key words in a text and classifying information in a hierarchical order. Positive outcomes were also found with this program: students who followed ITSS significantly improved text comprehension compared to their peers in the control group.


Table 1. Synthesis of the main results of the computerized training programs on comprehension present in the literature.

Although most of the literature deals with typical development, students with learning difficulties have also been considered. For example, Potocki et al. (2013) (see also Potocki et al., 2015) examined the effects of two different computerized programs with specific aims: one focusing on comprehension features, such as inference making and the analysis of text structure, the other targeting decoding skills. Both training programs brought some benefits to reading comprehension; however, larger effects were found with the comprehension-focused program, with long-lasting effects on listening and reading comprehension (see also Kleinsz et al., 2017). The programs 3D Readers and CACSR, used respectively by Johnson-Glenberg (2005) and Kim et al. (2006), promoted reading comprehension abilities in middle school students through metacognitive activities; thanks to these programs, students also became more aware of reading strategies and implemented them more successfully during text comprehension. In particular, a study by Niedo et al. (2014) obtained positive results on silent reading in a small group of children struggling with reading, using the “cloze” procedure. This procedure proposes exercises in which parts of a text, typically words, are missing and participants are required to complete the text by guessing what is missing.

Thus, computerized programs generally seem to improve reading comprehension skills. However, it should be noted that, in most cases, students were trained at school, without the personalized support of a clinician taking into consideration the cognitive and psychological needs of the child. In particular, to our knowledge, no program has examined the effects of an internet-based distance reading comprehension program that allows the child to be trained at home in a personalized way. A useful aspect of internet-based distance training is that the psychologist can monitor the child’s results and activities through the application (app) and send motivational messages, reducing the attrition found in programs carried out at home under parental supervision alone. Literature concerning distance training is still scarce; however, some evidence suggests that these programs may be a good complement to other types of intervention usually carried out at school, in a rehabilitation center, or at home (e.g., Mich et al., 2013).

Therefore, although still preliminary, we think it is relevant to present data about a distance program developed in Italy named Cloze (Cornoldi and Bertolo, 2013), devised for rehabilitation purposes but with potential implications also for educational contexts. Cloze was developed to promote inferential abilities at both the sentence and discourse level using the “cloze” procedure. Several findings in the literature demonstrate that abilities such as anticipating text parts and inference making bring improvements in text comprehension (e.g., Yuill and Oakhill, 1988), and it has been shown that one way to promote inferential competence is to improve the ability to predict parts of the text that are missing or that follow, given the available information; the “cloze” technique appears to be one of the most successful ways to do so (e.g., Greene, 2001).

In the current study, the effectiveness of this training program was tested on a clinical population that exhibited, for various reasons, difficulties in reading comprehension. Participants were 28 children (16 boys and 12 girls) attending a private practice for learning difficulties in the city of La Spezia, in north-west Italy, from 3rd to 6th grade (5 in 3rd, 9 in 4th, 11 in 5th, and 3 in 6th grade), with a mean age of 9.79 years (SD = 1.03). Seventeen children had a current or past speech disorder; of these, 10 also had a learning disability (LD) and one was bilingual (the speech problems were not due to bilingualism). The other 11 children had an LD or significant learning difficulties, and one of them also had ADHD (attention deficit/hyperactivity disorder). For the goals of the study, all these children were considered together, as they all presented a severe reading comprehension difficulty, as reported by parents and teachers and confirmed by the initial assessment.

All children had received a comprehensive psychological assessment (see Table 2), adapted to their particular needs and ages. In particular, all children had an IQ > 80, assessed with the Wechsler Intelligence Scale for Children-IV (WISC-IV; Wechsler, 2003), and did not have anxiety disorders, mood disorders, or other developmental disorders, with the exception of the cases with language disorder and the case with ADHD. Children were not receiving any additional treatment, including medication. Written consent was obtained from the children’s parents in the context of the private practice.


Table 2. Main characteristics of the sample in terms of reading and cognitive abilities.

Materials and Methods

Pre-/post-test assessment and procedure of the training.

After the assessment of learning and cognitive abilities, including a comprehension assessment with two texts, one narrative and one informative (Cornoldi and Carretti, 2016; Cornoldi et al., 2017), each child started a training program through the distance rehabilitation platform Ridinet, using the Cloze app. Children connected to the Ridinet website to access the app three or four times a week for roughly 15–20 min. The period of use was 3 months for 6 children and 4 months for 22 children. After this period, children’s comprehension was assessed again. Additionally, parents and children were asked some questions about the app’s utility and pleasantness. In particular, children were asked: “Do you think the program helped you improve your text comprehension skills?” and “Did you like doing this program instead of the same exercises on paper?”; parents were asked: “Was it difficult to start the Cloze activities on the days when it had to be done?” and “Compared to the beginning of the treatment, how do you currently judge your child’s ability to understand texts?”. For all questions except the last one, the answer was given on a 5-point scale with 1 = not at all, 2 = a little, 3 = enough, 4 = very, and 5 = very much. For the last question, the answer was given on a 4-point scale with 1 = got worse, 2 = unchanged, 3 = slightly improved, and 4 = greatly improved.

Comprehension Tasks

Reading comprehension was assessed with two texts, one narrative and one informative, taken from Italian batteries for the assessment of reading (Cornoldi and Carretti, 2016; Cornoldi et al., 2017). The texts range between 226 and 455 words in length, and their length increases with school grade (to match texts and questions to the expected expertise at different grades, the batteries include a different pair of texts for each grade). Students read the text silently at their own pace, then answer a variable number of multiple-choice questions (depending on school grade), choosing one of four possible answers. There is no time limit, and students can reread the text whenever they wish. The final score is the total number of correct answers for each text. Alpha coefficients, as reported in the manuals, range between 0.61 and 0.83. For the purposes of the study, we decided to use the same two comprehension texts at pre-test and post-test, as this procedure offered the opportunity to directly examine, and show to parents, changes in comprehension; previous evidence had shown no relevant retest effects with this material in a retest carried out after 3 months (Viola and Carretti, 2019).

Distance Rehabilitation Program: Cloze

Cloze (Cornoldi and Bertolo, 2013) is an app for promoting text comprehension, with the specific aim of strengthening processes of lexical and semantic inference. In each work session the child works with texts that lack words and must complete the empty spaces by choosing the correct alternative from those automatically proposed by the app, so that the text becomes congruent. The program is adaptive, as text complexity and the proportion of missing words vary according to the previous level of response, and it is designed for children who have weaknesses in written text comprehension, mainly due to poor lexical and semantic inferential processes. The app also helps enhance a set of language skills (phonology, syntax, semantics) that contribute to fluent text processing and production. The recommended age range for the program is 7 to 14 years. In this study, the semantic mode (only content words may be missing, and no syntactic cues can be used to decide between the alternatives) was proposed to 21 children and the syntactic mode (where all words may be missing) to 7 children. The mode selected for each child depended on pre-test performance and diagnosis. A clinician and co-author of the present study (LB) monitored each child’s results and activities with the app and periodically sent motivational messages, typically once a week, to congratulate the child on the work done and check for any problems that had emerged. Training lasted from 3 to 4 months and involved 3 to 4 sessions of 15–20 min per week. The variation in duration depended on each individual family’s decision: children were asked to use the software for about 4 months, or in any case for a minimum of 3 months (the choice made by six families).
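To make the “cloze” procedure concrete, the sketch below shows, in Python, how a single cloze item might be built and scored. It is purely illustrative: the function names, the distractor handling, and the console interaction are assumptions made for this example and do not reflect the actual Ridinet/Cloze implementation.

```python
# Hypothetical sketch of a cloze-style item: one content word is blanked
# out and the reader chooses the correct filler among distractors.
import random

def make_cloze_item(sentence, target_word, distractors):
    """Blank out one word and return the gapped text, options, and answer."""
    gapped = sentence.replace(target_word, "_____", 1)
    options = distractors + [target_word]
    random.shuffle(options)
    return gapped, options, target_word

def ask(sentence, target_word, distractors):
    """Present the item on the console and return True if answered correctly."""
    gapped, options, answer = make_cloze_item(sentence, target_word, distractors)
    print(gapped)
    for i, opt in enumerate(options, 1):
        print(f"  {i}. {opt}")
    choice = options[int(input("Choice: ")) - 1]
    return choice == answer

# Example call (semantic mode: only a content word is missing):
# correct = ask("The cat chased the mouse across the garden.", "mouse",
#               ["cloud", "idea", "river"])
```

In an adaptive version, the proportion of blanked words and the text difficulty would be adjusted according to the child's previous responses, as described above.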

Effects on Reading Comprehension of Cloze Training

All analyses were carried out with SPSS 25 (IBM Corp, 2017). A preliminary analysis found that all the examined variables met the assumption of normality (Kolmogorov–Smirnov statistics between 0.106 and 0.143, p > 0.05). We then compared children’s reading comprehension performance before and after the computerized training with Cloze. For this analysis, a repeated-measures analysis of variance (ANOVA) was conducted on comprehension scores to examine the difference, in the whole group of children, between the scores obtained before and after the training. A significant difference was found for both comprehension texts [F(1,27) = 22.37, p < 0.001, η²p = 0.453 and F(1,27) = 38.90, p < 0.001, η²p = 0.599, respectively]. Possible differences between the two training modalities (semantic vs syntactic) and between the two training periods (3 months vs 4 months) were then analyzed; no significant differences emerged between groups in either case [F(1,27) < 1].
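As a quick sanity check on the reported statistics, partial eta squared for a one-degree-of-freedom effect can be recovered from the F value and its error degrees of freedom. The snippet below is a sketch under that assumption; the small discrepancy for the informative text most likely reflects rounding of the reported F.

```python
# Partial eta squared recovered from a reported F test with df1 = 1,
# using the standard relation eta_p^2 = F*df1 / (F*df1 + df2).
def partial_eta_squared(f_value, df1, df2):
    return (f_value * df1) / (f_value * df1 + df2)

print(round(partial_eta_squared(22.37, 1, 27), 3))  # 0.453, matching the narrative text
print(round(partial_eta_squared(38.90, 1, 27), 3))  # 0.590, close to the reported 0.599
```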

Secondly, to analyze the role of individual differences at pre-test, the standardized training gain score (STG; Jaeggi et al., 2011) – computed as the post-test score minus the pre-test score, divided by the SD of the pre-test scores – was calculated for the two comprehension texts. Pearson correlations were computed between the STG and the variables collected at pre-test (reading speed and errors; WISC-IV Full Scale IQ, Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed indexes). The only significant correlation was between the STG for the narrative text and the Verbal Comprehension Index of the WISC-IV (r = 0.38, p = 0.048). Finally, individual improvements from pre- to post-test were also confirmed by considering changes in performance in terms of standard deviations relative to the norms provided by the manual. Table 3 shows, for each comprehension text, the number of children whose performance moved from at least 2 SDs below the mean, or between 1 and 2 SDs below the mean, to less than 1 SD below the mean.
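The standardized training gain and its correlation with a pre-test index can be expressed compactly. The sketch below assumes plain NumPy arrays of per-child scores; the variable names are hypothetical and the data are not the study’s.

```python
# Standardized training gain (STG) and its Pearson correlation with a
# pre-test index (a minimal sketch, not the authors' SPSS syntax).
import numpy as np
from scipy import stats

def standardized_training_gain(pre, post):
    """(post - pre) / SD of the pre-test scores (Jaeggi et al., 2011)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / pre.std(ddof=1)

# Hypothetical usage with per-child score arrays:
# stg_narrative = standardized_training_gain(pre_narrative, post_narrative)
# r, p = stats.pearsonr(stg_narrative, verbal_comprehension_index)
```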


Table 3. Changes in performance relative to norms (provided by the manual) after the training program Cloze.

Perceived Utility, Pleasantness, and Improvement as Rated by Parents and Children

Results concerning parents’ and children’s answers about the utility, pleasantness, and self-perceived efficacy of the app were also analyzed. For the first question, addressing children’s perceived improvement in comprehension skills, more than half of the sample chose “very” or “very much” (15 “very” and 5 “very much”), only 1 child answered “a little,” and the others chose “enough.” For the second question, about the pleasure of doing this kind of activity instead of pen-and-paper activities, all children answered “very” or “very much.” Concerning the parents’ questions, for the first question about the difficulty of starting the Cloze activity, only one parent answered “enough,” a quarter of the sample chose “a little” (seven families), and the other 20 families chose “not at all.” For the last question, about the perceived efficacy of the training on their child’s performance, the large majority of families chose “slightly improved” or “greatly improved,” and only three parents thought their children’s ability had remained unchanged. However, no significant correlations were found between parents’ or children’s perceived improvements and the STG in reading comprehension.

The present study examined the effects of the use of Cloze, a distance rehabilitation program focused on inference skills, for improving reading comprehension. Because inference making is related to reading comprehension at different ages (e.g., Oakhill and Cain, 2012), we hypothesized that the training activities would have positive effects on reading comprehension.

Concerning the efficacy of computer-assisted training programs, the literature highlights that many training programs are devised for an educational context. Results are generally encouraging, with positive effects on reading comprehension measured with materials different from those practiced during the training. However, few studies have analyzed efficacy in children with specific reading comprehension problems, and no studies have considered the possibility of carrying out training at home under the distance supervision of an expert. These characteristics are what make Cloze distinctive compared with the existing literature. Cloze is indeed based on an online rehabilitation platform that allows the child to complete personalized training activities several times a week without leaving home, while enabling the clinician to monitor the child’s progress and manage the characteristics of the activities. The advantage of this procedure is twofold: on the one hand, it increases the potential number of training sessions per week; on the other, it saves the time needed to reach a rehabilitation center and reduces the cost of the intervention.

The preliminary data on Cloze were generally positive: children, working on either of two slightly different versions of the same program, showed a generalized improvement in reading comprehension tasks and, together with their families, expressed appreciation for the pleasantness and efficacy of the program. Encouraging results also emerged from the analysis of individual improvements relative to normative scores, as reported in Table 3: most children’s performance moved from a markedly below-average level to an average level.

It is noteworthy that the efficacy of the training was assessed with materials different from those practiced during the training sessions, since the reading comprehension tasks required reading a paper text and completing a series of multiple-choice questions. In future studies it would be interesting to analyze the effects of the program on skills known to be related to text comprehension, such as vocabulary or comprehension monitoring. There is good reason to believe that, since these variables are highly predictive of comprehension skills (and given that training in these skills sometimes improves comprehension; e.g., Beck et al., 1982; see also Hulme and Snowling, 2011), training that specifically targets comprehension might, in turn, lead to improvements in vocabulary or comprehension monitoring skills. Further studies are needed to explore this hypothesis.

A second relevant finding of the present study is the positive correlation between the gain obtained for one of the reading comprehension texts (the narrative one) and the Verbal Comprehension Index of the WISC-IV battery, showing that children who started with more verbal intellectual resources achieved greater improvements in text comprehension, at least with one type of text, through Cloze. The activities probably required children to develop some kind of strategy, and for this reason students with larger verbal intellectual resources, who were presumably better able to develop new strategies, were at an advantage. Indeed, this amplification effect is usually found when training activities require the development of strategies (von Bastian and Oberauer, 2014). This result has clinical and educational implications, inviting professionals and teachers to consider children’s starting resources and, if necessary, to combine activities conducted through distance rehabilitation programs with in-person intervention sessions that teach strategies and promote a metacognitive approach to reading comprehension.

However, some limitations of the present study must be acknowledged. Firstly, the study did not include a control group, so findings should be taken with caution, although normative data and previous results obtained with the same test support the robustness of our results; the use of normative data offers a control measure of how reading comprehension skills develop in typically developing children without specific training, thus functioning as a sort of passive control group. Secondly, the treated group, although characterized by a common reading comprehension difficulty, was partly heterogeneous, as children attended different grades and had different diagnoses. Unfortunately, the limited number of participants, which made it impossible to form groups defined by both grade and diagnosis, did not permit analyses taking grade and diagnosis into account as between-subjects factors. Future studies should examine a more homogeneous population or consider a larger sample of children, providing more information about the efficacy of the training in different child populations. Additionally, the fact that the treatment concluded with the post-training assessment did not offer the opportunity to further examine the procedure and maintenance effects with a follow-up.

Despite these limitations, this study offers evidence concerning the efficacy of new methods, based on computer-assisted training programs, that could be beneficial in training high-level skills such as comprehension and inference generation. Such tools can be extremely worthwhile for struggling readers who may need further attention in mastering higher-level reading comprehension.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author Contributions

AC, CC and BC contributed to the design and implementation of the research. LB provided the data. BC organized the database. AC performed the statistical analysis. ED did the literature research and wrote the section about the review of the literature. AC and BC wrote the other sections. CC contributed to the manuscript revision, read and approved the submitted version.

The present work was carried out within the scope of the research program Dipartimenti di Eccellenza (art.1, commi 314-337 legge 232/2016), which was supported by a grant from MIUR to the Department of General Psychology, University of Padua and partially supported by a grant (PRIN 2015, 2015AR52F9_003) to Cesare Cornoldi funded by the Italian Ministry of Research and Education (MIUR).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Beck, I. L., Perfetti, C. A., and McKeown, M. G. (1982). Effects of long-term vocabulary instruction on lexical access and reading comprehension. J. Educ. Psychol. 74, 506–521. doi: 10.1037/0022-0663.74.4.506


Channa, M. A., Nordin, Z. S., Siming, I. A., Chandio, A. A., and Koondher, M. A. (2015). Developing reading comprehension through metacognitive strategies: a review of previous studies. Eng. Lang. Teach. 8, 181–186. doi: 10.5539/elt.v8n8p181

Chen, H. (2009). Online reading comprehension strategies among fifth- and sixth-grade general and special education students. Educ. Res. Perspect. 37, 79–109.


Cornoldi, C., and Bertolo, L. (2013). Cloze Ridinet. Bologna: Anastasis.

Cornoldi, C., and Carretti, B. (2016). Prove MT-3-Clinica. Firenze: Giunti Edu.

Cornoldi, C., Carretti, B., and Colpo, C. (2017). Prove MT-Kit Scuola. Dalla valutazione degli Apprendimenti di Lettura E Comprensione Al Potenziamento. [MT-Kit for the Assessment In The School. From Reading Assessment To Its Enhancement]. Firenze: Giunti Edu.

Cullen, J. M., Alber-Morgan, S. R., Schnell, S. T., and Wheaton, J. E. (2014). Improving reading skills of students with disabilities using headsprout comprehension. Remed. Spec. Educ. 35, 356–365. doi: 10.1177/0741932514534075

De Beni, R., and Palladino, P. (2000). Intrusion errors in working memory tasks: are they related to reading comprehension ability? Learn. Individ. Differ. 12, 131–143. doi: 10.1016/s1041-6080(01)00033-4

Delgado, P., Vargas, C., Ackerman, R., and Salmerón, L. (2018). Don’t throw away your printed books: a meta-analysis on the effects of reading media on reading comprehension. Educ. Res. Rev. 25, 23–38. doi: 10.1016/j.edurev.2018.09.003

Gonzalez, M. (2014). The effect of embedded text-to-speech and vocabulary eBook scaffolds on the comprehension of students with reading disabilities. Intern. J. Spec. Educ. 29, 111–125.

Greene, B. (2001). Testing reading comprehension of theoretical discourse with cloze. J. Res. Read. 24, 82–98. doi: 10.1111/1467-9817.00134

Hulme, C., and Snowling, M. J. (2011). Children’s reading comprehension difficulties: nature, causes, and treatments. Curr. Direct. Psychol. Sci. 20, 139–142. doi: 10.1177/0963721411408673

IBM Corp (2017). IBM SPSS Statistics for Windows, Version 25.0. Armonk, NY: IBM Corp.

Jaeggi, S. M., Buschkuehl, M., Jonides, J., and Shah, P. (2011). Short-and long-term benefits of cognitive training. Proc. Natl. Acad. Sci. U.S.A. 108, 10081–10086. doi: 10.1073/pnas.1103228108

Johnson-Glenberg, M. C. (2005). Web-based training of metacognitive strategies for text comprehension: focus on poor comprehenders. Read. Writ. 18, 755–786. doi: 10.1007/s11145-005-0956-5

Kerr, M. A., and Symons, S. E. (2006). Computerized presentation of text: effects on children’s reading of informational material. Read. Writ. 19, 1–19. doi: 10.1007/s11145-003-8128-y

Kim, A. H., Vaughn, S., Klingner, J. K., Woodruff, A. L., Klein Reutebuch, C., and Kouzekanani, K. (2006). Improving the reading comprehension of middle school students with disabilities through computer-assisted collaborative strategic reading. Remed. Spec. Educ. 27, 235–249. doi: 10.1177/07419325060270040401

Kleinsz, N., Potocki, A., Ecalle, J., and Magnan, A. (2017). Profiles of French poor readers: underlying difficulties and effects of computerized training programs. Learn. Individ. Differ. 57, 45–57. doi: 10.1016/j.lindif.2017.05.009

Krieger, R. (2017). The Effect of Electronic Text Reading on Reading Comprehension Scores of Students with Disabilities. Master thesis, Governors State University, Park, IL.

Leong, C. K. (1992). Enhancing reading comprehension with text-to-speech (DECtalk) computer system. Read. Writ. 4, 205–217. doi: 10.1007/bf01027492

Magliano, J. P., Todaro, S., Millis, K., Wiemer-Hastings, K., Kim, H. J., and McNamara, D. S. (2005). Changes in reading strategies as a function of reading training: a comparison of live and computerized training. J. Educ. Comput. Res. 32, 185–208. doi: 10.2190/1ln8-7bqe-8tn0-m91l

Mangen, A., Walgermo, B. R., and Brønnick, K. (2013). Reading linear texts on paper versus computer screen: effects on reading comprehension. Intern. J. Educ. Res. 58, 61–68. doi: 10.1016/j.ijer.2012.12.002

McNamara, D. S., O’Reilly, T. P., Best, R. M., and Ozuru, Y. (2006). Improving adolescent students’ reading comprehension with iSTART. Intern. J. Educ. Res. 34, 147–171. doi: 10.2190/1ru5-hdtj-a5c8-jvwe

Meyer, B. J., Wijekumar, K. K., and Lin, Y. C. (2011). Individualizing a web-based structure strategy intervention for fifth graders’ comprehension of nonfiction. J. Educ. Psychol. 103, 140–168. doi: 10.1037/a0021606

Mich, O., Pianta, E., and Mana, N. (2013). Interactive stories and exercises with dynamic feedback for improving reading comprehension skills in deaf children. Comput. Educ. 65, 34–44. doi: 10.1016/j.compedu.2013.01.016

Moran, J., Ferdig, R. E., Pearson, P. D., Wardrop, J., and Blomeyer, R. L. Jr. (2008). Technology and reading performance in the middle-school grades: a meta-analysis with recommendations for policy and practice. J. Liter. Res. 40, 6–58. doi: 10.1080/10862960802070483

Niedo, J., Lee, Y. L., Breznitz, Z., and Berninger, V. W. (2014). Computerized silent reading rate and strategy instruction for fourth graders at risk in silent reading rate. Learn. Disabil. Q. 37, 100–110. doi: 10.1177/0731948713507263

Oakhill, J. V., and Cain, K. (2012). The precursors of reading ability in young readers: evidence from a four-year longitudinal study. Sci. Stud. Read. 162, 91–121. doi: 10.1080/10888438.2010.529219

Oakhill, J. V., Cain, K., and Bryant, P. E. (2003). The dissociation of word reading and text comprehension: evidence from component skills. Lang. Cogn. Process. 18, 443–468. doi: 10.1080/01690960344000008

O’Reilly, T. P., Sinclair, G. P., and McNamara, D. S. (2004). “iSTART: A web-based reading strategy intervention that improves students’ science comprehension,” in Proceedings of the IADIS International Conference Cognition and Exploratory Learning in Digital Age: CELDA 2004 , eds D. G. Kinshuk and P. Isaias (Lisbon: IADIS), 173–180.

Ortlieb, E., Sargent, S., and Moreland, M. (2014). Evaluating the efficacy of using a digital reading environment to improve reading comprehension within a reading clinic. Read. Psychol. 35, 397–421. doi: 10.1080/02702711.2012.683236

Potocki, A., Ecalle, J., and Magnan, A. (2013). Effects of computer-assisted comprehension training in less skilled comprehenders in second grade: a one-year follow-up study. Comput. Educ. 63, 131–140. doi: 10.1016/j.compedu.2012.12.011

Potocki, A., Magnan, A., and Ecalle, J. (2015). Computerized trainings in four groups of struggling readers: specific effects on word reading and comprehension. Res. Dev. Disabil. 45, 83–92. doi: 10.1016/j.ridd.2015.07.016

Rideout, V. J., Foehr, U. G., and Roberts, D. F. (2010). Generation M 2: Media in the Lives of 8-to 18-Year-Olds. San Francisco, CA: Henry J. Kaiser Family Foundation.

Singer, L. M., and Alexander, P. A. (2017). Reading on paper and digitally: what the past decades of empirical research reveal. Rev. Educ. Res. 87, 1007–1041. doi: 10.3102/0034654317722961

Viola, F., and Carretti, B. (2019). Cambiamenti nelle abilità di lettura nel corso di uno stesso anno scolastico [Changes in reading skills during the same school year]. Dislessia 16, 147–159. doi: 10.14605/DIS1621902

von Bastian, C. C., and Oberauer, K. (2014). Effects and mechanisms of working memory training: a review. Psychol. Res. 78, 803–820. doi: 10.1007/s00426-013-0524-5

Wechsler, D. (2003). Wechsler Intelligence Scale For Children–Fourth Edition (WISC-IV). San Antonio, TX: The Psychological Corporation.

Wijekumar, K. K., Meyer, B. J., and Lei, P. (2012). Large-scale randomized controlled trial with 4th graders using intelligent tutoring of the structure strategy to improve nonfiction reading comprehension. Educ. Technol. Res. Dev. 60, 987–1013. doi: 10.1007/s11423-012-9263-4

Wijekumar, K. K., Meyer, B. J., and Lei, P. (2013). High-fidelity implementation of web-based intelligent tutoring system improves fourth and fifth graders content area reading comprehension. Comput. Educ. 68, 366–379. doi: 10.1016/j.compedu.2013.05.021

Wijekumar, K. K., Meyer, B. J., and Lei, P. (2017). Web-based text structure strategy instruction improves seventh graders’ content area reading comprehension. J. Educ. Psychol. 109, 741–760. doi: 10.1037/edu0000168

Woolley, G. (2011). “Reading comprehension,” in Reading Comprehension: Assisting Children With Learning Difficulties (Dordrecht: Springer). doi: 10.1007/978-94-007-1174-7_2

Yuill, N., and Oakhill, J. (1988). Effects of inference awareness training on poor reading comprehension. Appl. Cogn. Psychol. 2, 33–45. doi: 10.1002/acp.2350020105

Keywords : reading comprehension, training, distance rehabilitation program, digital device, Cloze app

Citation: Capodieci A, Cornoldi C, Doerr E, Bertolo L and Carretti B (2020) The Use of New Technologies for Improving Reading Comprehension. Front. Psychol. 11:751. doi: 10.3389/fpsyg.2020.00751

Received: 20 November 2019; Accepted: 27 March 2020; Published: 23 April 2020.


Copyright © 2020 Capodieci, Cornoldi, Doerr, Bertolo and Carretti. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Agnese Capodieci, [email protected] ; Laura Bertolo, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.


A Synthesis of Reading Interventions and Effects on Reading Comprehension Outcomes for Older Struggling Readers

This article reports a synthesis of intervention studies conducted between 1994 and 2004 with older students (Grades 6–12) with reading difficulties. Interventions addressing decoding, fluency, vocabulary, and comprehension were included if they measured the effects on reading comprehension. Twenty-nine studies were located and synthesized. Thirteen studies met criteria for a meta-analysis, yielding an effect size (ES) of 0.89 for the weighted average of the difference in comprehension outcomes between treatment and comparison students. Word-level interventions were associated with ES = 0.34 in comprehension outcomes between treatment and comparison students. Implications for comprehension instruction for older struggling readers are described.

Although educators have historically emphasized improving students’ reading proficiency in the elementary school years, reading instruction for secondary students with reading difficulties has been less prevalent. As a result, secondary students with reading difficulties are infrequently provided reading instruction, thus widening the gap between their achievement and that of their grade-level peers. Recent legislation, such as the No Child Left Behind Act ( NCLB; 2002 ), has prompted schools to improve reading instruction for all students, including those in middle and high school. Many secondary students continue to demonstrate difficulties with reading, and educators continue to seek information on best practices for instructing these students.

The National Assessment of Educational Progress (NAEP) administered a reading assessment in 2002 to approximately 343,000 students in Grades 4 and 8. According to the NAEP data, there was no significant change in progress for students between 1992 and 2002, and Grade 8 scores in 2003 actually decreased ( Grigg, Daane, Jin, & Campbell, 2003 ). The NAEP also conducted a long-term trend assessment in reading, which documented performance from 1971 to 2004 for students ages 9, 13, and 17. Although scores for the 9-year-olds showed improvements compared to the scores for this age in 1971 and 1999, this was not the case for the 13- and 17-year-olds. Although the scores at the 75th and 90th percentile for the 13-year-olds significantly improved from 1971 to 2004, there were no significant differences between scores in 1999 and 2004. For the 17-year-olds, there were no significant differences at any of the percentiles selected in 2004, nor were there differences between the 1971 and 1999 scores. These data suggest that the education system is not effectively preparing some adolescents for reading success and that information on effective instructional practices is needed to improve these trends.

Expectations

Secondary students face increasing accountability measures along with a great deal of pressure to meet the demands of more difficult curricula and content ( Swanson & Hoskyn, 2001 ). In the past decade, students have become responsible for learning more complex content at a rapid pace to meet state standards and to pass outcome assessments ( Woodruff, Schumaker, & Deschler, 2002 ).

Our educational system expects that secondary students are able to decode fluently and comprehend material with challenging content ( Alvermann, 2002 ). Some struggling secondary readers, however, lack sufficient advanced decoding, fluency, vocabulary, and comprehension skills to master the complex content ( Kamil, 2003 ).

In a climate where many secondary students continue to struggle with reading and schools face increasingly difficult accountability demands, it is essential to identify the instruction that will benefit struggling secondary readers. Secondary teachers require knowledge of best practices to provide appropriate instruction, prevent students from falling farther behind, and help bring struggling readers closer to reading for knowledge and pleasure.

Comprehension Research

The ultimate goal of reading instruction at the secondary level is comprehension—gaining meaning from text. A number of factors contribute to students’ not being able to comprehend text. Comprehension can break down when students have problems with one or more of the following: (a) decoding words, including structural analysis; (b) reading text with adequate speed and accuracy (fluency); (c) understanding the meanings of words; (d) relating content to prior knowledge; (e) applying comprehension strategies; and (f) monitoring understanding ( Carlisle & Rice, 2002 ; National Institute for Literacy, 2001 ; RAND Reading Study Group, 2002 ).

Because many secondary teachers assume that students who can read words accurately can also comprehend and learn from text simply by reading, they often neglect teaching students how to approach text to better understand the content. In addition, because of increasing accountability, many teachers emphasize the content while neglecting to instruct students on how to read for learning and understanding ( Pressley, 2000 ; RAND Reading Study Group, 2002 ). Finally, the readability level of some text used in secondary classrooms may be too high for below-grade level readers, and the “unfriendliness” of some text can result in comprehension challenges for many students ( Mastropieri, Scruggs, & Graetz, 2003 ).

The RAND Reading Study Group (2002) created a heuristic for conceptualizing reading comprehension. Fundamentally, comprehension occurs through an interaction among three critical elements: the reader, the text, and the activity. The capacity of the reader, the values ascribed to text and text availability, and reader’s activities are among the many variables that are influenced and determined by the sociocultural context that both shapes and is shaped by each of the three elements. This synthesis addresses several critical aspects of this proposed heuristic—the activity or intervention provided for students at risk and, when described in the study, the text that was used. Because the synthesis focuses on intervention research, questions about what elements of interventions were associated with reading comprehension were addressed. This synthesis was not designed to address other critical issues, including the values and background of readers and teachers and the context in which teachers and learners interacted. Many of the social and affective variables associated with improved motivation and interest in text for older readers and how these variables influenced outcomes are part of the heuristic of reading comprehension, but we were unable to address them in this synthesis.

Rationale and Research Question

Many of the instructional practices suggested for poor readers were derived from observing, questioning, and asking good and poor readers to “think aloud” while they read (Dole, Duffy, Roehler, & Pearson, 1991; Heilman, Blair, & Rupley, 1998; Jiménez, Garcia, & Pearson, 1995, 1996). These reports described good readers as coordinating a set of highly complex and well-developed skills and strategies before, during, and after reading so that they could understand and learn from text and remember what they read (Paris, Wasik, & Turner, 1991). When compared with good readers, poor readers were considerably less strategic (Paris, Lipson, & Wixson, 1983). Good readers used the following skills and strategies: (a) reading words rapidly and accurately; (b) noting the structure and organization of text; (c) monitoring their understanding while reading; (d) using summaries; (e) making predictions, checking them as they read, and revising and evaluating them as needed; (f) integrating what they know about the topic with new learning; and (g) making inferences and using visualization (Jenkins, Heliotis, Stein, & Haynes, 1987; Kamil, 2003; Klingner, Vaughn, & Boardman, 2007; Mastropieri, Scruggs, Bakken, & Whedon, 1996; Pressley & Afflerbach, 1995; Swanson, 1999; Wong & Jones, 1982).

Previous syntheses have identified critical intervention elements for effective reading instruction for students with disabilities across grade levels (e.g., Gersten, Fuchs, Williams, & Baker, 2001 ; Mastropieri et al., 1996 ; Swanson, 1999 ). For example, we know that explicit strategy instruction yields strong effects for comprehension for students with learning difficulties and disabilities ( Biancarosa & Snow, 2004 ; Gersten et al., 2001 ; National Reading Panel [NRP], 2000 ; RAND Reading Study Group, 2002 ; Swanson, 1999 ). We also know that effective comprehension instruction in the elementary grades teaches students to summarize, use graphic organizers, generate and answer questions, and monitor their comprehension ( Mastropieri et al., 1996 ; Kamil, 2004 ).

However, despite improved knowledge about effective reading comprehension broadly, much less is known regarding effective interventions and reading instruction for students with reading difficulties in the middle and high school grades ( Curtis & Longo, 1999 ). The syntheses previously discussed focused on students identified for special education, examined specific components of reading, and did not present findings for older readers. In recognition of this void in the research, the report on comprehension from the RAND Reading Study Group (2002) cited the need for additional knowledge on how best to organize instruction for low-achieving students. We have conducted the following synthesis to determine the outcome of comprehension, word study, vocabulary, and fluency interventions on reading comprehension of students in Grades 6 through 12. Furthermore, we extended the synthesis to include all struggling readers, not just those with identified learning disabilities. We addressed the following question: How does intervention research on decoding, fluency, vocabulary, and comprehension influence comprehension outcomes for older students (Grades 6 through 12) with reading difficulties or disabilities?

For this synthesis, we conducted a comprehensive search of the literature through a three-step process. The methods described below were developed during prior syntheses conducted by team members (Kim, Vaughn, Wanzek, & Wei, 2004; Wanzek, Vaughn, Wexler, Swanson, & Edmonds, 2006). We first conducted a computer search of ERIC and PsycINFO to locate studies published between 1994 and 2004. We selected the last decade of studies to reflect the most current research on this topic. Descriptors or root forms of those descriptors (reading difficult*, learning disab*, LD, mild handi*, mild disab*, reading disab*, at-risk, high-risk, reading delay*, learning delay*, struggle reader, dyslex*, read*, comprehen*, vocabulary, fluen*, word, decod*, English Language Arts) were used in various combinations to capture the greatest possible number of articles. We also searched abstracts from prior syntheses and reviewed reference lists in seminal studies to assure that all studies were identified.

In addition, to assure coverage and because a cumulative review was not located in electronic databases or reference lists, a hand search of 11 major journals from 1998 through 2004 was conducted. Journals examined in this hand search included Annals of Dyslexia, Exceptional Children, Journal of Educational Psychology, Journal of Learning Disabilities, Journal of Special Education, Learning Disability Quarterly, Learning Disabilities Research and Practice, Reading Research Quarterly, Remedial and Special Education , and Scientific Studies of Reading .

Studies were selected if they met all of the following criteria:

  • Participants were struggling readers. Struggling readers were defined as low achievers or students with unidentified reading difficulties, with dyslexia, and/or with reading, learning, or speech or language disabilities. Studies also were included if disaggregated data were provided for struggling readers regardless of the characteristics of other students in the study. Only disaggregated data on struggling readers were used in the synthesis.
  • Participants were in Grades 6 through 12 (ages 11–21). This grade range was selected because it represents the most common grades describing secondary students. When a sample also included older or younger students and it could be determined that the sample mean age was within the targeted range, the study was accepted.
  • Studies were accepted when research designs used treatment–comparison, single-group, or single-subject designs.
  • Intervention consisted of any type of reading instruction, including word study, fluency, vocabulary, comprehension, or a combination of these.
  • The language of instruction was English.
  • At least one dependent measure assessed one or more aspects of reading.
  • Data for calculating effect sizes were provided in treatment–comparison and single-group studies.

Interrater agreement for article acceptance or rejection, calculated by dividing the number of agreements by the number of agreements plus disagreements, was 95%.

Data Analysis

Coding procedures.

We employed extensive coding procedures to organize pertinent information from each study. We adapted previously designed code sheets that were developed for past intervention syntheses ( Kim, Vaughn, Wanzek, & Wei, 2004 ). The code sheet included elements specified in the What Works Clearinghouse Design and Implementation Assessment Device ( Institute of Education Sciences, 2003 ), a document used to evaluate the quality of studies.

The code sheet was used to record relevant descriptive criteria as well as results from each study, including data regarding participants (e.g., number, sex, exceptionality type), study design (e.g., number of conditions, assignment to condition), specifications about conditions (e.g., intervention, comparison), clarity of causal inference, and reported findings. Participant information was coded using four forced-choice items (socioeconomic status, risk type, the use of criteria for classifying students with disabilities, and gender) and two open-ended items (age as described in text and risk type as described in text). Similarly, design information was gathered using a combination of forced-choice (e.g., research design, assignment method, fidelity of implementation) and open-ended items (selection criteria). Intervention and comparison information was coded using 10 open-ended items (e.g., site of intervention, role of person implementing intervention, duration of intervention) as well as a written description of the treatment and comparison conditions.

Information on clarity of causal inference was gathered using 11 items for true experimental designs (e.g., sample sizes, attrition, plausibility of intervention contaminants) and 15 items for quasiexperimental designs (e.g., equating procedures, attrition rates). Additional items allowed coders to describe the measures and indicate measurement contaminants. Finally, the precision of outcome for both effect size estimation and statistical reporting was coded using a series of 10 forced-choice yes–no questions, including information regarding assumptions of independence, normality, and equal variance. Effect sizes were calculated using information related to outcome measures, direction of effects, and reading outcome data for each intervention or comparison condition.

After extensive training (more than 10 hr) on the use and interpretation of items from the code sheet, interrater reliability was determined by having six raters independently code a single article. Responses from the six coders were used to calculate the percentage of agreement (i.e., agreements divided by agreements plus disagreements), and an interrater reliability of .85 was achieved. Teams of three coded each article, compared results, and resolved any disagreements in coding, with final decisions reached by consensus. To push coding reliability above .85, any item that was ambiguous to coders was discussed until a clear coding response could be determined. Finally, two raters who had achieved 100% reliability on items related to outcome precision and data calculated effect sizes for each study.
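
As a concrete illustration of the agreement calculations reported above (both the 95% agreement for article selection and the .85 coding reliability were computed as agreements divided by agreements plus disagreements), the following minimal Python sketch applies that formula to hypothetical coder responses; the items and codes are invented for illustration and are not data from this synthesis.

```python
# Minimal sketch of the percentage-agreement formula used for article selection
# and for coding reliability: agreements / (agreements + disagreements).
# The coder responses below are hypothetical, not data from this synthesis.

def percent_agreement(coder_a, coder_b):
    """Proportion of items on which two coders gave the same response."""
    agreements = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    disagreements = len(coder_a) - agreements
    return agreements / (agreements + disagreements)

# Hypothetical forced-choice codes for 20 items from a single article.
coder_a = ["yes", "no", "yes", "yes", "no"] * 4
coder_b = ["yes", "no", "no", "yes", "no"] * 4

print(f"Agreement = {percent_agreement(coder_a, coder_b):.2f}")  # 0.80 for these toy data
```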

After the coding had been completed, the studies were summarized in a table format. Table 1 contains information on study design, sample, and intervention implementation (e.g., duration and implementation personnel). In Table 2 , intervention descriptions and effect sizes for reading outcomes are organized by each study’s intervention type and design. Effect sizes and p values are provided when appropriate data were available.

Table 1. Intervention characteristics

Note. NR = not reported; LD = learning disability; MMR = mild mental retardation; MR = mental retardation; RD = reading disability; ESL = English as a Second Language; EBD = emotional or behavioral disability.

Table 2. Outcomes by intervention type and design

Note. T = treatment; C = comparison; ES = effect size; PND = percentage of nonoverlapping data; SAT = Stanford Achievement Test; WJRM = Woodcock-Johnson Reading Mastery; CBM = curriculum-based measure; WRMT = Woodcock Reading Mastery Test; WRMT-R = Woodcock Reading Mastery Test–Revised; TOWRE = Test of Word Reading Efficiency; CTOPP = Comprehensive Test of Phonological Processing; PPVT = Peabody Picture Vocabulary Test; CRAB = Comprehensive Reading Assessment Battery; SDRT = Stanford Diagnostic Reading Test; SRA = Science Research Associates.

Effect size calculation

Effect sizes were calculated for studies that provided adequate information. For studies lacking the data necessary to compute effect sizes, results were summarized using findings from statistical analyses or descriptive statistics. For treatment–comparison design studies, the effect size, d, was calculated as the difference between the mean posttest score of participants in the intervention condition and the mean posttest score of participants in the comparison condition, divided by the pooled standard deviation. For studies in this synthesis that employed a treatment–comparison design, effect sizes can be interpreted as small (d = 0.20), medium (d = 0.50), or large (d = 0.80) (Cohen, 1988). Effects were adjusted for pretest differences when data were provided. For single-group studies, effect sizes were calculated as the standardized mean change (Cooper, 1998). Outcomes from single-subject studies were calculated as the percentage of nonoverlapping data (PND; Scruggs, Mastropieri, & Casto, 1987). PND is the percentage of data points during the treatment phase that are higher than the highest data point from the baseline phase. PND was selected because it offered a parsimonious means of reporting outcomes for single-subject studies and provided common criteria for comparing treatment impact.
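
To make these effect size metrics concrete, here is a minimal Python sketch (not the authors' analysis code) that computes Cohen's d from posttest means with a pooled standard deviation and the percentage of nonoverlapping data (PND) for a single-subject phase contrast; all values are hypothetical, and the standardized mean change used for single-group designs follows the same pattern, dividing the pre–post mean change by a standard deviation.

```python
import math

# Illustrative only: hypothetical numbers, not results from the synthesis.

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Treatment posttest mean minus comparison posttest mean, over the pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def pnd(baseline, treatment):
    """Percentage of treatment-phase points exceeding the highest baseline point."""
    ceiling = max(baseline)
    return 100 * sum(1 for x in treatment if x > ceiling) / len(treatment)

print(cohens_d(52.0, 10.0, 25, 47.0, 10.0, 25))  # 0.50, a "medium" effect (Cohen, 1988)
print(pnd([12, 15, 14], [16, 18, 14, 20]))       # 75.0
```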

Data Analysis Plan

A range of study designs and intervention types was represented in this synthesis. To fully explore the data, we conducted several types of analyses. First, we synthesized study features (e.g., sample size and study design) to highlight similarities, differences, and salient elements across the corpus of studies. Second, we conducted a meta-analysis of a subset of treatment–comparison design studies to determine the overall effect of reading interventions on students’ reading comprehension. In addition to an overall point estimate of reading intervention effects, we reported effects on comprehension by measurement and intervention type. Last, we synthesized trends and results by intervention type across all studies, including single-group and single-subject design studies.

Study Features

A total of 29 intervention studies, all reported in journal articles, met our criteria for inclusion in the synthesis. Studies appeared in a range of journals (as can be seen in the reference list) and were distributed relatively evenly across the years of interest (1994 to 2004). Each study’s design and sample characteristics are described in Table 1 . In the following sections, we summarize information on study features, including sample characteristics, design, and duration of the intervention as well as fidelity of implementation.

Sample characteristics

The 29 studies included 976 students. Sample sizes ranged from 1 to 125, with an average of 51 participants for treatment–comparison studies. The majority of studies targeted middle school students ( n = 19). Five studies focused on high school students, 2 on both middle and high school students, and 3 reported only students’ ages. Although our criteria included interventions for all struggling readers, including those without identified disabilities, only 8 studies included samples of struggling readers without disabilities. The other studies included students with learning or reading disabilities ( n = 17) or a combination of both students with and without disabilities ( n = 4).

Study design

The corpus of studies included 17 treatment–comparison, 9 single-subject, and 3 single-group design studies. The distribution of intervention type by design is displayed in Table 3 . The number of treatment–comparison studies with specific design elements that are characteristic of high quality studies ( Institute of Education Sciences, 2003 ; Raudenbush, 2005 ; Shadish, 2002 ) is indicated in Table 4 . The three elements in Table 4 were selected because they strengthen the validity of study conclusions when appropriately employed. As indicated, only 2 studies ( Abbott & Berninger, 1999 ; Allinder, Dunse, Brunken, & Obermiller-Krolikowski, 2001 ) randomly assigned students to conditions, reported implementation fidelity, and measured student outcomes using standardized measures.

Table 3. Type of intervention by study design

Table 4. Quality of treatment–comparison studies

Intervention design and implementation

The number of intervention sessions ranged from 2 to 70. For 11 studies, the number of sessions was not reported and could not be determined from the information provided. Similarly, the frequency and length of sessions were inconsistently reported but are provided in Table 1 when available. For studies that reported the length and number of sessions (n = 12), students were engaged in an average of 23 hr of instruction. For treatment–comparison design studies, the average number of instructional hours provided was 26 (n = 10).

Narrative text was used in most text-level interventions (n = 12). Two studies used both narrative and expository text during the intervention, and 7 used expository text exclusively. For 4 studies, the type of text used was not discernible, and as would be expected, the word-level studies did not include connected text. Approximately equal numbers of interventions were implemented by teachers (n = 13) and researchers (n = 12). Two interventions were implemented by both teachers and researchers, and the person implementing the intervention could not be determined for 2 studies.

Meta-Analysis

To summarize the effect of reading interventions on students’ comprehension, we conducted a meta-analysis of a study subset ( k = 13; Abbott & Berninger, 1999 ; Alfassi, 1998 ; Allinder et al., 2001 ; Anderson, Chan, & Henne, 1995 ; DiCecco & Gleason, 2002 ; L. S. Fuchs, Fuchs, & Kazdan, 1999 ; Hasselbring & Goin, 2004 ; Jitendra, Hoppes, & Xin, 2000 ; Mastropieri et al., 2001 ; Moore & Scevak, 1995 ; Penney, 2002 ; Wilder & Williams, 2001 ; Williams, Brown, Silverstein, & deCani, 1994 ). Studies with theoretically similar contrasts and measures of reading comprehension were included in the meta-analysis. All selected studies compared the effects of a reading intervention with a comparison condition in which the construct of interest was absent. By selecting only studies with contrasts between a treatment condition and a no-treatment comparison condition, we could ensure that the resulting point estimate of the effect could be meaningfully interpreted.

The majority of qualifying studies reported multiple comprehension dependent variables. Thus, we first calculated a composite effect for each study using methods outlined by Rosenthal and Rubin (1986) such that each study contributed only one effect to the aggregate. In these calculations, effects from standardized measures were weighted more heavily (w = 2) than effects from researcher-developed measures. We analyzed a random-effects model with one predictor variable (intervention type) to account for the presence of unexplained variance and to provide a more conservative estimate of effect significance. A weighted average of effects was estimated, and the amount of variance between study effects was calculated using the Q statistic (Shadish & Haddock, 1994). In addition to an overall point estimate of the effect of reading interventions, we also calculated weighted averages to highlight effects of certain intervention characteristics (e.g., using narrative versus expository text). When reporting weighted mean effects, only outcomes from studies with treatment–comparison conditions were included. Effects from single-group studies were excluded because only one study (Mercer, Campbell, Miller, Mercer, & Lane, 2000) provided the information needed to convert the repeated-measures effect size into the same metric as an independent-group effect size.
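
The aggregation steps described above can be sketched in a few lines. The Python below is a simplified illustration, not the exact Rosenthal and Rubin (1986) or Shadish and Haddock (1994) procedures: it forms one composite effect per study, with standardized measures weighted twice as heavily as researcher-developed measures, and then computes a fixed-effect inverse-variance weighted mean and the Q heterogeneity statistic (the random-effects step, which adds an estimate of between-study variance to each weight, is omitted for brevity). All effect sizes and variances shown are hypothetical.

```python
# Hypothetical values throughout; illustrates the aggregation logic only.

def composite_effect(measure_effects):
    """One effect per study: weighted mean of within-study effects (standardized w = 2)."""
    total = sum(w * es for es, w in measure_effects)
    return total / sum(w for _, w in measure_effects)

def weighted_mean_and_q(effect_sizes, variances):
    """Inverse-variance weighted mean effect and Cochran's Q for between-study variance."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
    q = sum(w * (es - mean) ** 2 for w, es in zip(weights, effect_sizes))
    return mean, q

# A study with two researcher-developed measures (w = 1) and one standardized measure (w = 2).
study_composite = composite_effect([(1.10, 1), (0.90, 1), (0.40, 2)])

# Composite effects and sampling variances for several hypothetical studies.
mean_es, q = weighted_mean_and_q([0.30, 0.90, 1.20, 0.70], [0.04, 0.09, 0.16, 0.05])
print(round(study_composite, 2), round(mean_es, 2), round(q, 2))  # 0.7 0.63 5.66
```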

Overall effect on comprehension

The 13 treatment–comparison studies were included in the meta-analysis because they (a) had theoretically similar contrasts and measures of reading comprehension and (b) examined the effects of a reading intervention against a comparison in which the construct of interest was absent. In 8 studies, the contrast was between the intervention of interest and the school's current reading instruction. In 5 studies, the comparison condition also received an intervention, but the construct or strategy of interest was absent from that condition. The remaining 4 treatment–comparison studies in the synthesis were eliminated from the meta-analysis because they did not include a comprehension measure (Bhat, Griffin, & Sindelar, 2003; Bhattacharya & Ehri, 2004) or did not include a no-treatment comparison condition (Chan, 1996; Klingner & Vaughn, 1996).

A random-effects model was used to provide a more conservative estimate of intervention effect significance. In this model, the weighted average of the difference in comprehension outcomes between students in the treatment conditions and students in the comparison conditions was large (effect size = 0.89; 95% confidence interval [CI] = 0.42, 1.36). That is, students in the treatment conditions scored, on average, nearly nine tenths of a standard deviation higher than students in the comparison conditions on measures of comprehension, and the effect was significantly different from zero.

To examine whether researcher-developed or curriculum-based measures inflated the effect of reading interventions, we also calculated the effect based on standardized measures only. For this analysis, seven studies were included; the other six studies were eliminated from this secondary analysis because they did not include a standardized measure of comprehension. When limited to only studies that included a standardized measure of comprehension, the random-effects model yielded a moderate average effect (effect size = 0.47; 95% CI = 0.12, 0.82). The effect of reading interventions on comprehension was quite large (effect size = 1.19; 95% CI = 1.10, 1.37) when researcher-developed measures were used to estimate the effect ( k = 9).

In a fixed-effects model, intervention type was a significant predictor of effect size variation (Q between = 22.33, p < .05), which suggests that the effect sizes were not similar across categories. Weighted average effects for each intervention type (comprehension, fluency, word study, and multicomponent) were calculated and are presented in Table 5. For fluency and word study interventions, the effect was not significant; the average effect on comprehension was not different from zero. For the other intervention types, the effect was significantly different from zero but differed in magnitude. Bonferroni post hoc contrasts showed a significant difference in effects on comprehension between comprehension and multicomponent interventions (p < .025). There was no significant difference between the effects of word study interventions and multicomponent interventions (p > .025).

Table 5. Average weighted effects by measurement and intervention type
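
As a sketch of the moderator analysis behind Table 5, the following Python partitions total heterogeneity into within- and between-category components (Q between = Q total − Q within), which is the fixed-effects logic for testing whether intervention type explains variation in effect sizes; the grouped effect sizes and variances are hypothetical, and the Bonferroni-adjusted post hoc contrasts reported above are not reproduced here.

```python
# Hypothetical grouped effects; illustrates Q-between = Q-total - Q-within only.

def weighted_mean_and_q(effect_sizes, variances):
    weights = [1.0 / v for v in variances]
    mean = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
    q = sum(w * (es - mean) ** 2 for w, es in zip(weights, effect_sizes))
    return mean, q

groups = {
    "comprehension":  ([1.20, 0.90, 1.10], [0.10, 0.08, 0.12]),
    "word study":     ([0.20, 0.40],       [0.09, 0.11]),
    "multicomponent": ([0.30, 1.10],       [0.07, 0.15]),
}

all_es = [es for es_list, _ in groups.values() for es in es_list]
all_var = [v for _, v_list in groups.values() for v in v_list]

_, q_total = weighted_mean_and_q(all_es, all_var)
q_within = sum(weighted_mean_and_q(es, v)[1] for es, v in groups.values())
q_between = q_total - q_within  # compared against chi-square with (number of groups - 1) df

print(round(q_between, 2))
```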

We also computed weighted average effects for studies with common characteristics. Whether an intervention was implemented by the researcher ( n = 4, average effect size = 1.15) or the students’ teacher ( n = 8, effect size = 0.77), the effects were large. The 95% CIs for these two conditions did not overlap, suggesting that they are significantly different. Effects on comprehension were different depending on the student population. Moderate average effects were found for samples of struggling readers ( n = 5, effect size = 0.45) or both struggling readers and students with disabilities ( n = 4, effect size = 0.68), but a large effect ( n = 4, effect size = 1.50) was found for studies with samples of only students with disabilities.

Eleven of the 13 studies included in the meta-analysis used reading of connected text as part of the intervention. In an analysis of studies that reported the type of text used, the weighted average effect for interventions using expository text was moderate ( n = 3, effect size = 0.53), whereas the average effect for those focusing on narrative text was high ( n = 6, effect size = 1.30). Closer examination of the studies with interventions focused on expository text ( Alfassi, 1998 ; DiCecco & Gleason, 2002 ; Moore & Scevak, 1995 ) showed that two studies tested the effects of a multicomponent intervention similar in structure to reciprocal teaching and one examined the effects of using graphic organizers.

Intervention Variables

For this synthesis, we examined findings from treatment–comparison design studies first, because the findings from these studies provide the greatest confidence about causal inferences. We then used results from single-group and single-subject design studies to support or refute findings from the treatment–comparison design studies. Findings are summarized by intervention type. Intervention type was defined as the primary reading component addressed by the intervention (i.e., word study, fluency, vocabulary, comprehension). The corpus of studies did not include any vocabulary interventions but did include several studies that addressed multiple components in which vocabulary instruction was represented. Within each summary, findings for different reading outcomes (e.g., fluency, word reading, comprehension) are reported separately to highlight the interventions’ effects on component reading skills.

Comprehension

Nine treatment–comparison studies (Alfassi, 1998; Anderson et al., 1995; Chan, 1996; DiCecco & Gleason, 2002; Jitendra et al., 2000; Klingner & Vaughn, 1996; Moore & Scevak, 1995; Wilder & Williams, 2001; Williams et al., 1994) focused on comprehension. Among these studies, several (Alfassi, 1998; Anderson et al., 1995; Klingner & Vaughn, 1996; Moore & Scevak, 1995) examined interventions in which students were taught a combination of reading comprehension skills and strategies, an approach with evidence of effectiveness in improving students' general comprehension (NRP, 2000; RAND Reading Study Group, 2002). Two studies (Alfassi, 1998; Klingner & Vaughn, 1996) employed reciprocal teaching (Palincsar, Brown, & Martin, 1987), a model that includes previewing, clarifying, generating questions, and summarizing and has been shown to be highly effective in improving comprehension (for a review, see Rosenshine & Meister, 1994). Klingner and Vaughn (1996) reported mixed results when the grouping structure of a reciprocal teaching intervention was manipulated during student application and practice. On a standardized measure of comprehension, cooperative grouping was the more effective model (effect size = 1.42). On a researcher-developed comprehension measure, the effects were small but favored the peer tutoring group (effect size = 0.35). It is likely that the standardized test outcome is more reliable, suggesting greater effects from the use of cooperative grouping structures, at least for English language learners with reading difficulties. In another study (Alfassi, 1998), effects of reciprocal teaching on comprehension were moderate to high (effect size = 0.35 to 1.04) when the model was implemented in a remedial high school setting, a context not typically examined in previous studies of reciprocal teaching.

The multiple-strategy intervention in Anderson et al. (1995) resulted in large effects (effect size = 0.80 to 2.08). The repertoire of strategies included previewing and using knowledge of text structure to facilitate understanding. However, another study (Moore & Scevak, 1995), which focused on teaching students to use text structure and features to summarize expository text, reported no effects (effect size = −0.57 to 0.07). It should be noted that the intervention in Anderson and colleagues (1995) was conducted for 140 hr, a very extensive intervention, whereas the amount of instructional time in the Moore and Scevak (1995) study was not specified but the study lasted only 7 weeks, suggesting a considerably less extensive intervention.

Chan (1996) manipulated both strategy instruction and attribution training and found that poor readers benefited from some attribution training, with the most effective model being attribution training plus successive strategy training (effect size = 1.68). In addition, all three strategy conditions were more effective than the attribution-only condition, which suggests that poor readers also benefit from explicit strategy instruction.

Using graphic organizers is another strategy with demonstrated efficacy in improving comprehension ( Kim et al., 2004 ). One experimental study ( DiCecco & Gleason, 2002 ) and two single-subject studies ( Gardhill & Jitendra, 1999 ; Vallecorsa & deBettencourt, 1997 ) examined the impact of teaching students to use graphic organizers. In DiCecco and Gleason (2002) , the effect of a concept relationship graphic organizer intervention on relational statement production was large (effect size = 1.68). However, the effect was mixed for measures of content knowledge (effect size = 0.08 to 0.50). Other studies also indicated that graphic organizers assisted students in identifying information related to the organizer but were less effective in improving students’ overall understanding of text. For example, in a single-subject study of a story mapping intervention, Gardhill and Jitendra (1999) found mixed results on general comprehension questions (PND = 13% to 100%) but consistent improvement compared to baseline on story retell (PND = 100%). Similarly, all three students in a study of explicit story mapping ( Vallecorsa & deBettencourt, 1997 ) increased the number of story elements included in a retell (PND = 67% to 100%).

Other studies focused on a single comprehension strategy (Jitendra et al., 2000; Wilder & Williams, 2001; Williams et al., 1994). Studies of single-strategy interventions showed large effects on measures aligned closely with the intervention but limited evidence of transfer to more general comprehension measures. For example, students who were taught to identify main ideas within text outperformed students in the comparison condition on a task of identifying and producing main idea statements (effect size = 2.23; Jitendra et al., 2000). Although the treatment group's advantage was maintained on near- and far-transfer measures (effect size = 1.84 to 2.57), scores decreased significantly for both conditions on transfer passages, indicating limited transfer to novel contexts. Similarly, interventions in which students were taught to identify and apply story themes (Wilder & Williams, 2001; Williams et al., 1994) resulted in large effects on measures of theme identification and application (effect size = 1.41 to 5.93). Effects of this intervention on general comprehension tasks were somewhat attenuated, although still moderate (effect size = 0.41 to 0.59; Wilder & Williams, 2001).

Three studies included information about students’ decoding abilities ( Alfassi, 1998 ; DiCecco & Gleason, 2002 ; Jitendra et al., 2000 ). In all three studies, students were adequate decoders but poor comprehenders. The average effect of the comprehension interventions was large (effect size = 1.04).

Multicomponent

Studies (L. S. Fuchs et al., 1999; Hasselbring & Goin, 2004; Mastropieri et al., 2001) were classified as multicomponent when the interventions included instruction in more than one component of reading, such as word study with fluency or fluency with comprehension. Two multicomponent studies (L. S. Fuchs et al., 1999; Mastropieri et al., 2001) featured a slightly modified version of a peer-assisted learning comprehension and fluency intervention, an instructional model with demonstrated efficacy in the early elementary grades (D. Fuchs, Fuchs, Mathes, & Simmons, 1997). Results when using this intervention model with older struggling readers were mixed. When implemented in an inclusive setting on a biweekly basis, effects on comprehension skills were small (effect size = 0.31; L. S. Fuchs et al., 1999) yet were quite large when implemented daily in a self-contained resource room (effect size = 1.18; Mastropieri et al., 2001). It should be noted that the large effect size was computed from data on a researcher-developed measure, whereas the smaller effect was based on data from a standardized measure, which is a more reliable measure of the intervention's effect.

In a single-group design study ( Bryant et al., 2000 ), students participated in an enhanced collaborative strategic reading intervention during which they applied word learning, word reading, and comprehension strategies and practiced fluent reading. This was the only study that examined the effects of an instructional model with all four components included. Effects on word identification and oral reading fluency were moderate (effect size = 0.64, effect size = 0.67, respectively), but effects on comprehension were small (effect size = 0.22).

Hasselbring and Goin (2004) implemented a computer-based intervention that provided students with word reading and spelling practice and comprehension support during text reading. Effects on comprehension (effect size = 1.0) and vocabulary (effect size = 0.75) were large. Effects on word-level skills, however, were small (effect size = 0.23 to 0.44). Results from a single-subject design study with word study as one instructional component (Strong, Wehby, Falk, & Lane, 2004) indicated more consistent improvement in students' oral reading fluency when word study was combined with fluency practice than when word study instruction alone was provided. However, Steventon and Frederick (2003) had less success with one student who participated in a similar word study and fluency intervention. Their results showed less improvement compared to baseline in oral reading fluency and virtually no transfer of fluent reading to novel text.

There were only two studies that featured technology prominently in the instruction. One was the previously discussed multicomponent intervention by Hasselbring and Goin (2004) . The other was a study that used computers to enhance text and support reading ( MacArthur & Haynes, 1995 ), which yielded an effect size in favor of basic text support (word recognition and decoding with vocabulary support) when compared with enhanced text support (additional support that includes question windows, glossary, teacher comments, and speech synthesis) for comprehending expository text.

Fluency

The synthesis included one treatment–comparison design study of fluency (Allinder et al., 2001), which examined the effects of prompting students to use strategies for fluent reading (e.g., reading with inflection) and found no effects on standardized word-level or comprehension measures. The other studies of fluency focused on improving oral reading fluency, often through word or phrase reading fluency practice and/or repeated reading. Results were mixed, with inconsistent improvements in oral reading fluency compared to baseline (Freeland, Skinner, Jackson, McDaniel, & Smith, 2000; Mercer et al., 2000; Valleley & Shriver, 2003).

Word study

Three of the four experimental word-level studies examined the effects of advanced word reading strategies (Abbott & Berninger, 1999; Bhattacharya & Ehri, 2004; Penney, 2002). The fourth (Bhat et al., 2003) studied the effects of a phonemic awareness intervention. Results of the phonemic awareness intervention were positive, with large effects on phonemic processing (effect size = 1.59). However, the improvement in phonemic processing transferred minimally to word identification (effect size = 0.15).

Results for the three structural analysis studies were mixed, with effects ranging from −0.31 to 1.40. Bhattacharya and Ehri (2004) found that having students practice whole-word reading, compared with providing no word reading instruction at all, had a small effect (effect size = 0.43), whereas teaching students a structural analysis approach (i.e., multisyllabic chunking) had a large effect (effect size = 1.40). In another study that compared a structural analysis approach to typical reading instruction, the effects on word reading were moderate (effect size = 0.43 to 0.48; Penney, 2002). In the third study (Abbott & Berninger, 1999), the effect of phonics and structural analysis instruction on word reading skills was minimal (effect size = −0.31 to 0.04). However, in the latter study, the comparison and treatment conditions received identical interventions, with the exception of the decoding strategy taught: The comparison condition was taught a synthetic phonics strategy and the treatment condition a combination of phonics and structural analysis. Results may have been lower in this study because both conditions received a fairly robust treatment, so the contrasted conditions were not as dissimilar as in the other two studies.

Across studies, the weighted average effect of structural analysis instruction on word reading skills was moderate (effect size = 0.36; 95% CI = 0.03, 0.69). Two studies (Abbott & Berninger, 1999; Penney, 2002) measured comprehension as an outcome of a word-level intervention. Again, the results were mixed (effect size = −0.12 to 0.65).

Discussion

Results from the meta-analysis indicate that students with reading difficulties and disabilities can improve their comprehension when provided with a targeted reading intervention in comprehension, multiple reading components, or, to a lesser extent, word reading strategies. Even when using standardized measures, which offer a more generalized measure of comprehension, the effect is moderate, providing students in the treatment conditions with an average advantage of about half a standard deviation over peers who did not receive the treatment.

A primary finding from this synthesis is that struggling readers can improve in their reading comprehension when taught reading comprehension practices. Seemingly obvious, this phenomenon is quite significant because many struggling readers in older grades (6 through 12) are not provided effective instruction in reading comprehension. In fact, interventions that specifically targeted students with learning disabilities were associated with the highest gains in reading comprehension. Results from this synthesis suggest that explicit instruction in comprehension benefited students with reading difficulties and disabilities. Findings also suggest that there may be a diminishing relationship between accuracy (e.g., word recognition and fluent reading) and comprehension with secondary students. When students reach the upper elementary grades, other factors, such as background knowledge, word knowledge, and use of strategies, contribute to comprehension ( Kintsch & Kintsch, 2004 ). The large effects of interventions that developed students’ strategy knowledge and use and the relatively lower effects of other types of interventions on comprehension support these previous findings. Thus, for students who lack word reading skills, it is necessary to build these word-level skills while teaching comprehension so that access to increasingly difficult levels of print is available to them.

As indicated by the meta-analysis, word-level interventions are associated with small to moderate effects on comprehension (d = 0.34). This supports some studies in the early grade levels (e.g., Baumann et al., 2002) that found little effect on comprehension from structural analysis interventions. Although the average effect was not significantly different from zero, the small to moderate effect is an important finding, particularly for older students with very low decoding skills who require extensive instruction in word-level skills. It is valuable to know that word-level interventions confer a small to moderate benefit for comprehension.

The data trend from the studies of fluency indicates that increased reading rate and accuracy did not always result in improved comprehension (e.g., Allinder et al., 2001 ). These results support other research on the relationship between comprehension and fluency for older students. For example, Kuhn and Stahl (2003) found that although fluency instruction improved the processing skills that facilitate comprehension, few fluency interventions fostered better general comprehension. Stated more succinctly, as students improved their oral reading fluency, comprehension did not jointly improve. Others also report that the correlation between oral reading fluency and comprehension appears to be a developmental relationship, decreasing steadily with age and with text difficulty ( Francis, Fletcher, Catts, & Tomblin, 2004 ; Paris, Carpenter, Paris, & Hamilton, 2004 ). For educators, the message from these findings is that “an intense focus on fluency may pay a short-term dividend, [however] the cost-benefit analysis of such an emphasis for adolescent learners looks less attractive” ( Underwood & Pearson, 2004 , p. 139).

Although we do not think the evidence from this synthesis would suggest forgoing instruction in reading skills such as fluency or advanced decoding strategies with secondary struggling readers, particularly for students whose word reading skills are exceedingly low, the findings do encourage educators to include instruction targeting comprehension skills. Results from this synthesis suggest that older struggling readers benefit from explicit comprehension strategy instruction: modeling and thinking aloud how to self-question and reflect during and after reading, and engaging students in actively monitoring their understanding and processing of text meaning. This form of collaboration among students as they read and construct meaning has been well defined by Beck and colleagues in their work on "questioning the author" (Beck & McKeown, 2006; Beck, McKeown, Worthy, Sandora, & Kucan, 1996).

The moderate and large effects on training and near-transfer measures did not frequently generalize to measures of broader, more general comprehension. It appears that comprehension and multicomponent interventions can result in students' becoming more proficient in applying learned strategies and learning taught content, but they often do not produce readers who use the strategies independently and flexibly in novel contexts. For example, Alfassi (1998) found that the significant effect for condition on researcher-developed measures (effect size = 1.04) did not generalize to standardized measures of broad comprehension and vocabulary skills (effect sizes = 0.35 and 0.16, respectively). For single-strategy interventions, students were successful on measures related to the targeted strategy (e.g., identifying the main idea after explicit main idea instruction; Jitendra et al., 2000), but on broader measures of comprehension, effects were generally lower and less consistent. These results suggest that older struggling readers may need additional opportunities to apply newly learned strategies to novel text or may need to learn other practices related to text reflection, self-questioning, and engagement.

On the basis of the mixed results from studies that examined the effects of early reading instructional practices (e.g., reciprocal teaching and graphic organizers), we conclude that educators cannot assume that instructional practices with demonstrated efficacy in the lower grades will be equally effective when implemented with older struggling readers. There are several possible explanations for this. First, the learning needs of this population may differ from those of younger students. Some of these students may have had extensive interventions addressing word-level skills and few interventions addressing practices for comprehending text. This may explain why comprehension interventions for students with learning disabilities were associated with exceedingly high effect sizes: It may be that students with disabilities have had relatively limited instruction in this area. Second, older readers are required to read more informational and expository text. Although the number of expository text studies in this synthesis was small, narrative text overall was associated with higher effect sizes from comprehension interventions than was expository text. Thus, comprehension practices developed to address narrative text may benefit narrative text comprehension yet have a lower impact on reading expository text, at least for older struggling readers. It may also be that older struggling readers display reading difficulties that are more recalcitrant and require more intensive interventions (e.g., longer duration, more targeted) to achieve similar results.

Limitations

As with any synthesis, our findings are tempered by a few limitations. First, issues of measurement in the area of comprehension are extensive (Snow, 2003). Comprehension is a difficult construct to assess, and many of the studies measured comprehension in varied ways, with tasks that ranged from memorization activities (e.g., recall) to indications of complex cognitive behaviors (e.g., drawing inferences). Some theorists would argue that pooling or comparing outcomes from measures assessing such a spectrum of skills may be misleading. Given the limited number of measures and the limited number of studies within each category of skill complexity, however, we believed that estimating the overall effect on comprehension provides a useful summary of what we know and insight into the research still needed.

Second, the use of researcher-developed measures (or nonstandardized measures) was associated with higher effect sizes than standardized measures. This is a consistent finding from intervention research in education (e.g., Swanson, Hoskyn, & Lee, 1999 ) and should be considered when interpreting the results from intervention studies.

Finally, syntheses are only as good as the quality of the research articles available. We think that this synthesis yields valuable findings; however, only additional, higher-quality research will determine whether these findings hold over time.

Implications and Future Research

This synthesis yields several implications for educators. First, we think that these studies indicate that comprehension practices that engage students in thinking about text, learning from text, and discussing what they know are likely to be associated with improved comprehension outcomes for students with reading difficulties and disabilities. Second, the comprehension practices studied were more effective for narrative text than for expository text. We think that teachers may want to consider additional elements, such as graphic organizers and calling students' attention to text structure, when students are reading relevant expository or informational texts. Third, comprehension outcomes were higher when interventions were implemented by researchers than when implemented by teachers. Because researchers are likely more attentive to implementing interventions with high levels of fidelity, teachers may want to consider their own fidelity of implementation when targeting comprehension practices.

There are several important areas related to reading comprehension that this synthesis was unable to address and that would be important to consider in future syntheses. As stated in the introduction, the RAND Reading Study Group (2002) identified several critical elements that contribute to comprehension: the reader, the text, and the activity. This synthesis examined the extent to which students identified by previous researchers as having reading difficulties or disabilities could demonstrate improved comprehension when participating in specified interventions designed to improve their reading. There are many other key areas related to reading comprehension, including the relationship between the sociocultural context and the student, teacher, and setting. We think that these variables, as well as social and affective variables related to students' interest and motivation, would contribute to a valuable understanding of the role of context in students' comprehension. This synthesis also did not examine the effects of writing interventions on reading comprehension outcomes for older struggling readers. An extension of this synthesis may provide additional insight into the effects of writing interventions on comprehension for struggling readers in middle and high school.

We also think that this synthesis provides ample support for additional research in the area of reading comprehension. Recently, a report on adolescent literacy indicated that as many as 70% of secondary students require some form of reading remediation (Biancarosa & Snow, 2004). The type of reading instruction required for this large number of secondary students is not well defined; however, we can be certain that many of these students will require effective instruction targeted at improving their reading comprehension. Future research addressing the needs of this varied group of struggling adolescent readers is needed, including improved measurement in reading comprehension; effective interventions for various text types, including informational text; studies that improve our confidence in effectiveness by adhering to experimental design principles; and studies that align the intervention with the specific needs of students (e.g., decoding, vocabulary, and/or comprehension). We also acknowledge that essential aspects of reading comprehension with older students include consideration of engagement and involvement with text, motivation, self-efficacy, and how to nurture and expand reading interests. Many of these variables are considered primary sources of variance when attempting to positively influence the reading comprehension of older students with reading difficulties (Guthrie, Wigfield, & VonSecker, 2000). A better understanding of these key variables will assist teachers and educational decision makers in improving reading instruction for older students.

Biographies

MEAGHAN S. EDMONDS, PhD, is a research associate at the Vaughn Gross Center for Reading and Language Arts at the University of Texas at Austin, Meadows Center for Preventing Educational Risk, College of Education SZB 228, 1 University Station D4900, Austin, TX 78712-0365; msedmonds@mail.utexas.edu. She holds a doctorate in educational psychology, a master's degree in curriculum and instruction, and an MEd in program evaluation. Her current research is focused on reading comprehension and policy evaluation.

SHARON VAUGHN, PhD, holds the H. E. Hartfelder/Southland Corporation Regents Chair in Human Development and is the executive director of the Meadows Center for Preventing Educational Risk at the University of Texas at Austin, College of Education SZB 228, 1 University Station D4900, Austin, TX 78712-0365; srvaughnum@aol.com. She was the editor in chief of the Journal of Learning Disabilities and the coeditor of Learning Disabilities Research and Practice. She is the recipient of the American Educational Research Association Special Education SIG Distinguished Researcher award. She is currently the principal investigator or coprincipal investigator on several Institute of Education Sciences, National Institute of Child Health and Human Development, and Office of Special Education Programs research grants investigating effective interventions for students with reading difficulties as well as students who are English language learners.

JADE WEXLER, PhD, is a research associate at the Meadows Center for Preventing Educational Risk at the University of Texas at Austin, College of Education SZB 228, 1 University Station D4900, Austin, TX 78712-0365; jwexler@mail.utexas.edu. Her research interests are interventions for adolescents with reading difficulties, response to intervention, and teacher education.

COLLEEN REUTEBUCH, PhD, serves as a project coordinator for the Center for Research on the Educational Achievement and Teaching of English Language Learners (CREATE) at the Meadows Center for Preventing Educational Risk at the University of Texas at Austin, College of Education SZB 228, 1 University Station D4900, Austin, TX 78712-0365; ckreutebuch@mail.utexas.edu. Her research interests include reading and content area interventions.

AMORY CABLE, PhD, is a speech-language pathologist and is currently writing summaries of research for a speech-language clinical database. Via Siciliani 44, Bisceglie, BA CAP 70052; [email protected].

KATHRYN KLINGLER TACKETT, MEd, is a doctoral candidate at the University of Texas at Austin in the Department of Special Education (in learning disabilities and behavior disorders), an assistant instructor with the Department of Special Education, and a research assistant with the Center on Instruction, Special Education Strand, which is housed at the Vaughn Gross Center for Reading and Language Arts, University of Texas at Austin, College of Education SZB 228, 1 University Station D4900, Austin, TX 78712-0365; katieklingler@mail.utexas.edu.

JENNIFER WICK SCHNAKENBERG is the associate director of the Texas Reading First Initiative at the Vaughn Gross Center for Reading and Language Arts at the University of Texas at Austin, College of Education SZB 228, 1 University Station D4900, Austin, TX 78712-0365; jennwick@mail.utexas.edu. She provides technical assistance to state-, district-, and campus-level personnel. She trains personnel on using assessment, implementing the three-tier model effectively, and providing effective and comprehensive reading instruction to all students. In addition, she supervises technical assistance and professional development team members.

  • Abbott SP, Berninger VW. It’s never too late to remediate: Teaching word recognition to students with reading disabilities in Grades 4–7. Annals of Dyslexia. 1999; 49 :223–250. [ Google Scholar ]
  • Alfassi M. Reading for meaning: The efficacy of reciprocal teaching in fostering reading comprehension in high school students in remedial reading classes. American Educational Research Journal. 1998; 35 :309–332. [ Google Scholar ]
  • Allinder RM, Dunse L, Brunken CD, Obermiller-Krolikowski HJ. Improving fluency in at-risk readers and students with learning disabilities. Remedial and Special Education. 2001; 22 :48–54. [ Google Scholar ]
  • Alvermann DE. Effective literacy instruction for adolescents. Journal of Literacy Research. 2002; 34 :189–208. [ Google Scholar ]
  • Anderson V, Chan KK, Henne R. The effects of strategy instruction on the literacy models and performance of reading and writing delayed middle school students. In: Hinchman KA, Leu DJ, Kinzer CK, editors. Perspectives on literacy research and practice: Forty-fourth yearbook of the National Reading Conference. Chicago: National Reading Conference; 1995. pp. 180–189. [ Google Scholar ]
  • Baumann JF, Edwards EC, Font G, Tereshinski CA, Kameenui EJ, Olejnik J. Teaching morphemic and contextual analysis to fifth-grade students. Reading Research Quarterly. 2002; 37 (2):150–176. [ Google Scholar ]
  • Beck IL, McKeown MG. Improving comprehension with questioning the author. New York: Scholastic; 2006. [ Google Scholar ]
  • Beck IL, McKeown MG, Worthy J, Sandora CA, Kucan L. Questioning the author: A year-long classroom implementation to engage students with text. Elementary School Journal. 1996; 96 :385–414. [ Google Scholar ]
  • Bhat P, Griffin CC, Sindelar PT. Phonological awareness instruction for middle school students with learning disabilities. Learning Disability Quarterly. 2003; 26 :73–87. [ Google Scholar ]
  • Bhattacharya A, Ehri LC. Graphosyllabic analysis helps adolescent struggling readers read and spell words. Journal of Learning Disabilities. 2004; 37 :331–348. [ PubMed ] [ Google Scholar ]
  • Biancarosa G, Snow CE. Reading next: A vision for action and research in middle and high school literacy. A report to the Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education; 2004. [ Google Scholar ]
  • Bryant DP, Vaughn S, Linan-Thompson S, Ugel N, Hamff A, Hougen M. Reading outcomes for students with and without reading disabilities in general education middle-school content area classes. Learning Disability Quarterly. 2000; 23 :238–252. [ Google Scholar ]
  • Carlisle JF, Rice MS. Improving reading comprehension: Research-based principles and practices. Baltimore: York Press; 2002. [ Google Scholar ]
  • Chan LKS. Combined strategy and attributional training for seventh-grade average and poor readers. Journal of Research in Reading. 1996; 19 :111–127. [ Google Scholar ]
  • Cohen J. Statistical power analysis for the behavioral sciences. 2. Hillsdale, NJ: Lawrence Erlbaum; 1988. [ Google Scholar ]
  • Cooper H. Synthesizing research: A guide for literature reviews. 3. Thousand Oaks, CA: Sage; 1998 . [ Google Scholar ]
  • Curtis ME, Longo AM. When adolescents can’t read: Methods and materials that work. Cambridge, MA: Brookline; 1999. [ Google Scholar ]
  • Daly EJ, Martens BK. A comparison of three interventions for increasing oral reading performance: Application of the instructional hierarchy. Journal of Applied Behavior Analysis. 1994; 27 :459–469. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • DiCecco VM, Gleason MM. Using graphic organizers to attain relational knowledge from expository texts. Journal of Learning Disabilities. 2002; 35 :306–320. [ PubMed ] [ Google Scholar ]
  • Dole JA, Duffy GG, Roehler LR, Pearson PD. Moving from the old to the new: Research on reading comprehension instruction. Review of Educational Research. 1991; 61 (2):239–264. [ Google Scholar ]
  • Francis DJ, Fletcher JM, Catts HW, Tomblin JB. Dimensions affecting the assessment of reading comprehension. In: Paris SG, Stahl SA, editors. Children’s reading comprehension and assessment. Mahwah, NJ: Lawrence Erlbaum; 2004. pp. 369–394. [ Google Scholar ]
  • Freeland JT, Skinner CH, Jackson B, McDaniel CE, Smith S. Measuring and increasing silent reading comprehension rates: Empirically validating a repeated readings intervention. Psychology in the Schools. 2000; 37 :415–429. [ Google Scholar ]
  • Fuchs D, Fuchs LS, Mathes P, Simmons D. Peer-assisted learning strategies: Making classrooms more responsive to diversity. American Educational Research Journal. 1997; 34 (1):174–206. [ Google Scholar ]
  • Fuchs LS, Fuchs D, Kazdan S. Effects of peer-assisted learning strategies on high school students with serious reading problems. Remedial and Special Education. 1999; 20 :309–319. [ Google Scholar ]
  • Gardhill MC, Jitendra AK. Advanced story map instruction: Effects on the reading comprehension of students with learning disabilities. Journal of Special Education. 1999; 33 :2–17, 28. [ Google Scholar ]
  • Gersten R, Fuchs LS, Williams JP, Baker S. Teaching reading comprehension strategies to students with learning disabilities: A review of research. Review of Educational Research. 2001; 71 :279–320. [ Google Scholar ]
  • Grigg WS, Daane MC, Jin Y, Campbell JR. The nation’s report card: Reading 2002. Washington, DC: U.S. Department of Education, National Center for Education Statistics, Institute of Education Sciences; 2003. NCES 2003-521. Retrieved January 25, 2006, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2003521 . [ Google Scholar ]
  • Guthrie JT, Wigfield A, VonSecker C. Effects of integrated instruction on motivation and strategy use in reading. Journal of Educational Psychology. 2000; 92 (2):331–341. [ Google Scholar ]
  • Hasselbring TS, Goin LI. Literacy instruction for older struggling readers: What is the role of technology? Reading and Writing Quarterly. 2004; 20 :123–144. [ Google Scholar ]
  • Heilman AW, Blair TR, Rupley WH. Principles and practices of teaching reading. 9. Columbus, OH: Merrill/Prentice Hall; 1998. [ Google Scholar ]
  • Jenkins JR, Heliotis J, Stein ML, Haynes M. Improving reading comprehension by using paragraph restatements. Exceptional Children. 1987; 54 :54–59. [ PubMed ] [ Google Scholar ]
  • Jiménez RT, Garcia GE, Pearson PD. Three children, two languages, and strategic reading: Case studies in bilingual/monolingual reading. American Educational Research Journal. 1995; 32 :67–97. [ Google Scholar ]
  • Jiménez RT, Garcia GE, Pearson PD. The reading strategies of bilingual Latino students who are successful English readers: Opportunities and obstacles. Reading Research Quarterly. 1996; 31 :90–112. [ Google Scholar ]
  • Institute of Education Sciences. What Works Clearinghouse study review standards. 2003. Retrieved January 10, 2005, from http://www.whatworks.ed.gov/reviewprocess/study_standards_final.pdf .
  • Jitendra AK, Hoppes MK, Xin YP. Enhancing main idea comprehension for students with learning problems: The role of a summarization strategy and self-monitoring instruction. Journal of Special Education. 2000; 34 :127–139. [ Google Scholar ]
  • Kamil ML. Adolescents and literacy: Reading for the 21st century. Washington, DC: Alliance for Excellent Education; 2003. [ Google Scholar ]
  • Kamil ML. Vocabulary and comprehension instruction: Summary and implications of the National Reading Panel findings. In: McCardle P, Chhabra V, editors. The voice of evidence in reading research. Baltimore: Paul H. Brookes; 2004. pp. 213–234. [ Google Scholar ]
  • Kim A, Vaughn S, Wanzek J, Wei S. Graphic organizers and their effects on the reading comprehension of students with learning disabilities. Journal of Learning Disabilities. 2004; 37 :105–118. [ PubMed ] [ Google Scholar ]
  • Kintsch W, Kintsch E. Comprehension. In: Paris SG, Stahl SA, editors. Children’s reading comprehension and assessment. Mahwah, NJ: Lawrence Erlbaum; 2004. pp. 71–92. [ Google Scholar ]
  • Klingner JK, Vaughn S. Reciprocal teaching of reading comprehension strategies for students with learning disabilities who use English as a second language. Elementary School Journal. 1996; 96 :275–293. [ Google Scholar ]
  • Klingner JK, Vaughn S, Boardman A. Teaching reading comprehension to students with learning disabilities. New York: Guilford; 2007. [ Google Scholar ]
  • Kuhn MR, Stahl SA. Fluency: A review of developmental and remedial practices. Journal of Educational Psychology. 2003; 95 (1):3–21. [ Google Scholar ]
  • Lauterbach SL, Bender WN. Cognitive strategy instruction for reading comprehension: A success for high school freshmen. High School Journal. 1995; 79 (1):58–64. [ Google Scholar ]
  • MacArthur CA, Haynes JB. Student Assistant for Learning from Text (SALT): A hypermedia reading aid. Journal of Learning Disabilities. 1995; 28 :150–159. [ PubMed ] [ Google Scholar ]
  • Mastropieri MA, Scruggs TE, Bakken JP, Whedon C. Reading comprehension: A synthesis of research in learning disabilities. Advances in Learning and Behavioral Disabilities. 1996; 10B :201–227. [ Google Scholar ]
  • Mastropieri MA, Scruggs TE, Graetz JE. Reading comprehension instruction for secondary students: Challenges for struggling students and teachers. Learning Disability Quarterly. 2003; 26 :103–116. [ Google Scholar ]
  • Mastropieri MA, Scruggs T, Mohler L, Beranek M, Spencer V, Boon RT, et al. Can middle school students with serious reading difficulties help each other and learn anything? Learning Disabilities Research and Practice. 2001; 16 :18–27. [ Google Scholar ]
  • Mercer CD, Campbell KU, Miller MD, Mercer KD, Lane HB. Effects of a reading fluency intervention for middle schoolers with specific learning disabilities. Learning Disabilities Research and Practice. 2000; 15 :179–189. [ Google Scholar ]
  • Moore PJ, Scevak JJ. The effects of strategy training on high school students’ learning from science texts. European Journal of Psychology of Education. 1995; 10 :401–410. [ Google Scholar ]
  • National Institute for Literacy. Put reading first: The research building blocks for teaching children to read. Jessup, MD: Author; 2001. [ Google Scholar ]
  • National Reading Panel. Report of the National Reading Panel: Teaching children to read. Rockville, MD: National Institute of Child Health and Human Development; 2000. [ Google Scholar ]
  • No Child Left Behind Act of 2001, Pub. L. No. 107–110, 115 Stat. 1425 (2002).
  • Palincsar AS, Brown AL, Martin SM. Peer interaction in reading comprehension instruction. Educational Psychologist. 1987; 22 :231–253. [ Google Scholar ]
  • Paris SG, Carpenter RD, Paris AH, Hamilton EE. Spurious and genuine correlates of children’s reading comprehension. In: Paris SG, Stahl SA, editors. Children’s reading comprehension and assessment. Mahwah, NJ: Lawrence Erlbaum; 2004. pp. 131–160. [ Google Scholar ]
  • Paris SG, Lipson MY, Wixson KK. Becoming a strategic reader. Contemporary Educational Psychology. 1983; 8 (3):293–316. [ Google Scholar ]
  • Paris SG, Wasik BA, Turner JC. The development of strategic readers. In: Pearson PD, Barr R, Kamil ML, Mosenthal P, editors. Handbook of reading research. Vol. 2. White Plains, NY: Longman; 1991. pp. 609–640. [ Google Scholar ]
  • Penney CG. Teaching decoding skills to poor readers in high school. Journal of Literacy Research. 2002; 34 :99–118. [ Google Scholar ]
  • Pressley M. What should comprehension instruction be the instruction of? In: Kamil M, Mosenthal P, Pearson P, Barr R, editors. Handbook of reading research. Vol. 3. Mahwah, NJ: Lawrence Erlbaum; 2000. pp. 545–562. [ Google Scholar ]
  • Pressley M, Afflerbach P. Verbal protocols of reading: The nature of constructively responsive reading. Hillsdale, NJ: Lawrence Erlbaum; 1995. [ Google Scholar ]
  • RAND Reading Study Group. Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: RAND; 2002. [ Google Scholar ]
  • Raudenbush SW. Learning from attempts to improve schooling: The contribution of methodological diversity. Educational Researcher. 2005; 34 (5):25–31. [ Google Scholar ]
  • Rosenshine B, Meister C. Reciprocal teaching: A review of the research. Review of Educational Research. 1994; 64 (4):479–530. [ Google Scholar ]
  • Rosenthal R, Rubin DB. Meta-analytic procedures for combining studies with multiple effect sizes. Psychological Bulletin. 1986; 99 (3):400–406. [ Google Scholar ]
  • Scott TM, Shearer-Lingo A. The effects of reading fluency instruction on the academic and behavioral success of middle school students in a self-contained EBD classroom. Preventing School Failure. 2002; 46 :167–173. [ Google Scholar ]
  • Scruggs TE, Mastropieri MA, Casto G. The quantitative synthesis of single subject research. Remedial and Special Education. 1987; 8 (2):24–33. [ Google Scholar ]
  • Shadish WR. Revisiting field experimentation: Field notes for the future. Psychological Methods. 2002; 7 (1):3–18. [ PubMed ] [ Google Scholar ]
  • Shadish WR, Haddock CK. Combining estimates of effect size. In: Cooper H, Hedges L, editors. The handbook of researcher synthesis. New York: Sage; 1994. pp. 261–284. [ Google Scholar ]
  • Snow CE. Assessment of reading comprehension. In: Sweet AP, Snow CE, editors. Rethinking reading comprehension. New York: Guilford; 2003. pp. 192–206. [ Google Scholar ]
  • Steventon CE, Frederick LD. The effects of repeated readings on student performance in the corrective reading program. Journal of Direct Instruction. 2003; 3 :17–27. [ Google Scholar ]
  • Strong AC, Wehby JH, Falk KB, Lane KL. The impact of a structured reading curriculum and repeated reading on the performance of junior high students with emotional and behavioral disorders. School Psychology Review. 2004; 33 :561–581. [ Google Scholar ]
  • Swanson HL. Reading research for students with LD: A meta-analysis of intervention outcomes. Journal of Learning Disabilities. 1999; 32 :504–532. [ PubMed ] [ Google Scholar ]
  • Swanson HL, Hoskyn M. Instructing adolescents with learning disabilities: A component and composite analysis. Learning Disabilities Research and Practice. 2001; 16 :109–120. [ Google Scholar ]
  • Swanson HL, Hoskyn M, Lee C. Interventions for students with learning disabilities. New York: Guilford; 1999. [ Google Scholar ]
  • Underwood T, Pearson DP. Teaching struggling adolescent readers to comprehend what they read. In: Jetton TL, Dole JA, editors. Adolescent literacy research and practice. New York: Guilford; 2004. pp. 135–161. [ Google Scholar ]
  • Vallecorsa AL, deBettencourt LU. Using a mapping procedure to teach reading and writing skills to middle grade students with learning disabilities. Education and Treatment of Children. 1997; 20 :173–189. [ Google Scholar ]
  • Valleley RJ, Shriver MD. An examination of the effects of repeated readings with secondary students. Journal of Behavioral Education. 2003; 12 (1):55–76. [ Google Scholar ]
  • Wanzek J, Vaughn S, Wexler J, Swanson EA, Edmonds M. A synthesis of spelling and reading interventions and their effects on the spelling outcomes for students with LD. Journal of Learning Disabilities. 2006; 39 (6):528–543. [ PubMed ] [ Google Scholar ]
  • Wilder AA, Williams JP. Students with severe learning disabilities can learn higher order comprehension skills. Journal of Educational Psychology. 2001; 93 :268–278. [ Google Scholar ]
  • Williams JP, Brown LG, Silverstein AK, deCani JS. An instructional program in comprehension of narrative themes for adolescents with learning disabilities. Learning Disabilities Quarterly. 1994; 17 :205–221. [ Google Scholar ]
  • Wong BYL, Jones W. Increasing metacomprehension in learning disabled and normally achieving students through self-questing training. Learning Disability Quarterly. 1982; 5 :228–240. [ Google Scholar ]
  • Woodruff S, Schumaker JB, Deschler D. The effects of an intensive reading intervention on the decoding skills of high school students with reading deficits. Washington, DC: Special Education Programs; 2002. Report No. RR-15. ERIC Document Reproduction Service No. ED46929. [ Google Scholar ]


Inferencing in Reading Comprehension: Examining Variations in Definition, Instruction, and Assessment

  • Original research
  • Published: 04 June 2023

Marianne Rice (ORCID: 0000-0001-8935-4734), Kausalai Wijekumar (ORCID: 0000-0002-0768-5693), Kacee Lambright (ORCID: 0000-0002-8955-4135) & Abigail Bristow (ORCID: 0009-0009-7093-3678)

Inferencing is an important and complex process required for successful reading comprehension. Previous research suggests that instruction in inferencing is effective at improving reading comprehension. However, varying definitions of inferencing likely affect both how inferencing instruction is implemented in practice and how inferencing ability is measured. The goal of this study was, first, to systematically review the literature on inference instruction and compile the definitions used to describe inferences, and second, to review textbooks used in instruction and assessments used in research and practice to measure inferencing skills. A systematic literature search identified studies that implemented inferencing instruction with learners of all ages, from preschool to adulthood. After screening and elimination, 75 studies were reviewed for inference definitions, instructional practices, and assessments used. A widely used reading textbook and two reading comprehension assessments were reviewed for grade 4 (elementary school) and grade 7 (middle school) to connect the inferences taught and measured with the identified definitions. The review of the 75 studies yielded 3 broad categories of inferences and 9 definitions of specific inference types. The textbook and assessment reviews revealed differences between the types of inference questions practiced and those tested. The large variation in inference types and definitions may create difficulties for schools implementing inference instruction or attempting to measure students' inference abilities. More alignment is needed between research studies on inference instruction and the textbooks and assessments used in schools to teach and assess inference skills.
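The alignment question at the heart of the review — whether the inference types students practice in a textbook are the same types an assessment later tests — can be made concrete with a small sketch. This is a minimal illustration with hypothetical inference-type labels and question codings; it does not reproduce the study's actual taxonomy or data.

```python
from collections import Counter

# Hypothetical inference-type labels. The study's actual taxonomy
# (3 broad categories, 9 specific types) is not reproduced here.
TAXONOMY = ["local cohesion", "global coherence", "elaborative"]

# Hypothetical codings of the inference questions found in one textbook
# unit and in a matched assessment for the same grade level.
textbook_questions = ["local cohesion", "local cohesion", "elaborative"]
assessment_questions = ["global coherence", "global coherence", "local cohesion"]

def tally(coded_questions):
    """Count how many coded questions fall under each taxonomy type."""
    counts = Counter(coded_questions)
    return {t: counts.get(t, 0) for t in TAXONOMY}

taught = tally(textbook_questions)
tested = tally(assessment_questions)

# Flag types that are tested more often than they are practiced:
# the kind of instruction/assessment mismatch the review describes.
for t in TAXONOMY:
    gap = " (possible gap)" if tested[t] > taught[t] else ""
    print(f"{t:>16}: taught {taught[t]}, tested {tested[t]}{gap}")
```

In practice, the hard part is the coding step itself, which is exactly where the competing definitions the review documents come into play.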




The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through grant U423A180074 to Texas A&M University. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.

Author information

Authors and Affiliations

Department of Educational Psychology, Texas A&M University, 4225 TAMU, College Station, TX, 77843, USA

Marianne Rice

Department of Teaching, Learning, and Culture, Texas A&M University, College Station, USA

Kausalai Wijekumar, Kacee Lambright & Abigail Bristow


Corresponding author

Correspondence to Marianne Rice.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 50 kb)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Rice, M., Wijekumar, K., Lambright, K. et al. Inferencing in Reading Comprehension: Examining Variations in Definition, Instruction, and Assessment. Tech Know Learn (2023). https://doi.org/10.1007/s10758-023-09660-y


Accepted: 26 May 2023

Published: 04 June 2023

DOI: https://doi.org/10.1007/s10758-023-09660-y


Keywords:

  • Inferencing
  • Reading comprehension
  • Inferencing instruction
  • Multiple age groups
  • Inference and integrative processes

Reading aloud boosts memory, but not understanding

Reading text out loud has been shown to improve memory recall compared with reading silently. But does this benefit extend to understanding the material at a deeper level? A recent study published in the journal Memory & Cognition sought to answer exactly that question.

Previous research on study strategies has explored various methods to enhance learning effectiveness, focusing on how different approaches impact memory retention and comprehension. Notably, strategies like self-quizzing, spaced repetition, and elaborate self-explanation have been shown to improve learning outcomes. However, these methods often require a significant investment of time.

Among simpler techniques, reading aloud has emerged as a potentially efficient alternative. This interest in reading aloud as a study strategy dates back to early 20th-century research, which suggested that vocalization could aid in memorizing material, a phenomenon later termed the “production effect.”

Yet, while the production effect’s influence on memory has been well-documented, its impact on deeper comprehension remains less clear, highlighting a gap in our understanding of how vocalization influences learning beyond mere recall.

“We wanted to determine whether the production effect, a well-known memory improvement technique, could extend to deeper comprehension of written text beyond the typical word lists that researchers use in memory studies,” explained study author Brady R. T. Roberts, a postdoctoral scholar at the University of Chicago, who conducted the research as a PhD student at the University of Waterloo.

“There had been work in the educational literature that showed the related ‘read aloud technique’ improved comprehension, but that research tended to define certain types of ‘comprehension’ much in the same way we would define rote memorization in the field of psychology.”

To investigate this, Roberts and his colleagues conducted a series of four experiments.

Experiment 1 included 47 university students from the University of Waterloo, who engaged with 10 short passages from the Nelson-Denny Reading Test, reading some passages aloud and others silently in a randomized order to control for potential order effects. This within-subject design ensured each participant acted as their own control. After reading, participants answered multiple-choice questions designed to assess memory and comprehension.

Memory-focused questions tested the participants’ ability to remember specific details mentioned in the text. Comprehension-focused questions, on the other hand, required participants to engage with elements such as the theme or tone of the passage, its gist, or the inferences that could be derived from the text.
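A within-subject, counterbalanced design like this is simple to script. The sketch below shows one plausible way to randomize which passages a participant reads aloud versus silently; the passage labels, the even aloud/silent split, and the fixed seed are assumptions for illustration, not details taken from the study's materials.

```python
import random

def assign_conditions(passages, seed=None):
    """Pair each passage with a reading condition so that half are read
    aloud and half silently, then shuffle the presentation order."""
    rng = random.Random(seed)
    half = len(passages) // 2
    conditions = ["aloud"] * half + ["silent"] * (len(passages) - half)
    rng.shuffle(conditions)              # which passage gets which condition
    schedule = list(zip(passages, conditions))
    rng.shuffle(schedule)                # randomized presentation order
    return schedule

# Ten placeholder labels standing in for the Nelson-Denny excerpts.
passages = [f"passage_{i}" for i in range(1, 11)]
for passage, condition in assign_conditions(passages, seed=42):
    print(passage, condition)
```

Each participant would get their own randomization, so passage content and serial position are decoupled from the aloud/silent manipulation across the sample.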

Experiment 2 expanded on the initial findings by including an additional condition: reading text silently in an unusual font, specifically Sans Forgetica, hypothesized to create a “desirable difficulty” and thus potentially enhance memory and comprehension. This experiment was conducted online due to the COVID-19 pandemic, broadening the participant pool to include 114 individuals recruited through the MTurk platform and an online research participation system at Flinders University.

After filtering for various criteria, the final sample consisted of 64 participants. The methodology mirrored Experiment 1 in terms of the reading and testing process but included the Sans Forgetica condition to test whether visual degradation of text could similarly enhance learning outcomes.

Experiment 3 aimed to further validate the findings of the previous experiments with a larger and more diverse sample. This experiment was also conducted online, recruiting 167 participants through the Prolific online recruitment system. The study maintained the original design of reading passages aloud or silently but enhanced data quality controls, including more stringent criteria for participation and data inclusion.

A total of 151 participants’ data were analyzed, with the study seeking to replicate the memory benefit observed in earlier experiments and further examine the effect on comprehension. Experiment 3 also explored participants’ intuitions about the efficacy of reading aloud versus silently.

Experiment 4 sought to generalize the findings to different materials and examine the comprehension effect more closely. This experiment used new reading passages obtained from the Test Prep Review website, chosen for their educational relevance and the inclusion of comprehension-focused questions.

Conducted online with 167 participants recruited via Prolific, the methodology was similar to Experiment 3, with adaptations to ensure the new materials were comparable in difficulty and content to those used in previous experiments. The final analysis included data from 131 participants, focusing solely on comprehension to pinpoint whether vocalization could enhance understanding beyond memory recall.

Across the experiments, the researchers consistently found that reading aloud significantly improved memory for the material over reading silently. These results align with the concept of the production effect, which suggests that the act of vocalization enhances the memorability of the material.

However, when it came to comprehension, the study’s findings painted a different picture. Despite the clear benefits for memory recall, reading aloud did not confer any significant advantage for comprehension. The comprehension-focused questions yielded similar accuracy rates regardless of whether the passages were read aloud or silently.

This outcome indicates that while vocalization makes specific details of the text more memorable, it does not inherently improve the reader’s capacity to grasp underlying concepts or draw connections between different pieces of information.
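Because every participant contributes accuracy scores in both conditions, the comparison is a paired one. The snippet below is a minimal sketch using simulated data, not the authors' analysis code: the accuracy values and the assumed small aloud advantage on memory items are made up purely to show the shape of the within-subject test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 47  # participant count borrowed from Experiment 1 for illustration

# Simulated per-participant proportion-correct scores. Only the memory
# items get a small built-in "aloud" advantage in this fake data.
memory_silent = rng.normal(0.70, 0.10, n)
memory_aloud = memory_silent + rng.normal(0.05, 0.05, n)
comprehension_silent = rng.normal(0.65, 0.10, n)
comprehension_aloud = comprehension_silent + rng.normal(0.00, 0.05, n)

# Paired t-tests: each participant serves as their own control.
t_mem, p_mem = stats.ttest_rel(memory_aloud, memory_silent)
t_comp, p_comp = stats.ttest_rel(comprehension_aloud, comprehension_silent)
print(f"memory:        t = {t_mem:.2f}, p = {p_mem:.4f}")
print(f"comprehension: t = {t_comp:.2f}, p = {p_comp:.4f}")
```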

“While reading aloud can indeed improve your memory, it cannot aid in your deeper comprehension of text,” Roberts told PsyPost. “So, read your grocery lists aloud to remember them better, but don’t bother reading your textbook chapters aloud.”

Conducted primarily in a controlled environment, the study leaves room for exploring how these findings translate to real-world learning scenarios, including classrooms and self-study sessions. Future research could expand on these results, perhaps examining how different types of material or diverse subject matter might interact with the production effect.

“The major caveat is that we are relying on Bayesian evidence for a null effect of reading aloud in the case of comprehension, consistent as that finding is,” Roberts said. “Future studies will need to confirm this null effect in more experiments, especially with different measures of comprehension to ensure that our test materials were not the reason for null findings.”
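A conventional p-value can only fail to reject the null; quantifying evidence for the null, which is what Roberts is referring to, is usually done with a Bayes factor. The authors' own Bayesian analysis is not reproduced in this article, so the sketch below falls back on the BIC-based approximation from Wagenmakers (2007) as one rough way to turn a paired t statistic into a Bayes factor favoring the null; the t value and sample size are placeholders.

```python
import math

def bf01_from_paired_t(t, n):
    """BIC-based approximation (Wagenmakers, 2007) of the Bayes factor
    favoring the null over the alternative, given a paired t statistic
    from n participants."""
    df = n - 1
    r2 = t**2 / (t**2 + df)          # variance explained by the mean difference
    delta_bic = n * math.log(1 - r2) + math.log(n)
    return math.exp(delta_bic / 2)

# Placeholder numbers: a near-zero comprehension effect in a sample of 131.
print(round(bf01_from_paired_t(t=0.40, n=131), 2))   # values > 1 favor the null
```

Values above roughly 3 are usually read as meaningful evidence for the null, which is the kind of result that would back up a "consistent null" claim like the one above.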

“I think our study reveals a more general point that memory researchers would be wise to consider: Even though memory is presumably required before comprehension can occur, improving memory is not always enough to cause significant improvements in deeper understanding,” Roberts added.

The study, “Reading text aloud benefits memory but not comprehension,” was authored by Brady R. T. Roberts, Zoey S. Hu, Eloise Curtis, Glen E. Bodner, David McLean, and Colin M. MacLeod.


