10 Best Literature Review Tools for Researchers


Boost your research game with these Best Literature Review Tools for Researchers! Uncover hidden gems, organize your findings, and ace your next research paper!

Researchers struggle to identify key sources, extract relevant information, and maintain accuracy while manually conducting literature reviews. This leads to inefficiency, errors, and difficulty in identifying gaps or trends in existing literature.


Top 10 Literature Review Tools for Researchers: In A Nutshell (2023)

1. Semantic Scholar – Helps researchers access and analyze scholarly literature, with a particular focus on AI and semantic analysis
2. Elicit – Assists researchers in extracting, organizing, and synthesizing information from various sources, enabling efficient data analysis
3. Scite.Ai – Helps determine the credibility and reliability of research articles, facilitating evidence-based decision-making
4. DistillerSR – Streamlines and enhances literature screening, study selection, and data extraction
5. Rayyan – Facilitates efficient screening and selection of research outputs
6. Consensus – Lets researchers work together, annotate, and discuss research papers in real time, fostering team collaboration and knowledge sharing
7. RAx – Enables efficient literature search and analysis, helping researchers identify relevant articles, save time, and improve research quality
8. Lateral – Discovers relevant scientific articles and identifies potential research collaborators based on user interests and preferences
9. Iris AI – Explores and maps the existing literature, identifies knowledge gaps, and generates research questions
10. Scholarcy – Extracts key information from research papers, aiding comprehension and saving time

#1. Semantic Scholar – A free, AI-powered research tool for scientific literature

Semantic Scholar is a free, AI-powered search engine for scientific literature. Not all scholarly content is indexed, however, and occasional false positives or inaccurate associations can occur. Furthermore, the tool has historically focused on computer science and related fields, potentially limiting coverage in other disciplines.
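Beyond the web interface, Semantic Scholar also exposes a free public Graph API that can be scripted for bulk searches. Below is a minimal sketch in Python; the endpoint and field names follow the public documentation, but verify them against the current API reference before relying on this.

```python
# Minimal sketch: searching papers via the Semantic Scholar Graph API.
# No API key is needed for light use; heavier use requires a key.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "systematic literature review automation",
        "fields": "title,year,citationCount",
        "limit": 5,
    },
    timeout=30,
)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    print(paper.get("year"), "-", paper.get("title"),
          f"(cited {paper.get('citationCount', 0)} times)")
```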

#2. Elicit – Research assistant using language models like GPT-3

Elicit is a game-changing literature review tool that has gained popularity among researchers worldwide. With its user-friendly interface and extensive database of scholarly articles, it streamlines the research process, saving time and effort. 

However, users should be cautious when using Elicit. It is important to verify the credibility and accuracy of the sources found through the tool, as the database encompasses a wide range of publications. 

Additionally, occasional glitches in the search function have been reported, leading to incomplete or inaccurate results. While Elicit offers tremendous benefits, researchers should remain vigilant and cross-reference information to ensure a comprehensive literature review.

#3. Scite.Ai – Your personal research assistant

Scite.Ai is a popular literature review tool that revolutionizes the research process for scholars. With its innovative citation analysis feature, researchers can evaluate the credibility and impact of scientific articles, making informed decisions about their inclusion in their own work. 

However, while Scite.Ai offers numerous advantages, there are a few aspects to be cautious about. As with any data-driven tool, occasional errors or inaccuracies may arise, necessitating researchers to cross-reference and verify results with other reputable sources. 


#4. DistillerSR – Literature Review Software

Despite occasional technical glitches reported by some users, the developers actively address these issues through updates and improvements, ensuring a better user experience. 

#5. Rayyan – AI Powered Tool for Systematic Literature Reviews

However, it’s important to be aware of a few aspects. The free version of Rayyan has limitations, and upgrading to a premium subscription may be necessary for additional functionalities. 

#6. Consensus – Use AI to find answers in scientific research

With Consensus, researchers can save significant time by efficiently organizing and accessing relevant research material, which is a large part of its appeal.

Consensus offers both free and paid plans.

#7. RAx – AI-powered reading assistant

#8. Lateral – Advance your research with AI

Additionally, researchers must be mindful of potential biases introduced by the tool’s algorithms and should critically evaluate and interpret the results. 

#9. Iris AI – Introducing the researcher workspace

Researchers are drawn to this tool because it saves valuable time by automating the tedious task of literature review and provides comprehensive coverage across multiple disciplines. 

#10. Scholarcy – Summarize your literature through AI

Scholarcy’s ability to extract key information and generate concise summaries makes it an attractive option for scholars looking to quickly grasp the main concepts and findings of multiple papers.

Scholarcy’s automated summarization may not capture the nuanced interpretations or contextual information presented in the full text. 

Final Thoughts

In conclusion, conducting a comprehensive literature review is a crucial aspect of any research project, and the availability of reliable and efficient tools can greatly facilitate this process for researchers. This article has explored the top 10 literature review tools that have gained popularity among researchers.

Q1. What are literature review tools for researchers?

Literature review tools are software applications that help researchers search, screen, organize, annotate, and synthesize scholarly literature, automating parts of the review process that would otherwise be done manually.

Q2. What criteria should researchers consider when choosing literature review tools?

When choosing literature review tools, researchers should consider factors such as the tool’s search capabilities, database coverage, user interface, collaboration features, citation management, annotation and highlighting options, integration with reference management software, and data extraction capabilities. 

Q3. Are there any literature review tools specifically designed for systematic reviews or meta-analyses?

Yes. Tools such as DistillerSR and Rayyan, covered above, are designed specifically for systematic reviews. Some literature review tools also include statistical analysis features that assist in conducting meta-analyses. These features can help calculate effect sizes, perform statistical tests, and generate forest plots or other visual representations of the meta-analytic results.
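As a rough illustration of the arithmetic such features automate, here is a generic fixed-effect, inverse-variance pooled estimate in Python. This is a textbook calculation with made-up numbers, not any particular tool's implementation.

```python
# Generic fixed-effect meta-analysis pooling (inverse-variance weighting).
# Effect sizes and variances below are hypothetical.
effects = [0.42, 0.31, 0.55]    # per-study effect sizes
variances = [0.04, 0.02, 0.09]  # per-study variances

weights = [1 / v for v in variances]  # weight = 1 / variance
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5  # standard error of the pooled estimate

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```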

Q4. Can literature review tools help with organizing and annotating collected references?

Yes. For example, some literature review tools integrate with popular citation managers like Zotero, Mendeley, or EndNote, allowing seamless transfer of references and annotations between platforms.

By leveraging these features, researchers can streamline the organization and annotation of their collected references, making it easier to retrieve relevant information during the literature review process.



5 literature review tools to ace your research (+2 bonus tools)

Sucheth


Your literature review is the lore behind your research paper. It comes in two forms, systematic and scoping, both of which round up the previously published works in your research area that led you to write and finish your own.

A literature review is vital as it provides the reader with a critical overview of the existing body of knowledge, your methodology, and an opportunity for research applications.


Some steps to follow while writing your review:

  • Pick an accessible topic for your paper
  • Do thorough research and gather evidence surrounding your topic
  • Read and take notes diligently
  • Create a rough structure for your review
  • Synthesize your notes and write the first draft
  • Edit and proofread your literature review

To make your workload a little lighter, there are many AI literature review tools. These tools can help you find academic articles and answer questions about any research paper.

Best literature review tools to improve research workflow

A literature review is one of the most critical yet tedious stages in composing a research paper. Many students find it an uphill task since it requires extensive reading and careful organization.

Using some of the best literature review tools listed here, you can make your life easier by overcoming some of the existing challenges in literature reviews. From collecting and classifying to analyzing and publishing research outputs, these tools help you with your literature review and improve your productivity without additional effort or expenses.

1. SciSpace

SciSpace is an AI platform for academic research that helps you find research papers and answer questions about them. You can discover, read, and understand research papers with SciSpace, making it an excellent platform for literature review. Featuring a repository of over 270 million research papers, it comes with an AI research assistant, Copilot, that offers explanations, summaries, and answers as you read.


Find academic articles through AI

SciSpace has a dedicated literature review tool that finds scientific articles when you search for a question. Based on semantic search, it surfaces the research papers relevant to your subject. You can then gather quick insights for every paper in your search results, such as methodology and datasets, and identify the papers most relevant to your research.

Identify relevant articles faster

Abstracts are not always enough to determine whether a paper is relevant to your research question. For starters, you can ask your AI research assistant, SciSpace Copilot, questions to explore the content and better understand the article. Additionally, use the summarize feature to quickly review the methodology and results of a paper and decide if it is worth reading in detail.

Quickly skim through the paper and focus on the most relevant information with the summarize and brainstorm questions features on SciSpace Copilot

Learn in your preferred language

A big barrier non-native English speakers face while conducting a literature review is that a significant portion of scientific literature is published in English. But with SciSpace Copilot, you can review, interact, and learn from research papers in any language you prefer — presently, it supports 75+ languages. The AI will answer questions about a research paper in your mother tongue.

Read and understand scientific literature in over 75 languages with SciSpace Copilot

Integrates with Zotero

Many researchers use Zotero to create a library and manage research papers. SciSpace lets you import your scientific articles directly from Zotero into your SciSpace library and use Copilot to comprehend your research papers. You can also highlight key sections, add notes to the PDF as you read, and even turn helpful explanations and answers from Copilot into notes for future review.

Understand math and complex concepts quickly

Come across complex mathematical equations or difficult concepts? Simply highlight the text or select the formula or table, and Copilot will provide an explanation or breakdown of the same in an easy-to-understand manner. You can ask follow-up questions if you need further clarification.

Understand math and tables in research papers

Discover new papers to read without leaving

Highlight phrases or sentences in your research paper to get suggestions for related papers in the field and save time on literature reviews. You can also use the 'Trace' feature to move across and discover connected papers, authors, topics, and more.

Find related papers quickly

SciSpace Copilot is now available as a Chrome extension, allowing you to access its features directly while you browse scientific literature anywhere across the web.


Get citation-backed answers

When you're conducting a literature review, you want credible information with proper references. Copilot ensures that every piece of information it provides is backed by a direct reference, boosting transparency, accuracy, and trustworthiness.

Ask a question related to the paper you're delving into. Every response from Copilot comes with a clickable citation. This citation leads you straight to the section of the PDF from which the answer was extracted.

By seamlessly integrating answers with citations, SciSpace Copilot assures you of the authenticity and relevance of the information you receive.

2. Mendeley

Mendeley Citation Manager is a free web and desktop application. It helps simplify your citation management workflow significantly. Here are some ways you can speed up your referencing game with Mendeley.

Generate citations and bibliographies

Easily add references from your Mendeley library to your Word document, change your citation style, and create a bibliography, all without leaving your document.

Retrieve references

It allows you to access your references quickly. Search for a term, and it will return results by referencing the year, author, or source.

Add sources to your Mendeley library by dragging PDFs into Mendeley Reference Manager. Mendeley will automatically extract the PDFs' metadata and create a library entry.

Read and annotate documents

It helps you highlight and comment across multiple PDFs while keeping them all in one place using Mendeley Notebook. Notebook pages are not tied to a single reference, letting you quote from many PDFs.

3. Zotero

A big part of many literature review workflows, Zotero is a free, open-source citation management tool that works as a plug-in on your browser. It helps you gather the information you need, cite your sources, attach PDFs, notes, and images to your citations, and create bibliographies.

Import research articles to your database

Search for research articles on a keyword, and add relevant results to your database. Then, select the articles you are most interested in, and import them into Zotero.

Add bibliography in a variety of formats

With Zotero, you don’t have to scramble for different bibliography formats. Simply use the Zotero-Word plug-in to insert in-text citations and generate a bibliography.

Share your research

You can save a paper and sync it with an online library to easily share your research for group projects. Zotero can be used to create your database and decrease the time you spend formatting citations.

4. Sysrev

Sysrev is an AI tool for article review that facilitates screening, collaboration, and data extraction from academic publications, abstracts, and PDF documents using machine learning. The free platform supports public and Open Access projects only.

Some of the features of Sysrev include:

Group labels

Group labels can be a powerful concept for creating database tables from documents. When exported and re-imported, each group label creates a new table. To make labels for a project, go into the manage -> labels section of the project.

Group labels enable project managers to pull table information from documents. It makes it easier to communicate review results for specific articles.

Track reviewer performance

Sysrev's label counting tool provides filtering and visualization options for keeping track of the distribution of labels throughout the project's progress. Project managers can check their projects at any point to track progress and reviewer performance.

Tool for concordance

The Sysrev tool for concordance allows project administrators and reviewers to perform analysis on their labels. Concordance is measured by calculating the number of times users agree on the labels they have extracted.
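Sysrev computes this for you, but the underlying idea is simple. The sketch below shows one generic way to calculate pairwise agreement in Python; it is illustrative only, not Sysrev's actual implementation.

```python
# Illustrative pairwise concordance: the share of co-reviewed articles
# on which two reviewers chose the same label. Not Sysrev's actual code.

def concordance(labels_a: dict, labels_b: dict) -> float:
    """Fraction of commonly reviewed articles where both labels match."""
    shared = labels_a.keys() & labels_b.keys()
    if not shared:
        return 0.0
    return sum(labels_a[k] == labels_b[k] for k in shared) / len(shared)

reviewer_1 = {"doc1": "include", "doc2": "exclude", "doc3": "include"}
reviewer_2 = {"doc1": "include", "doc2": "include", "doc3": "include"}
print(concordance(reviewer_1, reviewer_2))  # 0.67: agree on 2 of 3 articles
```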

5. Colandr

Colandr is a free, open-source, web-based screening and analysis tool used as an AI for academic research. It was designed to ease collaboration across the various stages of the systematic review procedure. The tool can be a little complex to use, so here are the steps involved in working with Colandr.

Create a review

The first step to using Colandr is setting up an organized review project. This is helpful to librarians who are assisting researchers with systematic reviews.

The planning stage involves setting the review's objectives along with the research questions. Any reviewer can view the details of the planning stage, but only the review's author can modify them.

Citation screening/import

In this phase, users can upload their results from database searches. Colandr also offers an automated deduplication system.
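Colandr handles deduplication automatically, but the core idea can be illustrated in a few lines of Python: normalize each title and drop records whose normalized titles collide. This generic sketch is not Colandr's actual algorithm.

```python
# Generic near-duplicate removal by normalized title (illustrative only).
import re

def normalize(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    stripped = re.sub(r"[^\w\s]", "", title.lower())
    return re.sub(r"\s+", " ", stripped).strip()

records = [
    {"title": "Sleep and Screen Time: A Review"},
    {"title": "Sleep and screen time - a review"},  # duplicate in disguise
    {"title": "Indoor Air Quality and Sleep"},
]

seen, unique = set(), []
for rec in records:
    key = normalize(rec["title"])
    if key not in seen:
        seen.add(key)
        unique.append(rec)

print(len(unique), "unique records")  # prints: 2 unique records
```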

Full-text screening

The system in Colandr discovers the combinations of terms and expressions that are most useful to the reviewer. If an article is selected, it is moved to the final step.

Data extraction/export

Colandr data extraction is more efficient than the manual method. It creates the form fields for data extraction during the planning stage of the review procedure. Users can decide to revisit or modify the form for data extraction after completing the initial screening.

Bonus literature review tools

6. SRDR+

SRDR+ is a web-based tool for extracting and managing systematic review or meta-analysis data. It is open, with a searchable archive of systematic reviews and their data.

7. Plot Digitizer

Plot Digitizer is an efficient tool for extracting information from graphs and images, equipped with many features that facilitate data extraction. The program comes with a free online application, which is adequate to extract data quickly.

Final thoughts

Writing a literature review is not easy. It’s a time-consuming process, which can become tiring at times. The literature review tools mentioned in this blog do an excellent job of maximizing your efforts and helping you write literature reviews much more efficiently. With them, you can breathe a sigh of relief and give more time to your research.

As you dive into your literature review, don't forget to use SciSpace ResearchGPT to streamline the process. It facilitates your research and helps you easily explore the key findings, summaries, and other components of each paper.

Frequently Asked Questions (FAQs)

1. What is RRL in research?

RRL stands for Review of Related Literature and is sometimes used interchangeably with "literature review." RRL is a body of studies relevant to the topic being researched. These studies may be in the form of journal articles, books, reports, and other similar documents. A review of related literature is used to support an argument or theory being made by the researcher, as well as to provide information on how others have approached the same topic.

2. What software tools are available for literature review?

• SciSpace Discover
• Mendeley
• Zotero
• Sysrev
• Colandr
• SRDR+

3. How do you generate a literature review online?

SciSpace Discover, which offers an excellent repository of millions of peer-reviewed articles and resources, will help you generate a literature review easily. You can find relevant information by using the filter option, checking a source's credibility, tracing related topics and articles, and citing in widely accepted formats with a single click.

4. What does it mean to synthesize literature?

To synthesize literature is to take the main points and ideas from a number of sources and present them in a new way. The goal is to create a new piece of writing that pulls together the most important elements of all the sources you read, makes recommendations based on them, and connects them to your own research.

5. Should you write an abstract for a literature review?

Abstracts are not required for the literature review section specifically. However, an abstract for the research paper as a whole is useful: it summarizes the paper's main points and lets readers know what to expect before they read it.

6. How do you evaluate the quality of a literature review?

• Is it clear and well-written?
• Is the information current and up to date?
• Does it cover all of the relevant sources on the topic?
• Does it provide enough evidence to support its conclusions?

7. Is literature review mandatory?

Yes. A literature review is a mandatory part of any research project. It is a critical step that allows you to establish the scope of your research and provide a background for the rest of your work.

8. What are the sources for a literature review?

• Reports
• Theses
• Conference proceedings
• Company reports
• Some government publications
• Journals
• Books
• Newspapers
• Articles by professional associations
• Indexes
• Databases
• Catalogues
• Encyclopaedias
• Dictionaries
• Bibliographies
• Citation indexes
• Statistical data from government websites

9. What is the difference between a systematic review and a literature review?

A systematic review is a form of research that uses a rigorous method to generate knowledge from both published and unpublished data. A literature review, on the other hand, is a critical summary of an area of research within the context of what has already been published.


Suggested reads!

Types of essays in academic writing

Citation Machine Alternatives — A comparison of top citation tools 2023

QuillBot vs SciSpace: Choose the best AI-paraphrasing tool

ChatPDF vs. SciSpace Copilot: Unveiling the best tool for your research

You might also like

Consensus GPT vs. SciSpace GPT: Choose the Best GPT for Research

Sumalatha G

Literature Review and Theoretical Framework: Understanding the Differences

Nikhil Seethi

Types of Essays in Academic Writing - Quick Guide (2024)



Litmaps – Literature Review Software for Better Research


“Litmaps is a game changer for finding novel literature... it has been invaluable for my productivity.... I also got my PhD student to use it and they also found it invaluable, finding several gaps they missed”

Varun Venkatesh

Austin Health, Australia


As a full-time researcher, Litmaps has become an indispensable tool in my arsenal. The Seed Maps and Discover features of Litmaps have transformed my literature review process, streamlining the identification of key citations while revealing previously overlooked relevant literature, ensuring no crucial connection goes unnoticed. A true game-changer indeed!

Ritwik Pandey

Doctoral Research Scholar – Sri Sathya Sai Institute of Higher Learning


Using Litmaps for my research papers has significantly improved my workflow. Typically, I start with a single paper related to my topic. Whenever I find an interesting work, I add it to my search. From there, I can quickly cover my entire Related Work section.

David Fischer

Research Associate – University of Applied Sciences Kempten

“It's nice to get a quick overview of related literature. Really easy to use, and it helps getting on top of the often complicated structures of referencing”

Christoph Ludwig

Technische Universität Dresden, Germany

“This has helped me so much in researching the literature. Currently, I am beginning to investigate new fields and this has helped me hugely”

Aran Warren

Canterbury University, NZ

“I can’t live without you anymore! I also recommend you to my students.”

Professor at The Chinese University of Hong Kong

“Seeing my literature list as a network enhances my thinking process!”

Katholieke Universiteit Leuven, Belgium

“Incredibly useful tool to get to know more literature, and to gain insight in existing research”

KU Leuven, Belgium

“As a student just venturing into the world of lit reviews, this is a tool that is outstanding and helping me find deeper results for my work.”

Franklin Jeffers

South Oregon University, USA

“Any researcher could use it! The paper recommendations are great for anyone and everyone”

Swansea University, Wales

“This tool really helped me to create good bibtex references for my research papers”

Ali Mohammed-Djafari

Director of Research at LSS-CNRS, France

“Litmaps is extremely helpful with my research. It helps me organize each one of my projects and see how they relate to each other, as well as to keep up to date on publications done in my field”

Daniel Fuller

Clarkson University, USA

As a person who is an early researcher and identifies as dyslexic, I can say that having research articles laid out in the date vs cite graph format is much more approachable than looking at a standard database interface. I feel that the maps Litmaps offers lower the barrier of entry for researchers by giving them the connections between articles spaced out visually. This helps me orientate where a paper is in the history of a field. Thus, new researchers can look at one of Litmap's "seed maps" and have the same information as hours of digging through a database.

Baylor Fain

Postdoctoral Associate – University of Florida




Accelerate your research with the best systematic literature review tools

The ideal literature review tool helps you make sense of the most important insights in your research field. ATLAS.ti empowers researchers to perform powerful and collaborative analysis using the leading software for literature review.


Finalize your literature review faster and with greater ease

ATLAS.ti makes it easy to manage, organize, and analyze articles, PDFs, excerpts, and more for your projects. Conduct a deep systematic literature review and get the insights you need with a comprehensive toolset built specifically for your research projects.


Figure out the "why" behind your participants' motivations

Understand the behaviors and emotions that are driving your focus group participants. With ATLAS.ti, you can transform your raw data and turn it into qualitative insights you can learn from. Easily determine user intent in the same spot you're deciphering your overall focus group data.


Visualize your research findings like never before

We make it simple to present your analysis results with meaningful charts, networks, and diagrams. Instead of figuring out how to communicate the insights you just unlocked, we enable you to leverage easy-to-use visualizations that support your goals.


Paper Search – Access 200+ Million Papers with AI-Powered Insights

Unlock access to over 200 million scientific papers and streamline your research with our cutting-edge AI-powered Paper Search 2.0. Easily find, summarize, and integrate relevant papers directly into your ATLAS.ti Web workspace.


Everything you need to elevate your literature review

Import and organize literature data

Import and analyze any type of text content – ATLAS.ti supports all standard text and transcription files such as Word and PDF.

Analyze with ease and speed

Utilize easy-to-learn workflows that save valuable time, such as auto coding, sentiment analysis, team collaboration, and more.

Leverage AI-driven tools

Make efficiency a priority and let ATLAS.ti do your work with AI-powered research tools and features for faster results.

Visualize and present findings

With just a few clicks, you can create meaningful visualizations such as charts, word clouds, tables, and networks for your literature data.

The faster way to make sense of your literature review. Try it for free, today.

A literature review analyzes the most current research within a research area. A literature review consists of published studies from many sources:

  • Peer-reviewed academic publications
  • Full-length books
  • University bulletins
  • Conference proceedings
  • Dissertations and theses

Literature reviews allow researchers to:

  • Summarize the state of the research
  • Identify unexplored research inquiries
  • Recommend practical applications
  • Critique currently published research

Literature reviews are either standalone publications or part of a paper as background for an original research project. A literature review, as a section of a more extensive research article, summarizes the current state of the research to justify the primary research described in the paper.

For example, a researcher may have reviewed the literature on a new supplement's health benefits and concluded that more research needs to be conducted on those with a particular condition. This research gap warrants a study examining how this understudied population reacted to the supplement. Researchers need to establish this research gap through a literature review to persuade journal editors and reviewers of the value of their research.

Consider a literature review as a typical research publication presenting a study, its results, and the salient points scholars can infer from the study. The only significant difference is that a literature review treats existing literature as the research data to collect and analyze. From that analysis, a literature review can suggest new inquiries to pursue.

Identify a focus

Similar to a typical study, a literature review should have a research question or questions that analysis can answer. This sort of inquiry typically targets a particular phenomenon, population, or even research method to examine how different studies have looked at the same thing differently. A literature review, then, should center the literature collection around that focus.

Collect and analyze the literature

With a focus in mind, a researcher can collect studies that provide relevant information for that focus. They can then analyze the collected studies by finding and identifying patterns or themes that occur frequently. This analysis allows the researcher to point out what the field has frequently explored or, on the other hand, overlooked.

Suggest implications

The literature review allows the researcher to argue a particular point through the evidence provided by the analysis. For example, suppose the analysis makes it apparent that the published research on people's sleep patterns has not adequately explored the connection between sleep and a particular factor (e.g., television-watching habits, indoor air quality). In that case, the researcher can argue that further study can address this research gap.

External requirements aside (e.g., many academic journals have a word limit of 6,000-8,000 words), a literature review as a standalone publication is as long as necessary to allow readers to understand the current state of the field. Even if it is just a section in a larger paper, a literature review is long enough to allow the researcher to justify the study that is the paper's focus.

Note that a literature review needs only to incorporate a representative number of studies relevant to the research inquiry. For term papers in university courses, 10 to 20 references might be appropriate for demonstrating analytical skills. Published literature reviews in peer-reviewed journals might have 40 to 50 references. One of the essential goals of a literature review is to persuade readers that you have analyzed a representative segment of the research you are reviewing.

Researchers can find published research from various online sources:

  • Journal websites
  • Research databases
  • Search engines (Google Scholar, Semantic Scholar)
  • Research repositories
  • Social networking sites (Academia, ResearchGate)

Many journals make articles freely available under the term "open access," meaning that there are no restrictions to viewing and downloading such articles. Otherwise, collecting research articles from restricted journals usually requires access from an institution such as a university or a library.

Evidence of a rigorous literature review is more important than the word count or the number of articles that undergo data analysis. Especially when writing for a peer-reviewed journal, it is essential to consider how to demonstrate research rigor in your literature review to persuade reviewers of its scholarly value.

Select field-specific journals

The most significant research relevant to your field focuses on a narrow set of journals similar in aims and scope. Consider who the most prominent scholars in your field are and determine which journals publish their research or have them as editors or reviewers. Journals tend to look favorably on systematic reviews that include articles they have published.

Incorporate recent research

Recently published studies have greater value in determining the gaps in the current state of research. Older research is likely to have encountered challenges and critiques that may render their findings outdated or refuted. What counts as recent differs by field; start by looking for research published within the last three years and gradually expand to older research when you need to collect more articles for your review.

Consider the quality of the research

Literature reviews are only as strong as the quality of the studies that the researcher collects. You can judge any particular study by many factors, including:

  • the quality of the article's journal
  • the article's research rigor
  • the timeliness of the research

The critical point here is that you should consider more than just a study's findings or research outputs when including research in your literature review.

Narrow your research focus

Ideally, the articles you collect for your literature review have something in common, such as a research method or research context. For example, if you are conducting a literature review about teaching practices in high school contexts, it is best to narrow your literature search to studies focusing on high school. You should consider expanding your search to junior high school and university contexts only when there are not enough studies that match your focus.

You can create a project in ATLAS.ti for keeping track of your collected literature. ATLAS.ti allows you to view and analyze full text articles and PDF files in a single project. Within projects, you can use document groups to separate studies into different categories for easier and faster analysis.

For example, a researcher with a literature review that examines studies across different countries can create document groups labeled "United Kingdom," "Germany," and "United States," among others. A researcher can also use ATLAS.ti's global filters to narrow analysis to a particular set of studies and gain insights about a smaller set of literature.

ATLAS.ti allows you to search, code, and analyze text documents and PDF files. You can treat a set of research articles like other forms of qualitative data. The codes you apply to your literature collection allow for analysis through many powerful tools in ATLAS.ti:

  • Code Co-Occurrence Explorer
  • Code Co-Occurrence Table
  • Code-Document Table

Other tools in ATLAS.ti employ machine learning to facilitate parts of the coding process for you. Some of our software tools that are effective for analyzing literature include:

  • Named Entity Recognition
  • Opinion Mining
  • Sentiment Analysis

As long as your documents are text documents or text-enabled PDF files, ATLAS.ti's automated tools can provide essential assistance in the data analysis process.
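ATLAS.ti's implementations are proprietary, but to give a flavor of what automated sentiment analysis does, here is a sketch using NLTK's VADER analyzer, an unrelated open-source tool. The exact scores depend on the lexicon version.

```python
# Sentiment scoring with NLTK's VADER analyzer (pip install nltk).
# Shown only to illustrate the technique; ATLAS.ti's own sentiment
# analysis is built in and may work differently.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

sia = SentimentIntensityAnalyzer()
excerpt = "The intervention produced remarkable improvements in outcomes."
print(sia.polarity_scores(excerpt))
# A dict of negative/neutral/positive/compound scores, e.g. {'neg': 0.0, ...}
```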

7 open source tools to make literature reviews easy


Opensource.com

A good literature review is critical for academic research in any field, whether it is for a research article, a critical review for coursework, or a dissertation. In a recent article, I presented detailed steps for doing a literature review using open source software.

The following is a brief summary of seven free and open source software tools described in that article that will make your next literature review much easier.

1. GNU Linux

Most literature reviews are accomplished by graduate students working in research labs in universities. For absurd reasons, graduate students often have the worst computers on campus. They are often old, slow, and clunky Windows machines that have been discarded and recycled from the undergraduate computer labs. Installing a flavor of GNU Linux will breathe new life into these outdated PCs. There are more than 100 distributions, all of which can be downloaded and installed for free on computers. Most popular Linux distributions come with a "try-before-you-buy" feature. For example, with Ubuntu you can make a bootable USB stick that allows you to test-run the Ubuntu desktop experience without interfering in any way with your PC configuration. If you like the experience, you can use the stick to install Ubuntu on your machine permanently.

2. Firefox

Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why.

3. Unpaywall

Often one of the hardest parts of a literature review is gaining access to the papers you want to read for your review. The unintended consequence of copyright restrictions and paywalls is that access to the peer-reviewed literature has narrowed to the point that even Harvard University is challenged to pay for it. Fortunately, there are a lot of open access articles—about a third of the literature is free (and the percentage is growing). Unpaywall is a Firefox plugin that enables researchers to click a green tab on the side of the browser and skip the paywall on millions of peer-reviewed journal articles. This makes finding accessible copies of articles much faster than searching each database individually. Unpaywall is fast, free, and legal, as it accesses many of the open access sites that I covered in my paper on using open source in lit reviews.
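Unpaywall also offers a free REST API, which is handy when you want to check a whole list of DOIs at once. A minimal sketch in Python follows; the response field names ("is_oa", "best_oa_location") are taken from the public docs and worth double-checking before you build on them.

```python
# Minimal sketch: checking a DOI for an open-access copy via the
# Unpaywall REST API. The API asks for your email address as a parameter.
import requests

doi = "10.1038/nature12373"  # example DOI; substitute your own
resp = requests.get(
    f"https://api.unpaywall.org/v2/{doi}",
    params={"email": "you@example.edu"},  # replace with your address
    timeout=30,
)
resp.raise_for_status()
record = resp.json()

if record.get("is_oa") and record.get("best_oa_location"):
    print("Open-access copy:", record["best_oa_location"].get("url"))
else:
    print("No open-access copy found for", doi)
```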

4. Zotero

Formatting references is the most tedious of academic tasks. Zotero can save you from ever doing it again. It operates as an Android app, desktop program, and a Firefox plugin (which I recommend). It is a free, easy-to-use tool to help you collect, organize, cite, and share research. It replaces the functionality of proprietary packages such as RefWorks, Endnote, and Papers for zero cost. Zotero can auto-add bibliographic information directly from websites. In addition, it can scrape bibliographic data from PDF files. Notes can be easily added on each reference. Finally, and most importantly, it can import and export bibliography databases in all publishers' various formats. With this feature, you can export bibliographic information to paste into a document editor for a paper or thesis—or even to a wiki for dynamic collaborative literature reviews (see tool #7 for more on the value of wikis in lit reviews).
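Zotero's library is also scriptable through its web API, for example via the third-party pyzotero package. The sketch below assumes you have generated an API key at zotero.org and know your numeric library ID (both values shown are placeholders); method names follow the pyzotero docs but are worth verifying against the current release.

```python
# Minimal sketch: listing recent items from a Zotero library with the
# third-party pyzotero wrapper (pip install pyzotero). The library ID
# and API key below are placeholders.
from pyzotero import zotero

zot = zotero.Zotero("1234567", "user", "YOUR_API_KEY")

# Fetch the five most recently modified top-level items.
for item in zot.top(limit=5):
    data = item["data"]
    print(data.get("date", "n.d."), "-", data.get("title", "(untitled)"))
```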

5. LibreOffice

Your thesis or academic article can be written conventionally with the free office suite LibreOffice, which operates similarly to Microsoft's Office products but respects your freedom. Zotero has a word processor plugin to integrate directly with LibreOffice. LibreOffice is more than adequate for the vast majority of academic paper writing.

6. LaTeX

If LibreOffice is not enough for your layout needs, you can take your paper writing one step further with LaTeX, a high-quality typesetting system specifically designed for producing technical and scientific documentation. LaTeX is particularly useful if your writing has a lot of equations in it. Also, Zotero libraries can be directly exported to BibTeX files for use with LaTeX.
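To make the LaTeX-plus-BibTeX handoff concrete, here is a minimal sketch of a document that cites one entry from a Zotero-exported BibTeX file. The file name refs.bib and the entry key pearce2018 are placeholders.

```latex
% Minimal sketch: citing a Zotero-exported BibTeX file from LaTeX.
% "refs.bib" and the key "pearce2018" are placeholders.
\documentclass{article}
\begin{document}

Open source tools can cover the whole review workflow \cite{pearce2018}.

\bibliographystyle{plain}
\bibliography{refs} % compile with: pdflatex, bibtex, pdflatex, pdflatex

\end{document}
```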

7. MediaWiki

If you want to leverage the open source way to get help with your literature review, you can facilitate a dynamic collaborative literature review. A wiki is a website that allows anyone to add, delete, or revise content directly using a web browser. MediaWiki is free software that enables you to set up your own wikis.

Researchers can (in decreasing order of complexity): 1) set up their own research group wiki with MediaWiki, 2) utilize wikis already established at their universities (e.g., Aalto University), or 3) use wikis dedicated to areas that they research. For example, several university research groups that focus on sustainability (including mine) use Appropedia, which is set up for collaborative solutions on sustainability, appropriate technology, poverty reduction, and permaculture.

Using a wiki makes it easy for anyone in the group to keep track of the status of and update literature reviews (both current and older or from other researchers). It also enables multiple members of the group to easily collaborate on a literature review asynchronously. Most importantly, it enables people outside the research group to help make a literature review more complete, accurate, and up-to-date.

Wrapping up

Free and open source software can cover the entire lit review toolchain, meaning there's no need for anyone to use proprietary solutions. Do you use other libre tools for making literature reviews or other academic work easier? Please let us know your favorites in the comments.

Joshua Pearce



How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn't just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review's structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions

What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” ( Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” ( Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” ( Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” ( Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search. For example, for a question about social media and body image among young people, keywords might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators to help narrow down your search, as in the example below.
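Here is how the keywords listed above might be combined. Exact syntax (quoting, truncation with *) varies by database, so treat this as a generic illustration:

```
("social media" OR Instagram OR TikTok)
AND ("body image" OR "self-esteem" OR "self-perception")
AND (adolescen* OR teenager* OR "Generation Z")
```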

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using. Click on either button below to download.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism . It can be helpful to make an annotated bibliography , where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

Step 3 – Identify themes, debates, and gaps

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

Step 4 – Outline your literature review’s structure

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Step 5 – Write your literature review

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting. Not a language expert? Check out Scribbr’s professional proofreading services!

Free lecture slides

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.

Open Google Slides Download PowerPoint

Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation . After the introduction , it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology .

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved August 19, 2024, from https://www.scribbr.com/dissertation/literature-review/

Shona McCombes

Other students also liked:

  • What Is a Theoretical Framework? | Guide to Organizing
  • What Is a Research Methodology? | Steps & Tips
  • How to Write a Research Proposal | Examples & Templates

Literature Review Tips & Tools


Organizational Tools

  • Bubbl.us Free online brainstorming/mindmapping tool that also has a free iPad app.
  • Coggle Another free online mindmapping tool.
  • Organization & Structure tips from Purdue University Online Writing Lab
  • Literature Reviews from The Writing Center at University of North Carolina at Chapel Hill Gives several suggestions and descriptions of ways to organize your lit review.

Tools for Systematic Reviews

  • Cochrane Handbook for Systematic Reviews of Interventions "The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions."
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) website "PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions."
  • PRISMA Flow Diagram Generator Free tool that will generate a PRISMA flow diagram from a CSV file (sample CSV template provided); see the sketch after this list. Please cite as: Haddaway, N. R., Page, M. J., Pritchard, C. C., & McGuinness, L. A. (2022). PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Systematic Reviews, 18, e1230. https://doi.org/10.1002/cl2.1230
  • Rayyan "Rayyan is a 100% FREE web application to help systematic review authors perform their job in a quick, easy and enjoyable fashion. Authors create systematic reviews, collaborate on them, maintain them over time and get suggestions for article inclusion."
  • Covidence Covidence is a tool to help manage systematic reviews (and create PRISMA flow diagrams). Note: UMass Amherst doesn't subscribe, but Covidence offers a free trial for one review of no more than 500 records; researchers can also pay per review.
  • PROSPERO - Systematic Review Protocol Registry "PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans. Sibling PROSPERO sites register systematic reviews of human studies and systematic reviews of animal studies."
  • Critical Appraisal Tools from JBI Joanna Briggs Institute at the University of Adelaide provides these checklists to help evaluate different types of publications that could be included in a review.
  • Systematic Review Toolbox "The Systematic Review Toolbox is a community-driven, searchable, web-based catalogue of tools that support the systematic review process across multiple domains. The resource aims to help reviewers find appropriate tools based on how they provide support for the systematic review process. Users can perform a simple keyword search (i.e. Quick Search) to locate tools, a more detailed search (i.e. Advanced Search) allowing users to select various criteria to find specific types of tools and submit new tools to the database. Although the focus of the Toolbox is on identifying software tools to support systematic reviews, other tools or support mechanisms (such as checklists, guidelines and reporting standards) can also be found."
  • Abstrackr Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health
  • SRDR Plus (Systematic Review Data Repository: Plus) An open-source tool for extracting, managing, and archiving data, developed by the Center for Evidence Synthesis in Health at Brown University
  • RoB 2 Tool (Risk of Bias for Randomized Trials) A revised Cochrane risk of bias tool for randomized trials
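For record-keeping, the counts that feed a PRISMA flow diagram can be tracked with a small script. Below is a minimal Python sketch; the file name, field names, and counts are illustrative placeholders only, so check the sample CSV template provided by the flow diagram generator above for the actual expected columns.

```python
# Minimal sketch: write screening counts to a CSV for a PRISMA flow diagram tool.
# Field names and counts below are illustrative placeholders, not the official
# template; use the sample CSV provided by the generator for real column names.
import csv

counts = {
    "records_identified": 512,
    "duplicates_removed": 97,
    "records_screened": 415,
    "records_excluded": 350,
    "full_text_assessed": 65,
    "studies_included": 18,
}

with open("prisma_counts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["item", "count"])
    writer.writerows(counts.items())
```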
Source: https://guides.library.umass.edu/litreviews



Systematic Literature Reviews: Steps & Resources

Literature Review & Systematic Review Steps


The steps for conducting a systematic literature review are listed below.

Also see subpages for more information about:

  • The different types of literature reviews, including systematic reviews and other evidence synthesis methods
  • Tools & Tutorials

The steps are:

  • Develop a Focused Question
  • Scope the Literature (Initial Search)
  • Refine & Expand the Search
  • Limit the Results
  • Download Citations
  • Abstract & Analyze
  • Create Flow Diagram
  • Synthesize & Report Results

1. Develop a Focused Question

Consider the PICO Format: Population/Problem, Intervention, Comparison, Outcome

Focus on defining the Population or Problem and Intervention (don't narrow by Comparison or Outcome just yet!)

"What are the effects of the Pilates method for patients with low back pain?"

Tools & Additional Resources:

  • PICO Question Help
  • Stillwell, S. B., Fineout-Overholt, E., Melnyk, B. M., & Williamson, K. M. (2010). Evidence-Based Practice, Step by Step: Asking the Clinical Question. AJN, The American Journal of Nursing, 110(3), 58–61. doi: 10.1097/01.NAJ.0000368959.11129.79

2. Scope the Literature

A "scoping search" investigates the breadth and/or depth of the initial question or may identify a gap in the literature. 

Eligible studies may be located by searching in:

  • Background sources (books, point-of-care tools)
  • Article databases
  • Trial registries
  • Grey literature
  • Cited references
  • Reference lists

When searching, translate terms to the controlled vocabulary of the database where possible. Use text-word searching when necessary.

Use Boolean operators to connect search terms:

  • Combine separate concepts with AND (resulting in a narrower search)
  • Connect synonyms with OR (resulting in an expanded search)

Search: pilates AND ("low back pain" OR backache)
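If you prefer to script a scoping search, the same Boolean query can be sent to PubMed programmatically. Here is a minimal sketch using Biopython's Entrez module (an assumption on my part; the guide itself only covers searching in the database interface). The email address is a placeholder, since NCBI requires a contact address.

```python
# Minimal sketch: run the example Boolean search against PubMed via E-utilities.
from Bio import Entrez  # pip install biopython

Entrez.email = "you@example.edu"  # placeholder; NCBI requires a contact address

query = 'pilates AND ("low back pain" OR backache)'
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print("First PMIDs:", record["IdList"])
```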

Video Tutorials - Translating PICO Questions into Search Queries

  • Translate Your PICO Into a Search in PubMed (YouTube, Carrie Price, 5:11) 
  • Translate Your PICO Into a Search in CINAHL (YouTube, Carrie Price, 4:56)

3. Refine & Expand Your Search

Expand your search strategy with synonymous search terms harvested from:

  • database thesauri
  • reference lists
  • relevant studies

Example: 

(pilates OR exercise movement techniques) AND ("low back pain" OR backache* OR sciatica OR lumbago OR spondylosis)
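To keep an expanded strategy reproducible, it can help to store each concept's synonyms as a list and assemble the Boolean string mechanically. A small illustrative Python helper (hypothetical, not from the guide):

```python
# Minimal sketch: assemble a Boolean search string from synonym lists.
def build_query(*concepts):
    """OR together the synonyms within each concept, AND across concepts."""
    return " AND ".join("(" + " OR ".join(terms) + ")" for terms in concepts)

print(build_query(
    ["pilates", '"exercise movement techniques"'],
    ['"low back pain"', "backache*", "sciatica", "lumbago", "spondylosis"],
))
# (pilates OR "exercise movement techniques") AND ("low back pain" OR backache* OR sciatica OR lumbago OR spondylosis)
```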

As you develop a final, reproducible strategy for each database, save your strategies in:

  • a personal database account (e.g., MyNCBI for PubMed)
  • a search-strategy tracker spreadsheet: log in with your NYU credentials, then open and "Make a Copy" to create your own tracker for your literature search strategies

4. Limit Your Results

Use database filters to limit your results based on your defined inclusion/exclusion criteria. In addition to relying on the databases' categorical filters, you may also need to manually screen results.

  • Limit by article type, e.g., "randomized controlled trial" OR multicenter study
  • Limit by publication years, age groups, language, etc.

NOTE: Many databases allow you to filter to "Full Text Only". This filter is not recommended. It excludes articles whose full text is not available in that particular database (CINAHL, PubMed, etc.), but if an article is relevant, it is important that you are able to read its title and abstract, regardless of "full text" status. The full text is likely to be accessible through another source (a different database, or Interlibrary Loan).

  • Filters in PubMed
  • CINAHL Advanced Searching Tutorial

5. Download Citations

Selected citations and/or entire sets of search results can be downloaded from the database into a citation management tool. If you are conducting a systematic review that will require reporting according to PRISMA standards, a citation manager can help you keep track of the number of articles that came from each database, as well as the number of duplicate records.

In Zotero, you can create a Collection for the combined results set, and sub-collections for the results from each database you search. You can then use Zotero's "Duplicate Items" function to find and merge duplicate records.
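If you export each database's results to CSV, the per-database totals and duplicate counts that PRISMA reporting asks for can also be tallied with a short script. A minimal sketch (the file names and the "DOI"/"Title" column names are assumptions about your export format):

```python
# Minimal sketch: count records per database and deduplicate by DOI (or title).
import csv

files = ["pubmed.csv", "cinahl.csv", "embase.csv"]  # hypothetical export files
per_db, seen, unique = {}, set(), []

for path in files:
    per_db[path] = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            per_db[path] += 1
            key = (row.get("DOI") or row.get("Title", "")).strip().lower()
            if key and key not in seen:
                seen.add(key)
                unique.append(row)

total = sum(per_db.values())
print("Per database:", per_db)
print(f"Total: {total}, unique: {len(unique)}, duplicates removed: {total - len(unique)}")
```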

[Image: file structure of a Zotero library, showing a combined results set and sub-collections representing results from individual databases.]

  • Citation Managers - General Guide

6. Abstract and Analyze

  • Migrate citations to data collection/extraction tool
  • Screen Title/Abstracts for inclusion/exclusion
  • Screen and appraise the full text for relevance and methods
  • Resolve disagreements by consensus

Covidence is a web-based tool that enables you to work with a team to screen titles/abstracts and full text for inclusion in your review, as well as extract data from the included studies.

[Screenshot: the Covidence interface, showing the title and abstract screening phase.]

  • Covidence Support
  • Critical Appraisal Tools
  • Data Extraction Tools

7. Create Flow Diagram

The PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) flow diagram is a visual representation of the flow of records through the different phases of a systematic review. It depicts the number of records identified, included, and excluded. It is best used in conjunction with the PRISMA checklist.

[Image: example PRISMA diagram showing the number of records identified, duplicates removed, and records excluded.]

Example from: Stotz, S. A., McNealy, K., Begay, R. L., DeSanto, K., Manson, S. M., & Moore, K. R. (2021). Multi-level diabetes prevention and treatment interventions for Native people in the USA and Canada: A scoping review. Current Diabetes Reports, 21(11), 46. https://doi.org/10.1007/s11892-021-01414-3

  • PRISMA Flow Diagram Generator (ShinyApp.io, Haddaway et al.)
  • PRISMA Diagram Templates (Word and PDF)
  • Make a copy of the file to fill out the template
  • Image can be downloaded as PDF, PNG, JPG, or SVG
  • Covidence generates a PRISMA diagram that is automatically updated as records move through the review phases

8. Synthesize & Report Results

There are a number of reporting guidelines available to guide the synthesis and reporting of results in systematic literature reviews.

It is common to organize findings in a matrix, also known as a Table of Evidence (ToE).
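A Table of Evidence is straightforward to maintain as a spreadsheet or data frame. A minimal pandas sketch (the column names and entries are illustrative, not a prescribed format):

```python
# Minimal sketch: a review matrix (Table of Evidence) as a pandas DataFrame.
import pandas as pd

matrix = pd.DataFrame(
    [
        # Placeholder rows; fill in one row per included study.
        {"Study": "Author A (2020)", "Design": "RCT", "Population": "...", "Key finding": "..."},
        {"Study": "Author B (2021)", "Design": "Cohort", "Population": "...", "Key finding": "..."},
    ]
)
matrix.to_csv("table_of_evidence.csv", index=False)  # share with co-reviewers
print(matrix)
```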

[Image: example review matrix in Microsoft Excel, showing the results of a systematic literature review.]

  • Reporting Guidelines for Systematic Reviews
  • Download a sample template of a health sciences review matrix  (GoogleSheets)

Steps modified from: 

Cook, D. A., & West, C. P. (2012). Conducting systematic reviews in medical education: A stepwise approach. Medical Education, 46(10), 943–952.

Source: https://guides.nyu.edu/healthwriting


10 Open Science Tools for Literature Review You Should Know about

Here are 10 literature search tools that will make your scientific literature search faster and more convenient. All of the presented literature review software is free and follows Open Science principles.

Traditionally, scientific literature has been tucked away behind the paywalls of academic publishers. Not only is access to papers often restricted, but subscriptions are required to use many scientific search engines. This practice discriminates against universities and institutions that cannot afford the licenses, e.g. in low-income countries. Closed publishing also makes it hard for people not affiliated with research institutions, such as freelance journalists or the general public, to learn about scientific discoveries.

The proportion of research accessible publicly today at no cost varies between disciplines . While in the biomedical sciences and mathematics, the majority of research published between 2009 and 2015 was openly accessible, this held true only for around 15 percent of publications in chemistry. Luckily, the interest in open access publishing is steadily increasing and has gained momentum in the past decade or so.

Many governmental funding bodies around the world nowadays require science resulting from grant money they provided to be available publicly for free. The exact requirements vary and UNESCO is currently developing a framework that specifies standards for the whole area of Open Science. 

Once I started my research on the topic, I was astonished by just how many free Open Science tools for literature review already exist! Read on below for 10 literature search tools: from search engines for research papers, to literature review software that helps you quickly find open access versions of papers, to tools that help you save the correct citation in one click.

Tools for Literature Review

First, an overview of the literature search tools in this blog post:

  • ScienceOpen
  • The Lens
  • Citation Gecko
  • Local Citation Network
  • ResearchRabbit
  • Open Access Button
  • Unpaywall
  • EndNote Click
  • Read by QxMD
  • CiteAs

I divided the tools into four categories:

  • Search engines for research papers
  • Literature review software based on citation networks
  • Locating open access scientific papers, and
  • Other tools that help in the literature review

Here we go!

Search engines for research papers

The best place to start a scientific literature search is with a search engine for research papers. Here are two you might not have heard of!

ScienceOpen

Want to perform a literature search and don't want to pay for Web of Science or Scopus, or perhaps you are tired of the limited functionality of the free Google Scholar? ScienceOpen is many things, among others a search engine for research papers. Despite being owned by a private company, this scientific search engine is freely accessible, with a visually appealing and functional design. Search results are clearly labelled by publication type, number of citations, altmetric scores, etc., and allow for filtering. You can also access citation metrics, i.e., display which publications have cited a certain paper.

The Lens

Recommended by a reader of the blog (thank you!), the Lens is a search tool that lets you search not only the scholarly literature but patents too! Millions of patents from over 95 jurisdictions can be searched. The Lens is run by the non-profit social enterprise Cambia. The search engine is free for public use, though charges apply for commercial use and additional functionality.


Literature Review software based on citation networks

The next category of tools is a bit more advanced than a simple search engine for research papers. These literature search tools help you discover scientific literature you may have missed by visualising citation networks.

Citation Gecko 

The literature search tool Citation Gecko is an open source web app that makes it easier to discover relevant scientific literature than your average keyword-based search engine for research papers. It works in the following way: first you upload about 5-6 "seed papers". The program then extracts all references in and to these seed papers and creates a visual citation network. Nodes are coloured and sized according to whether a paper cites or is cited by a seed paper, and by how many. By combing through the citation network, you can discover new papers that may be relevant for your scientific literature search. You can also expand your citation network step by step by including more seed papers.

This literature review tool was developed by Barney Walker , and the underlying citation data is provided by Crossref and Open Citations .
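The core idea behind such tools can be sketched with the public Crossref REST API: fetch each seed paper's reference list and see which works several seeds have in common. A minimal, illustrative Python sketch (not Citation Gecko's actual implementation; the seed DOI is the PRISMA2020 paper cited earlier in this document):

```python
# Minimal sketch: find works co-cited by several seed papers via Crossref.
from collections import Counter
import requests  # pip install requests

seeds = ["10.1002/cl2.1230"]  # replace with your own seed DOIs

cited = Counter()
for doi in seeds:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    # Crossref returns each work's reference list under message["reference"];
    # not every reference entry carries a DOI, so we skip those that don't.
    for ref in resp.json()["message"].get("reference", []):
        if "DOI" in ref:
            cited[ref["DOI"].lower()] += 1

# Works referenced by multiple seed papers are strong candidates for review.
for ref_doi, n in cited.most_common(10):
    print(n, ref_doi)
```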

Local Citation Network 

Similar to Citation Gecko, Local Citation Network is an open source tool that works like a scientific search engine on steroids. Local Citation Network was developed by physician-scientist Tim Wölfle. This literature review tool works best if you feed it a larger library of seed papers than Citation Gecko requires. Wölfle therefore recommends using it at the end of your scientific literature search to identify papers you may have missed.

ResearchRabbit

As an alternative to the literature search tools Citation Gecko and Local Citation Network, a reader of the blog recommended ResearchRabbit. It's free to use and looks like a versatile piece of literature review software that helps you build your own citation network. ResearchRabbit lets you add labels to the entries in your citation network, download PDFs of papers, and sign up for email alerts for new papers related to your research topic. Rather than a tool to use only once during your scientific literature search, ResearchRabbit functions more like a private scientific library storing (and connecting) all the papers in your field.

Run by (former) researchers and engineers, ResearchRabbit is partly financed through donations, but its website does not state where the core funding of this literature review software comes from.

Locating open access scientific papers

You may face the problem in your scientific literature search that you don’t have access to every research paper you are interested in. I highly recommend installing at least one of the open access tools below so you can quickly locate freely accessible versions of the scientific literature if available anywhere.

Open Access Button 

Works like the scientific search engine Sci-hub but is legal: you enter the DOI, link or citation of a paper and the literature review tool Open Access Button displays it if it is freely accessible anywhere. To find an open access version, Open Access Button searches thousands of repositories, for example, preprint servers, authors' personal pages, open access journals and other aggregators such as the COnnecting REpositories service based at The Open University in the UK (CORE), the EU-funded OpenAire infrastructure, and the US community initiative Share.

If the article you are looking for isn’t freely available, Open Access Button asks the author to share it to a repository. You can enter your email address to be notified once it has become available. 

Open Access Button is also available as browser plugin, which means that a button appears next to an article whenever a free version is available. This search engine for research papers is funded by non-profit foundations and is open source. 

Unpaywall 

Unpaywall is a search engine for research papers similar to Open Access Button, but only available as a browser plugin. If the article you are looking at is behind a paywall but freely accessible somewhere else, a green button appears on the right side of the article. I installed it recently and regret not having done so sooner; it works really smoothly! I think the plugin is a great help in your scientific literature search.

Unpaywall is run by the non-profit organisation Our Research, which has created a fleet of open science tools.

EndNote Click 

Another browser extension that gives you free access to the scientific literature where available is EndNote Click (formerly Kopernio). EndNote Click claims to be faster than other tools because it bypasses redirects and verification steps. I personally don't find the Unpaywall or Open Access Button plugins inconvenient to use, but I'd encourage you to try out all of these and see what works best for you.

One advantage of EndNote Click that a reader of the blog told me about is the sidebar that appears when opening a paper through the plugin. It lets you, for example, save citations quickly, avoiding time-consuming searches on publishers' websites.

Like the reference manager EndNote, EndNote Click is part of the research analytics company Clarivate.

Other tools for literature review

This last category of literature search tools features a tool that creates a personalised feed of scientific literature for you and another that makes citing the scientific literature effortless.

Read by QxMD

Available as an app or in a browser window, the literature review tool Read lets you create a personalised feed that is updated daily with new papers on research topics or from journals of your choice. If there is an openly accessible version of an article, you can read it with one click. If your institution has journal subscriptions, you can also link them to your Read profile. Read was created by the company QxMD and is free to use.

CiteAs 

You discovered a promising paper in your scientific literature search and want to cite it? CiteAs is a convenient literature review tool to obtain the correct citation for any publication, preprint, software or dataset in one click. Funded by the Alfred P. Sloan Foundation, CiteAs is operated partly by the non-profit Our Research.

Beyond literature review tools

There you have it, 10 tools for literature review that are all completely free and follow Open Science principles.

Of course, finding a great literature review tool, such as a search engine for research papers or a citation tool, is only one part of the whole process of writing a scientific paper. If you would like to learn a complete process for writing a scientific article step by step, you'll love our free training.



RAxter is now Enago Read! Enjoy the same licensing and pricing with enhanced capabilities. No action required for existing customers.

Your all-in-one AI-powered Reading Assistant

A Reading Space to Ideate, Create Knowledge, and Collaborate on Your Research

  • Smartly organize your research
  • Receive recommendations that cannot be ignored
  • Collaborate with your team to read, discuss, and share knowledge


From Surface-Level Exploration to Critical Reading - All in One Place!

Fine-tune your literature search

Our AI-powered reading assistant saves time spent on the exploration of relevant resources and allows you to focus more on reading.

Select phrases or specific sections and explore more research papers related to the core aspects of your selections. Pin the useful ones for future references.

Our platform brings you the latest research related to your research and project work.

Speed up your literature review

Quickly generate a summary of key sections of any paper with our summarizer.

Make informed decisions about which papers are relevant, and where to invest your time in further reading.

Get key insights from the paper, quickly comprehend the paper’s unique approach, and recall the key points.

Bring order to your research projects

Organize your reading lists into different projects and maintain the context of your research.

Quickly sort items into collections and tag or filter them according to keywords and color codes.

Experience the power of sharing by finding all the shared literature in one place.

Decode papers effortlessly for faster comprehension

Highlight what is important so that you can retrieve it faster next time.

Select any text in the paper and ask Copilot to explain it to help you get a deeper understanding.

Ask the AI-powered Copilot questions and follow-ups.

Collaborate to read with your team, professors, or students

Share and discuss literature and drafts with your study group, colleagues, experts, and advisors. Recommend valuable resources and help each other reach a better understanding.

Work in shared projects efficiently and improve visibility within your study group or lab members.

Keep track of your team's progress by staying connected, and engage in active knowledge transfer by requesting full access to relevant papers and drafts.

Find papers from across the world's largest repositories


Privacy and security of your research data are integral to our mission.


Everything you add or create on Enago Read is private by default. It is visible if and when you share it with other users.

Copyright

You can put Creative Commons license on original drafts to protect your IP. For shared files, Enago Read always maintains a copy in case of deletion by collaborators or revoked access.

Security

We use state-of-the-art security protocols and algorithms, including MD5 hashing, SSL, and HTTPS, to secure your data.


Selected AI-Based Literature Review Tools

Disclaimer:

  • The guide is intended for informational purposes. It is advisable for you to independently evaluate these tools and their methods of use.
  • Research AI Assistant is available in Dimensions Analytics (TAMU) and Statista (TAMU).
  • See news about their AI Assistants (Beta): Web of Science, Scopus, EBSCO, ProQuest, OVID, Dimensions, JSTOR, Westlaw, and LexisNexis.

Suggestions:

  • Please keep these differences in mind when exploring AI-powered academic search engines.



OPENREAD

  • https://www.openread.academy/
  • Accessed institutionally by Harvard, MIT, University of Oxford, Johns Hopkins, Stanford, and more.
  • AI-powered Academic Searching + Web Searching - Over 300 million papers and real-time web content.
  • Trending and Topics - Browse them to find the latest hot papers. Use Topic to select specific fields and then see their trending papers.
  • Each keyword search or AI query generates a synthesis report with citations. To adjust the search results, simply click on the Re-Generate button to refresh the report and the accompanying citations. After that, click on Follow-Up Questions to go deeper into a specific area or subject.
  • Use Paper Q&A to interact with a text directly. Examples: "What does this paper say about machine translation?"; "What is C-1 in Fig. 1?"
  • When you read a paper, under Basic Information select any of the following tools to get more information: Related Paper Graph > Paper Espresso > Paper Q&A > Notes. The Related Paper Graph presents related studies in a visual map, with relevancy indicated by percentage.
  • Click on Translation to put a text or search results into another language.
  • Read or upload a document and let Paper Espresso analyze it for you. It will organize the content into a standard academic report format for easy reference: Background and Context > Research Objectives and Hypotheses > Methodology > Results and Findings > Discussion and Interpretation > Contributions to the Field > Structure and Flow > Achievements and Significance > Limitations and Future Work.

SEMANTIC SCHOLAR

  • SCIENTIFIC LITERATURE SEARCH ENGINE - finding semantically similar research papers.
  • "A free, AI-powered research tool for scientific literature." <https://www.semanticscholar.org/>. Login is required in order to use all functions.
  • Over 200 million papers from all fields of science; this data has also served as a wellspring for the development of other AI-driven tools.

The 4000+ results can be sorted by Fields of Study, Date Range, Author, Journals & Conferences

Save the papers in your Library folder. The Research Feeds will recommend similar papers based on the items saved.

Example - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Total Citations: 22,438 [Note: these numbers were gathered when this guide was created]. Highly Influential Citations: 2,001. Background Citations: 6,109. Methods Citations: 3,273. Results Citations: 385.

Semantic Reader "Semantic Reader is an augmented reader with the potential to revolutionize scientific reading by making it more accessible and richly contextual." It "uses artificial intelligence to understand a document's structure and merge it with the Semantic Scholar's academic corpus, providing detailed information in context via tooltips and other overlays." <https://www.semanticscholar.org/product/semantic-reader>

Skim Papers Faster "Find key points of this paper using automatically highlighted overlays. Available in beta on limited papers for desktop devices only." <https://www.semanticscholar.org/product/semantic-reader>. Press the pen icon to activate the highlights.

TLDRs (Too Long; Didn't Read) Try this example. Press the pen icon to reveal the highlighted key points. TLDRs "are super-short summaries of the main objective and results of a scientific paper generated using expert background knowledge and the latest GPT-3 style NLP techniques. This new feature is available in beta for nearly 60 million papers in computer science, biology, and medicine..." <https://www.semanticscholar.org/product/tldr>

ELICIT

  • AI-POWERED RESEARCH ASSISTANT - finding papers, filtering study types, automating research flow, brainstorming, summarizing and more.
  • "Elicit is a research assistant using language models like GPT-3 to automate parts of researchers' workflows. Currently, the main workflow in Elicit is Literature Review. If you ask a question, Elicit will show relevant papers and summaries of key information about those papers in an easy-to-use table." <https://elicit.org/faq#what-is-elicit>. Find answers from 175 million papers. FAQs
  • Example - How do mental health interventions vary by age group? / Fish oil and depression. Results [login required]: (1) Summary of the top 4 papers > Papers #1-#4 with title, abstract, citations, DOI, and PDF. (2) Table view: Abstract / Interventions / Outcomes measured / Number of participants. (3) Relevant studies and citations. (4) Click on Search for Paper Information to find: metadata about Sources (SJR etc.) > Population (age etc.) > Intervention (duration etc.) > Results (outcome, limitations etc.) > Methodology (detailed study design etc.). (5) Export as BIB or CSV.
  • How to Search / Extract Data - Enter a research question > Workflow: Searching > Summarizing 8 papers > A summary of the top 4 papers > Final answers. Each result shows its citation counts, DOI, and a full-text link to the Semantic Scholar website for more information such as background citations, methods citations, related papers and more. List of Concepts search - e.g., adult learning motivation; the results will present a list of related concepts. Extract data from a PDF file - upload a paper and let Elicit extract data for you.
  • Export Results - Various ways to export results.
  • How to Cite - Includes the elicit.org URL in the citation, for example: Ought; Elicit: The AI Research Assistant; https://elicit.org; accessed xxxx/xx/xx  

CONSENSUS.APP

ACADEMIC SEARCH ENGINE - using AI to find insights in research papers.

"We are a search engine that is designed to accept research questions, find relevant answers within research papers, and synthesize the results using the same language model technology." <https://consensus.app/home/blog/maximize-your-consensus-experience-with-these-best-practices/>

  • Example - Does the death penalty reduce crime? / Fish oil and depression. (1) Extracted & aggregated findings from relevant papers. (2) Results may include AIMS, DESIGN, PARTICIPANTS, FINDINGS, or other methodological or report components. (3) Summaries and full text.
  • How to Search - Direct questions: Does the death penalty reduce crime? Relationship between two concepts: Fish oil and depression / Does X cause Y? Open-ended concepts: effects of immigration on local economics. Tips and search examples from Consensus' Best Practices.
  • Synthesize (beta) / Consensus Meter - When the AI recognizes certain types of research questions, this functionality may be activated. It examines a selection of studies and provides a summary along with a Consensus Meter illustrating their collective agreement. Try this search: Is white rice linked to diabetes? After analyzing 10 papers, the Consensus Meter reveals the following outcomes: 70% indicate a positive association, 20% suggest a possible connection, and 10% indicate no link.

SCITE.AI

CITATIONS IN CONTEXT

Integrated with Research Solutions. Over 1.2 billion citation statements and metadata from over 181 million papers.

Example prompt: "Write me a paragraph about the impact of climate change on GDP with citations."

How does it work? - "scite uses access to full-text articles and its deep learning model to tell you, for a given publication: how many times it was cited by others; how it was cited by others, by displaying the text where the citation happened from each citing paper; and whether each citation offers supporting or contrasting evidence of the cited claims in the publication of interest, or simply mentions it." <https://help.scite.ai/en-us/article/what-is-scite-1widqmr/>

EXAMPLE of seeing all citations and citation statements in one place

More information: Scite: A smart citation index that displays the context of citations and classifies their intent using deep learning  

Scholar GPT - By awesomegpts.ai

  • "Enhance research with 200M+ resources and built-in critical reading skills. Access Google Scholar, PubMed, JSTOR, Arxiv, and more, effortlessly."
  • Dialogue prompts suggested on the page: Find the latest research about AI. / I'll provide a research paper link; please analyze it. / I will upload a PDF paper; use critical skills to read it. / Type "LS" to list my built-in critical reading list.
  • To access it, in your ChatGPT account select Explore GPTs > Scholar GPT.
  • GPT-3.5 by OpenAI. Knowledge cutoff date is September 2021.
  • Input/output length - ChatGPT-3.5 allows a maximum token limit of 4096 tokens. According to ChatGPT, "On average, a token in English is roughly equivalent to 4 bytes or characters. English words are typically around 5 characters long. This means that, very roughly, you could fit around 800 to 1000 English words within 4096 tokens." (See the sketch after this list.)
  • According to ChatGPT, the generated responses are non-deterministic by default, so if you run the searches again and get slightly or very different results, it's likely due to this factor.
  • ChatGPT may generate non-existent references.
  • According to this study <https://arxiv.org/ftp/arxiv/papers/2304/2304.06794.pdf>, "ChatGPT cites the most-cited articles and journals, relying solely on Google Scholar's citation counts" within the field of environmental science.
  • See a case of using ChatGPT-4o to extract sections from a PDF file below.
  • Example - "INTERVIEW WITH CHATGPT" as a research method & teaching tool. Some researchers have begun to use this approach to obtain their research data. Try this Google Scholar search: "interview with ChatGPT", or see the two articles below: (1) Chatting about ChatGPT: How may AI and GPT impact academia and libraries? B. D. Lund & T. Wang, Library Hi Tech News, 2023. (2) An interview with ChatGPT: Discussing artificial intelligence in teaching, research, and practice, G. Scaringi & M. Loche, 2023.
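Using the rough 4-characters-per-token rule quoted above, you can sanity-check whether a prompt is likely to fit in the 4096-token window before pasting it in. A quick illustrative Python sketch (the heuristic is an approximation, not an official tokenizer):

```python
# Minimal sketch: rough token estimate using the ~4 characters/token heuristic.
TOKEN_LIMIT = 4096

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude approximation for English text

prompt = "Summarize the following abstract... " * 200
print(estimate_tokens(prompt), "tokens (estimate); limit:", TOKEN_LIMIT)
```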

  • Increased risk of sudden infant death syndrome (SIDS)
  • Increased risk of premature birth
  • Increased risk of low birth weight
  • Increased risk of respiratory problems in newborns
  • Increased risk of respiratory problems in infants exposed to secondhand smoke
  • Increased risk of developing asthma and other respiratory illnesses later in life for infants exposed to secondhand smoke

[Note: ChatGPT may generate non-existent references or false knowledge. To find out why large language models hallucinate, check out this Wikipedia article: Hallucination (artificial intelligence), and this blog post: A Gentle Introduction to Hallucinations in Large Language Models by Adrian Tam.]

  • Infant death
  • Neonatal mortality (referring specifically to deaths within the first 28 days of life)
  • Perinatal mortality (referring to deaths occurring during the period from 22 completed weeks of gestation to 7 completed days after birth)
  • Early childhood mortality (referring to deaths occurring within the first five years of life)
  • Child mortality (referring to deaths occurring before the age of 18)

[Press the Regenerate button to get more.]

  • Example - RELATED WORDS: What are the related words of infant mortality? Neonatal mortality, post-neonatal mortality, stillbirths, low birth weight, malnutrition, infectious diseases, vaccination, maternal health, access to healthcare, poverty, social inequality, sanitation, hygiene, water quality, childbirth complications, congenital abnormalities, birth defects, maternal age, under-five mortality, child mortality, perinatal mortality, preterm birth, sudden infant death syndrome (SIDS), maternal mortality, postnatal care, immunization, clean water and sanitation, congenital anomalies, infant health disparities, infant mortality rate (IMR), infant survival.

(("Infant Death"[Mesh] OR "Sudden Infant Death Syndrome"[Mesh] OR "Infant Mortality"[Mesh] OR "Perinatal Mortality"[Mesh] OR "Neonatal Mortality"[Mesh] OR "Fetal Mortality"[Mesh] OR "Infant Death" OR "Sudden Infant Death Syndrome" OR "Infant Mortality" OR "Perinatal Mortality" OR "Neonatal Mortality" OR "Fetal Mortality") AND ("Risk Factors"[Mesh] OR "Etiology"[Mesh] OR "Prevention and Control"[Mesh] OR "Interventions"[Mesh] OR "Public Health"[Mesh] OR "Health Disparities"[Mesh] OR "Healthcare Disparities"[Mesh] OR "Poverty"[Mesh] OR "Malnutrition"[Mesh] OR "Infection"[Mesh] OR "Congenital Abnormalities"[Mesh] OR "Prematurity"[Mesh] OR "Low Birth Weight"[Mesh] OR "Maternal Education"[Mesh] OR "Maternal Health"[Mesh] OR "Risk Factors" OR "Etiology" OR "Prevention and Control" OR "Interventions" OR "Public Health" OR "Health Disparities" OR "Healthcare Disparities" OR "Poverty" OR "Malnutrition" OR "Infection" OR "Congenital Abnormalities" OR "Prematurity" OR "Low Birth Weight" OR "Maternal Education" OR "Maternal Health"))

Comments and Suggestions:

  • #1 - This search string may not work on all database platforms. For example, Medline (OVID) may not be able to run it; Medline & CINAHL (EBSCO) can run it without any problems; PubMed will run it but will indicate which MeSH terms are not indexed. See PubMed's Search History and Details under Advanced.
  • #2 - This is just one possible combination. You may need to adjust it based on your task and the databases you are using.
  • #3 - It might be easier to break the task into three parts: ask for synonyms > ask for thesauri terms > ask it to combine them.

Examples - DATABASE RECOMMENDATIONS

Health Science - Which databases to use to find articles on the EFFECTS OF MATERNAL SMOKING ON INFANT DEATH?

  • PubMed: This is a database of biomedical literature from the National Library of Medicine. It includes articles from medical journals and other sources on a wide range of health-related topics.
  • Embase: This is a database of biomedical literature that covers more international and pharmaceutical content than PubMed. It includes articles from medical journals and other sources on a wide range of health-related topics.
  • Cochrane Library: This is a collection of databases that provides systematic reviews and meta-analyses of healthcare interventions, including interventions related to maternal smoking and infant death.
  • Scopus: This is a multidisciplinary database that covers research from a wide range of fields, including health sciences. It includes articles from scientific journals, books, and conference proceedings.
  • Web of Science: This is a database that covers research from a wide range of fields, including health sciences. It includes articles from scientific journals, books, and conference proceedings.
  • CINAHL: This is a database of nursing and allied health literature, including articles on maternal smoking and infant death.

[Note: The response above was copied and pasted directly from ChatGPT. No editing was added.]

Education - I am doing a meta-analysis on K-12 STEM education. What databases should I use?

  • ERIC: This is a database of education research and information, including articles from journals, reports, and conference papers. It is a commonly used database for educational research.
  • Education Source: This is a comprehensive database of education research and information, including articles from journals, books, and conference papers.
  • Web of Science: This is a multidisciplinary database that covers research from a wide range of fields, including education. It includes articles from scientific journals, books, and conference proceedings.
  • Scopus: This is another multidisciplinary database that covers research from a wide range of fields, including education. It includes articles from scientific journals, books, and conference proceedings.
  • PsycINFO: This is a database of psychology research, including articles from journals, books, and dissertations. It can be useful for research on STEM education that includes psychological factors.
  • Cochrane Library: This is a collection of databases that provides systematic reviews and meta-analyses of healthcare interventions, including interventions related to education.

[Author's note: You could also ask for the grey literature sources related to the topic above.]

Example - ACT-AS-AN-EXPERT PROMPT: Here's an alternative way to consolidate all the questions into a single consultation by using the popular act-as-an-expert prompt, aka the system message prompt: "I am going to conduct a systematic review on games and mathematics related to K-12 education. I want you to act as a systematic review expert. I will ask you some questions related to my research topic and you will advise me on how to do it."

[You may ask one question at a time] My first question is "which databases should I use?" My second question is "what are the relevant subject terms, descriptors and key words?" My third question is "how to do data extraction?" ..... Please note that the above prompts are merely for illustrative purposes; actual questions may contain more specific instructions. As an example, for the first question you could ask for guidance on how to identify relevant publications, specifically a list of the primary databases and grey literature sources related to your topic.

Example - GREY LITERATURE: Please show me all the resources that offer grey literature for conducting research in the field of education.

  • ERIC (Education Resources Information Center): https://eric.ed.gov/
  • OpenGrey: http://www.opengrey.eu/
  • British Education Index: https://www.ebscohost.com/academic/british-education-index
  • WorldCat: https://www.worldcat.org/
  • ProQuest Dissertations & Theses Global: https://www.proquest.com/products-services/pqdtglobal.html
  • EThOS (Electronic Theses Online Service): https://ethos.bl.uk/
  • Institutional repositories of universities and academic institutions (search for specific institution repositories)
  • American Educational Research Association (AERA): https://www.aera.net/
  • National Education Association (NEA): https://www.nea.org/
  • Association for Supervision and Curriculum Development (ASCD): http://www.ascd.org/
  • RAND Corporation: https://www.rand.org/
  • National Center for Education Research (NCER): https://ies.ed.gov/ncer/
  • National Center for Education Statistics (NCES): https://nces.ed.gov/
  • European Educational Research Association (EERA): https://www.eera-ecer.de/
  • UNESCO International Institute for Educational Planning (IIEP): http://www.iiep.unesco.org/
  • Organization for Economic Cooperation and Development (OECD): https://www.oecd.org/education/
  • Education Development Center (EDC): https://www.edc.org/
  • National Institute for Educational Policy Research (NIER) - Japan: https://www.nier.go.jp/
  • Australian Council for Educational Research (ACER): https://www.acer.org/
  • Canadian Education Association (CEA): https://www.cea-ace.ca/

My topic is about STEM education. Which grey literature databases should I use?

  • STEM Education Research Centre (SERC) Grey Literature Database
  • National STEM Education Research Repository (NSERR)
  • National Science Digital Library (NSDL)
  • STEM Education Resource Center (SERC) GrayLit Network
  • STEM Education Grey Literature Library

See other Comparison Examples under Bing AI.

Examples - ZERO-SHOT, ONE-SHOT & FEW-SHOT PROMPTING for literature review: see the examples on the left panel, How to Fine Tune Prompts.

Example - Using ChatGPT-4o to extract sections from a PDF

Prompt: "I am going to upload a PDF file. Please extract its research objectives, hypotheses, methodology and findings for me, and then save it into a text file."

ChatGPT-4o: "Yes, you can upload the PDF..... The uploaded document appears to be a paper titled '......' I'll begin by searching for and extracting the relevant sections. Extracting sections: Research Objectives and Hypotheses, Methodology, Results and Findings. Let me start with the extraction. The information has been saved into a text file. You can download it using the link below: Download Extracted_Info.txt"

Bing Chat / Copilot

  • Bing Chat has become Copilot.
  • CONVERSATIONAL LLM 4.0 + SEARCH ENGINE. Connects to the Internet.
  • Three conversation modes: More Creative, More Balanced or More Precise.
  • Comparison - Prompt: My topic is about psychology. Which grey literature databases should I use? More Precise: PsycEXTRA, NIH Library, and links to library guides, .gov and .org websites. More Balanced: PsycEXTRA and three main research guides that mention psychology grey literature sources. More Creative: PsycEXTRA, LILACS, and WorldCat, plus a definition of grey literature and suggestions on searching professional association websites, newsletters, magazines, standards and guidelines, and other sources that may contain relevant information for your topic.
  • Some examples below were tested in Bing Chat only.  
  • Example - factors of infant mortality: Infant mortality refers to the death of infants before their first birthday. The leading causes of infant mortality include birth asphyxia, pneumonia, congenital malformations, term birth complications (such as abnormal presentation of the fetus, umbilical cord prolapse, or prolonged labor), neonatal infection, diarrhea, malaria, measles, and malnutrition1. Many situational factors lead to infant mortality, such as the pregnant woman's level of education, environmental conditions, political infrastructure, and level of medical support1. In the United States, a primary determinant of infant mortality risk is infant birth weight, with lower birth weights increasing the risk of infant mortality. The determinants of low birth weight include socioeconomic, psychological, behavioral and environmental factors2.
  • Example - PEER REVIEWED SOURCES: I need to find some peer reviewed articles about infant mortality. The answer includes 5 articles. The first one is "Causes of death and infant mortality rates among full-term births in the United States between 2010 and 2012: An observational study" by Neha Bairoliya and Günther Fink1. A click on "Tell me more about the first article" leads to this summary: "The first article is titled 'Causes of death and infant mortality rates among full-term births in the United States between 2010 and 2012: An observational study' and was published in PLOS Medicine on March 20, 2018. The authors are Neha Bairoliya and Günther Fink. The study used linked birth and death records for the period 2010-2012 to identify the state- and cause-specific burden of infant mortality among full-term infants (born at 37-42 weeks of gestation) in the United States. The researchers found that the full-term infant mortality rate (FTIMR) was 2.2 per 1,000 live births overall and ranged between 1.29 (Connecticut) and 3.77 (Mississippi) at the state level. Sudden unexpected death in infancy (SUDI) accounted for 43% of FTIM; congenital malformations and perinatal conditions accounted for 31% and 11.3% of FTIM, respectively. The largest mortality differentials between states with good and states with poor FTIMR were found for SUDI, with particularly large risk differentials for deaths due to sudden infant death syndrome (SIDS) and suffocation." More searches: Please find five empirical studies focusing on the factors influencing adult learners engaged in online education. / Please locate five scholarly articles on stress management among college students site:pubmed.ncbi.nlm.nih.gov


The ERIC Thesaurus contains nearly 12,000 terms, including more than 4,500 descriptors and 7,000 synonyms1. You can use the term "Academic Achievement" or its synonyms such as "Educational Achievement", "Student Achievement", "Academic Performance", "Academic Success", "Learning Achievement" and more. [Note: The follow-up responses include some additional sources, for example: https://www.powerthesaurus.org/academic_achievement and https://eric.ed.gov/?ti=Achievement+Gap including https://eric.ed.gov/?ti=Academic+Achievement]

My topic is about psychology.  Which grey literature databases should I use?

ChatGPT:

  • PsycEXTRA: This database is produced by the American Psychological Association (APA) and includes grey literature such as conference papers, white papers, and government reports.
  • OpenGrey: This database provides access to grey literature produced in Europe, including reports, theses, and conference papers.
  • Grey Literature Report: This database is produced by The New York Academy of Medicine and includes reports, conference papers, and other grey literature on health and biomedical topics, including psychology.
  • ProQuest Dissertations and Theses Global: This database provides access to graduate-level research, including theses and dissertations related to psychology.
  • Google Scholar: While not exclusively a grey literature database, Google Scholar can be a useful tool for finding conference papers and other grey literature related to psychology.
  • National Technical Information Service (NTIS): This database provides access to technical reports produced by U.S. government agencies, including those related to psychology.

Bing AI [asked in the Precise mode]: For psychology, some recommended grey literature databases include PsycEXTRA and PsyArXiv. PsycEXTRA is a database of grey literature material relating to psychology, behavioral sciences and health. PsyArXiv is a psychology archive designed to facilitate rapid dissemination of psychological research.

What other models have been developed based on SERVQUAL?

ChatGPT:

  • SERVPERF: This is a modified version of SERVQUAL that measures service performance instead of service quality. It was developed by Cronin and Taylor (1992) and is based on the same five dimensions as SERVQUAL.
  • SERVPERF-GAP: This model, developed by Caruana and Ewing (2006), extends the SERVQUAL model to include three additional gaps that can occur when customers evaluate service quality.
  • E-SERVQUAL: This model, developed by Parasuraman et al. (2005), adapts the SERVQUAL scale to measure the quality of online services.
  • SERVCO: This is a modified version of SERVQUAL that includes a sixth dimension, "cost," in addition to the five original dimensions. It was developed by Sureshchandar et al. (2002) to address the importance of cost in service quality evaluation.
  • SERVQ-HC: This is a modified version of SERVQUAL that is specifically designed for evaluating the quality of health care services. It was developed by Cronin et al. (2000) and includes 20 items across five dimensions.

Bing AI: The Nordic model of service quality2. The hierarchical model of service quality3. The American Customer Satisfaction Index (ACSI)4.

Bing AI Sidebar - CHAT, THINK, & WRITE

  • Examples of Using AI sidebar to Recap Google Scholar findings

  • Chat - If you highlight a study in the search results of Google Scholar, the AI sidebar will ask whether you want to send the selected text to it. If you choose to do so, you can then select either EXPLAIN or EXPAND to learn more about the chosen article (EXPAND provides more detailed information). You can then click on any of the follow-up links to delve further into the selected article or topic. If the text of the chosen study is written in a foreign language, the AI sidebar can also provide an explanation of the text in English. Another option for extracting the key points from a selected article is to submit the URL of its PDF.
  • Insights - Use the Insights button to get more ideas about the topic.
  • Compose - Tell the Edge sidebar your topic and let its Compose tool generate a draft for you.
  • Google Scholar Button - See also the Plugins and Extensions page for how to perform a combined search using Google, Google Scholar, and ChatGPT.

Example - FINDING SIMILAR ARTICLES: Find similar articles like this one - [then paste in the abstract]. Note - use this prompt with Bing Chat, since real-time web info is required.

Examples - SUMMARIZE + REFERENCE: Use a basic prompt like "Please summarize..." or add a REFERENCE TEXT that provides the necessary information about what you want it to focus on or how the summary or resolution should look. The reference could be a quote, an excerpt, a partial sentence, a conversation, an abstract, a code snippet and so on. Formula: specify a role > assign a task > provide a reference text. Example #1 - Scholarly Journal Quote

STEP 1 Prompt: As a research assistant specializing in renewable energy, your task is to locate comparable research studies based on the provided abstract from an article: "This paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and replacement of fossil fuels by various sources of renewable energy."

Bing Chat then found 3 documents: the ScienceDirect paper where the abstract came from and two other documents from un.org and frontiersin.org respectively.

STEP 2 Prompt: Please synthesize the findings of the three studies provided above.

Bing Chat: "Sure! Here is a synthesis of the findings from the three studies:

All three studies emphasize the importance of renewable energy in achieving sustainable development and mitigating climate change. They all discuss various strategies and actions that can be taken to increase the use of renewable energy sources and reduce reliance on fossil fuels.

Henrik Lund’s paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and .....

The United Nations outlines five critical actions the world needs to prioritize now to transform our energy systems and speed up the shift to renewable energy: Make renewable energy technology a global public good, Improve global access to components and raw materials......

“Sustainable Energy Transition for Renewable and Low Carbon Grid” discusses that a sustainable global electricity transition will entail increased use of renewable energy sources particularly wind and solar, nuclear energy as a low carbon energy source, electrification of transport and.......

In summary, these studies suggest that transitioning to renewable energy sources is crucial for sustainable development and mitigating climate change. This can be achieved through various strategies such as increasing energy efficiency, improving access to renewable technology and materials, leveling the playing field for renewables, ending fossil fuel subsidies, and redirecting investments towards renewables."

Example #2 - You are an expert researcher; given the following article, please discuss its key findings. Reference: How Large Language Models are Transforming Machine-Paraphrased Plagiarism at https://arxiv.org/abs/2210.03568. This is another way to provide the REFERENCE TEXT. As a side note, specifying a role (using a persona) matters, as it can affect the quality of the responses obtained. Just imagine if the designated role were "You are a high school student...".

Example #3 - Please read this article and summarize it for me - "Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making" - and then find 5 similar studies and summarize each of them as well.


CONNECTED PAPERS

  • RELATED STUDIES
  • Uses visual graphs and other views to show relevant studies. The database is connected to the Semantic Scholar Paper Corpus, which has compiled hundreds of millions of published papers across many science and social science fields.
  • See more details about how it works.
  • Example - SERVQUAL: enter the term, then click SELECT A PAPER TO BUILD THE GRAPH and pick the first paper. Results: (1) Origin paper - "SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality" - plus connected papers, with links to Connected Papers / PDF / DOI or publisher's site / Semantic Scholar / Google Scholar. (2) A graph showing the origin paper and its connected papers, with links to the major sources. (3) Links to Prior Works and Derivative Works, alongside the detailed citations that Semantic Scholar records for the origin SERVQUAL paper.
  • How to Search - Search by work title, or enter some keywords about a topic.
  • Download / Save - Download your saved items in Bib (BibTeX) format; a sample entry is sketched below.
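For reference, "Bib format" is BibTeX, the plain-text bibliographic format most reference managers accept. An exported entry for the SERVQUAL origin paper would look roughly like this (the entry key and field layout are illustrative):

```
@article{parasuraman1988servqual,
  author  = {Parasuraman, A. and Zeithaml, Valarie A. and Berry, Leonard L.},
  title   = {SERVQUAL: A multiple-item scale for measuring consumer
             perceptions of service quality},
  journal = {Journal of Retailing},
  year    = {1988}
}
```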

PAPER DIGEST

  • SUMMARY & SYNTHESIS
  • " Knowledge graph & natural language processing platform tailored for technology domain . <"https://www.paperdigest.org/> Areas covered: technology, biology/health, all sciences areas, business, humanities/ social sciences, patents and grants ...


  • LITERATURE REVIEW - https://www.paperdigest.org/review/ | Systematic Review - https://www.paperdigest.org/literature-review/
  • SEARCH CONSOLE - https://www.paperdigest.org/search/ | Conference Digest - NIPS conference papers...
  • Tech AI Tools: Literature Review | Literature Search | Question Answering | Text Summarization
  • Expert AI Tools: Org AI | Expert Search | Executive Search, Reviewer Search, Patent Lawyer Search...

Daily paper digest / Conference papers digest / Best paper digest / Topic tracking. Under Account, enter the subject areas you are interested in; the Daily Digest will then surface studies based on those interests.

RESEARCH RABBIT

  • CITATION-BASED MAPPING: SIMILAR / EARLY / LATER WORKS
  • " 100s of millions of academic articles and covers more than 90%+ of materials that can be found in major databases used by academic institutions (such as Scopus, Web of Science, and others) ." See its FAQs page. Search algorithms were borrowed from NIH and Semantic Scholar.

The default “Untitled Collection” gathers your search history, based on which Research Rabbit will send you recommendations for three types of related results (Similar Works, Earlier Works, Later Works), viewable in graphs such as Network, Timeline, and First Authors.

Zotero integration: importing and exporting between these two apps.

  • Example - SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality [Login required] Try it to see its Similar Works, Earlier Works and Later Works or other documents.
  • Export Results - Findings can be exported in BibTeX, RIS, or CSV format.

CITING GENERATIVE AI

  • How to cite ChatGPT [APA] - https://apastyle.apa.org/blog/how-to-cite-chatgpt
  • How to Cite Generative AI [MLA] - https://style.mla.org/citing-generative-ai/
  • Citation Guide - Citing ChatGPT and Other Generative AI (University of Queensland, Australia)

The Sheridan Libraries


Introduction

Literature reviews take time. Here is some general information to know before you start.

  •  VIDEO -- This video is a great overview of the entire process.  (2020; North Carolina State University Libraries) --The transcript is included --This is for everyone; ignore the mention of "graduate students" --9.5 minutes, and every second is important  
  • OVERVIEW -- Read this page from Purdue's OWL. It's not long, and gives some tips to fill in what you just learned from the video.  
  • NOT A RESEARCH ARTICLE -- A literature review follows a different style, format, and structure from a research article.  
 
Literature Review: Reports on the work of others. Its purpose is to examine and evaluate previous literature.

Research Article: Reports on original research. Its purpose is to test a hypothesis and/or make an argument, and it may include a short literature review to introduce the subject.


Academia Insider

AI Tools To Automate Your Literature Review: Which To Use?

Researching and writing a literature review can be a daunting task, but AI-powered tools like Semantic Scholar, Research Rabbit, and Scite are revolutionising this process.

These tools, leveraging artificial intelligence, automate the arduous task of sifting through mountains of academic papers, extracting key information, and summarizing relevant research.

In this article, let's explore how AI research assistants and advanced search engines streamline the literature review process, making it faster, more efficient, and more thorough for conducting scientific research. Discover the best AI tools to transform your literature review work.

  • Semantic Scholar – AI-powered search engine; contextual analysis of papers; advanced citation analysis
  • Research Rabbit – Mapping connections between papers; identifying key research areas; exploration of research topics
  • Scite – Smart Citations (supporting, contrasting, mentioning); citation contexts with text excerpts; detailed citation metrics and analytics
  • Connected Papers – Visualise the research landscape; identify key papers and trends; explore paper interconnections
  • ChatGPT – Prompt-based research assistance; automated summarisation; quick insights on complex topics

Why Use AI Tools To Write a Literature Review?

The vast sea of academic papers can be daunting. This is where AI tools like ChatGPT, Semantic Scholar, and Research Rabbit become indispensable.


AI tools for literature review are adept at sifting through millions of papers across various databases like PubMed and Google Scholar. They use AI algorithms and natural language processing (NLP) to identify relevant research articles, providing a summary of key information.

Some AI-powered research assistants, such as Scite, use AI to scan through research data, offering insights into the supporting or contrasting evidence within peer-reviewed papers. This can be a valuable feature for finding specific details in a systematic literature review.

These tools also streamline the literature search by automating the process of citation and reference management. Tools like Connected Papers and Research Rabbit are adept at evaluating and summarizing relevant academic papers, saving you time and effort.

They can extract key information from PDF documents and research articles, making it fast and easy to organize your findings.

AI-powered tools for literature review also help in creating a thorough literature search, essential for:

  • Systematic literature reviews
  • Meta-analyses, and
  • Identifying research gaps.

They can elicit information from a variety of sources, providing a comprehensive view of your research question. This approach ensures that your literature review is not only rich in content but also diverse in perspectives.

Best AI Tools To Write a Literature Review

If you are looking to explore AI tools that can help you write a literature review, consider the following. They are revolutionizing the way literature reviews are conducted:

Connected Papers

As the name suggests, Connected Papers focuses on showing how various academic papers are interconnected. This visual mapping tool helps you visualize the research landscape around your topic, making it easier to:

  • Identify key papers,
  • Identify potential gaps in your literature review, and
  • Understand emerging trends in scientific research.

ChatGPT

While not a traditional literature review tool, ChatGPT is revolutionising academic research by offering prompt-based assistance.

This chatbot-like AI tool can help automate parts of the literature review process, such as generating research questions or providing quick summaries of complex topics.

ChatGPT’s AI-driven insights can be a valuable starting point for deeper exploration into specific research areas. The key is to provide it with the right input, and then to give the right prompts.

Research Rabbit

This tool uses AI to scan a variety of sources, including peer-reviewed research data. It’s particularly useful for conducting systematic literature reviews, as it can evaluate and summarize relevant academic papers, highlighting supporting or contrasting evidence.


Its key features include mapping out connections between research papers, identifying key papers in a research area, and helping to understand how various research topics are interconnected.

Research Rabbit also excels in reference management, a vital component of academic writing.

Scite

Scite is an innovative tool that uses AI technologies to provide a new layer of insight into scientific literature.

It evaluates the credibility of research by analyzing citation contexts, helping you to identify the most impactful and relevant papers for your research question.

Scite’s most distinctive feature is its Smart Citations. Unlike traditional citations that merely count how often a paper is cited, Smart Citations provide context by showing how a paper has been cited. For each citation, Scite shows you whether the citing paper provides:

  • Supporting,
  • Contrasting, or
  • Mentioning evidence for the claims made in the original paper.

As a result, Scite is invaluable for conducting thorough literature reviews, especially for systematic reviews and meta-analyses.

Semantic Scholar

This AI-powered research assistant stands out in its ability to sift through vast databases like PubMed and Google Scholar.

It uses advanced natural language processing (NLP) and AI algorithms to extract key information from millions of academic papers, offering concise summaries and identifying relevant research articles.

What sets Semantic Scholar apart is its AI algorithms. It analyses millions of academic publications, extracting key information such as:

  • Figures, 
  • Tables, as well as 
  • Contextual relevance of each paper.

This enables the tool to provide highly relevant search results, summaries, and insights that are tailored to your specific interests and research needs.

What To Watch Out For When Writing Literature Review With AI Tools?

When you’re writing a literature review with AI tools, you’re stepping into a world where technology meets academic rigor. To keep the two in balance, watch out for the following as you use AI in your research:

Bias  

AI algorithms, including those used in literature search tools, can inherit biases from their training data.

This might skew the search results towards more popular or frequently cited papers, potentially overlooking lesser-known yet significant research.

Ensure you’re accessing a variety of sources to maintain a balanced perspective. When possible, always look into newer papers on your research area, as these may not have been discovered by your AI tool yet.

Citation Accuracy

While tools like Scite provide advanced citation analysis, the accuracy of AI in identifying and interpreting citations is not infallible.


Always verify the citations and references manually, especially when dealing with complex literature or less digitised sources.

You can also use multiple AI tools to check for citation accuracy. For example, rather than relying on Scite alone, you can cross-check with reference management software such as RefWorks or Zotero.

Contextual Understanding

AI tools, efficient as they are, might not fully grasp the nuanced context of your research question.

Tools like ChatGPT and Semantic Scholar provide summaries and identify relevant papers using natural language processing (NLP), but they may not always align perfectly with your specific research focus.

Always double-check that the AI’s interpretation matches your intended research angle. This means actually spending time reading articles within the research area and becoming familiar with it.

Depth of Analysis

AI tools can automate the initial stages of your literature review by quickly sifting through databases like PubMed and Google Scholar to find relevant papers.

However, they might not evaluate the depth and subtlety of arguments in academic writing as thoroughly as a human researcher would.

It’s crucial to supplement AI findings with your own detailed analysis. Do not rely on AI completely; roll up your sleeves and work through the analysis yourself as well.

Over-reliance on AI

There’s a temptation to overly rely on AI for streamlining every aspect of the literature review process.


Remember, AI is a tool to assist, not replace, your critical thinking and scholarly diligence.

Use AI to enhance your workflow but maintain an active role in evaluating and synthesising research data. See AI as a tool, not your replacement. You need to be there to pilot the AI tool, to ensure it is doing its job properly.

AI Tools For Literature Review: Keep Up With Times

Leveraging AI tools for your literature review is a game-changer. Tools like Semantic Scholar, Research Rabbit, and Scite, powered by advanced AI algorithms, not only automate the search for relevant papers but also provide critical summaries and evaluations.

They enhance the literature review process in academic research, making it more efficient and comprehensive.

As AI continues to evolve, these tools become indispensable for researchers, helping to streamline workflows, organize findings, and extract key insights from a vast array of scientific literature with unparalleled ease and speed.


Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of Universities. Although having secured funding for his own research, he left academia to help others with his YouTube channel all about the inner workings of academia and how to make it work for you.

Thank you for visiting Academia Insider.

We are here to help you navigate Academia as painlessly as possible. We are supported by our readers and by visiting you are helping us earn a small amount through ads and affiliate revenue - Thank you!

2024 © Academia Insider

All-in-one Literature Review Software

Start your free trial.

Free MAXQDA trial for Windows and Mac

Your trial will end automatically after 14 days and will not renew. There is no need for cancelation.

MAXQDA The All-in-one Literature Review Software

MAXQDA is the best choice for a comprehensive literature review. It works with a wide range of data types and offers powerful tools for literature review, such as reference management, qualitative, vocabulary, text analysis tools, and more.

Literature Review Software MAXQDA Interface

As your all-in-one literature review software, MAXQDA can be used to manage your entire research project. Easily import data from texts, interviews, focus groups, PDFs, web pages, spreadsheets, articles, e-books, and even social media data. Connect the reference management system of your choice with MAXQDA to easily import bibliographic data. Organize your data in groups, link relevant quotes to each other, keep track of your literature summaries, and share and compare work with your team members. Your project file stays flexible and you can expand and refine your category system as you go to suit your research.

Developed by and for researchers – since 1989


Having used several qualitative data analysis software programs, there is no doubt in my mind that MAXQDA has advantages over all the others. In addition to its remarkable analytical features for harnessing data, MAXQDA’s stellar customer service, online tutorials, and global learning community make it a user friendly and top-notch product.

Sally S. Cohen – NYU Rory Meyers College of Nursing

Literature Review is Faster and Smarter with MAXQDA

All-in-one Literature Review Software MAXQDA: Import of documents

Easily import your literature review data

With literature review software like MAXQDA, you can easily import bibliographic data from reference management programs for your literature review. MAXQDA works with any reference management program that can export its database in RIS format, a standard format for bibliographic information. Like MAXQDA, these reference managers use project files containing all collected bibliographic information, such as author, title, links to websites, keywords, abstracts, and other information. In addition, you can easily import the corresponding full texts. Upon import, all documents are automatically pre-coded to facilitate your literature review at a later stage.
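To make the import concrete, here is what a minimal RIS record looks like; each tagged line carries one piece of bibliographic information, and the values below are invented for illustration:

```
TY  - JOUR
AU  - Doe, Jane
TI  - An example article title
AB  - A short abstract would appear here.
KW  - literature review
UR  - https://example.com/article
PY  - 2023
ER  -
```

Reference managers export one such record per source, and the tagged fields (author, title, abstract, keywords, and so on) are what become document variables on import.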

Capture your ideas while analyzing your literature

Great ideas will often occur to you while you’re doing your literature review. Using MAXQDA as your literature review software, you can create memos to store your ideas, such as research questions and objectives, or you can use memos for paraphrasing passages into your own words. By attaching memos like post-it notes to text passages, texts, document groups, images, audio/video clips, and of course codes, you can easily retrieve them at a later stage. Particularly useful for literature reviews are free memos written during the course of work from which passages can be copied and inserted into the final text.

Using Literature Review Software MAXQDA to Organize Your Qualitative Data: Memo Tools

Find concepts important to your generated literature review

When generating a literature review you might need to analyze a large amount of text. Luckily MAXQDA as the #1 literature review software offers Text Search tools that allow you to explore your documents without reading or coding them first. Automatically search for keywords (or dictionaries of keywords), such as important concepts for your literature review, and automatically code them with just a few clicks. Document variables that were automatically created during the import of your bibliographic information can be used for searching and retrieving certain text segments. MAXQDA’s powerful Coding Query allows you to analyze the combination of activated codes in different ways.
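The dictionary-based search-and-autocode idea is easy to picture in code. The sketch below is a generic Python illustration of the technique (keyword dictionaries mapped to codes), not MAXQDA's actual implementation:

```python
# Generic sketch of dictionary-based keyword coding; not MAXQDA's implementation.
import re

# A "dictionary" maps a code name to the keywords that trigger it.
code_dictionary = {
    "service_quality": ["servqual", "service quality", "customer satisfaction"],
    "methodology": ["survey", "questionnaire", "factor analysis"],
}

def autocode(text, dictionary):
    """Return, for each code, the sentences in which one of its keywords occurs."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = {code: [] for code in dictionary}
    for sentence in sentences:
        lowered = sentence.lower()
        for code, keywords in dictionary.items():
            if any(keyword in lowered for keyword in keywords):
                hits[code].append(sentence)
    return hits

sample = "The survey used the SERVQUAL instrument. Results confirmed five dimensions."
print(autocode(sample, code_dictionary))
```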

Aggregate your literature review

When conducting a literature review you can easily get lost. But with MAXQDA as your literature review software, you will never lose track of the bigger picture. Among other tools, MAXQDA’s overview and summary tables are especially useful for aggregating your literature review results. MAXQDA offers overview tables for almost everything, codes, memos, coded segments, links, and so on. With MAXQDA literature review tools you can create compressed summaries of sources that can be effectively compared and represented, and with just one click you can easily export your overview and summary tables and integrate them into your literature review report.

Visual text exploration with MAXQDA's Word Tree

Powerful and easy-to-use literature review tools

Quantitative aspects can also be relevant when conducting a literature review analysis. Using MAXQDA as your literature review software enables you to employ a vast range of procedures for the quantitative evaluation of your material. You can sort sources according to document variables, compare amounts with frequency tables and charts, and much more. Make sure you don’t miss the word frequency tools of MAXQDA’s add-on module for quantitative content analysis. Included are tools for visual text exploration, content analysis, vocabulary analysis, dictionary-based analysis, and more that facilitate the quantitative analysis of terms and their semantic contexts.

Visualize your literature review

As an all-in-one literature review software, MAXQDA offers a variety of visual tools that are tailor-made for qualitative research and literature reviews. Create stunning visualizations to analyze your material. Of course, you can export your visualizations in various formats to enrich your literature review analysis report. Work with word clouds to explore the central themes of a text and key terms that are used, create charts to easily compare the occurrences of concepts and important keywords, or make use of the graphical representation possibilities of MAXMaps, which in particular permit the creation of concept maps. Thanks to the interactive connection between your visualizations with your MAXQDA data, you’ll never lose sight of the big picture.

Data visualization with Literature Review Software MAXQDA

AI Assist: literature review software meets AI

AI Assist – your virtual research assistant – supports your literature review with various tools. AI Assist simplifies your work by automatically analyzing and summarizing elements of your research project and by generating suggestions for subcodes. No matter which AI tool you use – you can customize your results to suit your needs.

Free tutorials and guides on literature review

MAXQDA offers a variety of free learning resources for literature review, making it easy for both beginners and advanced users to learn how to use the software. From free video tutorials and webinars to step-by-step guides and sample projects, these resources provide a wealth of information to help you understand the features and functionality of MAXQDA for literature review. For beginners, the software’s user-friendly interface and comprehensive help center make it easy to get started with your data analysis, while advanced users will appreciate the detailed guides and tutorials that cover more complex features and techniques. Whether you’re just starting out or are an experienced researcher, MAXQDA’s free learning resources will help you get the most out of your literature review.

Free Tutorials for Literature Review Software MAXQDA

Free MAXQDA Trial for Windows and Mac

FAQ: Literature Review Software

Literature review software is a tool designed to help researchers efficiently manage and analyze the existing body of literature relevant to their research topic. MAXQDA, a versatile qualitative data analysis tool, can be instrumental in this process.

Literature review software, like MAXQDA, typically includes features such as data import and organization, coding and categorization, advanced search capabilities, data visualization tools, and collaboration features. These features facilitate the systematic review and analysis of relevant literature.

Literature review software, including MAXQDA, can assist in qualitative data interpretation by enabling researchers to organize, code, and categorize relevant literature. This organized data can then be analyzed to identify trends, patterns, and themes, helping researchers draw meaningful insights from the literature they’ve reviewed.

Yes, literature review software like MAXQDA is suitable for researchers of all levels of experience. It offers user-friendly interfaces and extensive support resources, making it accessible to beginners while providing advanced features that cater to the needs of experienced researchers.

Getting started with literature review software, such as MAXQDA, typically involves downloading and installing the software, importing your relevant literature, and exploring the available features. Many software providers offer tutorials and documentation to help users get started quickly.

For students, MAXQDA can be an excellent literature review software choice. Its user-friendly interface, comprehensive feature set, and educational discounts make it a valuable tool for students conducting literature reviews as part of their academic research.

MAXQDA is available for both Windows and Mac users, making it a suitable choice for Mac users looking for literature review software. It offers a consistent and feature-rich experience on Mac operating systems.

When it comes to literature review software, MAXQDA is widely regarded as one of the best choices. Its robust feature set, user-friendly interface, and versatility make it a top pick for researchers conducting literature reviews.

Yes, literature reviews can be conducted without software. However, using literature review software like MAXQDA can significantly streamline and enhance the process by providing tools for efficient data management, analysis, and visualization.



Rayyan

COLLABORATE ON YOUR REVIEWS WITH ANYONE, ANYWHERE, ANYTIME

Rayyan for Students

Save precious time and maximize your productivity with a Rayyan membership. Receive training, priority support, and access features to complete your systematic reviews efficiently.

Rayyan for Librarians

Rayyan Teams+ makes your job easier. It includes VIP Support, AI-powered in-app help, and powerful tools to create, share and organize systematic reviews, review teams, searches, and full-texts.

Rayyan for Researchers


Rayyan makes collaborative systematic reviews faster, easier, and more convenient. Training, VIP support, and access to new features maximize your productivity. Get started now!

Over 1 billion reference articles reviewed by research teams, and counting...

Intelligent, scalable and intuitive.

Rayyan understands language, learns from your decisions and helps you work quickly through even your largest systematic literature reviews.

WATCH A TUTORIAL NOW

Solutions for Organizations and Businesses


Rayyan Enterprise and Rayyan Teams+ make it faster, easier and more convenient for you to manage your research process across your organization.

  • Accelerate your research across your team or organization and save valuable researcher time.
  • Build and preserve institutional assets, including literature searches, systematic reviews, and full-text articles.
  • Onboard team members quickly with access to group trainings for beginners and experts.
  • Receive priority support to stay productive when questions arise.
  • SCHEDULE A DEMO
  • LEARN MORE ABOUT RAYYAN TEAMS+

RAYYAN SYSTEMATIC LITERATURE REVIEW OVERVIEW


LEARN ABOUT RAYYAN’S PICO HIGHLIGHTS AND FILTERS


Join now to learn why Rayyan is already trusted by more than 500,000 researchers

Individual Plans / Team Plans

For early career researchers just getting started with research.

Free forever

  • 3 Active Reviews
  • Invite Unlimited Reviewers
  • Import Directly from Mendeley
  • Industry Leading De-Duplication
  • 5-Star Relevance Ranking
  • Advanced Filtration Facets
  • Mobile App Access
  • 100 Decisions on Mobile App
  • Standard Support
  • Revoke Reviewer
  • Online Training
  • PICO Highlights & Filters
  • PRISMA (Beta)
  • Auto-Resolver 
  • Multiple Teams & Management Roles
  • Monitor & Manage Users, Searches, Reviews, Full Texts
  • Onboarding and Regular Training

Professional

For researchers who want more tools for research acceleration.

per month, billed annually

  • Unlimited Active Reviews
  • Unlimited Decisions on Mobile App
  • Priority Support
  • Auto-Resolver

For currently enrolled students with valid student ID.

per month, billed quarterly

For a team that wants professional licenses for all members.

per month, per user, billed annually

  • Single Team
  • High Priority Support

For teams that want support and advanced tools for members.

  • Multiple Teams
  • Management Roles

For organizations who want access to all of their members.

Annual Subscription

Contact Sales

  • Organizational Ownership
  • For an organization or a company
  • Access to all the premium features such as PICO Filters, Auto-Resolver, PRISMA and Mobile App
  • Store and Reuse Searches and Full Texts
  • A management console to view, organize and manage users, teams, review projects, searches and full texts
  • Highest tier of support – Support via email, chat and AI-powered in-app help
  • GDPR Compliant
  • Single Sign-On
  • API Integration
  • Training for Experts
  • Training Sessions Students Each Semester
  • More options for secure access control


Great usability and functionality. Rayyan has saved me countless hours. I even received timely feedback from staff when I did not understand the capabilities of the system, and was pleasantly surprised with the time they dedicated to my problem. Thanks again!

This is a great piece of software. It has made the independent viewing process so much quicker. The whole thing is very intuitive.

Rayyan makes ordering articles and extracting data very easy. A great tool for undertaking literature and systematic reviews!

Excellent interface for title and abstract screening. It also helps to keep track of the reasons for exclusion from the review, and in a blinded manner too.

Rayyan is a fantastic tool to save time and improve systematic reviews!!! It has changed my life as a researcher!!! thanks

Easy to use, friendly, has everything you need for cooperative work on the systematic review.

Rayyan makes life easy in every way when conducting a systematic review and it is easy to use.

Screen, analyse and summarise articles faster with Scholarcy

Try it for free, subscribe today.

Scholarcy is used by students around the world to read and analyse research papers in less time. Upload your articles to Scholarcy to:

  • Cut your reading time in half and feel more in control
  • Identify the papers that matter in less time
  • Jump straight to the most important information
  • Compare a collection of articles more easily
With Scholarcy Library, you can import all your papers and search results, and quickly screen them with the automatically generated ‘key takeaway’ headline.

Take the stress out of your literature review

While there are lots of tools that help you discover articles for your research, how do you analyse and synthesise the information from all of those papers?

3 easy ways to import articles

Scholarcy lets you quickly import your articles for screening and analysing.

Import papers in PDF, Word, HTML and LaTeX format

Import search results from PubMed or any service that provides results in RIS or BibTeX format

Import publisher RSS feeds

Build your literature matrix in minutes

Our Excel export feature generates a literature synthesis matrix for you, so you can compare papers side by side for their study sizes, key contributions, limitations, and more.

Export literature-review ready data in Excel, Word, RIS or Markdown format

Integrates with your reference manager and ‘second brain’ tools such as Roam, Notion and Obsidian

Carrying out a systematic review?

Scholarcy breaks papers down into our unique summary flashcard format.

The Study subjects and analysis tab shows you study population, intervention, outcome, and statistical analyses from the paper.

And the Excel synthesis matrix generated shows the key methods and quantitative findings of each paper, side by side.

Build a knowledge graph from your papers

If you’re a fan of the latest generation of knowledge management tools such as Roam or Obsidian, you’ll love our Markdown export.

This creates a knowledge graph of all the papers in your library by connecting them via key terms, methods, and shared citations.
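As a concrete, hypothetical example, an exported flashcard might look like the Markdown below; the [[double-bracket]] links are what tools like Roam and Obsidian turn into edges of the knowledge graph:

```
# SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality

**Key takeaway:** Proposes a multiple-item instrument for measuring perceived service quality.

## Key terms
[[service quality]] · [[consumer perceptions]] · [[survey instrument]]

## Shared citations
[[Churchill 1979 - A paradigm for developing better measures]]
```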

What People Are Saying

“Quick processing time, successfully summarized important points.”
“It’s really good for case study analysis, thank you for this too.”
“I love this website so much it has made my research a lot easier thanks!”
“The instant feedback I get from this tool is amazing.”
“Thank you for making my life easier.”

Revolutionize Your Research with Jenni AI

Literature Review Generator

Welcome to Jenni AI, the ultimate tool for researchers and students. Our AI Literature Review Generator is designed to assist you in creating comprehensive, high-quality literature reviews, enhancing your academic and research endeavors. Say goodbye to writer's block and hello to seamless, efficient literature review creation.


Loved by over 3 million academics


Endorsed by Academics from Leading Institutions

Join the Community of Scholars Who Trust Jenni AI


Elevate Your Research Toolkit

Discover the Game-Changing Features of Jenni AI for Literature Reviews

Advanced AI Algorithms

Jenni AI utilizes cutting-edge AI technology to analyze and suggest relevant literature, helping you stay on top of current research trends.



Idea Generation

Overcome writer's block with AI-generated prompts and ideas that align with your research topic, helping to expand and deepen your review.

Citation Assistance

Get help with proper citation formats to maintain academic integrity and attribute sources correctly.


Our Pledge to Academic Integrity

At Jenni AI, we are deeply committed to the principles of academic integrity. We understand the importance of honesty, transparency, and ethical conduct in the academic community. Our tool is designed not just to assist in your research, but to do so in a way that respects and upholds these fundamental values.

How it Works

Start by creating your account on Jenni AI. The sign-up process is quick and user-friendly.

Define Your Research Scope

Enter the topic of your literature review to guide Jenni AI’s focus.

Citation Guidance

Receive assistance in citing sources correctly, maintaining the academic standard.

Easy Export

Export your literature review to LaTeX, HTML, or .docx formats

Interact with AI-Powered Suggestions

Use Jenni AI’s suggestions to structure your literature review, organizing it into coherent sections.

What Our Users Say

Discover how Jenni AI has made a difference in the lives of academics just like you


I thought AI writing was useless. Then I found Jenni AI, the AI-powered assistant for academic writing. It turned out to be much more advanced than I ever could have imagined. Jenni AI = ChatGPT x 10.


Charlie Cuddy

@sonofgorkhali

Love this use of AI to assist with, not replace, writing! Keep crushing it @Davidjpark96 💪


Waqar Younas, PhD

@waqaryofficial

4/9 Jenni AI's Outline Builder is a game-changer for organizing your thoughts and structuring your content. Create detailed outlines effortlessly, ensuring your writing is clear and coherent. #OutlineBuilder #WritingTools #JenniAI


I started with Jenni-who & Jenni-what. But now I can't write without Jenni. I love Jenni AI and am amazed to see how far Jenni has come. Kudos to http://Jenni.AI team.


Jenni is perfect for writing research docs, SOPs, study projects presentations 👌🏽


Stéphane Prud'homme

http://jenni.ai is awesome and super useful! thanks to @Davidjpark96 and @whoisjenniai fyi @Phd_jeu @DoctoralStories @WriteThatPhD

Frequently asked questions

What exactly does Jenni AI do?

Is Jenni AI suitable for all academic disciplines?

Is there a trial period or a free version available?

How does Jenni AI help with writer's block?

Can Jenni AI write my literature review for me?

How often is the literature database updated in Jenni AI?

How user-friendly is Jenni AI for those not familiar with AI tools?

Jenni AI: Standing Out From the Competition

In a sea of online proofreaders, Jenni AI stands out. Here’s how we compare to other tools on the market:

Advanced AI-Powered Assistance
  • Jenni AI: Uses state-of-the-art AI technology to provide relevant literature suggestions and structural guidance.
  • Competitors: May rely on simpler algorithms, resulting in less dynamic or comprehensive support.

User-Friendly Interface
  • Jenni AI: Designed for ease of use, making it accessible for users with varying levels of tech proficiency.
  • Competitors: Interfaces can be complex or less intuitive, posing a challenge for some users.

Transparent and Flexible Pricing
  • Jenni AI: Offers a free trial and clear, flexible pricing plans suitable for different needs.
  • Competitors: Pricing structures can be opaque or inflexible, with fewer user options.

Unparalleled Customization
  • Jenni AI: Offers highly personalized suggestions and adapts to your specific research needs over time.
  • Competitors: Often provide generic suggestions that may not align closely with individual research topics.

Comprehensive Literature Access
  • Jenni AI: Provides access to a vast and up-to-date range of academic literature, ensuring comprehensive research coverage.
  • Competitors: Some may have limited access to current or diverse research materials, restricting the scope of literature reviews.

Ready to Transform Your Research Process?

Don't wait to elevate your research. Sign up for Jenni AI today and discover a smarter, more efficient way to handle your academic literature reviews.

DistillerSR Logo

DistillerSR: Literature Review Software

Smarter reviews: trusted evidence.

Securely automate every stage of your literature review to produce evidence-based research faster, more accurately, and more transparently at scale.

Software Built for Every Stage of a Literature Review

DistillerSR automates the management of literature collection, screening, and assessment using AI and intelligent workflows. From a systematic literature review to a rapid review to a living review, DistillerSR makes any project simpler to manage and configure to produce transparent, audit-ready, and compliant results.

Literature Review Lifecycle, DistillerSR

Broader, Automated Literature Searches

Search more efficiently with DistillerSR’s integrations with data providers, such as PubMed, automatic review updates, and AI-powered duplicate detection and removal.

Search Screen Shot, DistillerSR

PubMed Integration

Automatic Review Updates

Automatically import newly published references, always keeping literature reviews up-to-date with DistillerSR LitConnect.

Duplicate Detection

Detect and remove duplicate citations preventing skew and bias caused by studies included more than once.
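Duplicate detection in review software generally rests on normalising citation metadata and comparing records. The toy Python sketch below shows the general idea only; it is not DistillerSR's actual algorithm:

```python
# Toy illustration of duplicate-citation detection via normalised metadata.
# This shows the general idea, not DistillerSR's actual algorithm.
import re

def normalise(reference):
    """Reduce a reference to a comparable key: cleaned title plus year."""
    title = re.sub(r"[^a-z0-9 ]", "", reference["title"].lower())
    return (" ".join(title.split()), reference.get("year"))

def deduplicate(references):
    seen, unique = set(), []
    for ref in references:
        key = normalise(ref)
        if key not in seen:
            seen.add(key)
            unique.append(ref)
    return unique

refs = [
    {"title": "Renewable energy strategies for sustainable development", "year": 2007},
    {"title": "Renewable Energy Strategies for Sustainable Development.", "year": 2007},
]
print(len(deduplicate(refs)))  # -> 1: the two records collapse to one
```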

Faster, More Effective Reference Screening

Reduce your screening burden by 60% with DistillerSR. Start working on later stages of your review sooner by finding relevant references faster and addressing conflicts more easily.

Screening Screen Shot. DistillerSR

AI-Powered Screening

Conflict Resolution

Automatically identifies conflicts and disagreements between literature reviewers for easy resolution.

AI Quality Check

Increase the thoroughness of your literature review by having AI double-check your exclusion decisions and validate your categorization of records with the help of DistillerSR AI Classifiers software module.

Cost-Effective Access to Full-Text Documents

Ensure your literature review is always up-to-date with DistillerSR’s direct connections to full-text data sources, all the while lowering overall subscription costs.

DistillerSR Full Text Retrieval Screenshot


Open Access Integrations

Automatically search for and upload full-text documents from PMC, and link directly to source material through DOI.org.
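DOI-based linking works because https://doi.org/<DOI> permanently redirects to an article's current landing page. A quick sketch, using the DOI of the systematic-review guidance article reproduced later on this page:

```python
# Resolve a DOI to its current publisher URL via the doi.org redirect.
import urllib.request

doi = "10.1186/s13643-023-02255-9"  # DOI of the guidance article below
with urllib.request.urlopen(f"https://doi.org/{doi}") as response:
    print(response.geturl())  # prints the resolved publisher landing page
```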

Copyright Compliant Bulk Search

Retrieve full-text articles for the lowest possible cost through Article Galaxy .

Ad-Hoc Document Retrieval

Leverage existing RightFind and Article Galaxy subscriptions, the open access Unpaywall plugin, and internal libraries to access copyright compliant documents.

Simple Yet Powerful Data-Extraction

Simplify data extraction through templates and configurable forms. Extract data easily with in-form validations and calculations, and easily capture repeating, complex data sets.

DistillerSR Data Extraction Screenshot

Cross-Review, Data Reuse

Prevent duplication of effort across your organization and reduce data extraction times with DistillerSR CuratorCR by easily reusing data across literature reviews.

Capturing Complex Output

Easily capture complex data, such as a variable number of time points across multiple studies in an easy-to-understand and ready-to-analyze way.

Smart Forms

Cut down on literature review data cleaning, data conversions, and effective measure calculations with input validation and built-in form calculations.

Automatic and Configurable Reporting

PRISMA 2020 Chart Example, DistillerSR

Customizable Reporting Engine

Build reports and schedule automated email updates to stakeholders. Integrate your data with third-party reporting applications and databases with DistillerSR API .

Auto-Generated Reports

Comprehensive Audit Trail

Automatically keeps track of every entry and decision providing transparency and reproducibility in your literature review.

Easy-to-use Literature Review Project Management

Facilitate project management throughout the literature review process with real-time user and project metric monitoring, reusable configurations, and granular user permissions.

DistillerSR Project Management Screenshot


Real-Time User and Project Metrics

Monitor teams and literature review progress in real-time, improving management and quality oversight into projects.

Repeatable, Configurable Processes

Secure Literature Reviews

Single sign-on (SSO) and fully configurable user roles and permissions simplify the literature reviewer experience while also ensuring data integrity and security .

I can’t think of a way to do reviews faster than with DistillerSR. Being able to monitor progress and collaborate with team members, no matter where they are located makes my life a lot easier.

DistillerSR Case Studies

Stryker Case Study, DistillerSR

Maple Health Group


University of Florida

DistillerSR Frequently Asked Questions

What types of reviews can be done with DistillerSR: systematic reviews, living reviews, rapid reviews, or clinical evaluation report (CER) literature reviews?

Literature reviews can be a very simple or highly complex process, and literature reviews can use a variety of methods for finding, assessing, and presenting evidence. We describe DistillerSR as a literature review software because it supports all types of reviews, from systematic reviews to rapid reviews, and from living reviews to CER literature reviews.

DistillerSR software is used by over 300 customers in many different industries to support their evidence generation initiatives, from guideline development to HEOR analysis to CERs to post-market surveillance (PMS) and pharmacovigilance.

What are some of DistillerSR’s capabilities that support conducting systematic reviews?

Systematic reviews are the gold standard of literature reviews that aim to identify and screen all evidence relating to a specific research question. DistillerSR facilitates systematic reviews through a configurable, transparent, reproducible process that makes it easy to view the provenance of every cell of data.

DistillerSR was originally designed to support systematic reviews. The software handles dual reviewer screening, conflict resolution, capturing exclusion reasons while you work, risk of bias assessments, duplicate detection, multiple database searches, and reporting templates such as PRISMA. DistillerSR can readily scale for systematic reviews of all sizes, supporting more than 700,000 references per project through a robust enterprise-grade technical architecture. Using software like DistillerSR makes conducting systematic reviews easier to manage and configure to produce transparent evidence-based research faster and more accurately.

How does DistillerSR support clinical evaluation reports (CERs) and performance evaluation reports (PERs) program management?

The new European Union Medical Device Regulation (EU-MDR) and In-Vitro Device Regulation (EU-IVDR) require medical device manufacturers to increase the frequency, traceability, and overall documentation for CERs in the MDR program or PERs in the IVDR counterpart. Literature review software is an ideal tool to help you comply with these regulations.

DistillerSR automates literature reviews to enable a more transparent, repeatable, and auditable process, enabling manufacturers to create and implement a standard framework for literature reviews. This framework for conducting literature reviews can then be incorporated into all CER and PER program management plans consistently across every product, division, and research group.

How can DistillerSR help rapid reviews?

DistillerSR AI is ideal to speed up the rapid review process without compromising on quality. The AI-powered screening enables you to find references faster by continuously reordering relevant references, resulting in accelerated screening. The AI can also double-check your exclusion decisions to ensure relevant references are not left out of the rapid review.

DistillerSR title screening functionality enables you to quickly perform title screening on large numbers of references.

Does DistillerSR support living reviews?

The short answer is yes. DistillerSR has multiple capabilities that automate living systematic reviews, such as automatically importing newly published references into your projects and notifying reviewers that there’s screening to do. You can also put reports on an automated schedule so you’re never caught off guard when important new data is collected. These capabilities help ensure the latest research is included in your living systematic review and that your review is up-to-date.

How can DistillerSR help ensure the accuracy of Literature and Systematic reviews?

The quality of systematic reviews is foundational to evidence-based research. However, quality may be compromised because systematic reviews – by their very nature – are often tedious and repetitive, and prone to human error. Tracking all review activity in systematic review software, like DistillerSR, and making it easy to trace the provenance of every cell of data, delivers total transparency and auditability into the systematic review process. DistillerSR enables reviewers to work on the same project simultaneously without the risk of duplicating work or overwriting each other’s results. Configurable workflow filters ensure that the right references are automatically assigned to the right reviewers, and DistillerSR’s cross-project dashboard allows reviewers to monitor to-do lists for all projects from one place.

Why should I add DistillerSR to my Literature and Systematic Review Toolbox and retire my current spreadsheet solution?

It’s estimated that 90% of spreadsheets contain formula errors and approximately 50% have material defects. These errors, coupled with the time and resources necessary to fix them, adversely impact the management of the systematic review process. DistillerSR software was specifically designed to address the challenges faced by systematic review authors, namely the ever-increasing volume of research to screen and extract, review bottlenecks, and regulatory requirements for auditability and transparency, as well as a tool for managing a remote global workforce. Efficiency, consistency, better collaboration, and quality control are just a few of the benefits you’ll get when you choose DistillerSR’s systematic review process over a manual spreadsheet tool for your reviews.

What is the role of AI in your systematic review process?

DistillerSR AI enables the automation of the logistics-heavy tasks involved in conducting a systematic literature review, such as finding references faster by using AI to continuously reorder references based on relevance. Continuous AI Reprioritization uses machine learning to learn from the references you are including and excluding and automatically reorders the ones you have left to screen, putting the most pertinent references in front of you first. This means that you find included references much more quickly during the screening process. DistillerSR also uses classifiers, which use NLP to classify and process information in the systematic review. DistillerSR can also increase the thoroughness of your systematic review by having AI double-check your exclusion decisions.
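The technique described here, often called screening prioritisation or active learning, can be illustrated with a simple relevance model. The sketch below uses scikit-learn and invented example titles; it demonstrates the general approach, not DistillerSR's proprietary implementation:

```python
# Illustration of screening prioritisation (active learning).
# Generic sketch with invented data; not DistillerSR's code.
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# References already screened by the reviewer (1 = included, 0 = excluded).
screened = [
    ("Renewable energy strategies for sustainable development", 1),
    ("Wind and solar integration in low carbon grids", 1),
    ("A survey of retail customer satisfaction", 0),
]
unscreened = [
    "Biomass energy and fossil fuel replacement policies",
    "Measuring service quality in banking",
]

texts, labels = zip(*screened)
vectoriser = TfidfVectorizer().fit(list(texts) + unscreened)
model = LogisticRegression().fit(vectoriser.transform(texts), labels)

# Reorder the remaining references so likely includes come first.
scores = model.predict_proba(vectoriser.transform(unscreened))[:, 1]
for score, title in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {title}")
```

Each time the reviewer screens a few more references, the model is refit and the remaining queue is reordered; that is what "continuous reprioritization" refers to.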

What about the security and scalability of systematic literature reviews done on DistillerSR?

DistillerSR builds security, scalability, and availability into everything we do, so you can focus on producing evidence-based research faster, more accurately, and more securely with our  systematic review software. We undergo an annual independent third-party audit and certify our products using the American Institute of Certified Public Accountants SOC 2 framework. In terms of scalability, systematic review projects in DistillerSR can easily handle a large number of references; some of our customers have over 700,000 references in their projects.

Do you offer any commitments on the frequency of new product and capability launches?

We pride ourselves on listening to and working with our customers to regularly introduce new capabilities that improve DistillerSR and the systematic review process. We plan on offering two major releases a year in addition to two minor feature enhancements. We notify customers in advance about upcoming releases, host webinars, develop tools and training to introduce the new capabilities, and provide extensive release notes for our reviewers.

I have a unique literature review protocol. Is your software configurable with my literature review data and process?

Configurability is one of the key foundations of DistillerSR software. In fact, with over 300 customers in many different industries, we have yet to see a literature review protocol that our software couldn’t handle. DistillerSR is a professional B2B SaaS company with an exceptional customer success team that will work with you to understand your unique requirements and systematic review process to get you started quickly. Our global support team is available 24/7 to help you.

Still unsure if DistillerSR will meet your systematic literature review requirements?

Adopting a new software is about more than just money. New software is also about commitment and trusting that the new platform will match your systematic review and scalability needs. We have resources to help you in your analysis and decision: check out the systematic review software checklist or the literature review software checklist.

Learn More About DistillerSR


Literature Reviews


What is a Literature Review?

A literature review is a methodical examination of the published literature on a specific topic or research question, aimed at analyzing rather than merely summarizing the scholarly works relevant to your research. It includes literature that offers background on your topic and demonstrates how it aligns with your research question.

What is the Purpose of a Literature Review?

  • To help define the focus of your research topic.
  • To identify existing research in your area of interest, pinpoint gaps in the existing literature, and avoid duplicating previous research.
  • To gain an understanding of past and current research as well as the current developments and controversies in your field of interest.
  • To recognize and assess the strengths and weaknesses of works related to your area of interest.
  • To evaluate the contributions of experts, theoretical approaches, methodologies, results, conclusions, and possible opportunities for future research.

A Literature Review is NOT

  • An annotated bibliography or research paper
  • A collection of broad, unrelated sources
  • Everything that has been written on a particular topic
  • Literature criticism or a book review.

Literature Review vs Annotated Bibliography

A literature review and an annotated bibliography are both tools used to assess and present scholarly research, but they serve different purposes and have distinct formats:

Purpose. Literature Review: provides an examination of a collection of scholarly works as they pertain to a specific topic of interest. Annotated Bibliography: provides a summary of the contents of each example in a collection of scholarly works.

Elements. Literature Review: includes an introduction, body, conclusion, and bibliography, similar to a research paper. Annotated Bibliography: a selection of research and/or scholarly works, each with its own summary.

Construction. Literature Review: sources are logically organized and synthesized to demonstrate the author's understanding of the material. Annotated Bibliography: an alphabetized list of works, each with a complete citation and a brief statement of its main components.

Critical Evaluation. Literature Review: contains a collective critique of a body of work related to a specific topic, assessing the strengths, weaknesses, gaps, and possible future research needs. Annotated Bibliography: any critique focuses on the quality of the research and/or argument found in each scholarly work.

Where Can I Find a Lit Review?

The Literature Review portion of a scholarly article is usually close to the beginning. It often follows, or may be combined with, the introduction. The writer may discuss his or her research question first, or may choose to explain it while surveying previous literature.

If you are lucky, there will be a section heading that includes "literature review". If not, look for the section of the article with the most citations or footnotes.



AI Literature Review Generator

Generate high-quality literature reviews fast with AI.

  • Academic Research: Create a literature review for your thesis, dissertation, or research paper.
  • Professional Research: Conduct a literature review for a project, report, or proposal at work.
  • Content Creation: Write a literature review for a blog post, article, or book.
  • Personal Research: Conduct a literature review to deepen your understanding of a topic of interest.



Guidance to best tools and practices for systematic reviews

Kat Kolaski

1 Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC USA

Lynne Romeiser Logan

2 Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY USA

John P. A. Ioannidis

3 Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA USA

Abstract

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13643-023-02255-9.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years when EBM first began to gain traction until recent times, when thousands of systematic reviews are published monthly [ 3 ], the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometric increase in the number of evidence syntheses being published, a relatively larger pool of unreliable ones is being published today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 – 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 – 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 – 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortiums of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists is required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Guidance for development of evidence syntheses

  • Cochrane (formerly Cochrane Collaboration)
  • JBI (formerly Joanna Briggs Institute)
  • National Institute for Health and Care Excellence (NICE), United Kingdom
  • Scottish Intercollegiate Guidelines Network (SIGN), Scotland
  • Agency for Healthcare Research and Quality (AHRQ), United States

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 – 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods and not copy previous citations or substandard work [ 38 , 39 ]. Similar cautions may extend to automation tools. These have concentrated on evidence searching [ 40 ] and selection, given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19 [ 2 , 42 ], but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 – 48 ]. At this juncture, clarification of the extent to which each of these factors contribute remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or non-intentional misrepresentation or “spin” of the research findings [ 49 – 52 ]. News and social media outlets also tend to reduce conclusions on a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 – 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2 A.

Types of traditional systematic reviews

Review type | Topic assessed | Elements of research question (mnemonic)

Intervention [ , ] | Benefits and harms of interventions used in healthcare. | Population, Intervention, Comparator, Outcome (PICO)

Diagnostic test accuracy [ ] | How well a diagnostic test performs in diagnosing and detecting a particular disease. | Population, Index test(s), Target condition (PIT)

Qualitative, Cochrane [ ] | Questions are designed to improve understanding of intervention complexity, contextual variations, implementation, and stakeholder preferences and experiences. | Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation (SPICE); Sample, Phenomenon of Interest, Design, Evaluation, Research type (SPIDER); Perspective, Setting, Phenomena of interest/Problem, Environment, Comparison (optional), Time/timing, Findings (PerSPECTiF)

Qualitative, JBI [ ] | Questions inform meaningfulness and appropriateness of care and the impact of illness through documentation of stakeholder experiences, preferences, and priorities. | Population, Phenomena of Interest, Context (PICo)

Prognostic [ ] | Probable course or future outcome(s) of people with a health problem. | Population, Intervention (model), Comparator, Outcomes, Timing, Setting (PICOTS)

Etiology and risk [ ] | The relationship (association) between certain factors (eg, genetic, environmental) and the development of a disease or condition or other health outcome. | Population or groups at risk, Exposure(s), associated Outcome(s) (disease, symptom, or health condition of interest), plus the context/location or the time period and length of time when relevant (PEO)

Measurement properties [ , ] | What is the most suitable instrument to measure a construct of interest in a specific study population? | Population, Instrument, Construct, Outcomes (PICO)

Prevalence and incidence [ ] | The frequency, distribution, and determinants of specific factors, health states, or conditions in a defined population (eg, how common is a particular disease or condition in a specific group of individuals?). | Condition of interest (factor, disease, symptom, or health state), the epidemiological indicator used to measure its frequency (prevalence, incidence), and the Population or groups at risk, plus the Context/location and time period where relevant (CoCoPop)

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Evidence syntheses published by Cochrane and JBI

Cochrane a (review type: n, %) | JBI b (review type: n, %)
Intervention: 8572, 96.3 | Effectiveness: 435, 61.5
Diagnostic: 176, 1.9 | Diagnostic Test Accuracy: 9, 1.3
Overview: 64, 0.7 | Umbrella: 4, 0.6
Methodology: 41, 0.45 | Mixed Methods: 2, 0.3
Qualitative: 17, 0.19 | Qualitative: 159, 22.5
Prognostic: 11, 0.12 | Prevalence and Incidence: 6, 0.8
Rapid: 11, 0.12 | Etiology and Risk: 7, 1.0
Prototype c: 8, 0.08 | Measurement Properties: 3, 0.4
 | Economic: 6, 0.6
 | Text and Opinion: 1, 0.14
 | Scoping: 43, 6.0
 | Comprehensive d: 32, 4.5
Total = 8900 | Total = 707

a Data from https://www.cochranelibrary.com/cdsr/reviews . Accessed 17 Sep 2022

b Data obtained via personal email communication on 18 Sep 2022 with Emilie Francis, editorial assistant, JBI Evidence Synthesis

c Includes the following categories: prevalence, scoping, mixed methods, and realist reviews

d This methodology is not supported in the current version of the JBI Manual for Evidence Synthesis
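
The percentage columns are simple shares of each organization's total, which makes the rounded figures quoted in the preceding paragraph (96% and 62%) easy to verify. A quick check in Python using the counts from Table 2.2:

```python
# Verify the intervention-review shares quoted in the text, using the
# Table 2.2 counts: 8572 of 8900 Cochrane reviews and 435 of 707 JBI reviews.
cochrane_share = 8572 / 8900 * 100
jbi_share = 435 / 707 * 100
print(f"Cochrane intervention reviews: {cochrane_share:.1f}%")  # 96.3%
print(f"JBI effectiveness reviews:     {jbi_share:.1f}%")       # 61.5%
```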

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2 B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.

[Fig. 1. Distinguishing types of research evidence]
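
The figure's distinctions are few enough to capture in a small data structure. A minimal sketch, with field and function names of our own invention, that classifies a study along the scheme's decision points:

```python
from dataclasses import dataclass

@dataclass
class Study:
    """A study characterized by the scheme's basic distinctions
    (field names are illustrative, not part of the published figure)."""
    primary: bool          # primary vs. secondary study
    quantitative: bool     # type of data reported
    group_design: bool     # group vs. single-case design
    randomized: bool       # randomized vs. non-randomized

def describe(s: Study) -> str:
    if not s.primary:
        return "secondary study (e.g., systematic review)"
    parts = [
        "quantitative" if s.quantitative else "qualitative",
        "group" if s.group_design else "single-case",
        "randomized" if s.randomized else "non-randomized",
    ]
    return "primary study: " + ", ".join(parts)

print(describe(Study(primary=True, quantitative=True,
                     group_design=True, randomized=False)))
# primary study: quantitative, group, non-randomized
```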

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of intervention (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to separately report their summary estimates [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with the developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from studies misclassified as case series can potentially increase the confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, thus obfuscating the ability to make an overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 – 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no existing methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

Tools specifying standards for systematic reviews with and without meta-analysis

Reporting standards
 Quality of Reporting of Meta-analyses (QUOROM) Statement | Moher 1999 [ ]
 Meta-analyses Of Observational Studies in Epidemiology (MOOSE) | Stroup 2000 [ ]
 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) | Moher 2009 [ ]
 PRISMA 2020 a | Page 2021 [ ]
Methodological quality and risk of bias
 Overview Quality Assessment Questionnaire (OQAQ) | Oxman and Guyatt 1991 [ ]
 Systematic Review Critical Appraisal Sheet | Centre for Evidence-based Medicine 2005 [ ]
 A Measurement Tool to Assess Systematic Reviews (AMSTAR) | Shea 2007 [ ]
 AMSTAR-2 a | Shea 2017 [ ]
 Risk of Bias in Systematic Reviews (ROBIS) a | Whiting 2016 [ ]

a Currently recommended

b Validated tool for systematic reviews of interventions developed for use by authors of overviews or umbrella reviews

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported evidence synthesis may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1 but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake, impact, and limitations are also discussed.

Evaluation of conduct

Development

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2 . Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section— this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].
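
For readers unfamiliar with how a domain-based rating differs from a summed score, the AMSTAR-2 publication describes an overall confidence rating driven by the pattern of weaknesses in critical versus non-critical domains rather than by a total. A minimal sketch of that published scheme, assuming the per-domain judgments have already been made:

```python
# Minimal sketch of the AMSTAR-2 overall confidence rating scheme
# (Shea 2017): the rating depends on the pattern of weaknesses in
# critical vs. non-critical domains, not on a summed score.

def amstar2_confidence(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Map counts of domain weaknesses to an overall confidence rating."""
    if critical_flaws > 1:
        return "critically low"  # more than one critical flaw
    if critical_flaws == 1:
        return "low"             # one critical flaw, +/- non-critical weaknesses
    if noncritical_weaknesses > 1:
        return "moderate"        # no critical flaws, >1 non-critical weakness
    return "high"                # at most one non-critical weakness

print(amstar2_confidence(critical_flaws=0, noncritical_weaknesses=1))  # high
print(amstar2_confidence(critical_flaws=2, noncritical_weaknesses=0))  # critically low
```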

Comparison of AMSTAR-2 and ROBIS

Characteristic | AMSTAR-2 | ROBIS a
Guidance documents | Extensive | Extensive
Review types covered | Intervention | Intervention, diagnostic, etiology, prognostic
Domains | 7 critical, 9 non-critical items | 4 domains
Total number of items | 16 | 29
Response options | Items #1, 3, 5, 6, 10, 13, 14, 16: rated yes or no. Items #2, 4, 7, 8, 9 b: rated yes, partial yes, or no. Items #11 b, 12, 15: rated yes, no, or no meta-analysis conducted. | 24 assessment items: rated yes, probably yes, probably no, no, or no information. 5 items regarding level of concern: rated low, high, or unclear.
Construct | Confidence based on weaknesses in critical domains | Level of concern for risk of bias
Categories | High, moderate, low, critically low | Low, high, unclear

a ROBIS includes an optional first phase to assess the applicability of the review to the research question of interest. The tool may be applicable to other review types in addition to the four specified, although modification of this initial phase will be needed (Personal Communication via email, Penny Whiting, 28 Jan 2022)

b AMSTAR-2 item #9 and #11 require separate responses for RCTs and NRSI

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 – 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.
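
Calibration exercises between appraisers are commonly summarized with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch (the function and the example ratings are illustrative, not drawn from the cited studies):

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two appraisers' responses to the same AMSTAR-2 items (made-up data)
a = ["yes", "no", "yes", "partial yes", "no", "yes"]
b = ["yes", "no", "no", "partial yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```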

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines” and “ROBIS AND clinical practice guidelines”; 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record, given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 – 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate because the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools imposed by them. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect an association of AMSTAR-2 with improved methodological rigor and an association of ROBIS with lower RoB in recent systematic reviews compared to those published before 2017. To our knowledge, this has not yet been demonstrated; however, like reports about the actual uptake of these tools, time will tell. Additional data on user experience is also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

PRISMA extensions

Extension | Abbreviation | Year
PRISMA for systematic reviews with a focus on health equity [ ] | PRISMA-E | 2012
Reporting systematic reviews in journal and conference abstracts [ ] | PRISMA for Abstracts a | 2015; 2020
PRISMA for systematic review protocols [ ] | PRISMA-P | 2015
PRISMA for Network Meta-Analyses [ ] | PRISMA-NMA | 2015
PRISMA for Individual Participant Data [ ] | PRISMA-IPD | 2015
PRISMA for reviews including harms outcomes [ ] | PRISMA-Harms | 2016
PRISMA for diagnostic test accuracy [ ] | PRISMA-DTA | 2018
PRISMA for scoping reviews [ ] | PRISMA-ScR | 2018
PRISMA for acupuncture [ ] | PRISMA-A | 2019
PRISMA for reporting literature searches [ ] | PRISMA-S | 2021

PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

a Note the abstract reporting checklist is now incorporated into PRISMA 2020 [ 93 ]

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require that a PRISMA checklist accompany submissions of systematic review manuscripts. However, the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review, as opposed to merely completing a checklist when submitting to a journal; at that point the review is finished, with its methodological choices, good or bad, already made. Moreover, a PRISMA checklist records how completely each element of review conduct was reported; it does not evaluate the caliber of conduct or performance of a review. Thus, review authors and readers should not assume that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. When considering a manuscript for publication, training in these tools can sensitize peer reviewers and editors to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Systematic review components linked to appraisal with AMSTAR-2 and ROBIS a

Review component | AMSTAR-2 item | ROBIS item | Expectation | Rationale
Methods for study selection | #5 | #2.5 | All three components (study selection, data extraction, RoB assessment) must be done in duplicate, and methods fully described. | Helps to mitigate CoI and bias; also may improve accuracy.
Methods for data extraction | #6 | #3.1 | (see above) | (see above)
Methods for RoB assessment | NA | #3.5 | (see above) | (see above)
Study description | #8 | #3.2 | Research design features, components of research question (eg, PICO), setting, funding sources. | Allows readers to understand the individual studies in detail.
Sources of funding | #10 | NA | Identified for all included studies. | Can reveal CoI or bias.
Publication bias | #15* | #4.5 | Explored, diagrammed, and discussed. | Publication and other selective reporting biases are major threats to the validity of systematic reviews.
Author CoI | #16 | NA | Disclosed, with management strategies described. | If CoI is identified, management strategies must be described to ensure confidence in the review.

CoI conflict of interest, MA meta-analysis, NA not addressed, PICO participant, intervention, comparison, outcome, PRISMA-P Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, RoB risk of bias

a Components shown in bold are chosen for elaboration in Part 4 for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors; and/or 2) the component is evaluated by standards of an AMSTAR-2 “critical” domain

b Critical domains of AMSTAR-2 are indicated by *

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1 ). These prompt authors to consider the specialized methods involved for developing different types of systematic reviews; however, while inclusion of the suggested elements makes a review compliant with a particular review’s methods, it does not necessarily make a research question appropriate. Table 4.2  lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or develop runaway scope allowing them to stray from predefined choices relating to key comparisons and outcomes.

Research question development

Acronym | Meaning
FINER a | feasible, interesting, novel, ethical, and relevant
SMART b | specific, measurable, attainable, relevant, timely
TOPICS + M c | time, outcomes, population, intervention, context, study design, plus (effect) moderators

a Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing clinical research: an epidemiological approach; 4th edn. Lippincott Williams & Wilkins; 2007. p. 14–22

b Doran, GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev. 1981;70:35-6.

c Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51
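
As an illustration of how these elements can be pinned down before scope drifts, the mnemonic components can be captured in a simple structure; all names and field values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """PICO elements of an intervention review question (illustrative)."""
    population: str
    intervention: str
    comparator: str
    outcome: str

    def as_sentence(self) -> str:
        return (f"In {self.population}, does {self.intervention} "
                f"compared with {self.comparator} improve {self.outcome}?")

q = PICOQuestion(
    population="adults with chronic low back pain",
    intervention="supervised exercise therapy",
    comparator="usual care",
    outcome="pain intensity at 12 weeks",
)
print(q.as_sentence())
```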

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

Attainment of AMSTAR standards has been reported to be associated with systematic reviews that have published prospective protocols [ 134 ]. However, completeness of reporting does not seem to differ between reviews with a protocol and those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. The most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires authors register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Options for protocol registration of evidence syntheses

 BMJ Open
 BioMed Central
 JMIR Research Protocols
 World Journal of Meta-analysis
 Cochrane
 JBI
 PROSPERO

 Research Registry (Registry of Systematic Reviews/Meta-Analyses)

 International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY)
 Center for Open Science
 Protocols.io
 Figshare
 Open Science Framework
 Zenodo

a Authors are advised to contact their target journal regarding submission of systematic review protocols

b Registration is restricted to approved review projects

c The JBI registry lists review projects currently underway by JBI-affiliated entities. These records include a review’s title, primary author, research question, and PICO elements. JBI recommends that authors register eligible protocols with PROSPERO

d See Pieper and Rombey [ 137 ] for detailed characteristics of these five registries

e See Pieper and Rombey [ 137 ] for other systematic review data repository options

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and more unbiased) information on harms [ 143 ], and data from non-randomized trials may not necessarily be more real-world-oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses nor provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. The gray literature and a search of trial registries may also reveal important details about topics that would otherwise be missed [ 149 – 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in gray literature and trial registries [ 41 , 151 – 153 ].

Authors should make every attempt to complete their review within one year, as that is the likely viable life of a search. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Different research topics may warrant less of a delay; for example, in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with appraisal of their RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB-2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards for RoB assessment. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and if their importance differs in relationship to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis and synthesis without meta-analysis (Table 4.4). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].

Common methods for quantitative synthesis

Meta-analysis (weighted average of effect estimates)

• Pairwise, using aggregate data a or individual participant data c. Data combined: pairwise comparisons of effect estimates with CIs. Results: overall effect estimate, CI, P value; evaluation of heterogeneity. Display: forest plot b with summary statistic for the average effect estimate.

• Network. Data combined: variable; the interventions are compared directly and indirectly. Results: comparisons of relative effects between any pair of interventions (effect estimates for intervention pairings); summary relative effects for pair-wise comparisons, with evaluations of inconsistency and heterogeneity; treatment rankings (ie, probability that an intervention is among the best options). Display: network diagram or graph, tabular presentations; forest plot, other methods d; rankogram plot.

Synthesis without meta-analysis e

• Summarizing effect estimates from separate studies (without combination that would provide an average effect estimate). Results: range and distribution of observed effects, such as median, interquartile range, range. Display: box-and-whisker plot, bubble plot, forest plot (without summary effect estimate).

• Combining P values. Results: combined P value, number of studies. Display: albatross plot (study sample size against P values per outcome).

• Vote counting by direction of effect (eg, favors intervention over the comparator). Results: proportion of studies with an effect in the direction of interest, CI, P value. Display: harvest plot, effect direction plot.

CI confidence interval (or credible interval, if the analysis is done in a Bayesian framework)

a See text for descriptions of the types of data combined in each of these approaches

b See Additional File 4  for guidance on the structure and presentation of forest plots

c General approach is similar to aggregate data meta-analysis but there are substantial differences relating to data collection and checking and analysis [ 162 ]. This approach to syntheses is applicable to intervention, diagnostic, and prognostic systematic reviews [ 163 ]

d Examples include meta-regression, hierarchical and multivariate approaches [ 164 ]

e In-depth guidance and illustrations of these methods are provided in Chapter 12 of the Cochrane Handbook [ 160 ]
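
For readers unfamiliar with the "combining P values" entry in the table above, Fisher's method is one classical approach. The sketch below uses invented P values and is illustrative only; the Cochrane Handbook guidance cited in footnote e describes when such methods are (and are not) acceptable.

```python
import math

# Invented P values for the same outcome reported by five studies.
p_values = [0.04, 0.20, 0.11, 0.38, 0.07]

# Fisher's method: X = -2 * sum(ln p_i) follows a chi-squared distribution
# with 2k degrees of freedom under the joint null hypothesis.
x = -2 * sum(math.log(p) for p in p_values)
k = len(p_values)

# The chi-squared survival function with an even number of df (here 2k)
# has a closed form, so no statistics library is needed.
combined_p = math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i) for i in range(k))

print(f"X = {x:.2f} on {2 * k} df, combined P = {combined_p:.4f}")
```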

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. If considered carefully, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and estimates of effects. We refer to standard references for a thorough introduction and formal training [ 165 – 167 ].

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4). Aggregate data meta-analysis is the most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4  for details).
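
To make the weighted-average idea concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis, with Cochran's Q and I² as a basic check of heterogeneity. The effect estimates and standard errors are invented for illustration; a real analysis would use established software (eg, the programs mentioned later in this section) and would usually also consider a random-effects model.

```python
import math

# Invented example data: (effect estimate, standard error) for three studies,
# e.g., log risk ratios. Illustrative only; not from any real review.
studies = [(-0.35, 0.12), (-0.20, 0.15), (-0.41, 0.18)]

# Inverse-variance weights: more precise studies contribute more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% CI for the pooled estimate (normal approximation).
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Cochran's Q and I^2: the share of variability beyond what chance alone explains.
q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled estimate {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```

A forest plot is essentially a visual display of these per-study estimates, their CIs, and the pooled summary.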

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for syntheses without meta-analysis involve structured presentations of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4 ) are formally applied; however, it is important to recognize these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
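
The arithmetic behind direction-of-effect vote counting is simple. This sketch, on invented data, produces the quantities listed in Table 4.4: the proportion of studies with an effect in the direction of interest, a 95% CI (Wilson method, one reasonable choice), and a two-sided sign-test P value.

```python
import math

# Invented data: direction of effect in 10 studies
# (True = favors the intervention over the comparator).
directions = [True, True, False, True, True, True, False, True, True, False]

k, n = sum(directions), len(directions)
prop = k / n

# Wilson 95% CI for the proportion of studies favoring the intervention.
z = 1.96
center = (prop + z**2 / (2 * n)) / (1 + z**2 / n)
half = (z / (1 + z**2 / n)) * math.sqrt(prop * (1 - prop) / n + z**2 / (4 * n**2))

# Two-sided sign test: how surprising is a split at least this lopsided
# if effects were equally likely to go either way (p = 0.5)?
extreme = max(k, n - k)
p_value = min(1.0, 2 * sum(math.comb(n, x) * 0.5**n for x in range(extreme, n + 1)))

print(f"{k}/{n} studies favor the intervention ({prop:.0%}; "
      f"95% CI {center - half:.0%} to {center + half:.0%}); sign test P = {p_value:.3f}")
```

Note that this says nothing about the size of the effect, which is one reason the preceding text cautions that such methods have significant limitations for interpreting effectiveness.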

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team’s awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors’ training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ], and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Report of an overall certainty of evidence assessment in a systematic review is an important new reporting standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal of systems used by prominent health care organizations published in 2004 revealed limitations in sensibility, reproducibility, applicability to different questions, and usability by different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 – 186 ] and misleading interpretations of the evidence related to the “spin” systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine it, first introduced GRADE in 2004 [ 188 ]. Currently more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, use of “certainty” of a body of evidence is preferred over the term “quality.” [ 191 ] Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

GRADE criteria for rating certainty of evidence

Rate certainty down for a: risk of bias, imprecision, inconsistency, indirectness, publication bias.

Rate certainty up for b: large magnitude of effect, dose–response gradient, all residual confounding would decrease magnitude of effect (in situations with an effect).

a Applies to randomized studies

b Applies to non-randomized studies

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.

GRADE certainty ratings and their interpretation symbols a

 ⊕  ⊕  ⊕  ⊕ High: We are very confident that the true effect lies close to that of the estimate of the effect
 ⊕  ⊕  ⊕ Moderate: We are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different
 ⊕  ⊕ Low: Our confidence in the effect estimate is limited: the true effect may be substantially different from the estimate of the effect
 ⊕ Very low: We have very little confidence in the effect estimate: the true effect is likely to be substantially different from the estimate of effect

a From the GRADE Handbook [ 192 ]

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.
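
For orientation only, the start-high-or-low-then-adjust flow described above can be caricatured in a few lines of code. This is a deliberate oversimplification under the assumptions stated in the comments: real GRADE ratings are judgments along a continuum, require explicit rationales, and cannot be reduced to arithmetic.

```python
# Oversimplified sketch of GRADE's starting points and adjustments.
# Real assessments involve judgment and written rationales; see Table 5.1.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized: bool, levels_down: int, levels_up: int) -> str:
    """levels_down: total levels subtracted across the five rate-down criteria;
    levels_up: total levels added across the three rate-up criteria."""
    start = 3 if randomized else 1          # RCTs start high; NRSI start low
    score = max(0, min(3, start - levels_down + levels_up))
    return LEVELS[score]

# RCT evidence rated down one level (eg, for imprecision):
print(grade_certainty(randomized=True, levels_down=1, levels_up=0))   # moderate
# NRSI evidence rated up one level (eg, for a large magnitude of effect):
print(grade_certainty(randomized=False, levels_down=0, levels_up=1))  # moderate
```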

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric concepts of certainty of evidence in the GRADE framework are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].
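
Reliability figures such as ~0.3 and ~0.7 are typically chance-corrected agreement statistics. As an illustration (not necessarily the exact statistic used in the cited evaluation), here is an unweighted Cohen's kappa computed on invented certainty ratings from two reviewers.

```python
from collections import Counter

def cohens_kappa(rater1: list[str], rater2: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, given each rater's marginal frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1.keys() | c2.keys()) / n**2
    return (observed - expected) / (1 - expected)

# Invented certainty ratings assigned by two reviewers to eight outcomes.
r1 = ["high", "moderate", "low", "low", "moderate", "very low", "low", "high"]
r2 = ["high", "moderate", "moderate", "low", "low", "very low", "low", "high"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # ~0.65 for these invented data
```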

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure GRADE methods were employed by authors of a systematic review (or developers of a CPG). Table 5.3 lists the criteria the GRADE working group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [ 191 ], which is different from rating overall certainty across outcomes considered in the formulation of recommendations [ 205 ]. Modifications of standard GRADE methods and terminology are discouraged as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [ 206 ].

Criteria for using GRADE in a systematic review a

1. The certainty in the evidence (also known as quality of evidence or confidence in the estimates) should be defined consistently with the definitions used by the GRADE Working Group.
2. Explicit consideration should be given to each of the GRADE domains for assessing the certainty in the evidence (although different terminology may be used).
3. The overall certainty in the evidence should be assessed for each important outcome using four or three categories (such as high, moderate, low and/or very low) and definitions for each category that are consistent with the definitions used by the GRADE Working Group.
4. Evidence summaries … should be used as the basis for judgments about the certainty in the evidence.

a Adapted from the GRADE working group [ 206 ]; this list does not contain the additional criteria that apply to the development of a clinical practice guideline

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.

Other caveats pertain to application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5 B and included in Table 6.3 , which is introduced in the next section.

Concise Guide to best practices for evidence syntheses, version 1.0 a

In summary, for each of the nine types of evidence syntheses it covers, the Concise Guide recommends: conduct guidance from Cochrane and/or JBI; reporting guidance for the protocol (PRISMA-P) and for the completed review (PRISMA 2020, PRISMA-DTA, eMERGe, ENTREQ, PRIOR, or PRISMA-ScR, according to review type), with SWiM for syntheses without meta-analysis; RoB tools for the included primary studies (Cochrane RoB 2 for RCTs; ROBINS-I for NRSI; and, for other primary research, QUADAS-2, QUIPS for prognostic factor reviews, PROBAST for prediction model reviews, the CASP qualitative checklist, the JBI critical appraisal checklists including the checklist for studies reporting prevalence data, or the COSMIN RoB checklist); appraisal of the completed review with AMSTAR-2 or ROBIS, where applicable; and rating of the certainty of evidence with GRADE, an adaptation of GRADE (including suggested criteria for risk factor reviews), or, for qualitative syntheses, CERQual or ConQual (not applicable to scoping reviews).
AMSTAR A MeaSurement Tool to Assess Systematic Reviews, CASP Critical Appraisal Skills Programme, CERQual Confidence in the Evidence from Reviews of Qualitative research, ConQual Establishing Confidence in the output of Qualitative research synthesis, COSMIN COnsensus-based Standards for the selection of health Measurement Instruments, DTA diagnostic test accuracy, eMERGe meta-ethnography reporting guidance, ENTREQ enhancing transparency in reporting the synthesis of qualitative research, GRADE Grading of Recommendations Assessment, Development and Evaluation, MA meta-analysis, NRSI non-randomized studies of interventions, P protocol, PRIOR Preferred Reporting Items for Overviews of Reviews, PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PROBAST Prediction model Risk Of Bias ASsessment Tool, QUADAS quality assessment of studies of diagnostic accuracy included in systematic reviews, QUIPS Quality In Prognosis Studies, RCT randomized controlled trial, RoB risk of bias, ROBINS-I Risk Of Bias In Non-randomised Studies of Interventions, ROBIS Risk of Bias in Systematic Reviews, ScR scoping review, SWiM synthesis without meta-analysis

a Superscript numbers represent citations provided in the main reference list. Additional File 6 lists links to available online resources for the methods and tools included in the Concise Guide

b The MECIR manual [ 30 ] provides Cochrane’s specific standards for both reporting and conduct of intervention systematic reviews and protocols

c Editorial and peer reviewers can evaluate completeness of reporting in submitted manuscripts using these tools. Authors may be required to submit a self-reported checklist for the applicable tools

d The decision flowchart described by Flemming and colleagues [ 223 ] is recommended for guidance on how to choose the best approach to reporting for qualitative reviews

e SWiM was developed for intervention studies reporting quantitative data. However, if there is not a more directly relevant reporting guideline, SWiM may prompt reviewers to consider the important details to report. (Personal Communication via email, Mhairi Campbell, 14 Dec 2022)

f JBI recommends their own tools for the critical appraisal of various quantitative primary study designs included in systematic reviews of intervention effectiveness, prevalence and incidence, and etiology and risk as well as for the critical appraisal of systematic reviews included in umbrella reviews. However, except for the JBI Checklists for studies reporting prevalence data and qualitative research, the development, validity, and reliability of these tools are not well documented

g Studies that are not RCTs or NRSI require tools developed specifically to evaluate their design features. Examples include single case experimental design [ 155 , 156 ] and case reports and series [ 82 ]

h The evaluation of methodological quality of studies included in a synthesis of qualitative research is debatable [ 224 ]. Authors may select a tool appropriate for the type of qualitative synthesis methodology employed. The CASP Qualitative Checklist [ 218 ] is an example of a published, commonly used tool that focuses on assessment of the methodological strengths and limitations of qualitative studies. The JBI Critical Appraisal Checklist for Qualitative Research [ 219 ] is recommended for reviews using a meta-aggregative approach

i Consider including risk of bias assessment of included studies if this information is relevant to the research question; however, scoping reviews do not include an assessment of the overall certainty of a body of evidence

j Guidance available from the GRADE working group [ 225 , 226 ]; also recommend consultation with the Cochrane diagnostic methods group

k Guidance available from the GRADE working group [ 227 ]; also recommend consultation with Cochrane prognostic methods group

l Used for syntheses in reviews with a meta-aggregative approach [ 224 ]

m Chapter 5 in the JBI Manual offers guidance on how to adapt GRADE to prevalence and incidence reviews [ 69 ]

n Janiaud and colleagues suggest criteria for evaluating evidence certainty for meta-analyses of non-randomized studies evaluating risk factors [ 228 ]

o The COSMIN user manual provides details on how to apply GRADE in systematic reviews of measurement properties [ 229 ]

The expected culmination of a systematic review should be a rating of overall certainty of a body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments for outcomes reported in systematic reviews of interventions and can be adapted for other types of reviews. This represents the initial step in the process of making recommendations based on evidence syntheses. Peer reviewers should ensure authors meet the minimal criteria for supporting the GRADE approach when reviewing any evidence synthesis that reports certainty ratings derived using GRADE. Authors and peer reviewers of evidence syntheses unfamiliar with GRADE are encouraged to seek formal training and take advantage of the resources available on the GRADE website [ 211 , 212 ].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors who may disregard current standards. When such a systematic review is published or included in a CPG, clinicians and stakeholders tend to believe that it is trustworthy. A vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted is thus supported. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for inclusion of Parts 2 through 5 in this guidance document. These sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the context of development and evaluation of evidence syntheses is problematic for authors, peer reviewers and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1 ) defined in the PRISMA 2020 statement [ 93 ]. In addition, we have identified several problematic expressions and nomenclature. In Table 6.2 , we compile suggestions for preferred terms less likely to be misinterpreted.

Terms relevant to the reporting of health care–related evidence syntheses a

Systematic review: A review that uses explicit, systematic methods to collate and synthesize findings of studies that address a clearly formulated question.
Statistical synthesis: The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect.
Meta-analysis of effect estimates: A statistical technique used to synthesize results when study effect estimates and their variances are available, yielding a quantitative summary of results.
Outcome: An event or measurement collected for participants in a study (such as quality of life, mortality).
Result: The combination of a point estimate (such as a mean difference, risk ratio or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome.
Report: A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information.
Record: The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.
Study: An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses.

a Reproduced from Page and colleagues [ 93 ]

Terminology suggestions for health care–related evidence syntheses

Preferred terms, with potentially problematic alternatives in parentheses:

• Evidence synthesis with meta-analysis, or systematic review with meta-analysis (rather than simply “meta-analysis”)
• Overview or umbrella review (rather than “systematic review of systematic reviews,” “review of reviews,” or “meta-review”)
• Randomized (rather than “experimental”)
• Non-randomized (rather than “observational”)
• Single case experimental design (rather than “single-subject research” or “N-of-1 design”)
• Case report or case series (rather than “descriptive study”)
• Methodological quality (rather than “quality”)
• Certainty of evidence (rather than “quality of evidence,” “grade of evidence,” “level of evidence,” or “strength of evidence”)
• Qualitative systematic review (rather than “qualitative synthesis”)
• Synthesis of qualitative data a (rather than “qualitative synthesis”)
• Synthesis without meta-analysis (rather than “narrative synthesis” b, “narrative summary,” “qualitative synthesis,” “descriptive synthesis,” or “descriptive summary”)

a For example, meta-aggregation, meta-ethnography, critical interpretative synthesis, realist synthesis

b This term may best apply to the synthesis in a mixed methods systematic review in which data from different types of evidence (eg, qualitative, quantitative, economic) are summarized [ 64 ]

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the evidence it summarizes is sparse, weak, and/or biased [ 24 ]. Many intervention systematic reviews, including those developed by Cochrane [ 203 ] and those applying GRADE [ 202 ], ultimately find no evidence, or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; however, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may relate to limitations of conventional methods that focus on hypothesis testing; these have emphasized the importance of statistical significance in primary research and effect sizes from aggregate meta-analyses [ 183 ]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [ 130 ]. Development of the GRADE approach has facilitated a better understanding of significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [ 230 , 231 ], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [ 2 ], the shift in paradigm to living systematic review and NMA platforms [ 232 , 233 ], and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [ 234 ]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined, less duplicative, and, more importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously reported and conducted.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, upcoming technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and types of evidence syntheses. All of us must continue to study, teach, and act cooperatively for that to happen.

Acknowledgements

Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

Authors’ contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Declarations

The authors declare no competing interests.

This article has been published simultaneously in BMC Systematic Reviews, Acta Anaesthesiologica Scandinavica, BMC Infectious Diseases, British Journal of Pharmacology, JBI Evidence Synthesis, the Journal of Bone and Joint Surgery Reviews , and the Journal of Pediatric Rehabilitation Medicine .

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


🤖 AI Literature Review Generator

Unleash the power of AI with our Literature Review Generator. Effortlessly access comprehensive and meticulously curated literature reviews to elevate your research like never before!

Feeling overwhelmed by the sheer volume of books, articles, and papers you need to review for your research? You can use this free AI Literature Review Generator to streamline the work! This intuitive tool takes the hassle out of the process and makes planning and drafting literature reviews more efficient.

Think of this free AI Literature Review Generator as your personal research assistant. It uses advanced artificial intelligence techniques to fetch studies, identify key points, and highlight important findings for you, so you can focus on analyzing and synthesizing information rather than spending hours reading.

What Is a Literature Review?

A literature review is a deep dive into academic sources like scholarly articles, books, and other publications related to your research topic. It’s about understanding the current landscape of your field and identifying the key players and pivotal works. More than just a summary, a literature review synthesizes existing knowledge to provide a comprehensive overview.

A literature review outlines what’s already known, highlights major debates, and points out gaps that still need exploring. This makes it an essential starting point for any academic research, whether it’s for a dissertation, thesis, or research article. Think of it as setting the stage for your research by clarifying the intellectual progression of your field. A well-crafted literature review helps you interpret your findings within the context of existing knowledge.

Finally, a literature review shows you where your work fits in and how it can contribute to the ongoing conversation in your area of study.

With our AI Literature Review Generator, you’ll get all these benefits without the usual stress and time commitment. So, let’s see how it can do the heavy lifting so you can focus on what you do best — researching and discovering new insights.

Why Use a Literature Review Generator?

Universities and academic institutions often require students to develop literature reviews. These targeted examinations of other studies related to your current research provide a robust foundation for your work. While developing literature reviews can enhance your academic skills, the process can also be time-consuming and challenging due to the extensive research involved.

Using an online literature review generator in the process can be a game-changer. It takes the legwork out of sifting through countless sources and helps you zero in on the most relevant studies quickly. This means you can spend less time on tedious data collection and more time on the critical analysis that actually makes your work stand out. The tool’s ability to organize and structure your reviews also helps present your findings in a coherent and professional manner.

By automating much of the groundwork, a generator reduces the risk of errors and ensures you have accurate citations, enhancing the overall quality of your review. Plus, it serves as a valuable learning tool, teaching you how to structure a high-quality literature review, which is super beneficial for your academic growth.

Let’s take a closer look at a few more benefits of literature review generators:

  • Ease of information gathering: Comprehensive studies require extensive reading and diligent research, which often takes hours to complete. Literature review generators automate the information-gathering process, retrieving relevant articles, journals, and related publications in a matter of seconds. This saves a substantial amount of time and relieves the user of the tedious job of slogging through numerous resources.
  • Coherent and well-structured reviews: Structuring a review in a logical and coherent manner can be difficult. Online literature review generators produce well-structured, well-organized output that can guide you in writing your own well-formulated literature review.
  • Finds good matches: A literature review generator is designed to find the literature most relevant to your research topic. These tools ease the process of finding relevant scholarly articles and other documents, making it faster and more accurate than searching manually.
  • Reduces errors and improves quality: Humans are prone to making mistakes, especially when analyzing extensive volumes of data. Literature review generators minimize errors by ensuring access to accurate data and providing proper citations, thereby enhancing the quality of the review.
  • Pedagogical benefits: A free literature review generator is not only a quick fix for students but also a learning tool. It helps users understand how professional literature reviews should be structured and can guide them in crafting their own work.
  • Time management: By automating the most time-consuming parts of the literature review process, you free up time to focus on other important aspects of your research or studies. This helps you manage your workload more effectively and reduces stress.
  • Customizable searches: Literature review generators create structured, hierarchical documents, so you can easily browse the contents of a review and find the information you need in an instant. This ensures you are getting the most relevant and current information.

How To Review Literature With This Literature Review Generator

  • Open your Taskade workspace and click “➕New project”.
  • Choose “ 🤖 AI Project Studio ” and describe what you want to create.
  • Use the drop-downs to define project type or upload seed sources .
  • When done, customize your project to make it your own!

AI Literature Review Generator

Designed to assist researchers and students in creating comprehensive, high-quality literature reviews.


In a world where new information appears every second, we have moved from a scarcity of information to an overflow of it. When you have to write an academic paper or thesis, or create a business strategy, wading through this sea of information, finding credible sources, and analyzing it all can be a tedious and time-consuming process.

To save yourself from all this hassle, our tool sifts through large sets of information, analyzes it, and provides you with credible information in a structured way. It provides you with a base on which to build your research and shields you from writer's block.

Let's discover how this tool works, its key features, and its use cases.

What is a literature review?

It is like doing a deep dive into all the important stuff people have written about a particular topic. It's about gathering, reading, and analyzing lots of articles, books, and other sources to really get the gist of what's out there. By going through all this info, you can see what's missing, what's good, and where the research world can use a little boost. This process helps in identifying gaps in current literature, setting the stage for new research. Pretty cool, right?

How does a literature review generator work?

It is an easy-to-use tool that helps automate the process of creating a literature review. Internally, it works in two steps:

  • automates the creation of comprehensive reviews by searching and analyzing scholarly resources to identify key themes, methodologies, findings, and gaps.
  • compiles this information into a structured review with proper citations, saving researchers time and effort.

Users can input their research topic, review and edit the generated content, and export it in various formats for academic projects. This tool leverages AI to enhance the efficiency and quality of literature reviews, allowing researchers to focus on other aspects of their work while still producing a high-quality, comprehensive review.
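
To picture those two steps, here is a toy, self-contained sketch of such a pipeline. Every function and data structure in it is a hypothetical placeholder (canned records instead of a real scholarly index or language model); it is not Merlin's actual implementation.

```python
# Toy sketch of a literature review generator pipeline. All functions are
# hypothetical placeholders, not a real API: an actual tool would query a
# scholarly search index and use a language model for summarization.

TOY_INDEX = [  # canned stand-in for a scholarly database
    {"title": "Study A", "year": 2021, "finding": "Effect observed in urban samples"},
    {"title": "Study B", "year": 2023, "finding": "No effect in rural samples"},
]

def search_scholarly_sources(topic: str) -> list[dict]:
    # Step 1a: retrieve candidate papers for the topic.
    return TOY_INDEX

def extract_key_points(paper: dict) -> str:
    # Step 1b: pull out each paper's key finding (an LLM would do this).
    return f"{paper['title']} ({paper['year']}): {paper['finding']}"

def compile_review(topic: str) -> str:
    # Step 2: assemble a structured draft with per-study notes as citations.
    notes = [extract_key_points(p) for p in search_scholarly_sources(topic)]
    body = "\n".join(f"- {note}" for note in notes)
    return f"Literature review: {topic}\n\nKey findings:\n{body}\n\nGaps: to be identified"

print(compile_review("example topic"))
```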

How to use Merlin's Literature Review Generator?

Step 1: Sign up on Merlin AI. Begin by signing up for a free Merlin AI account. This will provide you with access to all of Merlin's services.


Step 2: Access the tool


Why Use Merlin's AI Article Review Generator?

The benefits of using this tool include:

  • Time Efficiency: Dramatically cuts down the time required to compile and review academic sources. Helps researchers find information faster.
  • Advanced AI Analysis: Utilizes cutting-edge AI to uncover deep insights and detect underlying themes and trends.
  • Customizable Outputs: Offers adaptable settings for depth of information, specific focus areas, and citation formats.
  • Ease of Access: Simplifies complex research, making it accessible to users with varying levels of expertise.
  • Latest Data: Ensures reviews include the most recent studies, providing up-to-date information.
  • Reduced Bias: Helps achieve a more objective review through broad and diverse source analysis, minimizing the personal biases that might affect the selection and interpretation of data.
  • Makes sure researchers don't miss important materials.
  • Allows searching by keywords and related topics.
  • Predicts what researchers might be looking for.
  • Reduces the manual work researchers need to do.
  • Makes it easier to combine evidence from different sources.
  • Helps organize information in a clear way.
  • Improves the quality and speed of research projects.

Who is the AI Literature Review Writer Tool For?

Use cases across professions:

  • Streamlines the process of compiling and synthesizing academic sources.
  • Helps in creating comprehensive literature reviews for theses, dissertations, and term papers.
  • Assists in curriculum development and staying updated with the latest research.
  • Provides quick overviews of existing knowledge for articles and reports.
  • Useful for market analysis and understanding industry trends.
  • Useful for gaining a deep understanding of a topic.


Key features of the Literature Review Writer

1. Clear Organization

The review is organized into sections (like introduction, findings, and conclusions), which makes it easy to read and understand.

2. Introduction

At the start, it explains what the topic is about, why it's important, and what the review will cover.

3. Method Details

It clearly describes how the research was done, including where information was found and how it was checked for accuracy. This helps show that the review is trustworthy.

4. Topics Covered

The review groups related findings into themes (like causes, effects, and debates). This helps readers grasp the wide-ranging impacts of climate change.

5. Debates Discussed

It discusses different opinions, for example, whether it’s better to prevent further climate change or adapt to its impacts. This shows there are various viewpoints within the topic.

6. Gaps in Research

The review points out areas where more research is needed, such as local effects of climate change and long-term predictions. Highlighting these gaps can guide future studies.

7. Citations Provided

Every claim or piece of information is backed up with references, making the review more credible and giving readers resources to explore more.

8. Summary and Urgency

The conclusion summarizes the important points and emphasizes the need for action and further study on climate change.

9. Reliable Sources

The review uses information from well-known and respected sources (like scientific journals and global organizations), which supports the accuracy of the information presented.

These features make sure the literature review is both informative and reliable, providing a clear overview for anyone interested in understanding climate change.

Use cases of the Article Review Generator

This tool can be very helpful in different areas where collecting and understanding a lot of information quickly is important. Here are some simple examples of when they might be used:

1. Academic Research

Students and researchers can use these tools to quickly gather information about what has already been studied, which helps them understand their topic better.

2. Grant Proposals

Researchers need to show why their project is important by discussing previous studies. This AI tool can help them find and summarize this information quickly.

3. Thesis and Dissertation Writing

Graduate students can use these tools to organize their research findings and make sure they cover important studies related to their topic.

4. Health Reviews

In medicine and social sciences, knowing what previous studies say is crucial. A generator can help outline and summarize these studies effectively.

5. Policy Making

Government officials can use these tools to read about past policies and their impacts, which helps them make better policies.

6. Business Strategies

Companies might use literature reviews to keep up with trends and developments in their industry, and a generator can make this process faster by summarizing important information.

7. Healthcare Guidelines

Health professionals use research to design medical guidelines. A generator can make it easier to gather and review the necessary medical studies quickly.

8. Legal Research

Lawyers use this tool to understand laws and cases better. A generator can help by organizing and highlighting key legal documents.

9. Educational Planning

Teachers and educators can use these tools to find out what's new in teaching methods and incorporate this into their lessons.

10. Environmental Projects

When planning projects that affect the environment, getting a quick overview of similar past projects is helpful. This article review tool can simplify finding and reviewing related environmental studies.

The Merlin literature review generator is a cool tool that really simplifies the whole process of sifting through tons of academic info. It uses some slick AI tech to help users quickly go through various studies and articles, making their research deeper and more thorough. This tool is a real game-changer in academia and research, boosting the way scholarly work is done and discussed.

Merlin's other AI tools supporting Academic Integrity

  • Plagiarism Checker
  • AI Detection

Frequently Asked Questions

Is Merlin's literature review generator free?

Merlin AI's Literature Review Generator is free to use. Merlin offers 102 queries per day to every free user to test all the unique features and tools of Merlin AI. Users can opt for Merlin's paid plans to remove the daily query limit.

How to write a literature review?

Writing a literature review involves identifying, summarizing, and evaluating previous research on a topic. Collect relevant articles, analyze their contents, and organize your findings coherently to explain the current knowledge and gaps.

How to do a literature review?

To do a literature review, gather relevant research articles, read and analyze them to understand key themes, then organize and present your findings to highlight trends, debates, and gaps in the existing research.

Can ChatGPT do a literature review?

Yes, ChatGPT can help draft and organize a literature review by summarizing relevant research and providing insights. One of the best tools for this work is Merlin's Literature Review Generator.

How to generate an article review?

If you want to do it manually, the steps are:

  • compile relevant research,
  • summarize key findings,
  • analyze patterns and gaps, and
  • synthesize the information into a cohesive narrative.

Or, if you prefer to use an AI tool for this, use Merlin's Literature Review Generator.

What AI can generate an article review?

AI tools like ChatGPT, Merlin, and EndNote can assist in generating literature reviews by summarizing and organizing research articles.

What are the 7 steps in writing a literature review?

  • Define the research question.
  • Conduct a comprehensive literature search.
  • Evaluate and select relevant sources.
  • Organize the literature by themes or categories.
  • Summarize and synthesize the findings.
  • Identify gaps and inconsistencies.
  • Write and revise the review.
