
Searching PubMed: Literature Reviews

Created by health science librarians.



Section Objective

This section defines what a literature review is and covers stating a clear research question, choosing search terms, using searching worksheets, and combining terms with Boolean AND/OR.

The content in the Literature Review section defines the literature review purpose and process, explains using the PICO format to ask a clear research question, and demonstrates how to evaluate and modify search results to improve the accuracy of the retrieval.

A literature review seeks to identify, analyze, and summarize the published research literature about a specific topic. Literature reviews are assigned as course projects, included as the introductory chapters of master's and PhD theses, and conducted before undertaking any new scientific research project.

The purpose of a literature review is to establish what is currently known about a specific topic and to evaluate the strength of the evidence upon which that knowledge is based. A review of a clinical topic may identify implications for clinical practice. Literature reviews also identify areas of a topic that need further research.

A systematic review is a literature review that follows a rigorous process to find all of the research conducted on a topic and then critically appraises the research methods of the highest quality reports. These reviews track and report their search and appraisal methods in addition to providing a summary of the knowledge established by the appraised research.

The UNC Writing Center provides a concise summary of what to consider when writing a literature review for a class assignment. The online book Doing a literature review in health and social care: a practical guide (2010) is a good resource for more information on this topic.

The quality of the search process determines the quality of any literature review. Anyone undertaking a literature review on a new topic would benefit from meeting with a librarian to discuss search strategies. A consultation with a librarian is strongly recommended for anyone undertaking a systematic review.

Use the email form on our Ask a Librarian page to arrange a meeting with a librarian.

The first step to a successful literature review search is to state your research question as clearly as possible.

It is important to:

  • be as specific as possible
  • include all aspects of your question

Clinical and social science questions often have these aspects (PICO):

  • People/population/problem (What are the characteristics of the population? What is the condition or disease?)
  • Intervention (What do you want to do with this patient? e.g. treat, diagnose)
  • Comparisons [not always included] (What is the alternative to this intervention? e.g. placebo, a different drug, surgery)
  • Outcomes (What are the relevant outcomes? e.g. morbidity, death, complications)

If the PICO model does not fit your question, use another framework to make sure you articulate every part of it. Asking yourself Who, What, Why, and How may help.

Example Question: Is acupuncture as effective a therapy as triptans in the treatment of adult migraine?

Note that this question fits the PICO model.

  • Population: Adults with migraines
  • Intervention: Acupuncture
  • Comparison: Triptans/tryptamines
  • Outcome: Fewer headache days, fewer migraines

A literature review search is an iterative process. Your goal is to find all of the articles that are pertinent to your subject. Successful searching requires you to think about the complexity of language: you need to match the words you use in your search to the words used by article authors and database indexers. A thorough PubMed search must identify both the words authors are likely to use in titles and abstracts and the MeSH (Medical Subject Headings) terms indexers have assigned.

Start by doing a preliminary search using the words from the key parts of your research question.

Step #1: Initial Search

Enter the key concepts from your research question combined with the Boolean operator AND. PubMed automatically combines your terms with AND, but it is easier to modify your search later if you include the Boolean operators yourself from the start.

migraine AND acupuncture AND tryptamines

The search retrieves a number of relevant article records, but probably not everything on the topic.

Step #2: Evaluate Results

Use the Display Settings drop-down in the upper left-hand corner of the results page to switch to the Abstract display.

Review the results and move articles that are directly related to your topic to the Clipboard.

Go to the Clipboard to examine the language in the articles that are directly related to your topic.

  • look for words in the titles and abstracts of these pertinent articles that differ from the words you used
  • look for relevant MeSH terms in the list linked at the bottom of each article

The following two articles were selected from the search results and placed on the Clipboard.

Here are word differences to consider:

  • Initial search used acupuncture. MeSH Terms use Acupuncture therapy.
  • Initial search used migraine. Related MeSH Terms are Migraine without Aura and Migraine Disorders.
  • Initial search used tryptamines. The article title uses sumatriptan. Related MeSH Terms are Sumatriptan and Tryptamines.

With this knowledge you can reformulate your search to expand your retrieval, adding synonyms for each concept.

Step #3: Revise Search

Use the Boolean OR operator to group synonyms together, and put parentheses around the OR groups so they are searched properly. See the image below to review the difference between Boolean OR and Boolean AND.

Here is what the new search looks like:

(migraine OR migraine disorders) AND (acupuncture OR acupuncture therapy) AND (tryptamines OR sumatriptan)
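If you want to compare the retrieval of the initial and revised searches outside the PubMed website, you can run both strings against NCBI's public E-utilities esearch endpoint. The sketch below is a minimal illustration, not part of this guide's workflow; the pubmed_count helper is our own name for it, and heavier use should add an API key and contact email per NCBI's usage policy.

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # NCBI E-utilities search endpoint for PubMed (documented public API).
    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term: str) -> int:
        """Return how many PubMed records match a Boolean search string."""
        params = urlencode({"db": "pubmed", "term": term, "retmode": "json", "retmax": 0})
        with urlopen(f"{ESEARCH}?{params}") as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    initial = "migraine AND acupuncture AND tryptamines"
    revised = ("(migraine OR migraine disorders) AND "
               "(acupuncture OR acupuncture therapy) AND "
               "(tryptamines OR sumatriptan)")

    # The OR groups should retrieve at least as many records as the initial search.
    print("initial:", pubmed_count(initial))
    print("revised:", pubmed_count(revised))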

  • Search Worksheet Example: Acupuncture vs. Triptans for Migraine
  • Search Worksheet

[Image: Venn diagram with all segments highlighted]


Reviewing the literature

Volume 19, Issue 1

Joanna Smith 1, Helen Noble 2

1 School of Healthcare, University of Leeds, Leeds, UK
2 School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK

Correspondence to Dr Joanna Smith, School of Healthcare, University of Leeds, Leeds LS2 9JT, UK; j.e.smith1{at}leeds.ac.uk

https://doi.org/10.1136/eb-2015-102252

Implementing evidence into practice requires nurses to identify, critically appraise and synthesise research. This may require a comprehensive literature review: this article outlines the approaches and stages required and provides a working example of a published review.

Are there different approaches to undertaking a literature review?

What stages are required to undertake a literature review?

The rationale for the review should be established; consider why the review is important and relevant to patient care/safety or service delivery. For example, Noble et al's [4] review sought to understand and make recommendations for practice and research in relation to dialysis refusal and withdrawal in patients with end-stage renal disease, an area of care previously poorly described. If appropriate, highlight relevant policies and theoretical perspectives that might guide the review. Once the key issues related to the topic, including the challenges encountered in clinical practice, have been identified, formulate a clear question and/or develop an aim and specific objectives. The type of review undertaken is influenced by the purpose of the review and the resources available. However, the stages or methods used to undertake a review are similar across approaches and include:

Formulating clear inclusion and exclusion criteria, for example, patient groups, ages, conditions/treatments, sources of evidence/research designs;

Justifying databases and years searched, and whether strategies including hand searching of journals, conference proceedings and research not indexed in databases (grey literature) will be undertaken;

Developing search terms; the PICO (P: patient, problem or population; I: intervention; C: comparison; O: outcome) framework is a useful guide when developing search terms;

Developing search skills (eg, understanding Boolean operators, in particular the use of AND/OR) and knowledge of how databases index topics (eg, MeSH headings). Working with a librarian experienced in undertaking health searches is invaluable when developing a search.

Once studies are selected, the quality of the research/evidence requires evaluation. Using a quality appraisal tool, such as the Critical Appraisal Skills Programme (CASP) tools [5], provides a structured approach to assessing the rigour of the studies being reviewed [3]. Approaches to data synthesis for quantitative studies may include a meta-analysis (statistical analysis of data from multiple studies of similar designs that have addressed the same question), or findings can be reported descriptively [6]. Methods applicable to synthesising qualitative studies include meta-ethnography (themes and concepts from different studies are explored and brought together using approaches similar to qualitative data analysis methods), narrative summary, thematic analysis and content analysis [7]. Table 1 outlines the stages undertaken for a published review that summarised research about parents' experiences of living with a child with a long-term condition [8].

Table 1: An example of a rapid evidence assessment review
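To make the parenthetical definition of meta-analysis above concrete, here is a minimal worked sketch of standard inverse-variance fixed-effect pooling. The numbers are invented for illustration, and this is not the synthesis method used in the review summarised in Table 1.

    import math

    def fixed_effect_pool(effects: list[float], variances: list[float]) -> tuple[float, float]:
        """Inverse-variance fixed-effect pooling: pooled effect and 95% CI half-width."""
        weights = [1.0 / v for v in variances]                       # precision weights
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))                           # SE of the pooled effect
        return pooled, 1.96 * se

    # Invented example: mean differences from three similar studies and their variances.
    pooled, ci = fixed_effect_pool([0.30, 0.45, 0.20], [0.04, 0.09, 0.02])
    print(f"pooled effect {pooled:.2f} ± {ci:.2f}")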

In summary, the type of literature review depends on the review purpose. For the novice reviewer, undertaking a review can be a daunting and complex process; by following the stages outlined and being systematic, a robust review is achievable. The importance of literature reviews should not be underestimated: they help summarise and make sense of an increasingly vast body of research, promoting best evidence-based practice.

References

  • Centre for Reviews and Dissemination. Guidance for undertaking reviews in health care. 3rd edn. York: CRD, York University, 2009.
  • Canadian Best Practices Portal. http://cbpp-pcpe.phac-aspc.gc.ca/interventions/selected-systematic-review-sites/ (accessed 7 Aug 2015).
  • Bridges J, et al.
  • Critical Appraisal Skills Programme (CASP). http://www.casp-uk.net/ (accessed 7 Aug 2015).
  • Dixon-Woods M, Shaw R, et al.
  • Agarwal S, Jones D, et al.
  • Cheater F.

Twitter: Follow Joanna Smith at @josmith175

Competing interests: None declared.


  • Open access
  • Published: 29 March 2023

Mapping ethical issues in the use of smart home health technologies to care for older persons: a systematic review

  • Nadine Andrea Felber (ORCID: 0000-0001-8207-2996) 1
  • Yi Jiao (Angelina) Tian (ORCID: 0000-0003-2969-9655) 1
  • Félix Pageau (ORCID: 0000-0002-4249-7399) 2
  • Bernice Simone Elger (ORCID: 0000-0003-0857-0510) 1
  • Tenzin Wangmo 1

BMC Medical Ethics, volume 24, Article number: 24 (2023)


The worldwide increase in older persons demands technological solutions to combat the shortage of caregiving and to enable aging in place. Smart home health technologies (SHHTs) are promoted and implemented as a possible solution from an economic and practical perspective. However, ethical considerations are equally important and need to be investigated.

We conducted a systematic review according to the PRISMA guidelines to investigate if and how ethical questions are discussed in the field of SHHTs in caregiving for older persons.

156 peer-reviewed articles published in English, German and French were retrieved from 10 electronic databases and analyzed. Using narrative analysis, 7 ethical categories were mapped: privacy, autonomy, responsibility, human vs. artificial interactions, trust, ageism and stigma, and other concerns.

The findings of our systematic review show the (lack of) ethical consideration when it comes to the development and implementation of SHHTs for older persons. Our analysis is useful to promote careful ethical consideration when carrying out technology development, research and deployment to care for older persons.

Registration

We registered our systematic review in the PROSPERO network under CRD42021248543.


Introduction/background

Significant advancements in medicine, public health and technology are allowing the world population to grow increasingly older, adding to the steady rise in the proportion of senior citizens (aged over 65) [1]. Because of this growth in the aging population, the demand for and financial costs of caring for older adults are both rising [2]. The fact that older persons generally wish to age in place and receive healthcare at home [2] may mean accepting risks such as falling, a risk that increases with frailty [3]. However, many prefer accepting these risks to moving into long-term care facilities [4, 5, 6].

A solution to this multifaceted problem of ageing safely at home and receiving appropriate care, while keeping costs at bay, may be the use of smart home health technologies (SHHTs). A smart home is defined by Demiris and colleagues as a "residence wired with technology features that monitor the well-being and activities of their residents to improve overall quality of life, increase independence and prevent emergencies" [7]. SHHTs, then, represent a certain type of smart home technology: non-invasive, unobtrusive, interoperable and possibly wearable technologies that use a concept called the Internet-of-Things (IoT) [8]. These technologies can remotely monitor the older resident, register abnormal deviations in daily habits and vital signs, and send alerts to formal and informal caregivers when necessary. SHHTs could thus permit older people (and their caregivers) to receive the necessary medical support and attention at their convenience and will, thereby allowing them to continue living independently in their home environment.

All of these functions offer benefits to older persons wishing to age at home. While focusing on practical advantages is important, an equally important question is how ethical these technologies are when used in the care of older persons. Principles of biomedical ethics, such as autonomy, justice [9], privacy [10], and responsibility [11], should not only be respected by medical professionals but also by technology developers, and built into the technologies themselves.

The goal of our systematic review is therefore to investigate whether and which ethical concerns are discussed in the pertinent theoretical and empirical research on SHHTs for older persons between 2000 and 2020. Unlike previous literature reviews [12, 13, 14], which only explored practical aspects, we explicitly examined if and how researchers treated the ethical aspects of SHHTs in their studies, adding an important yet often overlooked aspect to the systematic literature. Moreover, we present which ethical concerns are discussed in the theoretical literature and which in the empirical literature, to shed light on possible gaps in which ethical concerns are developed and how. Identifying these gaps is the first step toward connecting bioethical considerations to the real world and adapting policies, guidelines and the technologies themselves [15]. Our systematic review is the first to do so in the context of ethical issues in SHHTs used in caregiving for older persons.

Search strategy

With the guidance of an information specialist from the University of Basel, our team developed a search strategy according to the PICO principle: Population 1 (Older adults), Population 2 (Caregivers), Intervention (Smart home health technologies), and Context (Home). The outcome of ethics was intentionally omitted because we wanted to capture all relevant studies, not only those raising concerns that we would classify as "ethical". Within each category, synonyms and spelling variations of the keywords were used to include all relevant studies. We then adapted the search string by using database-specific thesaurus terms in all ten searched electronic databases: EMBASE, Medline, PsycINFO, CINAHL, SocIndex, SCOPUS, IEEE, Web of Science, Philpapers, and Philosophers Index. We limited the search to peer-reviewed papers published between January 1st, 2000 and December 31st, 2020, written in English, French, or German. This time frame allowed us to map the evolution of SHHTs as a new field.

The inclusion criteria were the following: (1) The article must be an empirical or theoretical original research contribution. Hence, book chapters, conference proceedings, newspaper articles, commentaries, dissertations, and theses were excluded. Also excluded were other systematic reviews, since their inclusion would duplicate findings from the individual studies. (2) When the included study was empirical, the study's population of interest must be older persons over 65 years of age, and/or professional or informal caregivers who provide care to older persons. Informal caregivers include anyone in the community who provided support without financial compensation. Professional caregivers include nurses and related professions who receive financial compensation for their caregiving services. (3) The included study must investigate SHHTs and their use in the older persons' place of dwelling.

First, we carried out the systematic search across databases and removed all duplicates through EndNote (see supplementary Table 1 in appendix part 1 for a list of all included articles). One member of the research team screened all titles manually and excluded irrelevant papers. Then, two authors screened the abstracts and excluded irrelevant papers, and any disagreements were resolved by a third author, who then combined all included articles and removed further duplicates.
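The authors removed duplicates in EndNote. Purely as an illustration of the underlying logic, here is a hedged sketch of DOI- and title-based deduplication; the record fields and values are hypothetical, not the review's actual data format.

    import re

    def normalize(title: str) -> str:
        """Lowercase a title and strip non-alphanumerics so formatting differences don't block matches."""
        return re.sub(r"[^a-z0-9]", "", title.lower())

    def dedupe(records: list[dict]) -> list[dict]:
        """Keep the first record per DOI (or normalized title when no DOI); drop later duplicates."""
        seen: set[str] = set()
        unique = []
        for rec in records:
            key = rec.get("doi") or normalize(rec["title"])
            if key not in seen:
                seen.add(key)
                unique.append(rec)
        return unique

    # Hypothetical records: the same paper exported from two databases.
    records = [
        {"title": "Smart homes for elderly healthcare", "doi": "10.3390/s17112496"},
        {"title": "Smart Homes for Elderly Healthcare.", "doi": "10.3390/s17112496"},
    ]
    print(len(dedupe(records)))  # -> 1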

Figure 1: PRISMA 2020 flowchart

Final inclusion and data extraction

All included articles were searched and retrieved online (and excluded if the full text was not available). Three co-authors then started data extraction, during which several papers were excluded due to irrelevant content. To code the extracted data, a template was developed, tested in a first round of data extraction, and then used in Microsoft Excel during the remaining extraction process. Study demographics and ethical considerations were recorded. Each extracting author was responsible for a portion of the articles. If uncertainties or disputes occurred, they were resolved through discussion. To ensure that our data extraction was not biased, 10% of the articles were reviewed independently. Upon comparing the data extracted from those 10% of our overall sample, we found that the extracted items reached 80% consistency.
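The 80% consistency figure reads as a simple percent agreement over the double-extracted items. As a hedged sketch (the extraction fields here are invented, not the review's actual template), such a check could look like this:

    def percent_agreement(extractor_a: dict, extractor_b: dict) -> float:
        """Share of fields on which two independent extractions recorded the same value."""
        shared = extractor_a.keys() & extractor_b.keys()
        return sum(extractor_a[k] == extractor_b[k] for k in shared) / len(shared)

    # Invented example: two extractions agreeing on 4 of 5 fields -> 80%.
    a = {"year": 2018, "design": "qualitative", "mentions_privacy": True, "n": 24, "country": "UK"}
    b = {"year": 2018, "design": "qualitative", "mentions_privacy": True, "n": 24, "country": "Ireland"}
    print(f"{percent_agreement(a, b):.0%}")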

Data synthesis

The extracted datasets were combined, and the ethical discussions encountered in the publications were analyzed using narrative synthesis [16]. During this stage, the authors discussed the data and recognized seven first-order ethical categories. Information within these categories was further analyzed to form sub-categories that describe and/or add further information to the key ethical category.

Nature of included articles

Our search initially identified 10,924 papers in ten databases. After duplicates were removed, 9067 papers remained, whose titles were screened, resulting in the exclusion of 5215 papers (Fig. 1). Examination of the remaining 3845 abstracts led to the inclusion of 374 papers for full-text retrieval. As we were unable to find 20 papers after several attempts, the remaining 354 full-texts were included for full-text review. In this phase, we further excluded 198 full-texts for reasons such as technologies employed in hospitals or technologies unrelated to health. Ultimately, this systematic review included 144 empirical and 12 theoretical papers specifying normative considerations of SHHTs in the context of caregiving for older persons.

Almost all publications (154 out of 156) were written in English, and over 67% (105 papers) were published between 2014 and 2020. About a quarter (26%; 41 papers) were published between 2007 and 2013, and only 7% (10 articles) were from 2000 to 2006. Apart from the 12 theoretical papers, the methodologies used in the 144 empirical papers were as follows: 42 articles (29%) used a mixed-methods approach, 39 (27%) experimental, 38 (26%) qualitative, and 15 (10%) quantitative; the remainder were observational, ethnographic, case-study, or iterative-testing designs.

The functions of SHHTs tested or studied in the included empirical papers were categorized as follows: 29 articles (20.14%) dealt solely with (a) physiological and functioning monitoring technologies, 16 (11.11%) solely with (b) safety/security monitoring and assistance functions, 23 (15.97%) solely with (c) promoting social interactions, and 9 (6.25%) solely with (d) cognitive and sensory assistance. However, 46 articles (29%) involved technologies that fulfilled more than one of these functions. The specific types of SHHTs included in this review comprised: intelligent homes (71 articles, 49.3%); assistive autonomous robots (49 articles, 34.03%); virtual/augmented/mixed reality (7 articles, 4.4%); and AI-enabled health smart apps and wearables (4 articles, 1.39%). The remaining 20 articles (12.8%) involved either multiple technologies or technologies that did not fall into any of the above categories.

Ethical considerations

Of the 156 papers included, 55 did not mention any ethical considerations (see supplementary Table 1 in appendix part 1). Among the 101 papers that noted one or more ethical considerations, we grouped the considerations into 7 main categories: (1) privacy, (2) human vs. artificial relationships, (3) autonomy, (4) responsibility, (5) social stigma and ageism, (6) trust, and (7) other normative issues (see Table 1). Each of these categories consists of various sub-categories that provide more information on how smart home health technologies (possibly) affected or interacted with older persons or caregivers in the context of caregiving (Table 2). Each of the seven ethical considerations is explained in depth in the following paragraphs.

Privacy

This key category was cited across 58 articles. In the theoretical articles, privacy was one of the most often discussed ethical considerations, as 9 out of 12 mentioned privacy-related concerns. Among the 58 articles, four sub-issues within privacy were discussed.

(A) The awareness of privacy was reported as varying according to the type of SHHT end-user. Whereas some end-users were more aware of privacy in relation to SHHTs, others showed little or no consideration of it, and some had levels of concern that changed as privacy was weighed against other values, such as access to healthcare [17] or a feeling of safety [18]. Both caregivers and researchers often took privacy concerns into account [19, 20, 21], while older persons themselves did not share the same degree of fear or concern [22, 23, 24]. Older persons were in fact less concerned about privacy than about costs and usability [23]. Furthermore, they were willing to trade privacy for safety and the ability to live at home. Nevertheless, several papers acknowledged that privacy is an individualized value whose significance depends on both the person and their context, so preferences cannot be generalized [25, 26, 27, 28]. Lastly, some papers explicitly stated that participants raised no privacy concerns, or that participants found monitoring useful without mentioning privacy as a barrier [29, 30, 31].

The second prevalent sub-issue within privacy was (B) privacy by choice. Both older persons and their caregivers expressed a preference for having a choice in the technology used, in what data is collected, and in where technology should or should not be installed [32, 33]. For example, some spaces were perceived as more private, and monitoring there felt more intrusive [34, 35, 36]. Formal caregivers were concerned about monitoring technologies being used as recording devices for their work [37, 38]. Furthermore, older persons were often worried about cameras [39, 40] and "eyes watching", even when no cameras were involved [41, 42, 43].

The third privacy concern was (C) risk and regulation of privacy, which included discussions surrounding dissemination of data or active data theft [44, 45, 46, 47], as well as changes in behavior or relationships due to interaction with technology [48, 49]. Researchers were aware of both legal and design-contextual measures that must be observed to ensure that these risks are minimized [45, 50, 51].

The final sub-issue we categorized was (D) privacy in the case of cognitive impairment. This included disagreement over whether cognitive impairment warrants more intrusive measures or whether privacy should be protected for everyone in the same way [52, 53].

Human versus artificial relationships

54 articles in our review contained data pertinent to trade-offs between human and artificial caregiving. Firstly, (A) there was a general fear that robots would replace humans in providing care for older persons [28, 54, 55, 56], along with related concerns such as job loss [40, 57], the disadvantages of substituting real interpersonal contact [17, 46], and thus an increase in the negative effects associated with social isolation [41, 58].

Many papers also emphasized (B) the importance of human caregiving, underlining the necessity of human touch [26, 47, 50, 59] and the belief that technology could not and should not replace humans in connection [17], love [33], relationships [60], and the attention to subtle signs of health decline that comes with every in-person visit [57]. Older persons also preferred human contact over machines and had guarded reactions to purely virtual relationships [31, 61, 62]. The use of technology was seen to dehumanize care, as care should be inherently human-oriented [27, 48].

There were also data alluding to (C) positive reactions to technologies performing caregiving tasks, with users possibly forming attachments to the technology [47, 49, 58]. Furthermore, some papers cited participants reacting positively to robots replacing human care, where the concept of "good care" could be redefined [63, 64, 65, 66]. Theoretical papers likewise identified possible benefits of technology for socialization and relationship building [67, 68].

Finally, many articles raised the idea of (D) collaboration between machine and human in providing caregiving to older persons [69]. These studies highlighted the possible harms if such collaboration is not achieved, such as informal caregivers withdrawing from care responsibilities [70] or the reinforcement of oppressive care relations [71]. Interestingly, opinions varied on whether a caregiving technology such as a robot should have a "life-like" appearance, voice, and emotional expressions, while recognizing the current technological limits on providing those features to a satisfactory level [46]. For example, some users preferred the robot to communicate with voice commands, while others wanted to customize this function further with specific requests about the types of voices generated [65, 72].

Autonomy

40 papers mentioned the autonomy of the older person with respect to the use of SHHTs. The first sub-theme was (A) control, which encompassed positive aspects such as (possible) empowerment through technology [25, 26, 73, 74] and negative aspects such as the possibility of technology taking control over the older person, increasing dependence [55, 75] or decreasing freedom of decision making [48]. Several studies reported the wish of older persons to be in control when using the technology (e.g. it should be easy to switch off or on) and to be in control of its potential, meaning, for example, the extent of data collected or transferred [17, 30, 70, 76]. Furthermore, they should have the option not to use technology in spaces where they do not wish to, e.g. public spaces [35]. The issue of increased dependency was discussed as a loss, or rather a fear of the loss, of autonomy due to greater reliance on technology, as well as the fear of being monitored all the time [28, 48]. In addition, using technology was deemed to make older persons more dependent and to increase isolation [77].

The second sub-category within autonomy highlighted the need for the technology to (B) protect the autonomy and dignity of its older end-users, which also covered the unethical practices of deception (e.g. [46, 49, 54, 78]), infantilization [31, 60], and paternalism [17, 27, 57] as ways of disrespecting older persons' dignity and autonomy [79, 80, 81]. Also reported was that these users may accept technology to avoid being a burden on others, underscoring the value of technology in enhancing functional autonomy, understood here as independent functioning [52, 82, 83]. Other studies mentioned similar trade-offs between autonomy and other values or interests, for example between respecting the autonomy of older persons and nudging them towards certain behavior (perceived as beneficial for them) through technology [32], or between autonomy and safety [24].

Two sub-issues within autonomy primarily discussed in the theoretical publications were (C) relational autonomy [27, 41, 49, 58] and (D) explanations of why autonomy should be preserved. The former emphasized that older persons do not and should not live isolated lives, and that their relationships with family members, friends, caregivers, and the community as a whole should be respected and promoted [27, 47]. The latter described the benefits of respecting autonomy, such as increased happiness and well-being [65, 67] or a sense of purpose [84], thus favoring the promotion of autonomy and choice from a normative perspective as well.

Responsibility

This theme included data from 25 articles that mentioned concerns such as the effect of using technologies on the current responsibilities of caregivers and older persons themselves. Specifically, the papers discussed (A) the downsides of assistive home technology for responsibility. The use of technology conflicted with moral ideas around responsibility [58], especially for caregivers [57, 59]. It also raised more practical concerns, such as the fear that shifting responsibility onto the technology would diminish vigilance and/or care. Related to this was a fear of increased responsibility for both older persons [60] and their caregivers, who worried about the extra work time needed to integrate technology into their work, learn its functions, analyze data, and respond to potentially more frequent alerts [18, 35, 36, 53, 85].

Additionally, studies reported (B) continuous negotiation between (formal) caregivers' (professional) responsibilities of care and the opportunities that smart technologies could provide [26, 47, 55, 70, 82]. For example, an increased need for cooperation between informal and formal caregivers due to technology was foreseen [81], and fears were expressed that over-reliance on female caregivers would be exacerbated [71]. Nevertheless, the use of smart home health technologies was often seen to (C) reduce the burden of care, as caregivers could direct their attention and time to the situations of greatest need and better align the responsibilities of care [5, 18, 49, 74, 80, 81]. This shift of burden onto a technology was also reported by older persons as freeing [48].

Ageism and stigma

24 articles discussed ageism and stigma, including fear of (A) being stigmatized by others through the use of SHHTs [73, 86]. Older persons thought that accepting such technologies amounted to an admission of failure [82], or to being perceived by others as frail, old, forgetful [77, 87], or even stupid [26, 33, 88]. This led some to express ageist views, stating that they did not need the technology "yet" [84, 89]. Some papers reported the belief that the presence of robots was disrespectful to older people [52, 85, 90], and that technologies do little to alleviate the frustration and the impression of "being stupid" that older persons may have when faced with the complexities of the healthcare system [73]. Furthermore, older persons in a few studies expressed unfamiliarity with learning new technologies in old age [42, 66, 91], coupled with fears of falling behind and not keeping up with their development, and of feeling pressured to use technology [62, 89].

Within ageism and stigma, (B) social influence was deemed to cause older persons to believe that the longer they had been using technology, the more their loved ones wanted them to use it, creating a reinforcing loop [27]. Other social points related to self-esteem: older persons needed to reach a certain threshold before publicly admitting that they needed technology [85], or caregivers doubted whether they would be able to use the devices [36]. This possibly led older persons to prefer unobtrusive technology that could not be noticed by visitors [22, 55, 88].

Lastly, (C) two theoretical articles raised concerns that technology could exacerbate the stigmatization of women and migrants in caregiving. Both Parks [47] and Roberts & Mort [71] suggested that caregiving technology which does not question the underlying expectation that women care for their relatives will worsen such gendered expectations in caregiving.

Trust

We identified 18 articles that mentioned some aspect of trust. For both older persons and caregivers, there was often (A) a general mistrust of technologies compared with existing human caregiving [33, 42]. Caregivers therefore became proxies, relied upon to "understand it" and continue providing care [48]. For caregivers, the lack of trust was associated with the use of technologies, for example leaving older persons alone with technology [81], worrying that older persons would not trust the technology [29, 32], or fearing that it could change their professional role [23]. One paper even reported that using technology implied that caregivers themselves were not trusted [92]. Surprisingly, some studies found that older persons had no problem trusting technology, even considering it safer and more reliable than humans [58, 70].

The second sub-theme concerned (B) characteristics promoting trust: the degree of automation [30], the involvement of trusted humans in design and use [34, 93], the perceived usefulness of the technology, and time spent with the technology all influenced trust [59, 72, 94]. Robots specifically were trusted more than virtual agents such as Alexa [60, 65]. Taking this a step further, studies found that robots with a higher degree of automation or a lower degree of anthropomorphism increased trust [30].

Other normative issues

There were several miscellaneous considerations not fitting the categories above. Firstly, two theoretical articles mentioned (A) considerations related to research. Ho [27] pointed out that empirical evidence of the usefulness of SHHTs is lacking, which may make them less relevant as a possible solution for aging in place. Palm et al. (2013) suggested that, because many costs of caregiving are hidden by non-paid informal care, the actual economic benefits of SHHTs are unknown. Lastly, two articles alluded to (B) psychological phenomena related to the use of SHHTs. Pirhonen et al. [58] suggested that robots can promote the ethical value of well-being by fostering feelings of hope. The other phenomenon was the feeling of blame and fear associated with adopting the technology: caregivers may be pushed to use SHHTs in order not to be blamed for failing to use technology [18]. This also nudged caregivers to think that using SHHTs cannot do any harm, so it is better to use them than not.

Discussion

Our systematic review investigated if and how ethical considerations appear in the current research on SHHTs in the context of caregiving for older persons. As we included both empirical and theoretical literature, our review is more comprehensive than existing systematic reviews (e.g. [12, 13, 14]), which only explored the empirical side of the research and neglected ethical concerns. Our review offers informative and useful insights on dominant ethical issues related to caregiving, such as autonomy and trust [95, 96]. At the same time, the findings bring forth less known ethical concerns that arise when using technologies in the caregiving context, such as responsibility, and ageism and stigma [97].

The first key finding of our systematic review is the silence on ethics in SHHTs research for caregiving purposes. Over a third of the reviewed publications did not mention any ethical concern. One possible explanation is related to scarcity [98]. In the context of research in caregiving for older persons, "scarcity" can be understood in a variety of ways; one is to see the space available for ethical principles in medical technology research as scarce. For example, according to Einav & Ranzani [99], "Medical technology itself is not required to be ethical; the ethics of medical technology revolves around when, how and on whom each technology is used" (p.1612). Determining the answers to these questions is done empirically, by providing proof of benefit of the technology, ongoing reporting on (possibly harmful) long-term effects, and so on [99]. Given that publication space in journals is limited to a certain amount of text, the space available for ethical considerations is scarce. Deliberations about the values and issues unearthed in our systematic review, like trust, responsibility or ageism, may therefore simply not fit into the space available in research publications. This may also be why the values of beneficence and non-maleficence were not found through our narrative analysis: while both are considered crucial in biomedical ethics [9], authors may consider the empirically measured benefits sufficient to demonstrate beneficence (and non-maleficence), leading them not to mention these ethical values explicitly in their publications.

Another interpretation is the scarcity of time, and the felt pressure to "solve" the problem of limited resources in caregiving [2]. Researchers might therefore be more inclined to focus on empirical data showing benefits than to engage in elaborations on the ethical issues that arise with those benefits. Lastly, as researchers have to compete for limited funding [100], and given that technological research receives more funding than biomedical ethics [101], it is likely that the number of publications reporting purely empirical studies exceeds the number that solely address ethical issues (as our theoretical papers did) or that combine empirical and ethical parts. Future research needs to investigate these hypotheses.

It is not surprising that privacy was the most discussed ethical issue in relation to SHHTs in caregiving. The topic of privacy, especially in relation to monitoring technologies and/or health, has been widely discussed (see for example [102, 103, 104]). A particularly interesting finding within this ethical concern related to privacy and cognitive impairment. While discussions around autonomy and cognitive impairment are popular in bioethical research (see e.g. [105, 106]), privacy has recently gained more attention from both researchers and designers [107]. In the reviewed studies, cognitive impairment and privacy seemed inversely related: intrusions into the privacy of older persons with cognitive impairments were deemed more justified [35, 53]. This does not necessarily mean such intrusions are ethical, but rather that they become possible or necessary in the given context. A possible explanation lies in the connectedness of autonomy and privacy, in the sense that autonomy is needed to consent to any sort of intrusion [108].

Surprisingly, more research papers mentioned the topic of human vs. artificial relationships as an ethical concern than autonomy, even though autonomy is often the most discussed ethics topic when it comes to the use of technology [96]. Fears associated with technology replacing human care have recently gained traction [109, 110, 111]. The significance of this theme is likely due to the fact that caregiving for older persons has been (and is) a very human-centric activity [112]. As mentioned before, the persons willing and able to do this labor (both paid and unpaid caregivers) are limited and their pool is shrinking [113]. The idea of technology filling this gap is not new [114], but it is clearly causing wariness among both older persons and caregivers, as we have discovered [56, 61]. Frequently mentioned was the fear of care being replaced by technology. This finding was to be expected, as nursing is not the only profession where the introduction of technology caused fears of job loss [115]. Within this ethical concern, the importance of human touch and human interaction was underlined [110, 111]. Human touch is an important asset for caregivers when they care for older patients, particularly those with dementia, as it is one of the few ways to establish connection and to calm the patient [116]. Similarly, human touch and face-to-face interactions are mentioned as a critical aspect of caregiving in general, both for the care recipient and the caregiver [117, 118]. While caregivers see touching and interacting with older care recipients as a way to make their actions more meaningful and healing [90, 117], for care recipients being touched, talked to and listened to is part of feeling respected and experiencing dignity [118, 119]. Introducing technology into the caregiving profession may therefore quickly elicit associations with cold and lifeless objects [59]. Future developments, both in the design of the technologies themselves and in their implementation in caregiving, will require critical discussion among the stakeholders concerned and careful decisions on how and to what extent human touch and human care must be preserved.

A unique ethical concern that we have not seen in previous research [120, 121] is responsibility, and remarkable within this concern was SHHTs' negative impact on it. As previously mentioned, the human being and human interaction are seen as central to caregiving [117, 118]. This can plausibly be extended to concepts exclusively attributable to humans, such as moral responsibility [122]. Shifting caregiving tasks onto a technological device, which cannot be morally responsible in the same way a human being can [123], may introduce a sense of void that caregivers are reluctant to create. Studies have shown that a mismatch between professional and personal values in nursing causes emotional discomfort and stress [124]; the shift in the professional environment caused by SHHTs is therefore likely to be met with aversion. Additionally, the negative impact of SHHTs on caregiving responsibility was tied to practical concerns, such as caregivers not having enough time to learn how to use the technology [35], or needing to access and check the older person's health data [36]. Such concerns point to the possibility that SHHTs can create unforeseen tasks, which could turn into true burdens instead of relieving caregivers. Indeed, there are indications that the increase in information about the older person through monitoring technologies causes stress for both caregivers and older persons: the former feel pressure to look at the available data, while the latter prefer to hide unfavorable information so as not to seem burdensome to their caregivers [125]. Another consequence of SHHTs that emerged as a sub-category was the renegotiation of responsibilities among the different stakeholders. In the field of (assistive) technology, this renegotiation is an ongoing process, with efforts to make technology and its developers more accountable through new policies and regulations [126]. In the realm of assistive technology in healthcare, these negotiations focus on high-risk cases and emergencies [127]. Who is responsible for the death of a person if the assistive technology failed to recognize an emergency, or to alert humans in time? Such issues around responsibility and legal liability are partly responsible for the slow uptake of technology in caregiving [128].

Another important but less discussed ethical concern was ageism and stigma. Ageist prejudices include being perceived as slow, useless, burdensome, and incompetent [129]. Fear of aging and of becoming a burden to others is one many older persons share, as current social norms demand independence until death [130]. Furthermore, the ubiquitous use of technology has possibly exacerbated ageism, as life has become fast-paced and more pressure is placed on aging persons to keep up [131]. While this would call for more attention to studying ageism in relation to technology, our findings indicate that, unfortunately, it does not seem to be at the forefront of the concerns prevalent in the literature (and thereby in society).

Related to ageism is the wish of older persons not to be perceived as old and/or in need of assistance (in the form of technology); this wish explains the prevalent demand for unobtrusive technology. Obtrusiveness, in the context of SHHTs, is defined as "undesirably prominent and/or noticeable", yet this definition should include the user's perception and environment, and is thus not objectively applicable [132]. Nevertheless, we can infer that by "unobtrusive", users mean SHHTs that are not noticeable by them or, most importantly, by other persons, possibly to reduce the stigma associated with using a technology deemed to be for persons with certain limitations. Further research will have to confirm whether unobtrusive technology actually reduces stigma and/or fosters acceptance of SHHTs in caregiving.

Lastly, the sub-theme of the stigmatization of women and immigrants in caregiving, and of technology possibly exacerbating their caregiving burden, was discovered in only two theoretical publications [47, 71]. It is well known that the caregiving burden falls mostly upon women [133, 134], many of them with a migration background in the case of live-in caregivers [135, 136]; it is therefore surprising that we found no indication of technology redistributing the burden of care. This is likely because caregiving, technologically assisted or not, remains perceived as a feminine and, unfortunately, low-status profession [137], while the development of technology is still mostly associated with masculinity. This tension between the innovators and the actual users of technology can lead to the exacerbation of stigma for female and migrant caregivers, as human bias is conserved by the technology instead of disrupted by it [137].

Finally, trust was an expected ethical concern, given that it is a widely discussed topic in relation to technology (see for example [123, 138]) and in the context of nursing [95, 139]. Older persons trusted caregivers to understand SHHTs [48], while caregivers feared that older persons would not trust the technology, even though said persons did not express such concerns [32]. One possibility for mitigating such misunderstandings and giving caregivers and care recipients an equal understanding of the technology is education tools [140]. Another surprising finding was that some older persons were inclined to trust SHHTs even more than human caregivers, as they were seen as more reliable [70]. This trust in technology increased when a physical robot rather than a purely virtual agent was involved [60, 65]. Studies in the realm of embodiment of virtual agents and robots suggest that the presence of a body or face promotes human-like interactions with such agents [51]. Furthermore, our systematic review discovered other characteristics that promote trust in SHHTs, such as perceived usefulness [94] and time spent with the technology [59]. Another important aspect is pre-existing trust in the person introducing the technology to the user [34, 93]. In combining these characteristics in the design and implementation of SHHTs in caregiving, researchers and technology developers need to find creative mechanisms to facilitate trustworthiness and foster the adoption of new technologies in caregiving.

Limitations

While we searched 10 databases for publications over a span of 20 years, we are aware that older or newer publications will have escaped our systematic review. Relevant new literature that we found while writing up our results has been incorporated into this manuscript. Furthermore, as we deliberately refrained from using terms related to ethics in our search strings, in order also to capture instances where ethical concerns are absent, this choice may have led to missing a few articles, especially theoretical publications. Lastly, due to a lack of resources, we were unable to carry out independent data extraction for all included papers (N = 156) and chose to validate the quality of the extracted data using a random selection of 10% of the included sample. Since there was high agreement on the extracted data, we are confident about the quality of our study findings.

Conclusions

SHHTs offer the possibility of mitigating the shortage of human caregiving resources and enabling older persons to age in place, adequately supported by technology. However, this shift in caregiving comes with ethical challenges. Investigating if and how these ethical challenges are mentioned in current research on SHHTs in caregiving for older persons was the goal of this systematic review. Through analyzing 156 articles, both empirical and theoretical, we discovered that, while over one third of the articles did not mention any ethical concerns whatsoever, the other two thirds discussed a plethora of ethical issues. Specifically, we discovered emerging concerns with the use of technology in the care of older persons around the themes of human vs. artificial relationships, ageism and stigma, and responsibility. In short, our systematic review offers a comprehensive overview of the currently discussed ethical issues in the context of SHHTs in caregiving for older persons. However, scholars in gerontology, ethics, and technology working on such issues should be aware that ethical concerns will change with each developing technology and the population it is used for. For instance, with the rise of artificial intelligence and machine learning, new intelligent or smart technologies will continue to mature with use and time. Ethical values such as autonomy will therefore require re-evaluation as the technology develops, including deciding whether the person should be asked to re-consent, and how that decision making should proceed should he or she have developed dementia. In sum, more critical work is necessary to prospectively address ethical concerns that may arise with new and developing technologies that could reduce the caregiving burden now and in the future.

Data Availability

All data generated or analyzed during this systematic review are included in this published article and its appendices. Appendix part 1 contains all included articles and their characteristics. Appendix part 2 contains the search strategy and all search strings for all searched databases, as well as the PROSPERO registration number.


Acknowledgements

We thank the information specialist of the University of Basel who advised us on our search strategy.

Funding

Open access funding provided by University of Basel. This study was supported financially by the Swiss National Science Foundation (SNF NRP-77 Digital Transformation, Grant Number 407740_187464/1) as part of the SmaRt homES, Older adUlts, and caRegivers: Facilitating social aCceptance and negotiating rEsponsibilities [RESOURCE] project. The funder took no part in the writing process, and the views expressed in this review are not those of the funder.

Author information

Authors and Affiliations

Institute of Biomedical Ethics, University of Basel, Bernoullistrasse 28, 4056, Basel, Switzerland

Nadine Andrea Felber, Yi Jiao (Angelina) Tian, Bernice Simone Elger & Tenzin Wangmo

Faculty of Medicine, Université Laval, 1050 Av. de la Médecine, G1V0A6, Québec, QC, Canada

Félix Pageau


Contributions

Creation of the search strategy and data extraction was a joint effort of NAF and AT. FP and TW extracted data and prepared it for analysis. AT contributed substantially to the data analysis, together with NAF, who is the first author of this manuscript. TW and BE provided final comments and edits. All authors read and approved the manuscript before submission.

Corresponding author

Correspondence to Nadine Andrea Felber.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Additional file 1: PRISMA 2020 checklist

Additional file 2: Appendix part 1

Additional file 3: Appendix part 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Felber, N.A., Tian, Y., Pageau, F. et al. Mapping ethical issues in the use of smart home health technologies to care for older persons: a systematic review. BMC Med Ethics 24, 24 (2023). https://doi.org/10.1186/s12910-023-00898-w


Received: 15 September 2022

Accepted: 02 March 2023

Published: 29 March 2023

DOI: https://doi.org/10.1186/s12910-023-00898-w


Keywords

  • Biomedical ethics
  • Older persons
  • Health technology


Writing, reading, and critiquing reviews

Douglas Archibald

1 University of Ottawa, Ontario, Canada

Maria Athina Martimianakis

2 University of Toronto, Ontario, Canada

Why reviews matter

What do all authors of the CMEJ have in common? For that matter, what do all health professions education scholars have in common? We all engage with literature. When you have an idea or question, the first thing you do is find out what has been published on the topic of interest. Literature reviews are foundational to any study. They describe what is known about a given topic and lead us to identify a knowledge gap to study. All reviews require authors to accurately summarize, synthesize, interpret and even critique the research literature. 1, 2 In fact, for this editorial we have had to review the literature on reviews. Knowledge and evidence in our field of health professions education are expanding at an ever-increasing rate, so well-written reviews are essential to help readers keep pace. Though reviews may be difficult to write, they will always be read. In this editorial we survey the various forms review articles can take. We also want to provide authors and reviewers at CMEJ with guidance and resources for writing and/or reviewing a review article.

What are the types of reviews conducted in Health Professions Education?

Health professions education attracts scholars from across disciplines and professions. For this reason, there are numerous ways to conduct reviews, and it is important to familiarize oneself with these different forms to be able to effectively situate your work and write a compelling rationale for choosing your review methodology. 1, 2 To do this, authors must contend with an ever-increasing lexicon of review-type articles. In 2009, Grant and colleagues developed a typology of reviews to help readers make sense of the different review types, listing fourteen different ways of conducting reviews, not all of which are mutually exclusive. 3 Interestingly, their typology did not include narrative reviews, which are often used by authors in health professions education. In Table 1, we offer a short description of three common types of review articles submitted to CMEJ.

Three common types of review articles submitted to CMEJ

More recently, authors such as Greenhalgh 4 have drawn attention to the perceived hierarchy of systematic reviews over scoping and narrative reviews. Like Greenhalgh, 4 we argue that systematic reviews should not be seen as the gold standard of all reviews. Instead, it is important to align the method of review with what the authors hope to achieve, and to pursue the review rigorously, according to the tenets of the chosen review type. Sometimes it is helpful to read part of the literature on your topic before deciding on a methodology for organizing and assessing its usefulness. Importantly, whether you are conducting a review or reading reviews, appreciating the differences between review types can also help you weigh the authors’ interpretation of their findings.

In the next section we summarize some general tips for conducting successful reviews.

How to write and review a review article

In 2016, David Cook wrote an editorial for Medical Education on tips for a great review article. 13 These tips are excellent suggestions for all types of articles you are considering submitting to the CMEJ. First, start with a clear question: focused or more general, depending on the type of review you are conducting. Systematic reviews tend to address very focused questions, often summarizing the evidence on your topic. Other types of reviews tend to have broader questions and are more exploratory in nature.

Following your question, choose an approach and plan your methods to match your question, just like you would for a research study. Fortunately, there are guidelines for many types of reviews. As Cook points out, the most important consideration is to be sure that the methods you follow lead to a defensible answer to your review question. To help you prepare a defensible answer, there are many guides available: for systematic reviews, consult the PRISMA guidelines; 13 for scoping reviews, PRISMA-ScR; 14 and for narrative reviews, SANRA. 15 It is also important to explain to readers why you have chosen to conduct a review. You may be introducing a new way of addressing an old problem, drawing links across literatures, or filling in gaps in our knowledge about a phenomenon or educational practice. Cook refers to this as setting the stage. Linking back to the literature is important. In systematic reviews, for example, you must be clear in explaining how your review builds on existing literature and previous reviews. This is your opportunity to be critical: what are the gaps and limitations of previous reviews, and how will your systematic review resolve the shortcomings of previous work? In other types of reviews, such as narrative reviews, it is less about filling a specific knowledge gap and more about generating new research topic areas, exposing blind spots in our thinking, or making creative new links across issues. Whatever type of review paper you are working on, the next steps can be applied to any scholarly writing. Be clear and offer insight. What is your main message? A review is more than a list of studies or references on your topic. Lead your readers to a convincing message, and provide commentary and interpretation for the studies in your review that will help inform your conclusions. For systematic reviews, Cook’s final tip is probably the most important: report completely. You need to explain all your methods and report enough detail that readers can verify the main findings of each study you review. The most common reason CMEJ reviewers recommend declining a review article is that authors have not followed these last tips: they do not provide readers with enough detail to substantiate their interpretations, or the message is not clear. Our recommendation for writing a great review is to follow the tips above and to have colleagues read over your paper to ensure you have provided a clear, detailed description and interpretation.

Finally, we leave you with some resources to guide your review writing. 3, 7, 8, 10, 11, 16, 17 We look forward to seeing your future work. One thing is certain: a better appreciation of what different reviews provide to the field will contribute to more purposeful exploration of the literature and better manuscript writing in general.

In this issue we present many interesting and worthwhile papers, two of which are, in fact, reviews.

Major Contributions

A chance for reform: the environmental impact of travel for general surgery residency interviews by Fung et al. 18 estimated the CO₂ emissions associated with traveling for residency position interviews. Given the high emission levels (a mean of 1.82 tonnes per applicant), they called for the consideration of alternatives such as videoconference interviews.

Understanding community family medicine preceptors’ involvement in educational scholarship: perceptions, influencing factors and promising areas for action by Ward and team 19 identified barriers, enablers, and opportunities to grow educational scholarship at community-based teaching sites. They discovered a growing interest in educational scholarship among community-based family medicine preceptors and hope the identification of successful processes will be beneficial for other community-based Family Medicine preceptors.

Exploring the global impact of the COVID-19 pandemic on medical education: an international cross-sectional study of medical learners by Allison Brown and team 20 studied the impact of COVID-19 on medical learners around the world. There were different concerns depending on the levels of training, such as residents’ concerns with career timeline compared to trainees’ concerns with the quality of learning. Overall, the learners negatively perceived the disruption at all levels and geographic regions.

The impact of local health professions education grants: is it worth the investment? by Susan Humphrey-Murto and co-authors 21 considered factors that lead to the publication of studies supported by local medical education grants. They identified several factors associated with publication success, including previous oral or poster presentations. They hope their results will be valuable for Canadian centres with local grant programs.

Exploring the impact of the COVID-19 pandemic on medical learner wellness: a needs assessment for the development of learner wellness interventions by Stephana Cherak and team 22 studied learner-wellness in various training environments disrupted by the pandemic. They reported a negative impact on learner wellness at all stages of training. Their results can benefit the development of future wellness interventions.

Program directors’ reflections on national policy change in medical education: insights on decision-making, accreditation, and the CanMEDS framework by Dore, Bogie, et al. 23 invited program directors to reflect on the introduction of the CanMEDS framework into Canadian postgraduate medical education programs. Their survey revealed that while program directors (PDs) recognized the necessity of the accreditation process, they did not feel they had a voice when the change occurred. The authors concluded that collaborations with PDs would lead to more successful outcomes.

Experiential learning, collaboration and reflection: key ingredients in longitudinal faculty development by Laura Farrell and team 24 stressed several elements for effective longitudinal faculty development (LFD) initiatives. They found that participants benefited from a supportive and collaborative environment while trying to learn a new skill or concept.

Brief Reports

The effect of COVID-19 on medical students’ education and wellbeing: a cross-sectional survey by Stephanie Thibaudeau and team 25 assessed the impact of COVID-19 on medical students. They reported an overall perceived negative impact, including increased depressive symptoms, increased anxiety, and reduced quality of education.

In Do PGY-1 residents in Emergency Medicine have enough experiences in resuscitations and other clinical procedures to meet the requirements of a Competence by Design curriculum?, Meshkat and co-authors 26 recorded the number of adult medical resuscitations and clinical procedures completed by PGY-1 residents in the Fellow of the Royal College of Physicians Emergency Medicine program and compared them to the Competence by Design requirements. Their study underscored the importance of monitoring the collection of such experiences against pre-set targets. They concluded that residency program curricula should be regularly reviewed to allow for adequate clinical experiences.

Rehearsal simulation for antenatal consults by Anita Cheng and team 27 studied whether rehearsal simulation for antenatal consults helped residents prepare for difficult conversations with parents expecting complications with their baby before birth. They found that while rehearsal simulation improved residents’ confidence and communication techniques, it did not prepare them for unexpected parent responses.

Review Papers and Meta-Analyses

Peer support programs in the fields of medicine and nursing: a systematic search and narrative review by Haykal and co-authors 28 described and evaluated peer support programs in the medical field published in the literature. They found numerous diverse programs and concluded that including a variety of delivery methods to meet the needs of all participants is a key aspect for future peer-support initiatives.

Towards competency-based medical education in addictions psychiatry: a systematic review by Bahji et al. 6 identified addiction interventions to build competency for psychiatry residents and fellows. They found that current psychiatry entrustable professional activities need to be better identified and evaluated to ensure sustained competence in addictions.

Six ways to get a grip on leveraging the expertise of Instructional Design and Technology professionals by Chen and Kleinheksel 29 provided ways to improve technology implementation by clarifying the role that Instructional Design and Technology professionals can play in technology initiatives and technology-enhanced learning. They concluded that a strong collaboration is to the benefit of both the learners and their future patients.

In his article, Seven ways to get a grip on running a successful promotions process, 30 Simon Field provided guidelines for maximizing opportunities for successful promotion experiences. His seven tips included creating a rubric both for self-assessment of the likelihood of success and for adjudication by the committee.

Six ways to get a grip on your first health education leadership role by Stasiuk and Scott 31 provided tips for considering a health education leadership position. They advised readers to be intentional and methodical in accepting or rejecting positions.

Re-examining the value proposition for Competency-Based Medical Education by Dagnone and team 32 described the excitement and controversy surrounding the implementation of competency-based medical education (CBME) by Canadian postgraduate training programs. They proposed observing which elements of CBME had a positive impact on various outcomes.

You Should Try This

In their work, Interprofessional culinary education workshops at the University of Saskatchewan, Lieffers et al. 33 described the implementation of interprofessional culinary education workshops that were designed to provide health professions students with an experiential and cooperative learning experience while learning about important topics in nutrition. They reported an enthusiastic response and cooperation among students from different health professional programs.

In their article, Physiotherapist-led musculoskeletal education: an innovative approach to teach medical students musculoskeletal assessment techniques, Boulila and team 34 described the implementation of physiotherapist-led workshops and examined whether the workshops increased medical students’ musculoskeletal knowledge and their confidence in assessment techniques.

Instagram as a virtual art display for medical students by Karly Pippitt and team 35 used social media as a platform for showcasing artwork done by first-year medical students. They described this shift to online learning due to COVID-19. Using Instagram was cost-saving and widely accessible. They intend to continue with both online and in-person displays in the future.

Adapting clinical skills volunteer patient recruitment and retention during COVID-19 by Nazerali-Maitland et al. 36 proposed a SLIM-COVID framework as a solution to the problem of dwindling volunteer patients due to COVID-19. Their framework is intended to provide actionable solutions to recruit and engage volunteers in a challenging environment.

In Quick Response codes for virtual learner evaluation of teaching and attendance monitoring, Roxana Mo and co-authors 37 used Quick Response (QR) codes to monitor attendance and obtain evaluations for virtual teaching sessions. They found QR codes valuable for quick and simple feedback that could be used for many educational applications.

In Creation and implementation of the Ottawa Handbook of Emergency Medicine, Kaitlin Endres and team 38 described the creation of a handbook intended as an academic resource for medical students as they shift to clerkship. It includes relevant content encountered in Emergency Medicine. While intended for medical students, they also see its value for nurses, paramedics, and other medical professionals.

Commentary and Opinions

The alarming situation of medical student mental health by D’Eon and team 39 appealed to medical education leaders to respond to the high numbers of mental health concerns among medical students. They urged leaders to address the underlying problems, such as the excessive demands of the curriculum.

In the shadows: medical student clinical observerships and career exploration in the face of COVID-19 by Law and co-authors 40 offered potential solutions to replace in-person shadowing that has been disrupted due to the COVID-19 pandemic. They hope the alternatives such as virtual shadowing will close the gap in learning caused by the pandemic.

Letters to the Editor

Canadian Federation of Medical Students’ response to “The alarming situation of medical student mental health”: King et al., 41 on behalf of the Canadian Federation of Medical Students (CFMS), responded to the commentary by D’Eon and team 39 on medical students’ mental health. King called upon the medical education community to join the CFMS in its commitment to improving medical student wellbeing.

Re: “Development of a medical education podcast in obstetrics and gynecology” 42 was written by Kirubarajan in response to the article Development of a medical education podcast in obstetrics and gynecology by Black and team. 43 Kirubarajan applauded the development of the podcast to meet a need in medical education and suggested potential future topics, such as interventions to prevent learner burnout.

Response to “First year medical student experiences with a clinical skills seminar emphasizing sexual and gender minority population complexity” by Kumar and Hassan 44 acknowledged the previously published article by Biro et al. 45 that explored limitations in medical training for the LGBTQ2S community. However, Kumar and Hassan advocated for further progress and reform in medical training to address the health requirements of sexual and gender minorities.

In her letter, Journey to the unknown: road closed!, 46 Rosemary Pawliuk responded to the article Journey into the unknown: considering the international medical graduate perspective on the road to Canadian residency during the COVID-19 pandemic by Gutman et al. 47 Pawliuk agreed that international medical graduates (IMGs) do not have adequate formal representation when it comes to residency training decisions. She therefore challenged health organizations to make changes that give the organizations representing IMGs a voice in decision-making.

In Connections, 48 Sara Guzman created a digital painting to portray her approach to learning. Her image of a hand touching a neuron showed her desire to physically see and touch an active neuron in order to further understand the brain and its connections.


Publics’ views on ethical challenges of artificial intelligence: a scoping review

  • Open access
  • Published: 19 December 2023


  • Helena Machado, ORCID: orcid.org/0000-0001-8554-7619 1
  • Susana Silva, ORCID: orcid.org/0000-0002-1335-8648 2
  • Laura Neiva, ORCID: orcid.org/0000-0002-1954-7597 3


This scoping review examines the research landscape about publics’ views on the ethical challenges of AI. To elucidate how the concerns voiced by the publics are translated within the research domain, this study scrutinizes 64 publications sourced from PubMed ® and Web of Science™. The central inquiry revolves around discerning the motivations, stakeholders, and ethical quandaries that emerge in research on this topic. The analysis reveals that innovation and legitimation stand out as the primary impetuses for engaging the public in deliberations concerning the ethical dilemmas associated with AI technologies. Supplementary motives are rooted in educational endeavors, democratization initiatives, and inspirational pursuits, whereas politicization emerges as a comparatively infrequent incentive. The study participants predominantly comprise the general public and professional groups, followed by AI system developers, industry and business managers, students, scholars, consumers, and policymakers. The ethical dimensions most commonly explored in the literature encompass human agency and oversight, followed by issues centered on privacy and data governance. Conversely, topics related to diversity, nondiscrimination, fairness, societal and environmental well-being, technical robustness, safety, transparency, and accountability receive comparatively less attention. This paper delineates the concrete operationalization of calls for public involvement in AI governance within the research sphere. It underscores the intricate interplay between ethical concerns, public involvement, and societal structures, including political and economic agendas, which serve to bolster technical proficiency and affirm the legitimacy of AI development in accordance with the institutional norms that underlie responsible research practices.


1 Introduction

Current advances in the research, development, and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics, accompanied by calls for AI technology to be democratically accountable and trustworthy from the publics’ perspective [1, 2, 3, 4, 5]. Consequently, several ethics guidelines for AI have been released in recent years; as of early 2020, there were 167 AI ethics guideline documents around the world [6]. Organizations such as the European Commission (EC), the Organisation for Economic Co-operation and Development (OECD), and the United Nations Educational, Scientific and Cultural Organization (UNESCO) recognize that public participation is crucial for ensuring the responsible development and deployment of AI technologies, emphasizing the importance of inclusivity, transparency, and democratic processes to effectively address the societal implications of AI [11, 12]. These efforts were publicly announced as aiming to create a common understanding of ethical AI development and to foster responsible practices that address societal concerns while maximizing AI’s potential benefits [13, 14]. The concept of human-centric AI has emerged as a key principle in many of these regulatory initiatives, with the purposes of ensuring that human values are incorporated into the design of algorithms, that humans do not lose control over automated systems, and that AI is used in the service of humanity and the common good to improve human welfare and human rights [15]. Using the same rationale, the opacity and rapid diffusion of AI have prompted debate about how such technologies ought to be governed and which actors and values should be involved in shaping governance regimes [1, 2].

While industry and business have traditionally been seen as having little or no incentive to engage with ethics or in dialogue, AI leaders currently sponsor AI ethics [6, 16, 17]. However, there are concerns that calls for ethics, public participation, and human-centric approaches in areas of high economic and political importance, such as AI, may be deployed within an instrumental rationale by the AI industry. A growing corpus of critical literature has characterized the development of AI ethics as an effort to reduce ethics to another form of industrial capital, or to coopt and capture researchers as part of efforts to control public narratives [12, 18]. According to some authors, one of the reasons why ethics is so appealing to many AI companies is that it calms critical voices from the publics; AI ethics is thus seen as a way of gaining or restoring trust, credibility, and support, as well as legitimation, while criticism is appeased so that the agenda of industry and science can be maintained [12, 17, 19, 20].

Critical approaches also point out that, despite regulatory initiatives explicitly invoking the need to incorporate human values into AI systems, their main objective is to set rules and standards that enable AI-based products and services to circulate in markets [20, 21, 22], and they might serve to avoid or delay binding regulation [12, 23]. Other critical studies argue that AI ethics fails to mitigate the racial, social, and environmental damage of AI technologies in any meaningful sense [24] and excludes alternative ethical practices [25, 26]. As explained by Su [13], in a paper that considers the promise and perils of international human rights in AI governance, while human rights can serve as an authoritative source for holding AI developers accountable, their application to AI governance in practice shows a lack of effectiveness, an inability to effect structural change, and the problem of cooptation.

In a value analysis of AI national strategies, Wilson [5] concludes that the publics are primarily cast as recipients of AI’s abstract benefits, users of AI-driven services and products, a workforce in need of training and upskilling, or an important element of a thriving democratic society that unlocks AI’s potential. According to the author, when AI strategies articulate a governance role for the publics, it reads more like an afterthought or rhetorical gesture than a clear commitment to putting “society-in-the-loop” into AI design and implementation [5, pp. 7–8]. Another study of how public participation is framed in AI policy documents [4] shows that high expectations are assigned to public participation as a solution to concerns about the concentration of power, increases in inequality, lack of diversity, and bias. In practice, however, this framing thus far gives little consideration to challenges well known to researchers and practitioners of public participation in science and technology, such as the difficulty of achieving consensus among diverse societal views, the high resource requirements of public participation exercises, and the risks of capture by vested interests [4, pp. 170–171]. These studies consistently reveal a noteworthy pattern: while references to public participation in AI governance are prevalent in the majority of AI national strategies, they tend to remain abstract and are often overshadowed by other roles, values, and policy concerns.

Some authors have thus contended that the increasing demand to involve multiple stakeholders in AI governance, including the publics, signifies a discernible transformation within the sphere of science and technology policy. This transformation frequently embraces the framework of “responsible innovation”, which emphasizes alignment with societal imperatives; responsiveness to evolving ethical, social, and environmental considerations; and the participation of the publics as well as traditionally defined stakeholders [3, 28]. When investigating how the conception and promotion of public participation in European science and technology policies have evolved, Macq, Tancoigne, and Strasser [29] distinguish between “participation in decision-making” (pertaining to science policy decisions or decisions on research topics) and “participation in knowledge and innovation-making”. They find that “while public participation had initially been conceived and promoted as a way to build legitimacy of research policy decisions by involving publics into decision-making processes, it is now also promoted as a way to produce better or more knowledge and innovation by involving publics into knowledge and innovation-making processes, and thus building legitimacy for science and technology as a whole” [29, p. 508]. Although this shift in science and technology research policies has been noted, there is a noticeable void in the literature regarding how concrete research practices incorporate public perspectives and embrace multistakeholder approaches, inclusion, and dialogue.

Several studies have delved into the framing of the publics’ role within AI governance in several instances (from Big Tech initiatives to hiring ethics teams, to guidelines issued by multiple institutions, to governments’ national policies related to AI development), discussing the underlying motivations driving the publics’ participation and the ethical considerations resulting from such involvement. Yet there remains a notable scarcity of knowledge concerning how publicly voiced concerns are concretely translated into research efforts [ 30 , pp. 3–4; 31 , p. 8; 6 ]. To address this crucial gap, our scoping review analyses the research landscape on the publics’ views on the ethical challenges of AI. Our primary objective is to uncover the motivations behind involving the publics in research initiatives, identify the segments of the publics considered in these studies, and illuminate the ethical concerns that warrant specific attention. Through this scoping review, we aim to enhance understanding of the political and social backdrop within which debates and prior commitments regarding values and conditions for the publics’ participation in matters related to science and technology are formulated and expressed [ 29 , 32 , 33 ], and of the specific normative social commitments projected and performed by institutional science [ 34 , p. 108; 35 , p. 856].

We followed the guidance for descriptive systematic scoping reviews by Levac et al. [ 36 ], based on the methodological framework developed by Arksey and O’Malley [ 37 ]. The steps of the review are listed below:

2.1 Stage 1: identifying the research question

The central question guiding this scoping review is the following: What motivations, publics and ethical issues emerge in research addressing the publics’ views on the ethical challenges of AI? We ask:

What motivations for engaging the publics with AI technologies are articulated?

Who are the publics invited to participate?

Which ethical issues concerning AI technologies are perceived as needing the participation of the publics?

2.2 Stage 2: identifying relevant studies

A search of the publications on PubMed® and Web of Science™ was conducted on 19 May 2023, with no restriction set for language or time of publication, using the following search expression: (“AI” OR “artificial intelligence”) AND (“public” OR “citizen”) AND “ethics”. The search was followed by backwards reference tracking, examining the references of the selected publications based on full-text assessment.
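To make the search strategy concrete, the sketch below shows how the stated expression could be submitted to PubMed programmatically. This is an illustration only, not the authors' procedure: it assumes the Biopython package and a placeholder contact e-mail, and Web of Science would still need to be searched through its own interface.

```python
# Minimal sketch of running the review's search expression against PubMed,
# assuming Biopython is installed (pip install biopython). Illustrative only.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical address; NCBI requires one

query = '("AI" OR "artificial intelligence") AND ("public" OR "citizen") AND "ethics"'

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"Records found: {record['Count']}")    # total hit count reported by PubMed
print("First PMIDs:", record["IdList"][:10])  # identifiers for later screening
```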

2.3 Stage 3: study selection

The inclusion criteria allowed only empirical, peer-reviewed, original full-length studies written in English that explored the publics’ views on the ethical challenges of AI as their main outcome. The exclusion criteria disallowed studies focusing on media discourses and texts. A total of 1612 records were retrieved. After the removal of duplicates, 1485 records were examined. Two authors (HM and SS) independently screened all the papers retrieved, first based on the title and abstract and afterward based on the full text. The screening was crosschecked and discussed in both phases, and perfect agreement was achieved.

The screening process is summarized in Fig.  1 . Based on title and abstract assessments, 1265 records were excluded because they were neither original full-length peer-reviewed empirical studies nor focused on the publics’ views on the ethical challenges of AI. Of the 220 fully read papers, 54 met the inclusion criteria. After backwards reference tracking, 10 papers were included, and the final review was composed of 64 papers.
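The reported counts can be tallied as a simple consistency check; the figures below are taken directly from the text and the snippet is purely illustrative.

```python
# Arithmetic check of the screening flow described above (illustrative only).
retrieved = 1612                  # records retrieved from both databases
after_dedup = 1485                # records examined after duplicate removal
excluded_on_title_abstract = 1265

full_text_read = after_dedup - excluded_on_title_abstract
assert full_text_read == 220      # papers read in full

included_from_search = 54
included_from_reference_tracking = 10
assert included_from_search + included_from_reference_tracking == 64  # final corpus

print("Duplicates removed:", retrieved - after_dedup)  # 127
```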

Fig. 1 Flowchart showing the search results and screening process for the scoping review of publics’ views on the ethical challenges of AI

2.4 Stage 4: charting the data

A standardized data extraction sheet was initially developed by two authors (HM and SS) and completed by two coders (SS and LN), including both quantitative and qualitative data (Supplemental Table “Data Extraction”). We used MS Excel to chart the data from the studies.

The two coders independently charted the first 10 records, with any disagreements or uncertainties in data abstraction discussed and resolved by consensus. The forms were further refined and finalized upon consensus before completing the data charting process. Each of the remaining records was charted by one coder. Two meetings were held to ensure consistency in data charting and to verify accuracy. The first author (HM) reviewed the results.

Descriptive data for the characterization of studies included information about the authors and publication year, the country where the study was developed, study aims, type of research (quantitative, qualitative, or other), assessment of the publics’ views, and sample. The types of research participants recruited as publics were coded into 11 categories: developers of AI systems; managers from industry and business; representatives of governance bodies; policymakers; academics and researchers; students; professional groups; general public; local communities; patients/consumers; and other (specify).
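A hypothetical skeleton of such an extraction sheet is sketched below using pandas rather than MS Excel; the column names paraphrase the fields described above and are not the authors' actual field names.

```python
# Illustrative data-charting schema; column names are paraphrases, not the
# authors' actual extraction sheet.
import pandas as pd

PUBLIC_CATEGORIES = [
    "developers of AI systems", "managers from industry and business",
    "representatives of governance bodies", "policymakers",
    "academics and researchers", "students", "professional groups",
    "general public", "local communities", "patients/consumers", "other",
]

extraction_sheet = pd.DataFrame(columns=[
    "authors", "publication_year", "country", "study_aims",
    "research_type",      # quantitative, qualitative, or other
    "views_assessment",   # how the publics' views were assessed
    "sample",
    "publics",            # one or more labels from PUBLIC_CATEGORIES
])
```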

Data on the main motivations for researching the publics’ views on the ethical challenges of AI were also gathered. Authors’ accounts of their motivations were synthesized into eight categories according to the coding framework proposed by Weingart and colleagues [ 33 ] concerning public engagement with science and technology-related issues: education (to inform and educate the public about AI, improving public access to scientific knowledge); innovation (to promote innovation, the publics are considered to be a valuable source of knowledge and are called upon to contribute to knowledge production, bridge building and including knowledge outside ‘formal’ ethics); legitimation (to promote public trust in and acceptance of AI, as well as of policies supporting AI); inspiration (to inspire and raise interest in AI, to secure a STEM-educated labor force); politicization (to address past political injustices and historical exclusion); democratization (to empower citizens to participate competently in society and/or to participate in AI); other (specify); and not clearly evident.

Based on the content analysis technique [ 38 ], ethical issues perceived as needing the participation of the publics were identified through quotations stated in the studies. These were then summarized in seven key ethical principles, according to the proposal outlined by the EC's Ethics Guidelines for Trustworthy AI [ 39 ]: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination and fairness; societal and environmental well-being; and accountability.
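Because the coding of quotations to the seven EC principles was done manually by the authors, the toy sketch below is only a rough illustration of the mapping idea; the keyword lists are invented for demonstration and play no role in the review itself.

```python
# Toy keyword-based mapping of quotations to the seven EC Trustworthy AI
# principles; the keywords are invented, and the review's coding was manual.
EC_PRINCIPLES = {
    "human agency and oversight": ["autonomy", "human control", "oversight"],
    "technical robustness and safety": ["safety", "reliability", "error"],
    "privacy and data governance": ["privacy", "data protection", "consent"],
    "transparency": ["transparency", "explainability", "black box"],
    "diversity, nondiscrimination and fairness": ["bias", "fairness", "discrimination"],
    "societal and environmental well-being": ["employment", "environment", "well-being"],
    "accountability": ["accountability", "responsibility", "liability"],
}

def tag_quotation(quote: str) -> list[str]:
    """Return the principles whose keywords appear in the quotation."""
    text = quote.lower()
    return [p for p, kws in EC_PRINCIPLES.items() if any(k in text for k in kws)]

print(tag_quotation("Participants worried about data protection and consent."))
# -> ['privacy and data governance']
```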

2.5 Stage 5: collating, summarizing, and reporting the results

The main characteristics of the 64 included studies can be found in Table 1. Studies were grouped by type of research and ordered by year of publication. The findings regarding the publics invited to participate are presented in Fig. 2. The main motivations for engaging the publics with AI technologies and the ethical issues perceived as needing the participation of the publics are summarized in Tables 2 and 3, respectively. The results are presented below in a narrative format, with complementary tables and figures providing a visual representation of key findings.

Fig. 2 Publics invited to engage with issues framed as ethical challenges of AI

Some methodological limitations of this scoping review should be taken into account when interpreting the results. The use of only two search engines may have excluded relevant studies, although the search was supplemented by scanning the reference lists of eligible studies. An in-depth analysis of the topics explored within each of the seven key ethical principles outlined by the EC's Ethics Guidelines for Trustworthy AI was not conducted; such an analysis would have provided a more detailed understanding of the publics’ views on the ethical challenges of AI.

3.1 Study characteristics

Most of the studies were published in recent years, with 35 of the 64 studies appearing in 2022 and 2023. Journals were indexed either in the Science Citation Index Expanded (n = 25) or the Social Science Citation Index (n = 23), with fewer journals indexed in the Emerging Sources Citation Index (n = 7) and the Arts and Humanities Citation Index (n = 2). The works covered a wide range of fields, including health and medicine (services, policy, medical informatics, medical ethics, public and environmental health); education; business, management and public administration; computer science; information sciences; engineering; robotics; communication; psychology; political science; and transportation. Beyond general assessments of the publics’ attitudes toward, preferences for, and expectations and concerns about AI, the publics’ views on the ethical challenges of AI technologies have been studied mainly in relation to healthcare and public services and less frequently regarding autonomous vehicles (AV), education, robotic technologies, and smart homes. Most of the studies (n = 47) were funded by research agencies, and 7 papers reported conflicts of interest.

Quantitative research approaches assessed the publics’ views on the ethical challenges of AI mainly through online or web-based surveys and experimental platforms, relying on Delphi studies, moral judgment studies, hypothetical vignettes, and choice-based/comparative conjoint surveys. The 25 qualitative studies collected data mainly through semistructured or in-depth interviews. Analysis of publicly available material reporting on AI-use cases, focus groups, a post hoc self-assessment, World Café, participatory research, and practice-based design research were each used once or twice. Multi- or mixed-methods studies relied on surveys with open-ended and closed questions, frequently combined with focus groups, in-depth interviews, literature reviews, expert opinions, examinations of relevant curriculum examples, tests, and reflexive writings.

The studies were performed (where stated) in a wide variety of countries, including the USA and Australia. More than half of the studies (n = 38) were conducted in a single country. Almost all studies used nonprobability sampling techniques. In quantitative studies, sample sizes varied from 2.3 million internet users in an online experimental platform study [ 40 ] to 20 participants in a Delphi study [ 41 ]. In qualitative studies, samples varied from 123 participants in 21 focus groups [ 42 ] to six expert interviews [ 43 ]. In multi- or mixed-methods studies, samples varied from 2036 participants [ 44 ] to 21 participants [ 45 ].

3.2 Motivations for engaging the publics

The qualitative synthesis of the motivations for researching the publics’ views on the ethical challenges of AI is presented in Table 2, ordered by the number of studies referencing them in the scoping review. More than half of the studies (n = 37) addressed a single motivation. Innovation (n = 33) and legitimation (n = 29) had the highest relevance as motivations for engaging the publics with the ethical challenges of AI technologies, with both articulated in 15 studies. Additional motivations were rooted in education (n = 13), democratization (n = 11), and inspiration (n = 9). Politicization was mentioned in five studies. Although these were not authors’ stated motivations, a few studies were found to have educational [ 46 , 47 ], democratization [ 48 , 49 ], and legitimation or inspiration effects [ 50 ].

Considering the publics as a valuable source of knowledge that can add real value to innovation processes in both the private and public sectors was the most frequent motivation mentioned in the literature. The call for public participation is rooted in the aspiration to add knowledge outside “formal” ethics at three interrelated levels. First, at a societal level, by asking what kind of AI we want as a society, based on novel experiments on public policy preferences [ 51 ] and on the study of public perceptions, values, and concerns regarding AI design, development, and implementation in domains such as health care [ 46 , 52 , 53 , 54 , 55 ], public and social services [ 49 , 56 , 57 , 58 ], AV [ 59 , 60 ], and journalism [ 61 ]. Second, at a practical level, the literature provides insights into the perceived usefulness of AI applications [ 62 , 63 ] and into choices between boosting developers’ voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight [ 64 ], as well as suggesting specific guidance for the development and use of AI systems [ 65 , 66 , 67 ]. Finally, at a theoretical level, the literature expands the social-technical perspective [ 68 ] and motivated-reasoning theory [ 69 ].

Legitimation was also a frequent motivation for engaging the publics, underpinned by the need for public trust in, and social licences for, implementing AI technologies. Ensuring the long-term social acceptability of AI as a trustworthy technology [ 70 , 71 ] was perceived as essential to support its use and justify its implementation. In one study [ 72 ], the authors developed an AI ethics scale to quantify how AI research is accepted in society and which areas of ethical, legal, and social issues (ELSI) people are most concerned with. Public trust in and acceptance of AI are claimed to require social institutions such as governments, the private sector, industry bodies, and the science community to behave in a trustworthy manner, respect public concerns, align with societal values, and involve members of the publics in decision-making and public policy [ 46 , 48 , 73 , 74 , 75 ], as well as in the responsible design and integration of AI technologies [ 52 , 76 , 77 ].

Education, democratization, and inspiration had a more modest presence as motivations to explore the publics’ views on the ethical challenges of AI. Considering the emergence of new roles and tasks related to AI, the literature has pointed to the need to ensure the safe use of AI technologies by incorporating ethics and career futures into the education, preparation, and training of both middle school and university students and the current and future health workforce. Improvements in education and guidance for developers and older adults were also noted. The views of the publics on what needs to be learned, and on how this learning may be supported or assessed, were perceived as crucial. In one study [ 78 ], the authors developed strategies that promote learning related to AI through collaborative media production, connecting computational thinking to civic issues and creative expression. In another study [ 79 ], real-world scenarios were successfully used as a novel approach to teaching AI ethics. Rhim et al. [ 76 ] provided AV moral behavior design guidelines for policymakers, developers, and the publics by reducing the abstractness of AV morality.

Studies motivated by democratization promoted broader public participation in AI, aiming to empower citizens both to express their understandings, apprehensions, and concerns about AI [ 43 , 78 , 80 , 81 ] and to address ethical issues in AI as critical consumers, (potential future) developers of AI technologies or would-be participants in codesign processes [ 40 , 43 , 45 , 78 , 82 , 83 ]. Understanding the publics’ views on the ethical challenges of AI is expected to influence companies and policymakers [ 40 ]. In one study [ 45 ], the authors explored how a digital app might support citizens’ engagement in AI governance by informing them, raising public awareness, measuring publics’ attitudes and supporting collective decision-making.

Inspiration revolved around three main motivations: to raise public interest in AI [ 46 , 48 ]; to guide future empirical and design studies [ 79 ]; and to promote developers’ moral awareness through close collaboration between all those involved in the implementation, use, and design of AI technologies [ 46 , 61 , 78 , 84 , 85 ].

Politicization was the least frequently reported motivation in the literature for engaging the publics. Recognizing the need to mitigate social biases [ 86 ], public participation to address historically marginalized populations [ 78 , 87 ], and promoting social equity [ 79 ] were the highlighted motives.

3.3 The invited publics

Study participants were mostly the general public and professional groups, followed by developers of AI systems, managers from industry and business, students, academics and researchers, patients/consumers, and policymakers (Fig.  2 ). The views of local communities and representatives of governance bodies were rarely assessed.

Representative samples of the general public were used in five papers related to studies conducted in the USA [ 88 ], Denmark [ 73 ], Germany [ 48 ], and Austria [ 49 , 63 ]. The remaining random or purposive samples from the general public comprised mainly adults and current and potential users of AI products and services, with few studies involving informal caregivers or family members of patients (n = 3), older people (n = 2), and university staff (n = 2).

Samples of professional groups included mainly healthcare professionals (19 out of 24 studies). Educators, law enforcement, media practitioners, and GLAM professionals (galleries, libraries, archives, and museums) were invited once.

3.4 Ethical issues

The ethical issues concerning AI technologies perceived as needing the participation of the publics are depicted in Table 3. They were mapped by counting the number of studies referencing them in the scoping review. Human agency and oversight (n = 55) was the ethical aspect most frequently studied in the literature, followed by privacy and data governance (n = 43). Diversity, nondiscrimination and fairness (n = 39), societal and environmental well-being (n = 39), technical robustness and safety (n = 38), transparency (n = 35), and accountability (n = 31) were less frequently discussed.

The concerns regarding human agency and oversight were the replacement of human beings by AI technologies and deskilling [ 47 , 55 , 67 , 74 , 75 , 89 , 90 ]; the loss of autonomy, critical thinking, and innovative capacities [ 50 , 58 , 61 , 77 , 78 , 83 , 85 , 90 ]; the erosion of human judgment and oversight [ 41 , 70 , 91 ]; and the potential for (over)dependence on technology and “oversimplified” decisions [ 90 ] due to the publics’ lack of expertise in judging and controlling AI technologies [ 68 ]. Beyond these ethical challenges, the following contributions of AI systems to empowering human beings were noted: more fruitful and empathetic social relationships [ 47 , 68 , 90 ]; enhancing human capabilities and quality of life [ 68 , 70 , 74 , 83 , 92 ]; improving efficiency and productivity at work [ 50 , 53 , 62 , 65 , 83 ] by reducing errors [ 77 ], relieving the burden on professionals, and/or increasing the accuracy of decisions [ 47 , 55 , 90 ]; and facilitating and expanding access to safe and fair healthcare [ 42 , 53 , 54 ] through earlier diagnosis, increased screening and monitoring, and personalized prescriptions [ 47 , 90 ]. To foster human rights and allow people to make informed decisions, the final say should rest with the person themselves [ 42 , 43 , 46 , 55 , 64 , 67 , 73 , 76 ]. People should determine where and when to use automated functions and which functions to use [ 44 , 54 ], developing “job sharing” arrangements in which machines and humans complement and enrich each other [ 56 , 65 , 90 ]. The literature highlights the need to build AI systems that remain under human control [ 48 , 70 ], whether to confirm or to correct the AI system’s outputs and recommendations [ 66 , 90 ]. Proper oversight mechanisms were seen as crucial to ensure accuracy and completeness, with divergent views about who should be involved in public participation approaches [ 86 , 87 ].

Data sharing and/or data misuse were considered the major roadblocks regarding privacy and data governance, with some studies pointing out participants’ distrust related to commercial interests in health data [ 55 , 90 , 93 , 94 , 95 ] and concerns about information getting into the hands of hackers, banks, employers, insurance companies, or governments [ 66 ]. As data are the backbone of AI, secure methods of data storage and protection were understood as needing to be provided from the input to the output data. Recognizing that, in contemporary societies, people’s awareness of the consequences of smartphone use results in the minimization of privacy concerns [ 93 ], some studies have focused on the impacts of data breaches and the loss of privacy and confidentiality [ 43 , 45 , 46 , 60 , 62 , 80 ] in relation to health-sensitive personal data [ 46 , 93 ], potentially affecting more vulnerable populations, such as senior citizens and mentally ill patients [ 82 , 90 ] as well as those at young ages [ 50 ], and arising when journalistic organizations collect user data to provide personalized news suggestions [ 61 ]. The need to find a balance between widening access to data and ensuring confidentiality and respect for privacy [ 53 ] was often expressed in three interrelated terms: first, the ability of data subjects to be fully informed about how data will be used, to be given the option of providing informed consent [ 46 , 58 , 78 ], and to control personal information about oneself [ 57 ]; second, the need for regulation [ 52 , 65 , 87 ], with one study reporting that AI developers complain about the complexity, slowness, and obstacles created by regulation [ 64 ]; and last, the testing and certification of AI-enabled products and services [ 71 ]. The study by De Graaf et al. [ 91 ] discussed robots’ right to store and process the data they collect, while Jenkins and Draper [ 42 ] explored less intrusive ways in which a robot could use information to report back to carers about a patient’s adherence to healthcare.

Studies discussing diversity, nondiscrimination, and fairness pointed to the development of AI systems that reflect and reify social inequalities [ 45 , 78 ] through nonrepresentative datasets [ 55 , 58 , 96 , 97 ] and algorithmic bias [ 41 , 45 , 85 , 98 ] that might benefit some more than others. This could have multiple negative consequences for different groups based on ethnicity, disease, physical disability, age, gender, culture, or socioeconomic status [ 43 , 55 , 58 , 78 , 82 , 87 ], from the dissemination of hate speech [ 79 ] to the exacerbation of discrimination, which negatively impacts peace and harmony within society [ 58 ]. As there were cross-country differences and issue variations in the publics’ views of discriminatory bias [ 51 , 72 , 73 ], fostering diversity, inclusiveness, and cultural plurality [ 61 ] was perceived as crucial to ensure the transferability/effectiveness of AI systems across all social groups [ 60 , 94 ]. Diversity, nondiscrimination, and fairness were also discussed as a means to help reduce health inequalities [ 41 , 67 , 90 ], to compensate for human preconceptions about certain individuals [ 66 ], and to promote the equitable distribution of benefits and burdens [ 57 , 71 , 80 , 93 ], namely, supporting access by all to the same updated and high-quality AI systems [ 50 ]. In one study [ 83 ], students provided constructive solutions for building an unbiased AI system, such as using a diverse dataset engaging people of different ages, genders, ethnicities, and cultures. In another study [ 86 ], participants recommended diverse approaches to mitigating algorithmic bias, from open disclosure of limitations to consumer and patient engagement, representation of marginalized groups, incorporation of equity considerations into sampling methods, and legal recourse, and identified a wide range of stakeholders who may be responsible for addressing AI bias: developers, healthcare workers, manufacturers and vendors, policymakers and regulators, AI researchers, and consumers.

Impacts on employment and social relationships were considered the two major ethical challenges regarding societal and environmental well-being. The literature discussed tensions between job creation [ 51 ] and job displacement [ 42 , 90 ], and between efficiency [ 90 ] and deskilling [ 57 ]. The concerns regarding future social relationships were the loss of empathy, humanity, and/or sensitivity [ 52 , 66 , 90 , 99 ]; isolation and fewer social connections [ 42 , 47 , 90 ]; laziness [ 50 , 83 ]; anxious counterreactions [ 83 , 99 ]; communication problems [ 90 ]; technology dependence [ 60 ]; plagiarism and cheating in education [ 50 ]; and becoming too emotionally attached to a robot [ 65 ]. To overcome social unawareness [ 56 ] and lack of acceptance [ 65 ] due to financial costs [ 56 , 90 ], ecological burden [ 45 ], fear of the unknown [ 65 , 83 ], and/or moral issues [ 44 , 59 , 100 ], AI systems need to provide public benefit sharing [ 55 ], consider discrepancies between public discourse about AI and the utility of the tools in real-world settings and practices [ 53 ], conform to the best standards of sustainability, and address climate change and environmental justice [ 60 , 71 ]. Successful strategies for promoting the acceptability of robots across contexts included making robots look as approachable and friendly as possible, but not too human-like [ 49 , 65 ], and having them work with, rather than in competition with, humans [ 42 ].

The publics were invited to participate in the following ethical issues related to technical robustness and safety: usability, reliability, liability, and quality assurance checks of AI tools [ 44 , 45 , 55 , 62 , 99 ]; the validity of big data analytic tools [ 87 ]; the degree to which an AI system can perform tasks without errors or mistakes [ 50 , 57 , 66 , 84 , 90 , 93 ]; and the resources needed to ensure appropriate (cyber)security [ 62 , 101 ]. Other studies approached the need to consider both the material and normative concerns of AI applications [ 51 ], namely, assuring that AI systems are developed responsibly, with proper consideration of risks [ 71 ] and sufficient proof of benefits [ 96 ]. One study [ 64 ] highlighted that AI developers tend to be reluctant to recognize safety issues, bias, errors, and failures, and when they do so, they do so selectively and on their own terms, adopting positive-sounding professional jargon such as “AI robustness”.

Some studies recognized the need for greater transparency to reduce the mystery and opaqueness of AI systems [ 71 , 82 , 101 ] and open their “black box” [ 64 , 71 , 98 ]. Clear insights about “what AI is/is not” and “how AI technology works” (definition, applications, implications, consequences, risks, limitations, weaknesses, threats, rewards, strengths, opportunities) were considered necessary to debunk the myth of AI as an independent entity [ 53 ] and to provide sufficient information and understandable explanations of “what’s happening” to society and individuals [ 43 , 48 , 72 , 73 , 78 , 102 ]. Other studies considered that people, when using AI tools, should be made fully aware that these devices are capturing and using their data [ 46 ], as well as how data are collected [ 58 ] and used [ 41 , 46 , 93 ]. Other transparency issues reported in the literature included the need for more information about the composition of data training sets [ 55 ], how algorithms work [ 51 , 55 , 84 , 94 , 97 ], how AI makes a decision [ 57 ], and the motivations for that decision [ 98 ]. Transparency requirements were also addressed as needing the involvement of multiple stakeholders: one study reported that transparency requirements should be seen as a mediator of debate between experts, citizens, communities, and stakeholders [ 87 ] and cannot be reduced to a product feature, avoiding experiences where people feel overwhelmed by explanations [ 98 ] or “too much information” [ 66 ].

Accountability was perceived by the publics as an important ethical issue [ 48 ], while developers expressed mixed attitudes, ranging from moral disengagement to a sense of responsibility and moral conflict and uncertainty [ 85 ]. The literature has revealed public skepticism regarding accountability mechanisms [ 93 ] and criticism of the shift of responsibility away from the tech industries that develop and own AI technologies [ 53 , 68 ], as it opens space for users to assume their own individual responsibility [ 78 ]. This was the case in studies that explored accountability concerns regarding the assignment of fault and responsibility for car accidents involving self-driving technology [ 60 , 76 , 77 , 88 ]. Other studies considered that more attention is needed to scrutinizing each application across the AI life cycle [ 41 , 71 , 94 ], to the explainability of AI algorithms that provide the publics with the causes of AI outcomes [ 58 ], and to regulations that assign clear responsibility concerning litigation and liability [ 52 , 89 , 101 , 103 ].

4 Discussion

Within the realm of research studies encompassed in the scoping review, the contemporary impetus for engaging the publics in ethical considerations related to AI predominantly revolves around two key motivations: innovation and legitimation. This might be explained by the current emphasis on responsible innovation, which values the publics’ participation in knowledge and innovation-making [ 29 ] within a prioritization of the instrumental role of science for innovation and economic return [ 33 ]. Considering the publics a valuable source of knowledge that should be called upon to contribute to knowledge and innovation production is underpinned by the desire for legitimacy, specifically centered on securing the publics’ endorsement of scientific and technological advancements [ 33 , 104 ]. Approaching the publics’ views on the ethical challenges of AI can also be used as a form of risk prevention to reduce conflict and close vital debates in areas of contention [ 5 , 34 , 105 ].

A second aspect that stood out in this finding is a shift in the motivations frequently reported as central for engaging the publics with AI technologies. Previous studies analysing AI national policies and international guidelines addressing AI governance [ 3 , 4 , 5 ], and a study analysing science communication journals [ 33 ], highlighted education, inspiration, and democratization as the most prominent motivations. Our scoping review did not yield similar findings, which might signal a departure, in science policy related to public participation, from the past emphasis on education associated with the deficit model of public understanding of science and on democratization associated with the model of public engagement with science [ 106 , 107 ].

The underlying motives for the publics’ engagement raise the question of the kinds of publics it addresses, i.e., who are the publics that are supposed to be recruited as research participants [ 32 ]. Our findings show a prevalence of the general public, followed by professional groups and developers of AI systems. The wider presence of the general public indicates not only what Hagendijk and Irwin [ 32 , p. 167] describe as a fashionable tendency in policy circles since the late 1990s, especially in Europe, to focus on engaging “the public” in scientific and technological change, but also the avoidance of issues of democratic representation [ 12 , 18 ]. Additionally, the unspecificity of the “public” does not stipulate any particular action [ 24 ], which allows for securing legitimacy for, and protecting the interests of, a wide range of stakeholders [ 19 , 108 ] while bringing the risk of silencing the voices of the very publics with whom engagement is sought [ 33 ]. The focus on approaching the publics’ views on the ethical challenges of AI through the general public also demonstrates how seeking “lay” people’s opinions may be driven by a desire to promote public trust in and acceptance of AI developments, showing how science negotiates challenges and reinstates its authority [ 109 ].

While this strategy is based on nonscientific audiences or individuals who are not associated with any scientific discipline or area of inquiry as part of their professional activities, the converse strategy—i.e., involving professional groups and AI developers—is also noticeable in our findings. This suggests that technocratic expert-dominated approaches coexist with a call for more inclusive multistakeholder approaches [ 3 ]. This coexistence is reinforced by the normative principles of the “responsible innovation” framework, in particular the prescription that innovation should include the publics as well as traditionally defined stakeholders [ 3 , 110 ], whose input has become so commonplace that seeking the input of laypeople on emerging technologies is sometimes described as a “standard procedure” [ 111 , p. 153].

In the body of literature included in the scoping review, human agency and oversight emerged as the predominant ethical dimension under investigation. This finding underscores the pervasive significance attributed to human centricity, which is progressively integrated into public discourses concerning AI, innovation initiatives, and market-driven endeavours [ 15 , 112 ]. From our perspective, the importance given to human-centric AI is emblematic of the “techno-regulatory imaginary” suggested by Rommetveit and van Dijk [ 35 ] in their study of privacy engineering as applied in the European Union’s General Data Protection Regulation. This term encapsulates the evolving collective vision and conceptualization of the role of technology in regulatory and oversight contexts. At least two aspects of the techno-regulatory imaginary stand out, as they are meant to embed technoscience in societally acceptable ways. First, it reinstates pivotal demarcations between humans and nonhumans while concurrently producing an intensified blurring between these two realms. Second, the potential resolutions offered relate to embedding fundamental rights within the structural underpinnings of technological architectures [ 35 ].

Following human agency and oversight, the most frequent ethical issue discussed in the studies in our scoping review was privacy and data governance. Our findings evidence additional central aspects of the “techno-regulatory imaginary” in the sense that, instead of the traditional regulatory sites, modes of protecting privacy and data are increasingly located within more privatized and business-oriented institutions [ 6 , 35 ] and crafted according to a human-centric view of rights. The focus on secure methods of data storage and protection to be provided from the input to the output data, the testing and certification of AI-enabled products and services, the risks of data breaches, and the calls for a balance between widening access to data and ensuring confidentiality and respect for privacy, exhibited by many studies in this scoping review, portray an increasing framing of privacy and data protection within technological and standardization sites. This tendency shows how expertise in privacy and data protection is shifting away from traditional regulatory and legal professionals towards privacy engineers and risk assessors in information security and software development. Another salient element pertains to the distribution of responsibility for privacy and data governance [ 6 , 113 ] within the realm of AI development through engagement with external stakeholders, including users, governmental bodies, and regulatory authorities. It extends from an emphasis on issues derived from data sharing and data misuse to facilitating individuals’ control over their data and privacy preferences and advocating for regulatory frameworks that do not impede the pace of innovation. This distribution of responsibility, shared among the contributions and expectations of different actors, is usually convoked when the operationalization of ethics principles conflicts with AI deployment [ 6 ]. In this sense, privacy and data governance are reconstituted as a “normative transversal” [ 113 , p. 20], which works to stabilize or close controversies while its operationalization does not modify any underlying operations in AI development.

Diversity, nondiscrimination and fairness, societal and environmental well-being, technical robustness and safety, transparency, and accountability were the ethical issues less frequently discussed in the studies included in this scoping review. Among these, the ethical issues of technical robustness and safety, transparency, and accountability “are those for which technical fixes can be or have already been developed” and “implemented in terms of technical solutions” [ 12 , p. 103]. The recognition of issues related to technical robustness and safety expresses explicit admissions of expert ignorance, error, or lack of control, which opens space for a politics of “optimization of algorithms” [ 114 , p. 17] while reinforcing “strategic ignorance” [ 114 , p. 89]. In the words of the sociologist Linsey McGoey, strategic ignorance refers to “any actions which mobilize, manufacture or exploit unknowns in a wider environment to avoid liability for earlier actions” [ 115 , p. 3].

According to the analysis of Jobin et al. [ 11 ] of the global landscape of existing ethics guidelines for AI, transparency, comprising efforts to increase explainability, interpretability, or other acts of communication and disclosure, is the most prevalent principle in the current literature. Transparency gains high relevance in ethics guidelines because this principle has become a pro-ethical condition “enabling or impairing other ethical practices or principles” (Turilli and Floridi 2009, cited in [ 11 , p. 14]). Our findings highlight transparency as a crucial ethical concern for explainability and disclosure. However, as emphasized by Ananny and Crawford [ 116 , p. 973], there are serious limitations to the transparency ideal in making black boxes visible (i.e., disclosing and explaining algorithms), since “being able to see a system is sometimes equated with being able to know how it works and governs it—a pattern that recurs in recent work about transparency and computational systems”. The emphasis on transparency mirrors Aradau and Blanke’s [ 114 ] observation that Big Tech firms are creating their own version of transparency, prompting discussions about their data usage, whether for “explaining algorithms” or for openly addressing bias and discrimination.

The framing of ethical issues related to accountability, as elucidated by the studies within this scoping review, manifests as a commitment to ethical conduct and the transparent allocation of responsibility and legal obligations in instances where the publics encounter algorithmic deficiencies, glitches, or other imperfections. Within this framework, accountability becomes intricately intertwined with the notion of distributed responsibility, as expounded upon in our examination of how the literature addresses challenges in privacy and data governance. Simultaneously, it converges with our discussion of optimizing algorithms in relation to ethical concerns about technical robustness and safety, by which AI systems are portrayed as fallible yet eternally evolving towards optimization. As astutely observed by Aradau and Blanke [ 114 , p. 171], “forms of accountability through error enact algorithmic systems as fallible but ultimately correctable and therefore always desirable. Errors become temporary malfunctions, while the future of algorithms is that of indefinite optimization”.

5 Conclusion

This scoping review of how the publics’ views on the ethical challenges of AI are framed, articulated, and concretely operationalized in the research sector shows that ethical issues and the formation of publics are closely entangled with symbolic and social orders, including political and economic agendas and visions. While Steinhoff [ 6 ] highlights the subordinated nature of AI ethics within an innovation network, drawing on insights from diverse sources beyond Big Tech, we assert that this network is dynamically evolving towards greater hybridity and boundary fusion. In this regard, we extend Steinhoff's argument by emphasizing the imperative for a more nuanced understanding of how this network operates within diverse contexts. Specifically, within the research sector, it operates through a convergence of boundaries, engaging human and nonhuman entities and various disciplines and stakeholders. Concurrently, the advocacy for diversity and inclusivity, along with the acknowledgement of errors and flaws, serves to bolster technical expertise and reaffirm the establishment of order and legitimacy in alignment with the institutional norms underpinning responsible research practices.

Our analysis underscores the growing importance of involving the publics in AI knowledge creation and innovation, both to secure public endorsement and as a tool for risk prevention and conflict mitigation. We observe two distinct approaches: one engaging nonscientific audiences and the other involving professional groups and AI developers, emphasizing the need for inclusivity while safeguarding expert knowledge. Human-centred approaches are gaining prominence, emphasizing both the demarcation and the blending of human and nonhuman entities and the embedding of fundamental rights in technological systems. Privacy and data governance emerge as the second most prevalent ethical concern, with expertise shifting away from traditional regulatory experts towards privacy engineers and risk assessors. The distribution of responsibility for privacy and data governance is a recurring theme, especially in cases where ethics principles conflict with AI deployment. However, there is a notable imbalance in attention: less focus is given to diversity, nondiscrimination and fairness and to societal and environmental well-being than to human-centric AI and to privacy and data governance, which are managed through technical fixes. Last, acknowledging technical robustness and safety, transparency, and accountability as foundational ethics principles reveals an openness to expert limitations, allowing room for the politics of algorithm optimization and framing AI systems as correctable and perpetually evolving.

Data availability

This manuscript has data included as electronic supplementary material. The dataset constructed by the authors, resulting from a search of publications on PubMed® and Web of Science™ and analysed in the current study, is not publicly available, but it is available from the corresponding author upon reasonable request.

Footnote 1: In this article, we employ the term "publics" rather than the singular "public" to delineate our viewpoint concerning public participation in AI. This choice is meant to acknowledge that there are no uniform, monolithic viewpoints or interests. From our perspective, the term "publics" allows for a more nuanced understanding of the various groups, communities, and individuals who may have different attitudes, beliefs, and concerns regarding AI. This choice may differ from the terminology employed in the referenced literature.

Footnote 2: The following examples are particularly illustrative of the multiplicity of organizations emphasizing the need for public participation in AI. The OECD Recommendation of the Council on AI specifically emphasizes the importance of empowering stakeholders, considering their engagement essential to the adoption of trustworthy AI [ 7 , p. 6]. The UNESCO Recommendation on the Ethics of AI emphasizes that public awareness and understanding of AI technologies should be promoted (recommendation 44) and encourages governments and other stakeholders to involve the publics in AI decision-making processes (recommendation 47) [ 8 , p. 23]. The European Union (EU) White Paper on AI [ 9 , p. 259] outlines the EU’s approach to AI, including the need for public consultation and engagement. The Ethics Guidelines for Trustworthy AI [ 10 , pp. 19, 239], developed by the High-Level Expert Group on AI (HLEG) appointed by the EC, emphasize the importance of public participation and consultation in the design, development, and deployment of AI systems.

Footnote 3: “Responsible Innovation” (RI) and “Responsible Research and Innovation” (RRI) emerged in parallel and are often used interchangeably, but they are not the same thing [ 27 , 28 ]. RRI is a policy-driven discourse that emerged from the EC in the early 2010s, while RI emerged largely from academic roots. In this paper, we do not consider the distinctive features of each discourse but instead focus on the common features they share.

Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., Floridi, L.: Artificial intelligence and the ‘good society’: the US, EU, and UK approach. Sci. Eng. Ethics 24 , 505–528 (2017). https://doi.org/10.1007/s11948-017-9901-7

Cussins, J.N.: Decision points in AI governance. CLTC white paper series. Center for Long-term Cybersecurity. https://cltc.berkeley.edu/publication/decision-points-in-ai-governance/ (2020). Accessed 8 July 2023

Ulnicane, I., Okaibedi Eke, D., Knight, W., Ogoh, G., Stahl, B.: Good governance as a response to discontents? Déjà vu, or lessons for AI from other emerging technologies. Interdiscip. Sci. Rev. 46 (1–2), 71–93 (2021). https://doi.org/10.1080/03080188.2020.1840220

Ulnicane, I., Knight, W., Leach, T., Stahl, B., Wanjiku, W.: Framing governance for a contested emerging technology: insights from AI policy. Policy Soc. 40 (2), 158–177 (2021). https://doi.org/10.1080/14494035.2020.1855800

Wilson, C.: Public engagement and AI: a values analysis of national strategies. Gov. Inf. Q. 39 (1), 101652 (2022). https://doi.org/10.1016/j.giq.2021.101652

Steinhoff, J.: AI ethics as subordinated innovation network. AI Soc. (2023). https://doi.org/10.1007/s00146-023-01658-5

Organization for Economic Co-operation and Development. Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (2019). Accessed 8 July 2023

United Nations Educational, Scientific and Cultural Organization. Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137 (2021). Accessed 28 June 2023

European Commission. On artificial intelligence – a European approach to excellence and trust. White paper. COM(2020) 65 final. https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (2020). Accessed 28 June 2023

European Commission. The ethics guidelines for trustworthy AI. Directorate-General for Communications Networks, Content and Technology, EC Publications Office. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (2019). Accessed 10 July 2023

Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1 , 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 30 , 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8

Su, A.: The promise and perils of international human rights law for AI governance. Law Technol. Hum. 4 (2), 166–182 (2022). https://doi.org/10.5204/lthj.2332

Ulnicane, I.: Emerging technology for economic competitiveness or societal challenges? Framing purpose in artificial intelligence policy. GPPG. 2 , 326–345 (2022). https://doi.org/10.1007/s43508-022-00049-8

Sigfrids, A., Leikas, J., Salo-Pöntinen, H., Koskimies, E.: Human-centricity in AI governance: a systemic approach. Front. Artif. Intell. 6, 976887 (2023). https://doi.org/10.3389/frai.2023.976887

Benkler, Y.: Don’t let industry write the rules for AI. Nature 569 (7755), 161 (2019). https://doi.org/10.1038/d41586-019-01413-1

Phan, T., Goldenfein, J., Mann, M., Kuch, D.: Economies of virtue: the circulation of ‘ethics’ in Big Tech. Sci. Cult. 31 (1), 121–135 (2022). https://doi.org/10.1080/09505431.2021.1990875

Ochigame, R.: The invention of “ethical AI”: how big tech manipulates academia to avoid regulation. Intercept. https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/ (2019). Accessed 10 July 2023

Ferretti, T.: An institutionalist approach to AI ethics: justifying the priority of government regulation over self-regulation. MOPP 9 (2), 239–265 (2022). https://doi.org/10.1515/mopp-2020-0056

van Maanen, G.: AI ethics, ethics washing, and the need to politicize data ethics. DISO 1 (9), 1–23 (2022). https://doi.org/10.1007/s44206-022-00013-3

Gerdes, A.: The tech industry hijacking of the AI ethics research agenda and why we should reclaim it. Discov. Artif. Intell. 2 (25), 1–8 (2022). https://doi.org/10.1007/s44163-022-00043-3

Amariles, D.R., Baquero, P.M.: Promises and limits of law for a human-centric artificial intelligence. Comput. Law Secur. Rev. 48 (105795), 1–10 (2023). https://doi.org/10.1016/j.clsr.2023.105795

Mittelstadt, B.: Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 1 (11), 501–507 (2019). https://doi.org/10.1038/s42256-019-0114-4

Munn, L.: The uselessness of AI ethics. AI Ethics 3 , 869–877 (2022). https://doi.org/10.1007/s43681-022-00209-w

Heilinger, J.C.: The ethics of AI ethics. A constructive critique. Philos. Technol. 35 (61), 1–20 (2022). https://doi.org/10.1007/s13347-022-00557-9

Roche, C., Wall, P.J., Lewis, D.: Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00218-9

Diercks, G., Larsen, H., Steward, F.: Transformative innovation policy: addressing variety in an emerging policy paradigm. Res. Policy 48 (4), 880–894 (2019). https://doi.org/10.1016/j.respol.2018.10.028

Owen, R., Pansera, M.: Responsible innovation and responsible research and innovation. In: Dagmar, S., Kuhlmann, S., Stamm, J., Canzler, W. (eds.) Handbook on Science and Public Policy, pp. 26–48. Edward Elgar, Cheltenham (2019)

Macq, H., Tancoigne, E., Strasser, B.J.: From deliberation to production: public participation in science and technology policies of the European Commission (1998–2019). Minerva 58 (4), 489–512 (2020). https://doi.org/10.1007/s11024-020-09405-6

Cath, C.: Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philos. Trans. Royal Soc. A. 376 , 20180080 (2018). https://doi.org/10.1098/rsta.2018.0080

Wilson, C.: The socialization of civic participation norms in government?: Assessing the effect of the Open Government Partnership on countries’ e-participation. Gov. Inf. Q. 37 (4), 101476 (2020). https://doi.org/10.1016/j.giq.2020.101476

Hagendijk, R., Irwin, A.: Public deliberation and governance: engaging with science and technology in contemporary Europe. Minerva 44 (2), 167–184 (2006). https://doi.org/10.1007/s11024-006-0012-x

Weingart, P., Joubert, M., Connoway, K.: Public engagement with science - origins, motives and impact in academic literature and science policy. PLoS One 16 (7), e0254201 (2021). https://doi.org/10.1371/journal.pone.0254201

Wynne, B.: Public participation in science and technology: performing and obscuring a political–conceptual category mistake. East Asian Sci. 1 (1), 99–110 (2007). https://doi.org/10.1215/s12280-007-9004-7

Rommetveit, K., Van Dijk, N.: Privacy engineering and the techno-regulatory imaginary. Soc. Stud. Sci. 52 (6), 853–877 (2022). https://doi.org/10.1177/03063127221119424

Levac, D., Colquhoun, H., O’Brien, K.: Scoping studies: advancing the methodology. Implement. Sci. 5 (69), 1–9 (2010). https://doi.org/10.1186/1748-5908-5-69

Arksey, H., O’Malley, L.: Scoping studies: towards a methodological framework. Int. J. Soc. Res. Methodol. 8 (1), 19–32 (2005). https://doi.org/10.1080/1364557032000119616

Stemler, S.: An overview of content analysis. Pract. Assess. Res. Eval. 7 (17), 1–9 (2001). https://doi.org/10.7275/z6fm-2e34

European Commission. European Commission's ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (2021). Accessed 8 July 2023

Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al.: The moral machine experiment. Nature 563 (7729), 59–64 (2018). https://doi.org/10.1038/s41586-018-0637-6

Liyanage, H., Liaw, S.T., Jonnagaddala, J., Schreiber, R., Kuziemsky, C., Terry, A.L., de Lusignan, S.: Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb. Med. Inform. 28 (1), 41–46 (2019). https://doi.org/10.1055/s-0039-1677901

Jenkins, S., Draper, H.: Care, monitoring, and companionship: views on care robots from older people and their carers. Int. J. Soc. Robot. 7 (5), 673–683 (2015). https://doi.org/10.1007/s12369-015-0322-y

Tzouganatou, A.: Openness and privacy in born-digital archives: reflecting the role of AI development. AI Soc. 37 (3), 991–999 (2022). https://doi.org/10.1007/s00146-021-01361-3

Liljamo, T., Liimatainen, H., Pollanen, M.: Attitudes and concerns on automated vehicles. Transp. Res. Part F Traffic Psychol. Behav. 59 , 24–44 (2018). https://doi.org/10.1016/j.trf.2018.08.010

Couture, V., Roy, M.C., Dez, E., Laperle, S., Belisle-Pipon, J.C.: Ethical implications of artificial intelligence in population health and the public’s role in its governance: perspectives from a citizen and expert panel. J. Med. Internet Res. 25 , e44357 (2023). https://doi.org/10.2196/44357

McCradden, M.D., Sarker, T., Paprica, P.A.: Conditionally positive: a qualitative study of public perceptions about using health data for artificial intelligence research. BMJ Open 10 (10), e039798 (2020). https://doi.org/10.1136/bmjopen-2020-039798

Blease, C., Kharko, A., Annoni, M., Gaab, J., Locher, C.: Machine learning in clinical psychology and psychotherapy education: a mixed methods pilot survey of postgraduate students at a Swiss University. Front. Public Health 9 (623088), 1–8 (2021). https://doi.org/10.3389/fpubh.2021.623088

Kieslich, K., Keller, B., Starke, C.: Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data Soc. 9 (1), 1–15 (2022). https://doi.org/10.1177/20539517221092956

Willems, J., Schmidthuber, L., Vogel, D., Ebinger, F., Vanderelst, D.: Ethics of robotized public services: the role of robot design and its actions. Gov. Inf. Q. 39 (101683), 1–11 (2022). https://doi.org/10.1016/J.Giq.2022.101683

Tlili, A., Shehata, B., Adarkwah, M.A., Bozkurt, A., Hickey, D.T., Huang, R.H., Agyemang, B.: What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn Environ. 10 (15), 1–24 (2023). https://doi.org/10.1186/S40561-023-00237-X

Ehret, S.: Public preferences for governing AI technology: comparative evidence. J. Eur. Public Policy 29 (11), 1779–1798 (2022). https://doi.org/10.1080/13501763.2022.2094988

Esmaeilzadeh, P.: Use of AI-based tools for healthcare purposes: a survey study from consumers’ perspectives. BMC Med. Inform. Decis. Mak. 20 (170), 1–19 (2020). https://doi.org/10.1186/s12911-020-01191-1

Laï, M.C., Brian, M., Mamzer, M.F.: Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J. Transl. Med. 18 (14), 1–13 (2020). https://doi.org/10.1186/S12967-019-02204-Y

Valles-Peris, N., Barat-Auleda, O., Domenech, M.: Robots in healthcare? What patients say. Int. J. Environ. Res. Public Health 18 (9933), 1–18 (2021). https://doi.org/10.3390/ijerph18189933

Hallowell, N., Badger, S., Sauerbrei, A., Nellaker, C., Kerasidou, A.: “I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease. BMC Med. Ethics 23 (112), 1–14 (2022). https://doi.org/10.1186/s12910-022-00842-4

Criado, J.I., de Zarate-Alcarazo, L.O.: Technological frames, CIOs, and artificial intelligence in public administration: a socio-cognitive exploratory study in Spanish local governments. Gov. Inf. Q. 39 (3), 1–13 (2022). https://doi.org/10.1016/J.Giq.2022.101688

Isbanner, S., O’Shaughnessy, P.: The adoption of artificial intelligence in health care and social services in Australia: findings from a methodologically innovative national survey of values and attitudes (the AVA-AI Study). J. Med. Internet Res. 24 (8), e37611 (2022). https://doi.org/10.2196/37611

Kuberkar, S., Singhal, T.K., Singh, S.: Fate of AI for smart city services in India: a qualitative study. Int. J. Electron. Gov. Res. 18 (2), 1–21 (2022). https://doi.org/10.4018/Ijegr.298216

Kallioinen, N., Pershina, M., Zeiser, J., Nezami, F., Pipa, G., Stephan, A., Konig, P.: Moral judgements on the actions of self-driving cars and human drivers in dilemma situations from different perspectives. Front. Psychol. 10 (2415), 1–15 (2019). https://doi.org/10.3389/fpsyg.2019.02415

Vrščaj, D., Nyholm, S., Verbong, G.P.J.: Is tomorrow’s car appealing today? Ethical issues and user attitudes beyond automation. AI Soc. 35 (4), 1033–1046 (2020). https://doi.org/10.1007/s00146-020-00941-z

Bastian, M., Helberger, N., Makhortykh, M.: Safeguarding the journalistic DNA: attitudes towards the role of professional values in algorithmic news recommender designs. Digit. Journal. 9 (6), 835–863 (2021). https://doi.org/10.1080/21670811.2021.1912622

Kaur, K., Rampersad, G.: Trust in driverless cars: investigating key factors influencing the adoption of driverless cars. J. Eng. Technol. Manag. 48 , 87–96 (2018). https://doi.org/10.1016/j.jengtecman.2018.04.006

Willems, J., Schmid, M.J., Vanderelst, D., Vogel, D., Ebinger, F.: AI-driven public services and the privacy paradox: do citizens really care about their privacy? Public Manag. Rev. (2022). https://doi.org/10.1080/14719037.2022.2063934

Duke, S.A.: Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. Ethics Inf. Technol. 24 (1), 1–15 (2022). https://doi.org/10.1007/s10676-022-09627-0

Cresswell, K., Cunningham-Burley, S., Sheikh, A.: Health care robotics: qualitative exploration of key challenges and future directions. J. Med. Internet Res. 20 (7), e10410 (2018). https://doi.org/10.2196/10410

Amann, J., Vayena, E., Ormond, K.E., Frey, D., Madai, V.I., Blasimme, A.: Expectations and attitudes towards medical artificial intelligence: a qualitative study in the field of stroke. PLoS One 18 (1), e0279088 (2023). https://doi.org/10.1371/journal.pone.0279088

Aquino, Y.S.J., Rogers, W.A., Braunack-Mayer, A., Frazer, H., Win, K.T., Houssami, N., et al.: Utopia versus dystopia: professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills. Int. J. Med. Inform. 169 (104903), 1–10 (2023). https://doi.org/10.1016/j.ijmedinf.2022.104903

Sartori, L., Bocca, G.: Minding the gap(s): public perceptions of AI and socio-technical imaginaries. AI Soc. 38 (2), 443–458 (2022). https://doi.org/10.1007/s00146-022-01422-1

Chen, Y.-N.K., Wen, C.-H.R.: Impacts of attitudes toward government and corporations on public trust in artificial intelligence. Commun. Stud. 72 (1), 115–131 (2021). https://doi.org/10.1080/10510974.2020.1807380

Aitken, M., Ng, M., Horsfall, D., Coopamootoo, K.P.L., van Moorsel, A., Elliott, K.: In pursuit of socially ly-minded data-intensive innovation in banking: a focus group study of public expectations of digital innovation in banking. Technol. Soc. 66 (101666), 1–10 (2021). https://doi.org/10.1016/j.techsoc.2021.101666

Choung, H., David, P., Ross, A.: Trust and ethics in AI. AI Soc. 38 (2), 733–745 (2023). https://doi.org/10.1007/s00146-022-01473-4

Hartwig, T., Ikkatai, Y., Takanashi, N., Yokoyama, H.M.: Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US. AI Soc. 38 (4), 1609–1626 (2023). https://doi.org/10.1007/s00146-021-01323-9

Ploug, T., Sundby, A., Moeslund, T.B., Holm, S.: Population preferences for performance and explainability of artificial intelligence in health care: choice-based conjoint survey. J. Med. Internet Res. 23 (12), e26611 (2021). https://doi.org/10.2196/26611

Zheng, B., Wu, M.N., Zhu, S.J., Zhou, H.X., Hao, X.L., Fei, F.Q., et al.: Attitudes of medical workers in China toward artificial intelligence in ophthalmology: a comparative survey. BMC Health Serv. Res. 21 (1067), 1–13 (2021). https://doi.org/10.1186/S12913-021-07044-5

Ma, J., Tojib, D., Tsarenko, Y.: Sex robots: are we ready for them? An exploration of the psychological mechanisms underlying people’s receptiveness of sex robots. J. Bus. Ethics 178 (4), 1091–1107 (2022). https://doi.org/10.1007/s10551-022-05059-4

Rhim, J., Lee, G.B., Lee, J.H.: Human moral reasoning types in autonomous vehicle moral dilemma: a cross-cultural comparison of Korea and Canada. Comput. Hum. Behav. 102 , 39–56 (2020). https://doi.org/10.1016/j.chb.2019.08.010

Dempsey, R.P., Brunet, J.R., Dubljevic, V.: Exploring and understanding law enforcement’s relationship with technology: a qualitative interview study of police officers in North Carolina. Appl. Sci-Basel 13 (6), 1–17 (2023). https://doi.org/10.3390/App13063887

Lee, C.H., Gobir, N., Gurn, A., Soep, E.: In the black mirror: youth investigations into artificial intelligence. ACM Trans. Comput. Educ. 22 (3), 1–25 (2022). https://doi.org/10.1145/3484495

Kong, S.C., Cheung, W.M.Y., Zhang, G.: Evaluating an artificial intelligence literacy programme for developing university students? Conceptual understanding, literacy, empowerment and ethical awareness. Educ. Technol. Soc. 26 (1), 16–30 (2023). https://doi.org/10.30191/Ets.202301_26(1).0002

Street, J., Barrie, H., Eliott, J., Carolan, L., McCorry, F., Cebulla, A., et al.: Older adults’ perspectives of smart technologies to support aging at home: insights from five world cafe forums. Int. J. Environ. Res. Public Health 19 (7817), 1–22 (2022). https://doi.org/10.3390/Ijerph19137817

Ikkatai, Y., Hartwig, T., Takanashi, N., Yokoyama, H.M.: Octagon measurement: public attitudes toward AI ethics. Int J Hum-Comput Int. 38 (17), 1589–1606 (2022). https://doi.org/10.1080/10447318.2021.2009669

Wang, S., Bolling, K., Mao, W., Reichstadt, J., Jeste, D., Kim, H.C., Nebeker, C.: Technology to support aging in place: older adults’ perspectives. Healthcare (Basel) 7 (60), 1–18 (2019). https://doi.org/10.3390/healthcare7020060

Zhang, H., Lee, I., Ali, S., DiPaola, D., Cheng, Y.H., Breazeal, C.: Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: an exploratory study. Int. J. Artif. Intell. Educ. 33 , 290–324 (2022). https://doi.org/10.1007/s40593-022-00293-3

Henriksen, A., Blond, L.: Executive-centered AI? Designing predictive systems for the public sector. Soc. Stud. Sci. (2023). https://doi.org/10.1177/03063127231163756

Nichol, A.A., Halley, M.C., Federico, C.A., Cho, M.K., Sankar, P.L.: Not in my AI: moral engagement and disengagement in health care AI development. Pac. Symp. Biocomput. 28 , 496–506 (2023)

Aquino, Y.S.J., Carter, S.M., Houssami, N., Braunack-Mayer, A., Win, K.T., Degeling, C., et al.: Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. J. Med. Ethics (2023). https://doi.org/10.1136/jme-2022-108850

Nichol, A.A., Bendavid, E., Mutenherwa, F., Patel, C., Cho, M.K.: Diverse experts’ perspectives on ethical issues of using machine learning to predict HIV/AIDS risk in sub-Saharan Africa: a modified Delphi study. BMJ Open 11 (7), e052287 (2021). https://doi.org/10.1136/bmjopen-2021-052287

Awad, E., Levine, S., Kleiman-Weiner, M., Dsouza, S., Tenenbaum, J.B., Shariff, A., et al.: Drivers are blamed more than their automated cars when both make mistakes. Nat. Hum. Behav. 4 (2), 134–143 (2020). https://doi.org/10.1038/s41562-019-0762-8

Blease, C., Kaptchuk, T.J., Bernstein, M.H., Mandl, K.D., Halamka, J.D., DesRoches, C.M.: Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners’ views. J. Med. Internet Res. 21 (3), e12802 (2019). https://doi.org/10.2196/12802

Blease, C., Locher, C., Leon-Carlyle, M., Doraiswamy, M.: Artificial intelligence and the future of psychiatry: qualitative findings from a global physician survey. Digit. Health 6 , 1–18 (2020). https://doi.org/10.1177/2055207620968355

De Graaf, M.M.A., Hindriks, F.A., Hindriks, K.V.: Who wants to grant robots rights? Front Robot AI 8 , 781985 (2022). https://doi.org/10.3389/frobt.2021.781985

Guerouaou, N., Vaiva, G., Aucouturier, J.-J.: The shallow of your smile: the ethics of expressive vocal deep-fakes. Philos. Trans. R Soc. B Biol. Sci. 377 (1841), 1–11 (2022). https://doi.org/10.1098/rstb.2021.0083

McCradden, M.D., Baba, A., Saha, A., Ahmad, S., Boparai, K., Fadaiefard, P., Cusimano, M.D.: Ethical concerns around use of artificial intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers: a qualitative study. CMAJ Open 8 (1), E90–E95 (2020). https://doi.org/10.9778/cmajo.20190151

Rogers, W.A., Draper, H., Carter, S.M.: Evaluation of artificial intelligence clinical applications: Detailed case analyses show value of healthcare ethics approach in identifying patient care issues. Bioethics 36 (4), 624–633 (2021). https://doi.org/10.1111/bioe.12885

Tosoni, S., Voruganti, I., Lajkosz, K., Habal, F., Murphy, P., Wong, R.K.S., et al.: The use of personal health information outside the circle of care: consent preferences of patients from an academic health care institution. BMC Med. Ethics 22 (29), 1–14 (2021). https://doi.org/10.1186/S12910-021-00598-3

Allahabadi, H., Amann, J., Balot, I., Beretta, A., Binkley, C., Bozenhard, J., et al.: Assessing trustworthy AI in times of COVID-19: deep learning for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients. IEEE Trans. Technol. Soc. 3 (4), 272–289 (2022). https://doi.org/10.1109/TTS.2022.3195114

Gray, K., Slavotinek, J., Dimaguila, G.L., Choo, D.: Artificial intelligence education for the health workforce: expert survey of approaches and needs. JMIR Med. Educ. 8 (2), e35223 (2022). https://doi.org/10.2196/35223

Alfrink, K., Keller, I., Doorn, N., Kortuem, G.: Tensions in transparent urban AI: designing a smart electric vehicle charge point. AI Soc. 38 (3), 1049–1065 (2022). https://doi.org/10.1007/s00146-022-01436-9

Bourla, A., Ferreri, F., Ogorzelec, L., Peretti, C.S., Guinchard, C., Mouchabac, S.: Psychiatrists’ attitudes toward disruptive new technologies: mixed-methods study. JMIR Ment. Health 5 (4), e10240 (2018). https://doi.org/10.2196/10240

Kopecky, R., Kosova, M.J., Novotny, D.D., Flegr, J., Cerny, D.: How virtue signalling makes us better: moral preferences with respect to autonomous vehicle type choices. AI Soc. 38 , 937–946 (2022). https://doi.org/10.1007/s00146-022-01461-8

Lam, K., Abramoff, M.D., Balibrea, J.M., Bishop, S.M., Brady, R.R., Callcut, R.A., et al.: A Delphi consensus statement for digital surgery. NPJ Digit. Med. 5 (100), 1–9 (2022). https://doi.org/10.1038/s41746-022-00641-6

Karaca, O., Çalışkan, S.A., Demir, K.: Medical artificial intelligence readiness scale for medical students (MAIRS-MS) – development, validity and reliability study. BMC Med. Educ. 21 (112), 1–9 (2021). https://doi.org/10.1186/s12909-021-02546-6

Papyshev, G., Yarime, M.: The limitation of ethics-based approaches to regulating artificial intelligence: regulatory gifting in the context of Russia. AI Soc. (2022). https://doi.org/10.1007/s00146-022-01611-y

Balaram, B., Greenham, T., Leonard, J.: Artificial intelligence: real public engagement. RSA, London. https://www.thersa.org/globalassets/pdfs/reports/rsa_artificial-intelligence---real-public-engagement.pdf (2018). Accessed 28 June 2023

Hagendorff, T.: A virtue-based framework to support putting AI ethics into practice. Philos Technol. 35 (55), 1–24 (2022). https://doi.org/10.1007/s13347-022-00553-z

Felt, U., Wynne, B., Callon, M., Gonçalves, M. E., Jasanoff, S., Jepsen, M., et al.: Taking european knowledge society seriously. Eur Comm, Brussels, 1–89 (2007). https://op.europa.eu/en/publication-detail/-/publication/5d0e77c7-2948-4ef5-aec7-bd18efe3c442/language-en

Michael, M.: Publics performing publics: of PiGs, PiPs and politics. Public Underst. Sci. 18 (5), 617–631 (2009). https://doi.org/10.1177/09636625080985

Hu, L.: Tech ethics: speaking ethics to power, or power speaking ethics? J. Soc. Comput. 2 (3), 238–248 (2021). https://doi.org/10.23919/JSC.2021.0033

Strasser, B., Baudry, J., Mahr, D., Sanchez, G., Tancoigne, E.: “Citizen science”? Rethinking science and public participation. Sci. Technol. Stud. 32 (2), 52–76 (2019). https://doi.org/10.23987/sts.60425

De Saille, S.: Innovating innovation policy: the emergence of ‘Responsible Research and Innovation.’ J. Responsible Innov. 2 (2), 152–168 (2015). https://doi.org/10.1080/23299460.2015.1045280

Schwarz-Plaschg, C.: Nanotechnology is like… The rhetorical roles of analogies in public engagement. Public Underst. Sci. 27 (2), 153–167 (2018). https://doi.org/10.1177/0963662516655686

Taylor, R.R., O’Dell, B., Murphy, J.W.: Human-centric AI: philosophical and community-centric considerations. AI Soc. (2023). https://doi.org/10.1007/s00146-023-01694-1

van Dijk, N., Tanas, A., Rommetveit, K., Raab, C.: Right engineering? The redesign of privacy and personal data protection. Int. Rev. Law Comput. Technol. 32 (2–3), 230–256 (2018). https://doi.org/10.1080/13600869.2018.1457002

Aradau, C., Blanke, T.: Algorithmic reason. The new government of self and others. Oxford University Press, Oxford (2022)

Book   Google Scholar  

McGoey, L.: The unknowers. How strategic ignorance rules the word. Zed, London (2019)

Ananny, M., Crawford, K.: Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc. 20 (3), 973–989 (2018). https://doi.org/10.1177/1461444816676645

Download references

Acknowledgements

The authors would like to express their gratitude to Rafaela Granja (CECS, University of Minho) for her insightful support at an early stage in the preparation of this manuscript, and to the AIDA research network for the inspiring debates.

Open access funding provided by FCT|FCCN (b-on). Helena Machado and Susana Silva did not receive funding to assist in the preparation of this work. Laura Neiva received funding from FCT—Fundação para a Ciência e a Tecnologia, I.P., under a PhD Research Studentship (ref. 2020.04764.BD) and under the projects UIDB/00736/2020 (base funding) and UIDP/00736/2020 (programmatic funding).

Author information

Authors and Affiliations

Department of Sociology, Institute for Social Sciences, University of Minho, Braga, Portugal

Helena Machado

Department of Sociology and Centre for Research in Anthropology (CRIA), Institute for Social Sciences, University of Minho, Braga, Portugal

Susana Silva

Institute for Social Sciences, Communication and Society Research Centre (CECS), University of Minho, Braga, Portugal

Laura Neiva


Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by HM, SS, and LN. The first draft of the manuscript was written by HM and SS. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Helena Machado.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose, and no competing interests relevant to the content of this article.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 20 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Machado, H., Silva, S. & Neiva, L. Publics’ views on ethical challenges of artificial intelligence: a scoping review. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00387-1


Received: 08 October 2023

Accepted: 16 November 2023

Published: 19 December 2023

DOI: https://doi.org/10.1007/s43681-023-00387-1

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Artificial intelligence
  • Public involvement
  • Publics’ views
  • Responsible research




Further reading

  1. The Literature Review: A Foundation for High-Quality Medical Education Research

    Purpose and Importance of the Literature Review. An understanding of the current literature is critical for all phases of a research study. Lingard recently invoked the "journal-as-conversation" metaphor as a way of understanding how one's research fits into the larger medical education conversation. As she described it: "Imagine yourself joining a conversation at a social event."

  2. Writing a literature review

    Writing a literature review requires a range of skills to gather, sort, evaluate and summarise peer-reviewed published data into a relevant and informative unbiased narrative. Digital access to research papers, academic texts, review articles, reference databases and public data sets are all sources of information that are available to enrich ...

  3. Literature Reviews

    A literature review search is an iterative process. Your goal is to find all of the articles that are pertinent to your subject. Successful searching requires you to think about the complexity of language. You need to match the words you use in your search to the words used by article authors and database indexers.

  4. Writing a literature review

    When writing a literature review it is important to start with a brief introduction, followed by the text broken up into subsections and conclude with a summary to bring everything together. A summary table including title, author, publication date and key findings is a useful feature to present in your review (see Table 1 for an example).

  5. The Importance of Scholarly Reviews in Medical Literature

    To ensure best-of-evidence care, scholarly reviews streamline the information and adjust the coordinates of our knowledge grid. A scholarly review is a "research within research"—a macrocosm of pooled data that can be retrieved as processed and reliable source material for further studies. Such a review must be distinguished from an ...

  6. Literature Reviews and Systematic Reviews of Research: The ...

    The systematic review is a method whose main aim is to synthesize and summarize the results of studies in the same research area. Systematic reviews differ from literature reviews in several respects. The most distinct difference is that systematic reviews involve a detailed and well-defined plan with a search strategy (Uman, 2011).

  7. Reviewing the literature

    Implementing evidence into practice requires nurses to identify, critically appraise and synthesise research. This may require a comprehensive literature review: this article aims to outline the approaches and stages required and provides a working example of a published review. Literature reviews aim to answer focused questions to: inform professionals and patients of the best available ...

  8. The Advantage of Literature Reviews for Evidence-Based Practice

    The importance of literature reviews is highlighted by the dedication of this entire issue to reviews. ... Katz, J. R. (2015). A systematic review of the literature on screening for exercise-induced asthma: Considerations for school nurses. The Journal of School Nursing, 31(1), 70-76.

  9. Systematic Reviews: Literature Review

    When performing literature searches for a systematic review it's important to use a wide range of resources and searching methods in order to identify all relevant studies. As expert searchers, librarians play an important role in making sure your searches are comprehensive and reproducible. Standard 3.1.1 of the Institute of Medicine's Finding ...

  10. Why the literature review is important

    Why the literature review is important. J Prosthodont. 2010 Dec;19(8):656. doi: 10.1111/j.1532-849X.2010.00664.x. Author: Nellie Kremenak.

  11. An overview of the functionalities of PubMed

    An important feature of PubMed is the ability to conduct truncated searches. As an example, 'spondyloarthropathy', 'ankylosing spondylitis' and 'spondyloarthropathies' are related diseases. One could search for literature related to any of these words by using the search string 'spond*'; that is, searches identify any article ... (a scripted version of this kind of truncated search is sketched after this list).

  12. Mapping ethical issues in the use of smart home health technologies to

    Background The worldwide increase in older persons demands technological solutions to combat the shortage of caregiving and to enable aging in place. Smart home health technologies (SHHTs) are promoted and implemented as a possible solution from an economic and practical perspective. However, ethical considerations are equally important and need to be investigated. Methods We conducted a ...

  13. Writing, reading, and critiquing reviews

    A review is more than just listing studies or referencing literature on your topic. Lead your readers to a convincing message. Provide commentary and interpretation for the studies in your review that will help you to inform your conclusions. For systematic reviews, Cook's final tip is likely the most important: report completely.

  14. Publics' views on ethical challenges of artificial intelligence: a

    This scoping review examines the research landscape about publics' views on the ethical challenges of AI. To elucidate how the concerns voiced by the publics are translated within the research domain, this study scrutinizes 64 publications sourced from PubMed® and Web of Science™. The central inquiry revolves around discerning the motivations, stakeholders, and ethical quandaries that ...

  15. Communication in healthcare: a narrative review of the literature and

    Design: Narrative literature review. Methods: A search was carried out on the databases PubMed, Web of Science and The Cochrane Library by means of the (MeSH) terms 'communication', 'primary health care', 'correspondence', 'patient safety', 'patient handoff' and 'continuity of patient care'. Reviewers screened 4609 records and 462 full texts ...

  16. The role of the literature review in research

    The role of the literature review in research. Plast Surg Nurs. 1993 Summer;13(2):115-6. doi: 10.1097/00006527-199301320-00014. Author: R. J. Hinojosa. PMID: 8346318. No abstract available.
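Entry 11 above describes PubMed's truncated searches in prose. For readers who prefer to script their searching, the minimal sketch below shows one way to run such a query against NCBI's public E-utilities ESearch endpoint using only the Python standard library. The query string and the pubmed_search helper are illustrative assumptions made for this guide, not code taken from any of the articles listed above.

```python
# A minimal sketch of a truncated PubMed search via NCBI's E-utilities
# ESearch endpoint. Standard library only; no API key is required for
# light use, though NCBI asks heavy users to register one.
import json
import urllib.parse
import urllib.request

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term, retmax=10):
    """Return the hit count and first PMIDs for a PubMed query string."""
    params = urllib.parse.urlencode({
        "db": "pubmed",     # search the PubMed database
        "term": term,       # accepts Boolean AND/OR, field tags, and * truncation
        "retmode": "json",  # ask for a JSON response instead of XML
        "retmax": retmax,   # maximum number of PMIDs to return
    })
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as response:
        result = json.load(response)["esearchresult"]
    return int(result["count"]), result["idlist"]

if __name__ == "__main__":
    # 'spond*' matches spondylitis, spondyloarthropathy, and so on,
    # mirroring the truncation example from entry 11.
    count, pmids = pubmed_search("spond* AND review[Publication Type]")
    print(f"{count} records found; first PMIDs: {pmids}")
```

Running the script prints the total record count followed by the first few PMIDs; substituting a query built from your own PICO terms is the only change needed. Note that, at the time of writing, NCBI limits clients without an API key to roughly three requests per second.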