
ADVERTISEMENT FEATURE: Advertiser retains sole responsibility for the content of this article.

IBM Research uses advanced computing to accelerate therapeutic and biomarker discovery

Produced by IBM Research

Over the past decade, artificial intelligence (AI) has emerged as an engine of discovery by helping to unlock information from large repositories of previously inaccessible data. The cloud has expanded computing capacity exponentially by creating a global network of remote and distributed computing resources. And quantum computing has arrived on the scene as a game changer in processing power, harnessing quantum simulation to overcome the scaling and complexity limits of classical computing.

In parallel to these advances in computing, in which IBM is a world leader, the healthcare and life sciences have undergone their own information revolution. There has been an explosion in genomic, proteomic, metabolomic and a plethora of other foundational scientific data, as well as in diagnostic, treatment, outcome and other related clinical data. Paradoxically, however, this unprecedented increase in information volume has resulted in reduced accessibility and a diminished ability to use the knowledge embedded in that information. This reduction is caused by siloing of the data, limitations in existing computing capacity, and processing challenges associated with trying to model the inherent complexity of living systems.

IBM Research is now working on designing and implementing computational architectures that can convert the ever-increasing volume of healthcare and life-sciences data into information that can be used by scientists and industry experts the world over. Through an AI approach powered by high-performance computing (HPC)—a synergy of quantum and classical computing—and implemented in a hybrid cloud that takes advantage of both private and public environments, IBM is poised to lead the way in knowledge integration, AI-enriched simulation, and generative modeling in the healthcare and life sciences. Quantum computing, a rapidly developing technology, offers opportunities to explore and potentially address life-science challenges in entirely new ways.

“The convergence of advances in computation taking place to meet the growing challenges of an ever-shifting world can also be harnessed to help accelerate the rate of discovery in the healthcare and life sciences in unprecedented ways,” said Ajay Royyuru, IBM fellow and CSO for healthcare and life sciences at IBM Research. “At IBM, we are at the forefront of applying these new capabilities for advancing knowledge and solving complex problems to address the most pressing global health challenges.”

Improving the drug discovery value chain

Innovation in the healthcare and life sciences, while overall a linear process leading from identifying drug targets to therapies and outcomes, relies on a complex network of parallel layers of information and feedback loops, each bringing its own challenges (Fig. 1). Success with target identification and validation is highly dependent on factors such as optimized genotype–phenotype linking to enhance target identification, improved predictions of protein structure and function to sharpen target characterization, and refined drug design algorithms for identifying new molecular entities (NMEs). New insights into the nature of disease are further recalibrating the notions of disease staging and of therapeutic endpoints, and this creates new opportunities for improved clinical-trial design, patient selection and monitoring of disease progress that will result in more targeted and effective therapies.


Fig. 1 | Accelerated discovery at a glance. IBM is developing a computing environment for the healthcare and life sciences that integrates the possibilities of next-generation technologies—artificial intelligence, the hybrid cloud, and quantum computing—to accelerate the rate of discovery along the drug discovery and development pipeline.

Powering these advances are several core computing technologies that include AI, quantum computing, classical computing, HPC, and the hybrid cloud. Different combinations of these core technologies provide the foundation for deep knowledge integration, multimodal data fusion, AI-enriched simulations and generative modeling. These efforts are already resulting in rapid advances in the understanding of disease that are beginning to translate into the development of better biomarkers and new therapeutics (Fig. 2).

“Our goal is to maximize what can be achieved with advanced AI, simulation and modeling, powered by a combination of classical and quantum computing on the hybrid cloud,” said Royyuru. “We anticipate that by combining these technologies we will be able to accelerate the pace of discovery in the healthcare and life sciences by up to ten times and yield more successful therapeutics and biomarkers.”

Optimized modeling of NMEs

Developing new drugs hinges on both the identification of new disease targets and the development of NMEs to modulate those targets. Developing NMEs has typically been a one-sided process in which the in silico or in vitro activities of large arrays of ligands would be tested against one target at a time, limiting the number of novel targets explored and resulting in ‘crowding’ of clinical programs around a fraction of validated targets. Recent developments in proteochemometric modeling—machine learning-driven methods to evaluate de novo protein interactions in silico—promise to turn the tide by enabling the simultaneous evaluation of arrays of both ligands and targets, and exponentially reducing the time required to identify potential NMEs.

Proteochemometric modeling relies on the application of deep machine learning tools to determine the combined effect of target and ligand parameter changes on the target–ligand interaction. This bimodal approach is especially powerful for large classes of targets in which active-site similarities and lack of activity data for some of the proteins make the conventional discovery process extremely challenging.
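The bimodal idea can be sketched in a few lines: featurize both the ligand and the target, then let a single model score the pair, so arrays of ligands and arrays of targets can be evaluated together. Everything below (the toy character-bigram "fingerprint", the amino-acid composition features, the fixed weights) is a hypothetical simplification for illustration, not IBM's method:

```python
# Illustrative sketch of the bimodal idea behind proteochemometric modeling:
# featurize BOTH the ligand and the target, then score the pair with one model.

def ligand_features(smiles: str, n_bits: int = 16) -> list[float]:
    """Toy hashed 'fingerprint' over character bigrams of a SMILES string."""
    bits = [0.0] * n_bits
    for i in range(len(smiles) - 1):
        bits[hash(smiles[i:i + 2]) % n_bits] = 1.0
    return bits

def target_features(active_site: str) -> list[float]:
    """Toy composition vector over the 20 amino acids of an active-site sequence."""
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    return [active_site.count(a) / max(len(active_site), 1) for a in alphabet]

def score_pair(ligand: str, active_site: str, weights: list[float]) -> float:
    """Score one ligand-target pair; a trained model would learn the weights."""
    x = ligand_features(ligand) + target_features(active_site)
    return sum(w * xi for w, xi in zip(weights, x))

# Evaluate an array of ligands against an array of targets in one pass,
# rather than one target at a time.
ligands = ["CCO", "c1ccccc1", "CC(=O)O"]
targets = ["GKSTLLK", "HRDLKPEN"]   # hypothetical active-site fragments
weights = [0.1] * 36                # 16 ligand bits + 20 composition features
matrix = [[score_pair(l, t, weights) for t in targets] for l in ligands]
```

Scoring the full ligand-by-target grid in one pass, rather than one target at a time, is what makes the simultaneous evaluation described above possible.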

Protein kinases are ubiquitous components of many cellular processes, and their modulation using inhibitors has greatly expanded the toolbox of treatment options for cancer, as well as neurodegenerative and viral diseases. Historically, however, only a small fraction of the kinome has been investigated for its therapeutic potential owing to biological and structural challenges.

Using deep machine learning algorithms, IBM researchers have developed a generative modeling approach to access large target–ligand interaction datasets and leverage the information to simultaneously predict activities for novel kinase–ligand combinations [1]. Importantly, their approach allowed the researchers to determine that reducing the kinase representation from the full protein sequence to just the active-site residues was sufficient to reliably drive their algorithm, introducing an additional time-saving, data-use optimization step.

Machine learning methods capable of handling multimodal datasets and of optimizing information use provide the tools for substantially accelerating NME discovery and harnessing the therapeutic potential of large and sometimes only minimally explored molecular target spaces.


Fig. 2 | Focusing on therapeutics and biomarkers. The identification of new molecular entities or the repurposing potential of existing drugs [2], together with improved clinical and digital biomarker discovery, as well as disease-staging approaches [3], will substantially accelerate the pace of drug discovery over the next decade. AI, artificial intelligence.

Drug repurposing from real-world data

Electronic health records (EHRs) and insurance claims contain a treasure trove of real-world data about the healthcare history, including medications, of millions of individuals. Such longitudinal datasets hold potential for identifying drugs that could be safely repurposed to treat certain progressive diseases not easily explored with conventional clinical-trial designs because of their long time horizons.

Turning observational medical databases into drug-repurposing engines requires the use of several enabling technologies, including machine learning-driven data extraction from unstructured sources and sophisticated causal inference modeling frameworks.
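As a concrete illustration of one common ingredient of such causal-inference frameworks, the sketch below applies inverse probability of treatment weighting (IPTW) to a toy cohort. The covariate, the propensity coefficients, and the records are all hypothetical, and real trial-emulation pipelines are far more elaborate:

```python
# Minimal sketch of inverse probability of treatment weighting (IPTW),
# one standard causal-inference tool for comparing treated vs. untreated
# patients in observational data.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Each record: (covariate, treated?, bad_outcome?) -- all values invented.
cohort = [
    (0.2, 1, 0), (0.8, 1, 1), (0.3, 0, 0),
    (0.9, 0, 1), (0.5, 1, 0), (0.7, 0, 1),
]

# Step 1: a (pre-fit, hypothetical) propensity model P(treated | covariate).
b0, b1 = -0.5, 1.0
def propensity(x: float) -> float:
    return sigmoid(b0 + b1 * x)

# Step 2: weight each patient by 1/P(received their actual treatment),
# so the two groups become comparable on the covariate.
def iptw_outcome_rate(records, treated_flag: int) -> float:
    num = den = 0.0
    for x, t, y in records:
        if t != treated_flag:
            continue
        p = propensity(x)
        w = 1.0 / p if t == 1 else 1.0 / (1.0 - p)
        num += w * y
        den += w
    return num / den

# Weighted outcome-rate difference: a crude estimate of the treatment effect.
effect = iptw_outcome_rate(cohort, 1) - iptw_outcome_rate(cohort, 0)
```

A negative `effect` here would suggest the treated group fares better after reweighting; real analyses add confidence intervals, many covariates, and sensitivity checks.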

Parkinson’s disease (PD) is one of the most common neurodegenerative disorders in the world, affecting 1% of the population above 60 years of age. Within ten years of disease onset, an estimated 30–80% of PD patients develop dementia, a debilitating comorbidity; this has made developing disease-modifying treatments that slow or stop PD progression a high priority.

IBM researchers have now developed an AI-driven, causal inference framework designed to emulate phase 2 clinical trials to identify candidate drugs for repurposing, using real-world data from two PD patient cohorts totaling more than 195,000 individuals [2]. Extracting relevant data from EHRs and claims data, and using dementia onset as a proxy for evaluating PD progression, the team identified two drugs that significantly delayed progression: rasagiline, a drug already in use to treat motor symptoms in PD, and zolpidem, a known psycholeptic used to treat insomnia. Applying advanced causal inference algorithms, the IBM team was able to show that the drugs exert their effects through distinct mechanisms.

Using observational healthcare data to emulate otherwise costly, large and lengthy clinical trials to identify repurposing candidates highlights the potential for applying AI-based approaches to accelerate potential drug leads into prospective registration trials, especially in the context of late-onset progressive diseases for which disease-modifying therapeutic solutions are scarce.

Enhanced clinical-trial design

One of the main bottlenecks in drug discovery is the high failure rate of clinical trials. Among the leading causes for this are shortcomings in identifying relevant patient populations and therapeutic endpoints owing to a fragmented understanding of disease progression.

Using unbiased machine-learning approaches to model large clinical datasets can advance the understanding of disease onset and progression, and help identify biomarkers for enhanced disease monitoring, prognosis, and trial enrichment that could lead to higher rates of trial success.

Huntington’s disease (HD) is an inherited neurodegenerative disease that results in severe motor, cognitive and psychiatric disorders and occurs in about 3 per 100,000 inhabitants worldwide. HD is a fatal condition, and no disease-modifying treatments have been developed to date.

An IBM team has now used a machine-learning approach to build a continuous dynamic probabilistic disease-progression model of HD from data aggregated from multiple disease registries [3]. Based on longitudinal motor, cognitive and functional measures, the researchers were able to identify nine disease states of clinical relevance, including some in the early stages of HD. Retrospective validation of the results with data from past and ongoing clinical studies showed the ability of the new disease-progression model of HD to provide clinically meaningful insights that are likely to markedly improve patient stratification and endpoint definition.
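The published model is considerably more sophisticated, but the underlying idea of inferring a latent disease state from longitudinal clinical measures can be illustrated with a toy hidden-state progression model scored by the standard HMM forward algorithm. The three states and every probability below are invented for illustration:

```python
# Toy probabilistic disease-progression model: three latent states, scored
# with the hidden Markov model forward algorithm.
states = ["early", "intermediate", "advanced"]
initial = [0.8, 0.15, 0.05]
# Progression-only transitions (no recovery to earlier states).
trans = [
    [0.90, 0.09, 0.01],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
]
# P(observed clinical-score bucket | state), buckets 0 (mild) .. 2 (severe).
emit = [
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
]

def forward(observations: list[int]) -> list[float]:
    """Return P(state | all observations so far): filtered state beliefs."""
    belief = [initial[s] * emit[s][observations[0]] for s in range(3)]
    for obs in observations[1:]:
        predicted = [sum(belief[i] * trans[i][j] for i in range(3))
                     for j in range(3)]
        belief = [predicted[j] * emit[j][obs] for j in range(3)]
    z = sum(belief)
    return [b / z for b in belief]

# A participant whose longitudinal scores worsen over four visits: the
# belief mass shifts toward the later states.
print(forward([0, 1, 1, 2]))
```

Filtered state beliefs of this kind are what make model-based staging and endpoint definition possible: each participant can be placed on the progression axis at every visit.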

Model-based determination of disease stages and relevant clinical and digital biomarkers that lead to better monitoring of disease progression in individual participants is key to optimizing trial design and boosting trial efficiency and success rates.

A collaborative effort

IBM has established its mission to advance the pace of discovery in healthcare and life sciences through the application of a versatile and configurable collection of accelerator and foundation technologies supported by a backbone of core technologies (Fig. 1). It recognizes that a successful campaign to accelerate discovery for therapeutics and biomarkers to address well-known pain points in the development pipeline requires external, domain-specific partners to co-develop, practice, and scale the concept of technology-based acceleration. The company has already established long-term commitments with strategic collaborators worldwide, including the recently launched joint Cleveland Clinic–IBM Discovery Accelerator, which will house the first private-sector, on-premises IBM Quantum System One in the United States. The program is designed to actively engage with universities, government, industry, startups and other relevant organizations, cultivating, supporting and empowering this community with open-source tools, datasets, technologies and educational resources to help break through long-standing bottlenecks in scientific discovery. IBM is engaging with biopharmaceutical enterprises that share this vision of accelerated discovery.

“Through partnerships with leaders in healthcare and life sciences worldwide, IBM intends to boost the potential of its next-generation technologies to make scientific discovery faster, and the scope of the discoveries larger than ever,” said Royyuru. “We ultimately see accelerated discovery as the core of our contribution to supercharging the scientific method.”

References

1. Born, J. et al. J. Chem. Inf. Model. 62, 240–257 (2022).
2. Laifenfeld, D. et al. Front. Pharmacol. 12, 631584 (2021).
3. Mohan, A. et al. Mov. Disord. 37, 553–562 (2022).
4. Harrer, S. et al. Trends Pharmacol. Sci. 40, 577–591 (2019).
5. Parikh, J. et al. J. Pharmacokinet. Pharmacodyn. 49, 51–64 (2022).
6. Kashyap, A. et al. Trends Biotechnol. 40, 647–676 (2021).
7. Norel, R. et al. npj Parkinson’s Dis. 6, 12 (2020).



Collaborate with us. Access the people, tools, and insights to create your next breakthrough.


MIT-IBM Watson AI Lab

A unique collaboration between IBM Research and MIT, the lab explores new AI paradigms, like foundation models, that directly impact business and society. Member companies sit on an advisory board and directly influence the lab’s research portfolio, grounding R&D with invaluable domain knowledge.

IBM Quantum Network

A worldwide collective shaping the future of quantum computing. The Network’s 200+ members include leading Fortune 500 companies, internationally recognized universities and laboratories, and cutting-edge startups. Quantum Network members benefit from close working relationships with our in-house experts. Premium Plan members access our most powerful systems with shorter wait times, hands-on support, and training.

AI Hardware Center

The AI Hardware Center builds AI systems from the ground up: materials, chips, devices, architectures, and the entire software stack. Working with New York State and a broader ecosystem of partners, the Center is paving the way for the next generation of AI systems that will improve business and the lives of people all over the world.

Joint development

With joint development, you will collaborate directly with our researchers to develop breakthrough technologies to help solve your biggest problems faster. Many of these engagements result in bespoke solutions tailored to your needs, drawing from our portfolio of advanced technology and innovations.


IBM and MBZUAI Join Forces to Advance AI Research with New Center of Excellence


ABU DHABI, United Arab Emirates, May 25, 2022 /PRNewswire/ -- Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)—the world's first graduate research university dedicated to Artificial Intelligence (AI)—has announced plans for a strategic collaboration with IBM (NYSE: IBM). Senior leaders from both organizations signed a Memorandum of Understanding aimed at advancing fundamental AI research, as well as accelerating the types of scientific breakthroughs that could unlock the potential of AI to help solve some of humanity's greatest challenges.


Professor Eric Xing, President of MBZUAI, delivered short remarks, as did Jonathan Adashek, IBM's Senior Vice President and Chief Communications Officer, and Saad Toma, General Manager, IBM Middle East and Africa. The agreement was then signed by Sultan Al Hajji, Vice President for Public Affairs and Alumni Relations at MBZUAI, and Wael Abdoush, General Manager, IBM Gulf and Levant.

"We're excited to be among the first research universities in the MENA region to host a Center of Excellence for AI research and development with technology and expertise from a world-leading technological giant like IBM. This center will provide a highly valuable resource and collaborative environment for our faculty and students to broaden their work in AI. IBM has a long history of technological innovation, and we look forward to joining their latest efforts in our region and, together, advancing AI technology and commercialization for mutual good," said MBZUAI President, Professor Eric Xing.

Saad Toma, General Manager, IBM Middle East and Africa, said: "This collaboration will help drive innovations in AI, which is critical for the future of business and society. We're bringing together some of the brightest minds across both industry and academia, while reinforcing IBM's commitment to promoting knowledge and skills in critical areas for the UAE's development, where the use of technologies like AI is fundamental."

Central to the collaboration is the establishment of a new AI Center of Excellence to be based at the university's Masdar City campus. The Center will leverage the talents of IBM researchers, in collaboration with MBZUAI faculty and students, and will focus on the advancement of both fundamental and applied research objectives. 

The initiative seeks to develop, validate, and incubate technologies that harness the capabilities of AI to address civic, social, and business challenges. Further, the collaboration aims to provide real-life applications, particularly in the fields of natural language processing, as well as AI applications that seek to further climate and sustainability goals, and accelerate discoveries in healthcare.

IBM will provide targeted training and technologies as part of the initiative, which supports the university's vision to be a global leader in advancing AI and its application for the good of society and business. For example, through the IBM Academic Initiative, IBM will provide MBZUAI students and faculty with access to IBM tools, software, courseware and cloud accounts for teaching, learning, and non-commercial research. In addition, through the IBM Skills Academy program, MBZUAI will have access to curated AI curricula, lectures, labs, industry use cases, design-thinking sessions, and an AI Practitioner certification.

The planned relationship is subject to the parties reaching definitive agreements.

About IBM 

IBM is a leading global hybrid cloud, AI, and business services provider. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs, and gain a competitive edge in their industries. Nearly 3,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. Visit www.ibm.com for more information.

About Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)

MBZUAI is a graduate research university focused on artificial intelligence, computer science, and digital technologies across industrial sectors. The university aims to empower students, businesses, and governments to advance artificial intelligence as a global force for positive progress. MBZUAI offers various graduate programs designed to pursue advanced, specialized knowledge and skills in artificial intelligence, including computer vision, machine learning, and natural language processing. For more information, please visit www.mbzuai.ac.ae.

For press inquiries, please contact:

Nicholas Demille, Associate Director of Communications at MBZUAI

Jumana Akkawi, Communications Director, IBM Middle East and Africa and Central Eastern Europe

Aya Sakoury, Head of Communications at MBZUAI



Welcome to the MIT-IBM Watson AI Lab


We are a community of scientists at MIT and IBM Research. We conduct AI research and work with global organizations to bridge algorithms to impact business and society.


Membership

The MIT-IBM Watson AI Lab membership program is forging a new model for R&D. We extend the unique collaboration between MIT and IBM Research to a small group of innovative companies and strategic partners comprising leaders in consumer technology, medical devices, finance, construction, energy, and international development.


  • Neuro-Symbolic AI
  • Causal Inference
  • Graph Deep Learning
  • Natural Language Processing
  • Transfer Learning
  • Computer Vision
  • Efficient AI
  • Time Series
  • Automated Planning



IBM Demonstrates Groundbreaking Artificial Intelligence Research Using Foundation Models and Generative AI


AI has already demonstrated its power to revolutionize industries and accelerate scientific investigation. One field of AI research that has made stunning advancements is in the area of foundation models and generative AI, which enables computers to generate original content based on input data. This technology has been used to create everything from music and art to fake news reports.

OpenAI recently showcased the impressive capabilities of artificial intelligence by offering free access to ChatGPT, a state-of-the-art generative transformer model. In the three months since ChatGPT's public release, the move has generated widespread media attention and excitement among users, highlighting the massive potential of AI.

Faced with the disruptive impact of OpenAI's GPT-3 model, Google and Microsoft were compelled to reveal AI integration plans for their respective search engines. OpenAI's demonstration of AI's practical and powerful capabilities will no doubt raise the public's expectations and demand for more advanced AI products, and it sparked one of the quickest and most significant disruptions an industry segment has ever witnessed.

It is universally acknowledged that human life is of paramount importance. In this article, we shed light on the life-saving potential of AI by examining its practical applications in the creation of new antibiotics and other scientific AI tools. Innovative use of foundation models and generative AI can increase revenues, optimize processes, and streamline the creation and accumulation of knowledge; more importantly, it also has the potential to save millions of lives around the world. This discussion aims to raise the visibility of AI's life-saving potential and highlight the need to expand its development and deployment in these areas.

From simple algorithms to breakthrough advances

Artificial intelligence (AI) had rather simple beginnings in the 1950s, with basic algorithms and mathematical models designed for specific tasks. Much later, in the 1990s, AI research underwent a major shift towards machine learning algorithms that enabled computers to improve their performance by analyzing patterns in data and transferring that knowledge to new applications. This shift gave rise to numerous breakthroughs in the field, including the development of deep learning algorithms that revolutionized areas such as computer vision and natural language processing (NLP). These advances have led to even more achievements and further expanded the potential of AI.


Today, AI researchers continue to push the boundaries by developing new algorithms and models that can tackle increasingly complex tasks. AI models continue to grow in size and capability at an unprecedented pace, producing responses that are more human-like and expanding the range of tasks they can perform. Breakthroughs and applications are still being made in areas such as natural language processing, computer vision, and robotics. Despite its limitations and challenges, AI has proven to be a transformative force across a wide array of industries and fields, including healthcare, finance, transportation, and education.

Cutting-edge AI research by an IBM Master Inventor

IBM has one of the largest and most well-funded AI research programs in the world, and I recently had the opportunity to discuss the program with Dr. Payel Das, principal research staff member and manager at IBM Research, who is also an IBM Master Inventor.

Dr. Das has served as an adjunct associate professor in the department of Applied Physics and Applied Mathematics (APAM) at Columbia University. She is currently serving as an advisory board member of AMS at Stony Brook University. Dr. Das received her B.S. from Presidency College in Kolkata, India, and her M.S. from the Indian Institute of Technology in Chennai, India. She was awarded a Ph.D. in theoretical biophysics from Rice University in Houston, Texas. Dr. Das has coauthored more than 40 peer-reviewed publications and has received awards from the Harvard Belfer Center TAPP (2021) and IEEE Open Source (2022). She also holds a number of IBM awards, including the IBM Outstanding Technical Achievement Award (the highest technical award at IBM), two IBM Research Division Awards, one IBM Eminence and Excellence Award, and two IBM Invention Achievement Awards.

As a member of the Trustworthy AI department and the generative AI lead within IBM Research, Dr. Das is currently focused on new algorithms, methods, and tools for building generative AI systems from foundation models.

Her team is also working on using synthetic data to make the AI models more trustworthy and to ensure fairness and robustness in downstream AI applications.

The power of synthetic data and how it advances AI

In our data-driven era, synthetic data has become an indispensable tool for testing and training AI models. This computer-generated information is cost-effective to produce, comes with automatic labeling, and avoids many of the ethical, logistical, and privacy challenges associated with training deep learning models on real-world data.

Synthetic data is critical for business applications as it offers solutions when real data is scarce or inadequate. One of the key advantages of synthetic data is its ability to be generated in vast quantities, making it ideal for training AI models. Furthermore, synthetic data can be designed to encompass a diverse range of variations and examples, leading to better generalization and usability of the model. These attributes make synthetic data an indispensable tool in the advancement of AI and its real-world applications.

It is crucial that the generated synthetic data adheres to user-defined controls to ensure it serves its intended purpose and minimizes potential risks. The specific controls required vary depending on the intended application and desired outcome. Ensuring that synthetic data aligns with these controls is essential to guarantee its effectiveness and safety in real-world applications.
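One simple way to make "user-defined controls" concrete is rejection sampling: generate candidate records, then keep only datasets that satisfy the declared constraints. The schema, the controls, and the generator below are all hypothetical; real pipelines would use a learned generative model plus formal privacy and fidelity checks:

```python
# Sketch of generating synthetic tabular records under user-defined controls.
import random

controls = {
    "age": (18, 90),           # allowed value range for the age field
    "min_positive_rate": 0.1,  # class-balance floor for the auto-generated label
}

def generate_record(rng: random.Random) -> dict:
    lo, hi = controls["age"]
    return {
        "age": rng.randint(lo, hi),                # constrained at generation time
        "label": 1 if rng.random() < 0.3 else 0,   # automatic labeling
    }

def generate_dataset(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    while True:
        data = [generate_record(rng) for _ in range(n)]
        # Dataset-level control: resample until the class balance is met.
        if sum(r["label"] for r in data) / n >= controls["min_positive_rate"]:
            return data

data = generate_dataset(200)
```

The two kinds of control shown (per-record range constraints, dataset-level acceptance checks) mirror the distinction between field-level and distributional requirements mentioned above.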

Transforming the future with universal representation models

Transformers have been widely adopted for many different applications and have proven to be highly effective for processing complex data such as natural language.

The first AI models utilized feedforward neural networks, which were effective in modeling non-sequential data. However, they were not equipped to handle sequential data. To overcome this limitation, recurrent neural networks (RNNs) were developed in the 1990s, but it wasn't until around 2010 that they saw widespread implementation.

This breakthrough in technology expanded the capabilities of AI to process sequential data and paved the way for further advancements in the field. Then another type of AI model, the transformer, radically improved AI capabilities.

The transformer made its first appearance in a 2017 Google research paper that proposed a new type of neural network architecture. Transformers also incorporated self-attention mechanisms that allowed models to focus on relevant parts of an input and make more accurate predictions.

The self-attention mechanism is a defining feature that sets transformers apart from other encoder-decoder architectures. This mechanism proves especially beneficial in natural language processing as it enables the model to grasp the relationships between words in a sentence and recognize long-term dependencies. The transformer accomplishes this by assigning weights to each element in the sequence, based on its relevance to the task. This way, the model can prioritize the most crucial parts of the input, resulting in more context-aware and informed predictions or decisions. The integration of self-attention mechanisms has greatly advanced the capabilities of AI models in natural language processing.
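The weighting described above can be made concrete with a minimal single-head scaled dot-product self-attention over a toy sequence. The dimensions and values are arbitrary, and real transformers learn separate query/key/value projections rather than reusing the raw embeddings:

```python
# Minimal scaled dot-product self-attention for a toy 3-token sequence.
import math

def softmax(row: list[float]) -> list[float]:
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(q, k, v):
    """q, k, v: lists of d-dimensional vectors, one per sequence element."""
    d = len(q[0])
    out = []
    for qi in q:
        # Relevance of every element to this one, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)  # attention weights sum to 1
        # Each output is a weighted mix of the value vectors.
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out

# Three-token sequence, 2-dimensional embeddings (Q = K = V for simplicity).
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = self_attention(x, x, x)
```

Because the weights form a convex combination, each output vector stays within the range spanned by the inputs; the model's "focus" is exactly how that combination is distributed.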

According to Dr. Das, in recent years, there has been a shift away from RNNs as the primary architecture for natural language processing tasks. RNNs can be difficult to train and can suffer from vanishing gradient problems, which can make it challenging to learn long-term dependencies in language data. By contrast, transformers have been shown to be more effective in achieving state-of-the-art results on a variety of natural language processing tasks.

Unlocking the power of foundation models

Models that are trained using large-scale data and self-supervision techniques can produce a universal representation that is not specific to any particular task. This representation can then be utilized in various other applications with little to no further adjustment.

These models are referred to as "foundation models," a term coined by Stanford researchers in a 2021 paper. Many of today's foundation models adopt the transformer architecture and, because they are pre-trained on vast datasets, have proven versatile across a broad range of natural language processing (NLP) tasks. The use of foundation models has greatly advanced the field of NLP.

Dr. Das and the IBM research team have been involved in a significant amount of AI research with foundation models and generative AI.

The above graphic shows how a foundation model can be used to build models for different fields by using text as the input data; such models may or may not use the transformer architecture. On the left side of the graphic, a large language model is shown, which progressively maps letters to words, words to sentences, and finally sentences to language.

The illustration on the right side of the graphic depicts a chemistry transformer model, which connects atoms to molecules and to chemistry. The same concept could be applied to build foundation models for biology or other related fields by representing biological or chemical molecules as text.

It's crucial to note that the transformer architecture is adaptable to a diverse array of fields, as long as the input data can be expressed in textual form. This versatility makes the transformer architecture a valuable tool for creating machine learning models in many domains.
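The idea of expressing molecules as text can be made concrete with SMILES strings, which encode molecular graphs as character sequences. Below is a deliberately minimal, illustrative tokenizer; the regex and the aspirin example are my own stand-ins, and real chemistry language models use far richer vocabularies:

```python
import re

# Minimal, illustrative SMILES tokenizer: split a molecule's text encoding
# into atom, bond, digit (ring-closure), and punctuation tokens. Real chemical
# language models use more complete vocabularies (stereochemistry, charges, ...).
SMILES_TOKEN = re.compile(r"Cl|Br|[A-Za-z]|\d|[=#()\[\]@+\-/\\%]")

def tokenize(smiles: str) -> list[str]:
    return SMILES_TOKEN.findall(smiles)

aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"  # acetylsalicylic acid in SMILES notation
print(tokenize(aspirin))
```

Once a molecule is a token sequence like this, the same transformer machinery used for sentences applies unchanged.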

Pushing the boundaries of creativity with generative AI

Generative models can create new and unique images, audio, or text for a variety of applications. They have also made AI systems more effective at processing complex data and opened up new possibilities across a wide range of fields.

Foundation models serve as a strong basis for creating generative models due to their ability to handle and learn from vast amounts of data. By adjusting the parameters of these models to focus on a specific task, like generating images or text, new generative AI models can be created that produce unique content within specific fields.

DALL·E 2 by OpenAI

As an illustration, if the objective is to develop a generative AI model for art, a pre-trained foundation model would first be fine-tuned on a vast collection of art images. After successful training, it could then be used to produce novel and original pieces of art. Above is a sample of art created by an AI program named DALL·E 2 in response to a prompt requesting a painted portrait of a human face as perceived by AI.

Overcoming the small data challenge in generative AI

Implications of foundation models go well beyond NLP

“When we first started working on generative AI,” Dr. Das said, “it occurred to us that one of our problems was learning from small data for any domain-specific or any industry-specific application.”

Generative AI models require large amounts of data to accurately learn and generate new, similar data, so when working with small data sets, their performance and usefulness can be limited. Dr. Das recognizes this challenge and notes that techniques like transfer learning and data augmentation can help improve performance in these situations.
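One common small-data remedy mentioned above, transfer learning, can be sketched as freezing a pretrained model's weights and training only a small task-specific head. Everything below (the random stand-in "backbone," the dimensions, the learning rate) is an illustrative assumption, not a real pretrained model:

```python
import numpy as np

# Transfer-learning sketch for the small-data regime: keep a (stand-in)
# pretrained backbone frozen as a feature extractor, and train only a tiny
# logistic-regression head on top of it.
rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(16, 32))        # stand-in "pretrained" weights

X = rng.normal(size=(20, 16))               # deliberately tiny dataset
y = rng.integers(0, 2, size=20).astype(float)
F = np.maximum(X @ W_frozen, 0.0)           # frozen ReLU features, computed once

def loss(w, b):                             # binary cross-entropy on the head
    p = np.clip(1 / (1 + np.exp(-(F @ w + b))), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

w, b = np.zeros(32), 0.0                    # the only trainable parameters
before = loss(w, b)
for _ in range(200):                        # gradient descent on the head only
    p = 1 / (1 + np.exp(-(F @ w + b)))
    w -= 0.01 * F.T @ (p - y) / len(X)
    b -= 0.01 * np.mean(p - y)

print(before > loss(w, b))  # True: the head learned; the backbone never moved
```

With only 33 trainable parameters, even 20 labeled examples are enough to make progress, which is the point of reusing a frozen foundation model.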

Despite the challenges that small data sets pose for generative AI models, a vast amount of unlabeled data exists in businesses for each of the domains in the above graphic. This data provides an opportunity to train custom foundation models, enabling the solution of problems previously thought unsolvable. It aligns with IBM Research's focus on exploring new AI capabilities through generative AI and pushing the boundaries of AI science.

Broad generative AI research

Thrice Revealed

IBM has made significant contributions in each of the domains represented in the image. Its work is so extensive that it is difficult to cover all of its achievements in a single article.

Synthesizing antimicrobials with generative AI

AI has the potential to revolutionize many fields and speed up scientific progress. As an example, Dr. Das and her research team have leveraged AI to develop innovative antimicrobials that fight lethal antibiotic-resistant bacteria.

The Fight Against Superbugs

Antibiotics were first used to treat serious infections in the 1940s. Since then, antibiotics have saved millions of lives and transformed modern medicine. Yet the CDC estimates that about 47 million antibiotic treatments are prescribed each year for infections that don’t need antibiotics.

The overuse of antibiotics is a critical problem because it drives the development of antibiotic-resistant infections caused by common bacteria like E. coli and Staphylococcus, including hard-to-treat strains such as MRSA (methicillin-resistant Staphylococcus aureus). These resistant infections are challenging to treat and can result in serious consequences like sepsis, organ failure, and death.

When traditional antibiotics are no longer able to effectively kill bacteria, it becomes much more difficult or even impossible to treat and control infections. These antibiotic-resistant bacteria—commonly called superbugs—can spread quickly and cause serious infections, particularly in hospitals and other healthcare settings. Superbugs can also be found in the environment, in food, and on surfaces, plus they can be transmitted from person to person.

It is a serious global health problem. Drug-resistant diseases kill 700,000 people annually around the world; by 2050, that number is expected to rise to 10 million deaths per year.

How bacteria outsmart antibiotics

Bacteria transform into superbugs by activating innate defense strategies that render antibiotics ineffective. These defense mechanisms can involve physical, chemical, or biological processes that safeguard the microbes and enable them to escape or counteract threats to their existence. Such processes may produce enzymes that inactivate antibiotics, alter the bacterial cell wall to make the organism less responsive to the drugs, or allow the bacteria to acquire genetic material from other bacteria that already possess resistance.

Streamlining drug development with AI

The conventional method of creating a new antimicrobial drug is a lengthy and expensive undertaking, frequently requiring many years and a hefty sum of money before the drug is commercially available. But recent advances in AI are revolutionizing the drug discovery and development process.

By utilizing AI's ability to generate and evaluate numerous possible drug candidates, researchers can swiftly pinpoint the most promising options and concentrate their efforts on them. This streamlines the drug development process, cutting down the time and cost involved and leading to the production of more efficient antimicrobial drugs at a quicker pace.
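The generate-then-screen workflow can be sketched as a simple loop: propose a large pool of candidate sequences, score each with property predictors, and keep only the top few for expensive follow-up. The random generator and scorers below are placeholders standing in for the team's trained models; only the 90,000-to-20 funnel mirrors the numbers reported in the study:

```python
import heapq
import random

# Sketch of AI-driven candidate triage: generate many peptide candidates,
# score each with (placeholder) property predictors, keep the top few.
# The generator and scorers are random stand-ins, not real chemistry models.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(0)

def generate_peptide(length=12):
    return "".join(random.choices(AMINO_ACIDS, k=length))

def predicted_activity(seq):   # placeholder for a learned activity predictor
    return random.random()

def predicted_toxicity(seq):   # placeholder for a learned toxicity predictor
    return random.random()

candidates = [generate_peptide() for _ in range(90_000)]
scored = [(predicted_activity(s) - predicted_toxicity(s), s) for s in candidates]
shortlist = heapq.nlargest(20, scored)   # 90,000 sequences -> 20 for synthesis

print(len(shortlist))  # 20
```

In the real pipeline the scoring stage is where controllable generative models and molecular-dynamics filters do the heavy lifting; the shape of the funnel, though, is this simple.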

Overview and 48-day timeline of the research team's proposed AI-driven approach for accelerated antimicrobial design. Source: IBM Research, "Accelerating Antimicrobial Discovery with Controllable Deep Generative Models and Molecular Dynamics," Nature Biomedical Engineering, March 2021

In a collaborative effort with other organizations, Dr. Das and her team at IBM conducted a study to find innovative solutions to the problem of antimicrobial resistance. The study used AI to synthesize and evaluate 20 unique antimicrobial peptide designs, chosen from a pool of 90,000 sequences.

The AI models were specifically designed to combat antibiotic resistance, incorporating controls for broad-spectrum efficacy and low toxicity, and slowing down the emergence of resistance. This approach aimed to create effective solutions that not only fight against resistant bacteria but also minimize the risk of harmful side effects and prevent further resistance from developing.

The team tested these designs against a diverse range of gram-negative and gram-positive bacteria, which led to the identification of six successful drug candidates. The toxicity of these candidates was further evaluated both in vitro and in a mouse model.

AI-powered success

Dr. Das expressed excitement about the success of the design, pointing out that it embodies many of the sought-after characteristics expected in the next generation of drug candidates. The accompanying illustration outlines the plan and estimated duration of using AI to speed up the antimicrobial design process, which can be accomplished in just one and a half months, significantly quicker than the conventional method that takes several years.

The use of AI in accelerating the discovery of new antimicrobial drugs has proven to be a game-changer, offering clear benefits such as faster speed and reduced expenses. Moreover, AI models offer a more streamlined approach by directing the attention of researchers to the most promising leads. Additionally, generative AI enables scientists to design innovative drug compounds that boast unique features and elevated efficacy compared to existing drugs.

The researchers at IBM have harnessed the power of generative AI to streamline the development of new antimicrobial drugs. They have also used AI to create valuable tools, such as MolFormer and MolGPT, for predicting the properties of chemical molecules, a capability that plays a crucial role in fields including drug discovery and materials design.

Wrapping up

Generative AI has captured the attention of various industries, including music, art, healthcare, and pharmaceuticals, as one of the most exciting advancements in AI in recent times. Despite its limitations and challenges, AI continues to demonstrate its potential to revolutionize different fields.

Its ability to swiftly create and test life-saving medicines for antibiotic-resistant bacteria and other pathogens is a testament to its significance and promise.

With the recent buzz surrounding OpenAI's GPT-3 trial and the subsequent developments by Google and Microsoft, it's likely that the coming year will bring not only a surge in AI-powered products but also further disruptions. Some may be trivial, but the hope is that many will feature meaningful integrations of AI that benefit the markets.

Analyst Notes:

  • While some may question the absence of a discussion on the combination of facial recognition and AI, it is important to note that facial recognition technology and GPT models are separate AI technologies with distinct functions and methods. IBM, which was once a leader in human face data, has chosen not to work in the field of facial recognition due to the controversial political and privacy issues surrounding it. However, IBM is still actively involved in other AI modalities such as language processing, image recognition, graphics analysis, speech recognition, and various combinations in multimodal AI applications.
  • A final remark on the market response to the release of GPT-3: It was noteworthy that Microsoft appeared well-prepared when the GPT-3 news broke, whereas Google seemed caught off guard and was forced to hold an emergency meeting with its founders to come up with a plan. Microsoft, in contrast, had already planned how it would integrate AI into its operations. There is a significant disparity in search revenue between the two companies: Microsoft earned a total of $22 billion in search revenue in 2022, while Google earned $59 billion in the last quarter of 2022 alone. It is surprising that Google was not better prepared to defend against the potential impact of GPT-3, considering the model's search-applicable capabilities and the obvious threat it posed to a large share of Google's total revenue.
  • DALL·E 2, mentioned in the article, is a cutting-edge deep learning model that generates digital images based on natural language input. It is based on a version of OpenAI's GPT-3.
  • For more information about more of IBM’s AI research, you might be interested in my previous articles:

IBM CodeNet: Artificial Intelligence That Can Program Computers And Solve A $100 Billion Legacy Code Problem

IBM’s AutoAI Has The Smarts To Make Data Scientists A Lot More Productive – But What’s Scary Is That It’s Getting A Whole Lot Smarter

Note: Moor Insights & Strategy writers and editors may have contributed to this article.  

Moor Insights & Strategy provides or has provided paid services to technology companies like all research and tech industry analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Ampere Computing, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Cadence Systems, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cohesity, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, HYCU, IBM, Infinidat, Infoblox, Infosys, Inseego, IonQ,  IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Juniper Networks, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, LoRa Alliance, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, Multefire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA, Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), NXP, onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, 
Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Fivestone Partners, Frore Systems, Groq, MemryX, Movandi, and Ventana Micro. 

Paul Smith-Goodson




The state of AI in early 2024: Gen AI adoption spikes and starts to generate value

If 2023 was the year the world discovered generative AI (gen AI) , 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey  on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year , with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.

About the authors

This article is a collaborative effort by Alex Singla , Alexander Sukharevsky , Lareina Yee , and Michael Chui , with Bryce Hall , representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.

Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.

AI adoption surges

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; however, this year more than two-thirds of respondents in nearly every region say their organizations are using AI. 1 Organizations based in Central and South America are the exception, with 58 percent of respondents working for organizations based in Central and South America reporting AI adoption. Looking by industry, the biggest increase in adoption can be found in professional services. 2 Includes respondents working for organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training.

Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).

Photo of McKinsey Partners, Lareina Yee and Roger Roberts

Future frontiers: Navigating the next wave of tech innovations

Join Lareina Yee and Roger Roberts on Tuesday, July 30, at 12:30 p.m. EDT/6:30 p.m. CET as they discuss the future of these technological trends, the factors that will fuel their growth, and strategies for investing in them through 2024 and beyond.

Gen AI adoption is most common in the functions where it can create the most value

Most respondents now report that their organizations—and they as individuals—are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development—two functions in which previous research  determined that gen AI adoption could generate the most value 3 “ The economic potential of generative AI: The next productivity frontier ,” McKinsey, June 14, 2023. —as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.

Gen AI also is weaving its way into respondents’ personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen Al tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.

Investments in gen AI and analytical AI are beginning to create value

The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.

Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations—in line with what we found last year —as well as meaningful revenue increases from AI use in marketing and sales.

Inaccuracy: The most recognized and experienced risk of gen AI use

As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).

Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks and are not increasing efforts to mitigate them.

In fact, inaccuracy— which can affect use cases across the gen AI value chain , ranging from customer journeys and summarization to coding and creative content—is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.

Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.

Our previous research has found that there are several elements of governance that can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place. 4 “ Implementing generative AI with speed and safety ,” McKinsey Quarterly , March 13, 2024. For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.

Bringing gen AI capabilities to bear

The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions : takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch. 5 “ Technology’s generational moment with generative AI: A CIO and CTO guide ,” McKinsey, July 11, 2023. Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs—though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents’ business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.

Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.

Gen AI high performers are excelling despite facing challenges

Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.

To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.

What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.

Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions—that is, to “shift left.” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.

In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.

About the research

The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
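The GDP weighting described above can be sketched as a small computation: each respondent's answers are scaled by their country's share of global GDP, then the weights are normalized over the actual respondents. The country names, GDP shares, and responses below are illustrative placeholders, not the survey's data or methodology code.

```python
# Hypothetical sketch of weighting survey responses by each respondent's
# nation's contribution to global GDP. All figures are made up for
# illustration; the survey's real weighting scheme may differ in detail.

gdp_share = {"US": 0.25, "China": 0.18, "Germany": 0.04}  # share of global GDP

responses = [
    {"country": "US", "adopted_gen_ai": True},
    {"country": "China", "adopted_gen_ai": False},
    {"country": "Germany", "adopted_gen_ai": True},
]

def weighted_adoption_rate(responses, gdp_share):
    """Weight each respondent by their nation's GDP share, then normalize
    so the weights of the respondents who actually answered sum to 1."""
    total_weight = sum(gdp_share[r["country"]] for r in responses)
    weighted_yes = sum(
        gdp_share[r["country"]] * r["adopted_gen_ai"] for r in responses
    )
    return weighted_yes / total_weight

rate = weighted_adoption_rate(responses, gdp_share)
```

Normalizing by the respondents' total weight (rather than by global GDP) keeps the estimate well defined even when some countries contribute no respondents.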

Alex Singla and Alexander Sukharevsky are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey’s Chicago and London offices, respectively; Lareina Yee is a senior partner in the Bay Area office, where Michael Chui, a McKinsey Global Institute partner, is a partner; and Bryce Hall is an associate partner in the Washington, DC, office.

They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.

This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.


