Why AI for drug discovery will far exceed current expectations

June 18, 2025

First published in STAT+ on June 18, 2025

The usual biotech metric of “how many AI drugs are in the clinic?” misses the point

By Brendan Frey, Chief Innovation Officer and Co-founder of Deep Genomics 

In a time of relentless AI hype and mounting global economic uncertainty, the real winners will be those who think long-term. One of the most transformative opportunities lies in building AI that can decode biology, promising not just massive returns, but a profound impact on global health. 

In the 1990s, after co-inventing deep learning systems that underlie modern AI chatbots, I turned my attention to AI for decoding biology and designing genetic medicines. I’ve experienced the arc of this field over 20 years, from a time before AI therapeutics startups existed.  

Someone recently asked me whether I have doubts about AI’s impact on drug discovery, given the breathless ups and downs of popular opinion. My answer was a steadfast “No!” In fact, I’m more enthusiastic than ever about where we’re headed. 

But exceeding expectations won’t happen automatically. It depends on what we do next: how we develop the technology, how we measure progress, and how we adapt our thinking. This is a story not just about AI’s potential, but about the mindset and momentum required to realize it. 

The AI I’m talking about isn’t for running clinical trials, writing regulatory filings, or preparing internal reports. We already have chatbots for that.  

I’m talking about AI that uncovers new target biology and designs therapeutics. If we get it right, one day we will be able to say, "Here’s a therapeutic. Without AI, we simply would not have found it." 

We’re closer to that future than many realize.  

Rethinking the metrics of success  

The usual biotech metric of “how many AI drugs are in the clinic?” misses the point. Disruptive technologies rarely follow linear paths, and traditional indicators often lag behind the most telling signs of future breakthroughs. 

Breakthroughs in speech recognition, chess, and chatbots were not anticipated by output metrics such as speech recognition accuracy, number of chess games won, and quality of chatbot interaction. These metrics were quite poor and improved very slowly, without indicating that a major leap was imminent. Instead, real breakthroughs tended to follow changes to the AI itself: removing humans from the loop, dramatically increasing the amount of data the system could process, and scaling up how much it could learn. This makes sense: For a child, an excellent indicator of future success is whether they have learned to read, not how much money they currently earn. 

Take as an example training an image classifier to recognize apples. If we only optimize it to get better at identifying apples, it will eventually perform that task very well, but it won’t generalize beyond that narrow goal. However, if we make the system fundamentally better at understanding images – by improving its capacity, training data, and architecture – it will not only recognize apples but also begin to classify other fruits, and eventually, a wide range of objects. This shift from narrow optimization to broad capability is why breakthroughs often outpace what surface-level metrics suggest.  
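
To make this contrast concrete, here is a minimal sketch in Python, assuming the PyTorch and torchvision libraries are available. The tiny apple network and the pretrained backbone are illustrative stand-ins, not a recipe any particular lab follows.

import torch
import torch.nn as nn
from torchvision import models

# Narrow approach: a small network trained from scratch for one task
# (apple vs. not apple). It can be optimized for that goal and little else.
narrow_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                        # two outputs: apple / not apple
)

# Broad approach: start from a backbone pretrained on over a million diverse
# images (this downloads ImageNet weights), then attach a small head for the
# task at hand. The same backbone transfers to other fruits and to objects it
# was never directly optimized to recognize.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # swap in an apple head

x = torch.randn(4, 3, 224, 224)              # a toy batch of images
print(narrow_classifier(x).shape)            # torch.Size([4, 2])
print(backbone(x).shape)                     # torch.Size([4, 2])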

The momentum is real and building. Let’s take dramatic increases in data as an example. The availability of digitized text took off in the late 1990s with the growth of the web. Twenty years later, that wealth of digitized text led to AI chatbots. In biology, it’s been hard to get the right data, but genomes, transcriptomes, and other data took off late in the first decade of this century, and now, 20 years later, we are seeing AI for decoding genome biology. 

The future of AI in drug discovery should be judged not by how well it’s working now on traditional biotech metrics, but by how quickly the AI is improving.  

The real breakthrough: From narrow AI to foundation models 

A fundamental shift in AI is happening behind the scenes. The industry is moving beyond fragmented, task-specific, “narrow AI” toward “AI foundation models.”  

To understand why this is a big deal, let’s look back at the past decade. In the mid-2010s, startups, including my own, focused on specific slices of the drug discovery process, such as predicting pathogenic mechanisms of patient mutations, oligonucleotide-mediated gene regulation, small molecule-protein interactions, and other narrow applications. This was where we all believed AI could make the most immediate impact. Expectations soared, and so did the funding. 

Since then, the industry has learned that this narrow approach isn’t the right one — readers, beware companies that offer to bolt on narrow AI tools! 

Many of these AI drug programs faltered preclinically or clinically. The simplest explanation is that a good drug requires getting many things right: a high-confidence target, an assessment of the patient population, an effective mechanism of action, a molecule that can be efficiently manufactured, a biomarker to assess benefit, and more. The early startups built narrow AI systems that could each solve only one of these aspects, not all of them, and scaling was impossible. 

To outsiders, these lessons might look like failures, but I see them as revelations on the path to success. The industry is now embracing a new approach that overcomes these issues. 

Foundation models are trained on trillions of data points with minimal human touch and are built to understand broad areas. In fact, foundation models often surprise their inventors with new discoveries. This changes how we think about the very nature of drug discovery. 

Foundation models are not just more powerful but also more versatile. They can tackle multiple complex challenges at once and exhibit properties of emergent intelligence — solving problems they weren’t explicitly designed to solve. Fine-tuned on data for a specific disease, tissue, or modality, they can predict target biology and design therapeutics tailored to that context. 
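
To make that fine-tuning step concrete, here is a minimal sketch in Python using PyTorch. The toy model, vocabulary, and labels are illustrative placeholders rather than any particular published foundation model; real systems are pretrained at far larger scale and fine-tuned with far more care.

import torch
import torch.nn as nn

# A toy stand-in for a pretrained biology foundation model: a transformer
# encoder over tokenized sequences (e.g., genes or nucleotides). In practice
# the weights would come from large-scale pretraining, not random init.
class ToyFoundationModel(nn.Module):
    def __init__(self, vocab_size=1024, d_model=256, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        return self.encoder(self.embed(tokens))   # (batch, seq_len, d_model)

# Fine-tuning for one context: freeze the general-purpose backbone and train
# a small head on disease- or tissue-specific labels.
backbone = ToyFoundationModel()                   # pretend it is pretrained
head = nn.Linear(256, 1)                          # e.g., a target-relevance score

for p in backbone.parameters():
    p.requires_grad = False                       # keep the general knowledge fixed

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
tokens = torch.randint(0, 1024, (8, 128))         # a toy disease-specific batch
labels = torch.rand(8, 1)                         # toy labels for that context

pooled = backbone(tokens).mean(dim=1)             # average over the sequence
loss = nn.functional.binary_cross_entropy_with_logits(head(pooled), labels)
loss.backward()
optimizer.step()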

With this level of comprehension, pharmaceutical companies will be able to develop medicines for diseases once considered untreatable, with greater precision and confidence than ever before. 

Behaviors that lead to breakthroughs 

In addition to using AI indicators of future success instead of traditional biotech indicators, and embracing the shift from narrow AI to foundation models, I think three research behaviors are necessary. To explain them, let me tell you about my personal involvement in the deep learning technology that underlies modern AI chatbots. In the 1990s, I was a graduate student studying neural networks under Geoffrey Hinton, known as “the godfather of modern AI.” I think the following three behaviors were key to the success of our collaboration. 

Persistence. In the 1990s, most AI leaders dismissed neural networks, and papers or grants with “neural network” in the title were often rejected. A cluster of researchers — including Geoff, Yann LeCun (now chief AI scientist at Meta), and Yoshua Bengio — persisted, and it’s a good thing we did. Inspired by Geoff, I worked on neural nets during my doctoral studies. Our “wake-sleep algorithm” ended up ushering in a new category of modern neural networks. Persisting in the face of skepticism is necessary for disruption.  

Exploration. Breakthroughs often came when we repurposed methods or combined approaches. Often, methods we created to solve one task were actually better suited to solving another one. In 2015, when I launched Deep Genomics, we published a paper on using convolutional neural networks to predict the binding of proteins to RNA and DNA. In 2021, DeepMind and Calico combined our approach with transformer neural networks, leading to significant performance gains. Innovation depends on openness to new uses and hybrid ideas. 
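
For readers who want to see what such a hybrid looks like, here is a toy sketch in Python using PyTorch: convolutional layers scan a one-hot encoded sequence for local motifs, and a transformer encoder relates distant positions. The layer sizes are arbitrary, and this is only the general pattern, not the published architecture.

import torch
import torch.nn as nn

# Toy illustration of combining the two ideas: convolutions detect local
# sequence motifs, and a transformer encoder lets distant positions
# influence one another before a small head predicts a binding score.
class ConvTransformerBinder(nn.Module):
    def __init__(self, n_channels=4, d_model=128, n_layers=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=4),          # coarsen the sequence
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)         # e.g., a binding score

    def forward(self, seq_onehot):                # (batch, 4, length)
        x = self.conv(seq_onehot)                 # (batch, d_model, length/4)
        x = self.transformer(x.transpose(1, 2))   # (batch, length/4, d_model)
        return self.head(x.mean(dim=1))           # one score per sequence

model = ConvTransformerBinder()
dna = torch.randn(2, 4, 1000)                     # stand-in for one-hot DNA
print(model(dna).shape)                           # torch.Size([2, 1])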

Rapid iteration. For a long time, people doubted that neural networks could be used for speech recognition, playing chess, human conversation — recall how awkward chatbots were 10 years ago — or discovering therapeutics. It’s true that after many attempts, neural networks still didn’t work better than other approaches. But what’s important is that while those approaches were not improving over time, neural networks were improving. Now, by far the best speech recognition systems and chess players are neural networks. In the next 10 years, AI drug discovery will outpace humans. Rather than asking how good something is now, ask how fast it’s improving. 

Yes, it was a nonlinear path to success. But now we have AI chatbots and AI technologies that will drive the future of medicine and drug discovery. And, just last fall, my mentor received the Nobel Prize for his work on neural networks. 

Partnering with pharmaceutical companies to drive clinical outcomes 

For AI startups to achieve their full potential, deep collaboration with pharmaceutical companies is essential, but this will require more than just bolting narrow AI capabilities onto existing pharmaceutical workflows. 

Pharmaceutical companies bring critical capabilities: R&D infrastructure, domain expertise, manufacturing, and more. AI foundation models hold the promise of generating broad, generalizable insights, but translating these insights into viable therapeutics requires access to what pharmaceutical companies have to offer. 

This is why deep strategic partnerships, not vendor-client relationships, are necessary. The most effective partnerships will combine the computational power and innovation culture of TechBio organizations with the scientific rigor, clinical expertise, and drug development know-how of pharmaceutical companies.  

Bridging the AI-biology divide with multilingualism 

The most effective breakthroughs in AI-driven drug discovery aren’t just technical; they’re cultural. Success depends on multilingualism: the ability to foster meaningful dialogue between, for example, experimental biologists and AI researchers. Without it, even the most promising innovations risk failure. 

Biologists and AI researchers operate at different speeds, rely on distinct methodologies, and often struggle to align expectations. AI teams move fast, generating hypotheses and refining models in rapid cycles. Biology teams work to longer timelines, conducting experiments that may take weeks or months. Without synchronization, frustration builds. 

To close this gap, pharmaceutical companies must design workflows that keep both groups engaged: AI teams refine models, explore parallel datasets, and iterate while biology teams conduct experiments, with processes that support rapid trial and error and can scale when the time is right. AI should be embedded in the process, not tacked on, helping generate experimental designs and new biological insights rather than simply optimizing existing workflows. 

Upskilling is also necessary. Biologists don’t need to code machine learning models, but they should grasp the capabilities and limitations of AI. AI specialists don’t need to master every aspect of experimental biology, but they must understand biological frameworks and the information, noise, and confounding factors in experimental data. Everyone should understand how their work fits into the broader effort.  

Organizations that empower deep expertise while cultivating cross-disciplinary connectors will be best positioned to unify AI and biology, and push drug discovery forward. 

Artificial general intelligence is not enough — we need superhuman AI 

It’s worth reflecting on something extraordinary about AI for decoding biology and what that means for the future of computing. 

There’s been a lot of hype about AI chatbots and achieving artificial general intelligence (AGI) in the next few years — AI that can learn any intellectual task a human can. When it comes to drug discovery, we are much more ambitious. The language of biology is orders of magnitude more complex than human language. The difference is striking when you consider that the complexity of human language is limited by its human creators, whereas for biology it is the other way around: humans are created by biology.  

The language of biology is far beyond human comprehension, so we have to go far beyond AGI. We are seeking to build, and have already built, superhuman AI. Examples of such AI systems include AlphaFold, BigRNA, Enformer, and Geneformer. But these are just the first steps — there is still much more to do.  

While this is a hard problem to solve, it will help us push the preconceived boundaries of AI. 

Where we go from here 

Transformation won’t happen overnight. It will require continued investment, deeper collaboration between TechBio and pharmaceutical companies, and a willingness to rethink traditional ways of working. But we are already seeing the early signs of what’s possible.  

In the years ahead, foundation models will evolve to become central pillars of pharmaceutical R&D that generate the most successful drug programs. These models will enable us to move beyond incremental improvements and toward discoveries that were once unimaginable, unlocking treatments for diseases that have long eluded us. 

It will happen not as a single breakthrough, but as a fundamental transformation of how we approach the science of AI and medicine. The breakthroughs that will redefine medicine aren’t behind us; they are ahead, waiting to be uncovered. And for the first time, we have the tools to find them. 

At the beginning, I said the future looks like "Here’s a therapeutic. Without AI, we simply would not have found it." By now, I hope it's clear: Without great teams, the right success indicators, and a good deal of patience, we simply will not find the right AI. 

Brendan Frey, Ph.D., is founder and chief innovation officer of Deep Genomics, co-founder of the Vector Institute for Artificial Intelligence, and professor at the University of Toronto.