A Nobel Win, AGI, and the Future of Humanity: Demis Hassabis on AI’s Next Frontiers

Demis Hassabis, CEO and co-founder of Google DeepMind, took the stage at the Financial Times’ “Live from Davos” event to discuss how artificial intelligence can revolutionise scientific discovery, tackle our most pressing global challenges, and ultimately reshape humanity’s future. The conversation, held in front of a packed audience, offered an in-depth look at Hassabis’s journey to a Nobel Prize, his vision for the next chapter of AI, and the ethical and geopolitical considerations that come with rapid technological advancements.

A Surreal Nobel Prize

Hassabis began by reflecting on the “surreal experience” of receiving the Nobel Prize in Chemistry for his work on AlphaFold—a pioneering AI system capable of predicting 3D protein structures from amino acid sequences alone. “It’s still not quite real,” he confessed, recalling that the announcement came just four years after AlphaFold’s release, a remarkably short interval by the Nobel committee’s standards.

AlphaFold itself has been transformative: it solved a decades-long challenge known as the “protein-folding problem” and made it possible to predict the structures of over 200 million proteins—work that could have taken human researchers “a billion years of PhD time,” as Hassabis quipped. The AlphaFold database has been offered freely to the global scientific community, accelerating research in fields from molecular biology to drug discovery.
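For researchers who want to build on that resource, the database is also queryable programmatically. Here is a minimal sketch in Python of fetching a predicted structure through the AlphaFold Database’s public REST API; the endpoint path and JSON field names follow the published API, but treat them as assumptions to verify against the current documentation:

```python
# Minimal sketch: download an AlphaFold-predicted structure from the public
# AlphaFold Protein Structure Database. The endpoint path and JSON field names
# are taken from the public API docs and should be verified before use.
import requests

def fetch_prediction(uniprot_accession: str) -> dict:
    """Return prediction metadata for a UniProt accession such as 'P69905'."""
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()[0]  # the API returns a list; take the first entry

if __name__ == "__main__":
    entry = fetch_prediction("P69905")  # human haemoglobin subunit alpha
    pdb_text = requests.get(entry["pdbUrl"], timeout=30).text  # coordinate file
    with open("P69905_alphafold.pdb", "w") as handle:
        handle.write(pdb_text)
    print("Saved predicted structure for P69905")
```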

Beyond AlphaFold: The Quest for New Therapies

One of the key questions Hassabis addressed was the practical impact of AI on medical breakthroughs. His new venture, Isomorphic Labs, focuses on drug discovery, building upon AlphaFold’s models of protein structures. Yet, he emphasised that knowing a protein’s 3D structure is “only one part of the puzzle” in designing an effective drug. Researchers must also ensure drug molecules are non-toxic, soluble, and optimally targeted.
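To give a flavour of what those additional constraints look like in practice, here is a hedged sketch of the kind of crude early-stage filter medicinal chemists apply, using the open-source RDKit toolkit and Lipinski’s rule of five. This is a generic illustration, not Isomorphic Labs’ pipeline:

```python
# A minimal sketch of an early drug-likeness screen using the open-source
# RDKit toolkit. Lipinski's rule of five is a generic illustration here,
# not Isomorphic Labs' actual screening criteria.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    """Crude drug-likeness check for a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return (
        Descriptors.MolWt(mol) <= 500          # molecular weight
        and Crippen.MolLogP(mol) <= 5          # lipophilicity proxy
        and Lipinski.NumHDonors(mol) <= 5      # hydrogen-bond donors
        and Lipinski.NumHAcceptors(mol) <= 10  # hydrogen-bond acceptors
    )

print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```

Production pipelines layer far richer toxicity, solubility, and target-binding models on top of heuristics like this, which is precisely the gap Isomorphic Labs is working to close.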

He hinted that the first AI-guided compounds could enter clinical trials by the end of the year—a remarkable pace. “We’re working with big pharmaceutical companies like Eli Lilly and Novartis,” he said, “and we aim to tackle everything from oncology to cardiovascular disease.”

Gemini 2.0 and the Rise of Multimodal AI

Turning to the broader AI landscape, Hassabis highlighted recent breakthroughs at Google DeepMind. Central to the discussion was Gemini 2.0, an advanced large language model (LLM) designed to be both powerful and efficient. According to Hassabis, Gemini’s “Flash” version is built for scalability and could soon serve billions of users.

A significant milestone is the development of world models—AI systems capable of interpreting not just text but also images, videos, and real-world physics. Hassabis pointed to the company’s state-of-the-art video model, Veo 2, which accurately simulates real-world actions, such as slicing a tomato. These emerging capabilities pave the way toward “universal assistants”—AI agents that can help users with everything from scheduling appointments to learning complex subjects.

Agentic AI: Balancing Utility and Risk

One theme threaded throughout Hassabis’s remarks was the movement from passive Q&A chatbots to agentic AI—systems that can take action on a user’s behalf. Rather than simply suggesting a restaurant, for example, an AI agent would be able to book the table, handle negotiations, or coordinate multiple tasks autonomously.
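The mechanics of such an agent are simple to caricature: the model proposes an action, a harness executes it, and the result is fed back until the goal is met. The sketch below is purely illustrative; the scripted “model” and the tool names are hypothetical stand-ins, not any real product API.

```python
# A minimal sketch of an agentic loop: the model chooses a tool, the harness
# runs it, and the observation is appended to the history. Everything here
# (the scripted "model", the tools) is a hypothetical stand-in.
import json

TOOLS = {
    "search_restaurants": lambda cuisine, city: ["Chez Nous", "La Table"],
    "book_table": lambda name, time, party_size: f"Booked {name} at {time} for {party_size}",
}

def call_model(history: list[dict]) -> dict:
    """Hypothetical model policy, scripted here so the example runs end to end."""
    step = sum(1 for m in history if m["role"] == "tool")
    if step == 0:
        return {"tool": "search_restaurants", "args": {"cuisine": "French", "city": "Davos"}}
    if step == 1:
        return {"tool": "book_table", "args": {"name": "Chez Nous", "time": "19:30", "party_size": 2}}
    return {"answer": "Table for two booked at Chez Nous for 19:30."}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):  # a hard step budget is one simple oversight control
        decision = call_model(history)
        if "answer" in decision:  # the model declares the task complete
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # act on the user's behalf
        history.append({"role": "tool", "content": json.dumps(result)})
    return "Stopped: step budget exhausted."

print(run_agent("Book a French restaurant in Davos for two at 19:30."))
```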

Yet this evolution raises complex questions: “What happens when there are millions of AI agents interacting with each other, each with different goals?” Hassabis asked. The potential for beneficial use—maximising our time, improving efficiency—is tremendous. But it also underscores the need for new oversight mechanisms to ensure safety, reliability, and accountability if (and when) these agents begin to operate at scale.

Fact-Checking and “Thinking Models”

A central criticism of current language models is hallucination, where systems generate incorrect or fabricated information. Hassabis sees multiple paths to mitigate this. DeepMind is researching new training objectives to filter out misinformation, integrating search engines to fact-check AI outputs in real time, and focusing on “reasoning models” that allow an AI to introspect, backtrack, and refine its answers before finalising a response.
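In sketch form, that “introspect, backtrack, refine” loop looks like the following. The `generate` and `critique` functions are toy stand-ins for model calls (a real critique step might verify a draft’s claims against search results, as described above), and nothing here represents DeepMind’s actual method:

```python
# A minimal sketch of a draft-critique-revise loop. `generate` and `critique`
# are toy stand-ins for model calls; a real checker might fact-check the
# draft against search results rather than pattern-match like this.

def generate(prompt: str, feedback: str | None = None) -> str:
    """Toy drafting step: the first draft is wrong; a revision fixes it."""
    return "2 + 2 = 4" if feedback else "2 + 2 = 5"

def critique(prompt: str, draft: str) -> str | None:
    """Toy checking step: return a problem description, or None if the draft holds."""
    return None if "= 4" in draft else "The arithmetic in the draft is wrong."

def answer_with_refinement(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):       # introspect, backtrack, refine
        problem = critique(prompt, draft)
        if problem is None:           # no issues found: finalise the answer
            return draft
        draft = generate(prompt, feedback=problem)
    return draft                      # best effort once the budget is spent

print(answer_with_refinement("What is 2 + 2?"))  # -> "2 + 2 = 4"
```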

Drawing on DeepMind’s historical breakthroughs such as AlphaGo, Hassabis noted that techniques like search trees and planning—so effective in mastering games—can also enhance an AI’s ability to reason in real-world settings. However, the “messiness” of reality poses a more significant challenge than a well-defined board game. Even tiny inaccuracies can compound over multiple “thinking” steps, a puzzle researchers are actively trying to solve.
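That compounding is easy to quantify: if each reasoning step is independently correct with probability p, a k-step chain succeeds with probability p to the power k, so even 99%-reliable steps erode fast. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope: per-step accuracy p compounds to p**k over k steps.
for steps in (10, 50, 100):
    print(f"{steps:>3} steps at 99% per-step accuracy -> {0.99 ** steps:.1%} end-to-end")
# 10 steps -> 90.4%, 50 steps -> 60.5%, 100 steps -> 36.6%
```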

Toward Artificial General Intelligence (AGI)

One of the boldest statements came when Hassabis revisited DeepMind’s foundational ambition: developing Artificial General Intelligence (AGI)—AI that matches or surpasses human cognitive abilities. He estimated a five-to-ten-year timeline for achieving systems capable of the full range of human cognition, acknowledging that “one or two breakthroughs” are still needed.

Why chase AGI? For Hassabis, it is about tackling science’s deepest mysteries—time, consciousness, and the fundamental nature of reality. “I don’t know why more people don’t worry about these things,” he mused. “We interact with these concepts every day without truly understanding them.”

Regulation, Geopolitics, and the Need for ‘Big Picture’ Thinking

While AGI holds dazzling promise—cures for all diseases, clean energy, even interplanetary travel—Hassabis warned that realising it also poses unprecedented risks. Bad actors could repurpose general AI models. Unchecked proliferation of the most advanced systems could exacerbate geopolitical tensions, as nations race to dominate a technology seen as pivotal for economic and military power.

He envisions a global approach—akin to a “CERN for AI”—where governments, industry, and academia join forces in the final stretch toward AGI. Yet tensions between major powers, questions over open-source access, and different regulatory philosophies pose real hurdles. “We may need to slow down on capabilities at some point,” he noted, “to allow enough time for society to adapt and implement safety measures.”

A Cautious Optimism

Despite the challenges, Hassabis radiates what he calls “cautious optimism.” The caution comes from the dual use of AI—the same technology that can eradicate disease might also produce harmful pathogens if misapplied. The optimism stems from humanity’s resilience and ingenuity. Throughout history, humans have harnessed breakthroughs—from fire to the internet—to build better societies when guided by thoughtful governance and ethical considerations.

Standing in Davos, Hassabis closed by emphasising our collective responsibility. “We need to thread the needle,” he said. “If we do this right, we can cure diseases, solve climate issues, and even spread consciousness to the stars. It’s a big ‘if,’ but if we choose cooperation, if we choose responsible innovation, there’s a truly incredible future waiting.”

REACH OUT
Discover the potential of AI and start creating impactful initiatives with insights, expert support, and strategic partnerships.