Artificial intelligence may be progressing faster than anyone expected, and Jared Kaplan, co-founder and chief scientist of Anthropic, believes we could be on the brink of “human-level” AI within just two or three years—much sooner than the 2030 benchmark many once anticipated. In a recent conversation hosted by Azeem Azhar, Kaplan discussed Anthropic’s latest work on their large language model Claude, the importance of “thinking time,” and the need for serious safety measures as AI’s capabilities rapidly expand.
Not long ago, 2030 seemed an optimistic date for achieving AI that could rival (or surpass) human intelligence. Yet Kaplan’s outlook has shifted in the wake of groundbreaking developments over the past year. He explains that while defining “human-level” AI precisely is tricky, current systems can already solve complex tasks that once seemed out of reach, hinting we may cross that threshold in just two or three years.
Kaplan points to the remarkable evolution of AI models and their ability to handle increasingly complicated challenges, the kind that would take a human hours or days of focused work. Tasks once regarded as well beyond machine capability, such as nuanced reading comprehension, coding assistance, and long-term project management, are now routine for advanced models like Claude.
One of the key features Kaplan highlights is “thinking time,” or test-time scaling, which allows a model to pause, iterate, and refine its reasoning mid-task. Put simply, the longer a model can “think,” the better it performs on more complex challenges. This approach can sometimes substitute for deploying a bigger, more expensive model: a smaller model given ample thinking time can work through a difficult problem step by step.
Anthropic’s newly introduced “thinking time” settings in Claude 3.7 let the model allocate extra computation for especially tough queries—like a graduate-level math question or an involved code analysis—delivering more accurate and thorough responses. Kaplan believes refining these adjustable reasoning budgets will be essential for building trustworthy AI that can rise to human-level tasks without constantly requiring massive resource usage.
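For readers curious what an adjustable reasoning budget looks like in practice, the minimal sketch below uses Anthropic’s Python SDK to request extra thinking tokens for a harder query. The model identifier, prompt, and token budgets are illustrative assumptions rather than details from the conversation.

```python
# Minimal sketch of an adjustable reasoning budget, assuming the Anthropic
# Python SDK's extended-thinking parameter; the model name, prompt, and token
# budgets are illustrative assumptions, not values from the interview.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_with_thinking_budget(question: str, budget_tokens: int = 4000) -> str:
    """Send a question and cap how many tokens the model may spend reasoning."""
    response = client.messages.create(
        model="claude-3-7-sonnet-latest",     # assumed model identifier
        max_tokens=budget_tokens + 4096,      # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": budget_tokens},
        messages=[{"role": "user", "content": question}],
    )
    # The reply interleaves "thinking" blocks (intermediate reasoning) with
    # visible "text" blocks; only the final answer is returned here.
    return "".join(block.text for block in response.content if block.type == "text")

# A harder question can simply be granted a larger budget.
print(ask_with_thinking_budget("Prove that the square root of 2 is irrational.",
                               budget_tokens=8000))
```

The point Kaplan makes is visible in that single budget knob: the same model can be asked to spend more or less effort depending on how demanding the question is.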
Anthropic has taken a notable stance on AI safety, with Kaplan and co-founder Dario Amodei advocating for robust safeguards. Their Constitutional AI framework uses the model itself to help oversee and enforce standards of behavior and ethics. By allowing an AI model to reason about another AI’s outputs, Anthropic aims to build a layered safety system—one that can scale up as models become more capable.
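The underlying idea can be pictured as a critique-and-revise loop: one call drafts an answer, a second call checks the draft against a written principle, and a third rewrites it. The sketch below illustrates that loop at inference time; it is not Anthropic’s actual training pipeline, and the principle text, prompts, and model identifier are assumptions.

```python
# Simplified sketch of a constitutional critique-and-revise step.
# An illustration of the idea only, not Anthropic's training pipeline;
# the principle text, prompts, and model identifier are assumptions.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-7-sonnet-latest"  # assumed model identifier

PRINCIPLE = ("Choose the response that is most helpful while avoiding "
             "harmful or deceptive content.")

def complete(prompt: str) -> str:
    """Single-turn helper around the Messages API."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

def constitutional_revision(user_request: str) -> str:
    # Draft an answer, have the model critique it against the principle,
    # then rewrite the draft in light of its own critique.
    draft = complete(user_request)
    critique = complete(
        f"Principle: {PRINCIPLE}\n\nResponse to review:\n{draft}\n\n"
        "Identify any way this response violates the principle."
    )
    return complete(
        f"Original request: {user_request}\n\nDraft response:\n{draft}\n\n"
        f"Critique:\n{critique}\n\nRewrite the response so it satisfies the principle."
    )
```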
Kaplan acknowledges that interpretability becomes increasingly vital as AI systems grow smarter and more complex. The ultimate goal is to ensure we can “read their minds,” or at least deeply audit crucial aspects of how these models arrive at their decisions. For Kaplan, advances in interpretability and constitutional AI will be critical if we are to trust more powerful systems that may soon operate at a level rivaling (or surpassing) human intelligence.
Even as the technology accelerates, Kaplan emphasizes that no single group should unilaterally decide AI’s future. Anthropic’s “Responsible Scaling Policy” means that while they push forward with new model iterations, they also commit to careful risk assessments, thresholds, and interventions to keep AI development aligned with human values. Rather than a complete slowdown, Kaplan envisions a healthy balance: rapid innovation matched by equally vigorous safety research.
He points out that across the world, many actors—big tech companies, start-ups, research labs, and yes, even governments—are racing toward more advanced AI. Whether it’s a Chinese firm like DeepSeek making large strides or smaller Western labs iterating on open-source approaches, the global AI ecosystem has no single arbiter. In Kaplan’s view, this underscores the need for collaborative governance structures and transparent best practices.
In the conversation, Azhar compares AI’s trajectory to groundbreaking shifts like electricity or the smartphone revolution. Kaplan concurs that the economic and labor impacts of advanced AI could be enormous, and the speed at which AI can proliferate magnifies the stakes.
Yet he also counters that there won’t be a “big bang” moment where superintelligence appears overnight. Instead, we’ll see an ongoing, iterative process with each new generation of models inching us closer to capabilities that outstrip our imagination. Amid this progression, societies will have to grapple with difficult questions around job displacement, misinformation, global regulations, and more.
When asked about what most excites him, Kaplan highlights Anthropic’s work on scalable supervision—using AI to help monitor and align more advanced AI. This is the key, he says, to ensuring that as the technology improves, it remains beneficial, transparent, and aligned with human values.
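In rough outline, scalable supervision looks like one model checking another. The sketch below has a smaller “overseer” model score a more capable model’s answers and flag weak ones for human review; the model name, rubric, and threshold are illustrative assumptions, not details from the episode.

```python
# Loose sketch of scalable supervision: a cheaper "overseer" model scores the
# outputs of a more capable "worker" model and flags suspect ones for review.
# Model names, rubric, and threshold are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

def score_answer(question: str, answer: str) -> int:
    """Ask an overseer model to rate an answer from 1 (unsafe/wrong) to 10."""
    verdict = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed smaller overseer model
        max_tokens=16,
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\nAnswer: {answer}\n"
                "Rate the answer's honesty and safety from 1 to 10. "
                "Reply with the number only."
            ),
        }],
    )
    return int(verdict.content[0].text.strip())

def supervise(question: str, answer: str, threshold: int = 6) -> bool:
    """Return True if the answer passes; False means escalate to a human."""
    return score_answer(question, answer) >= threshold
```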
He also sees practical applications multiplying across various industries—particularly in software development, where AI-assisted coding has already delivered clear productivity boosts. Looking forward, Kaplan envisions AI spreading more deeply into knowledge work, from research labs and corporate offices to creative fields and beyond.
With rapid model improvements expected in mere months rather than years, the narrative surrounding AI is shifting: it’s no longer a distant horizon but an accelerating reality. The question now becomes not so much whether we’ll achieve broadly “human-level” intelligence soon, but how we’ll ensure that these powerfully capable systems stay safe, beneficial, and truly aligned with our collective interests.