In a recent blog post, Miles Brundage, former Senior Advisor for AGI (Artificial General Intelligence) Readiness at OpenAI, announced his departure from the organization. Brundage's reflections provide a compelling, and at times unsettling, view into what he describes as a stark reality: the world is simply not ready for AGI. After six years at OpenAI, he is choosing an independent path, one that allows him to address the gaps he sees in global AGI preparedness and to advocate for a more expansive approach to AI policy.
Central to Brundage's reflections is the conviction that neither OpenAI nor any other organization is fully prepared for AGI, and that the broader global ecosystem is far from ready. AGI, as he describes it, isn't a binary achievement but rather a spectrum of increasingly powerful capabilities that will require equally sophisticated and adaptive safety protocols. Brundage's post highlights a crucial and concerning point: the infrastructure, policies, and cooperative frameworks needed to ensure AGI's safe and ethical development are still dangerously lacking.
This readiness gap is not limited to technology companies; it extends to governments and international bodies. Brundage stresses that the world's approach to AGI requires coordinated global action and a safety-first culture that is still only in its early stages. In his view, AGI readiness is about more than preparing one company for advanced AI capabilities; it's about ensuring that societies themselves are equipped to manage AGI's impact.
Brundage’s decision to leave OpenAI is rooted in a need for freedom to address AGI policy outside the constraints of industry. While he acknowledges OpenAI’s importance in advancing AI responsibly, he highlights that his role within the organization limited his ability to publish and advocate on all issues he considers vital. Brundage underscores the need for “industry-independent voices” in AI policy, which he believes are essential to shaping a policy ecosystem that isn’t unduly influenced by corporate interests.
Leaving OpenAI allows him to contribute more impartially to global AGI preparedness, especially as he plans to tackle issues that often slip through the cracks within industry settings. His move signals a desire to help build a policy landscape that accounts for all stakeholders—governments, private entities, and civil society—working together to create robust frameworks for AGI’s development and deployment.
Brundage's blog post lays out an ambitious research roadmap that reflects his belief in the need for a far-reaching policy overhaul, organized around six key focus areas.
Each of these focus areas underscores Brundage's warning that AGI readiness requires immediate and comprehensive action, far beyond what any single organization or country can achieve alone.
Brundage's reflections challenge the assumption that AGI will simply be absorbed into existing social and regulatory structures. He contends that AGI's impact on society will be unprecedented, demanding equally unprecedented preparation. While he commends OpenAI's recent strides in fostering a safety culture, he maintains that genuine preparation requires a globally coordinated response to manage the risks effectively.
For Brundage, the question is not whether AGI is coming but how the world will respond to it, and whether that response will be timely and sufficient. He warns against framing AGI development as a race, arguing that such competition could exacerbate safety risks, particularly in geopolitical hotspots like Taiwan, which plays a critical role in the AI supply chain.
Brundage's decision to leave OpenAI marks a significant step in his career and a meaningful shift for the field of AI policy. His insights on the importance of independence, safety, and cooperation serve as a rallying cry for a new era of AI governance, one that ensures the technology benefits humanity without compromising ethical standards or public trust.