In a recent episode of Impact Theory, Tom Bilyeu sat down with Mo Gawdat—entrepreneur, former Chief Business Officer of Google X, and best-selling author of Scary Smart—to discuss the sweeping changes on the horizon as AI advances at breakneck speed. From looming job displacement and evolving economic structures to the geopolitical Cold War over technological supremacy, this conversation offers a bracing look at where we stand today and how our decisions could pave the way toward either a short-term dystopia or a future of boundless abundance.
One of Mo’s recurring metaphors frames AI as a “magic genie.” On the one hand, AI promises to grant humanity’s wishes—tackling climate change, solving medical mysteries, enabling incredible cost reductions in manufacturing, and more. Yet like any genie from folklore, its power is morally neutral. It fulfills what is asked of it, not necessarily what is best for society. According to Mo, AI doesn’t inherently strive for good or evil—it is our own morality that determines its direction.
The bigger issue, however, is the speed of AI’s evolution. AI is reaching new milestones within months, not decades. Given the exponential (and sometimes faster-than-exponential) nature of its progress, even the brightest minds struggle to keep pace with the ethical, social, and political challenges this technology creates.
Perhaps the most urgent concern for everyday people is the prospect of rapid job displacement. AI’s capabilities are growing so quickly that automation could leave large swaths of the labor force unemployed far sooner than expected. As Mo points out, a computer scientist writing code one day can be deemed unnecessary the next once AI surpasses human-level performance at that task.
While certain economists propose that a wealthier society could implement a Universal Basic Income (UBI), Mo cautions that UBI may become a “dystopian fix” if it is merely handed down from a small class of trillionaires who own AI “platforms.” The risk is a world in which wealth—and decision-making power—remains concentrated at the top, while everyone else depends on government or corporate stipends. That dependency, he argues, could undermine human purpose, autonomy, and dignity.
Yet there is also an optimistic scenario: if intelligence becomes “free” and labor costs plummet, the cost of essential goods—energy, food, housing—could approach zero. Such abundance, coupled with equitable sharing of AI benefits, might allow UBI to serve as a genuine safety net, freeing people to pursue personal growth instead of menial labor. However, Mo emphasizes that achieving such a balance would require tremendous foresight and cooperation.
Mo and Tom also explore the geopolitical stage. They highlight a “Cold War” between the United States and China over AI supremacy. Both nations understand AI’s strategic value, economically and militarily. Complicating matters further, the U.S. dollar faces challenges as countries begin “sending dollars home” and de-dollarizing their reserves—moves fueled by fear of U.S. economic sanctions. Meanwhile, China’s growing manufacturing base, its strides in AI, and its massive population lend it increasing leverage in global trade and technology development.
Mo notes that the current U.S. approach often mirrors older tactics—trying to maintain dominance by restricting access to technology or imposing tariffs. However, China’s size and sophistication complicate attempts to choke off its progress. The real risk, he argues, is that both powers accelerate their AI arms race without enacting global safeguards. Without cooperation, the race to “win” AI by any means necessary could trigger destabilizing events—from runaway military tech to severe economic upheaval.
A key theme underlying the conversation is that a short-term dystopia seems probable before any chance at a long-term utopia emerges. Rapid shifts to automation could be chaotic, and geopolitical tensions may escalate before cooler heads prevail. Yet there is a potential silver lining: if cooperation emerges—similar to global collaborations in science—AI’s extraordinary intelligence could lead us to an era of abundance, solving many of the social and environmental problems currently plaguing humanity.
Achieving that outcome hinges on humanity’s collective wisdom. Regulation, governance, and personal responsibility must catch up with AI’s runaway capabilities. As Mo advises, each of us must develop better discernment skills—fact-checking the endless deluge of AI-generated content, insisting on transparency, and demanding that leaders prioritize ethical frameworks.
For younger generations, Mo’s advice is to keep learning, adapt constantly, and master the tools of AI rather than ignoring them. Whether as an engineer, artist, or entrepreneur, harnessing AI will be a pillar of future success. Just as crucial is prioritizing ethical decision-making. With AI ready to fulfill our every wish, the defining question becomes: What, exactly, will we wish for?
A future of incredible freedom, creative expression, and material abundance is within reach. So, too, is a future of stifling inequality and global strife. The deciding factor, Mo and Tom suggest, lies in our ability to collaborate—across borders, political divides, and economic disparities—and shape AI’s trajectory so that it reflects humanity’s better angels, not our darker impulses.