This December, OpenAI launched its “12 Days of OpenAI” holiday series, unveiling major product updates and glimpses into the future of artificial intelligence. While each day showcased impressive new capabilities, three announcements clearly rose to the top in excitement and impact. Here’s an in-depth look at the standout trio: ChatGPT Pro, Sora, and the announcement of o3.
ChatGPT Pro: What It Is
ChatGPT Pro is a new tier of OpenAI’s flagship conversational AI, designed for people who rely on research-grade intelligence and advanced features every day. Priced at $200 a month, ChatGPT Pro gives users scaled access to OpenAI’s best models—including the latest “o1,” plus o1-mini and GPT-4o—and a handful of exclusive tools. Notably, subscribers can enable “o1 pro mode,” which devotes extra compute to solving the toughest problems in math, programming, science, and more.
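ChatGPT Pro itself is an app-level subscription, and “o1 pro mode” lives inside the ChatGPT interface rather than the API, but the same family of reasoning models can also be reached programmatically. The following is a minimal sketch using the official openai Python SDK; the model name, your account’s access to it, and the OPENAI_API_KEY environment variable are assumptions rather than details from the announcement.

```python
# Minimal sketch: calling one of OpenAI's reasoning models via the official
# `openai` Python SDK. Assumes OPENAI_API_KEY is set in the environment and
# that the account has API access to the model named below; exact model
# identifiers and availability vary by account and change over time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-mini",  # assumed identifier; substitute whichever reasoning model you can access
    messages=[
        {
            "role": "user",
            "content": "Walk through a proof that the square root of 2 is irrational.",
        }
    ],
)

print(response.choices[0].message.content)
```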
Why It Matters
What’s Next
OpenAI has awarded ChatGPT Pro grants to a handful of medical researchers and scientists, with plans to expand the program more broadly. As the Pro tier evolves, expect new productivity features to roll out on ChatGPT Pro first, potentially including specialized reasoning modes, advanced voice capabilities, and, before long, next-gen models like o3.
Sora: What It Is
Sora is OpenAI’s first widely available video-generation model, introduced on “day three” of the 12 Days series. It can produce short clips from simple text prompts, animate user-provided images, and even transform one video into another style or setting. Built around OpenAI’s “world simulation” research, Sora aims to understand and generate video content with realistic (though still evolving) physics and continuity.
Why It Matters
Access and Pricing
Sora is available at sora.com for ChatGPT Plus and Pro users at no additional charge, though higher tiers get expanded resolution and additional monthly generations. Videos generated in Sora carry a watermark and metadata under the C2PA standard, alerting viewers that these clips are AI-created.
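For anyone who wants to check that provenance trail rather than take the watermark at face value, C2PA metadata can be inspected with open tooling. Here is a hedged sketch, assuming the open-source c2patool utility from the Content Authenticity Initiative is installed and on the PATH; the filename is a placeholder.

```python
# Sketch: inspect the C2PA provenance manifest embedded in a downloaded clip.
# Assumes the open-source `c2patool` CLI (Content Authenticity Initiative) is
# installed and on the PATH; "sora_clip.mp4" is a placeholder filename.
import subprocess

result = subprocess.run(
    ["c2patool", "sora_clip.mp4"],  # basic invocation prints the manifest report for a file
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print(result.stdout)  # JSON-style report describing how the asset was created
else:
    print("No C2PA manifest found, or c2patool reported an error:")
    print(result.stderr)
```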
What’s Next
OpenAI envisions Sora as both a playground for creativity and a stepping stone toward AI systems that can interact with the real world. More developer tools, including potential API access, could arrive in 2025, giving app builders ways to integrate AI-powered video into their own products.
o3 and o3-mini: What It Is
The final day of announcements brought perhaps the biggest reveal of all: o3 and o3-mini. These new reasoning models are positioned as the direct successors to OpenAI’s breakthrough o1 series. Early evaluations indicate massive leaps in capability, including near-saturation of several older benchmarks, state-of-the-art coding performance, and astonishing improvements on tough new tests such as ARC-AGI and FrontierMath.
Key Highlights
According to OpenAI’s announcement, o3 reaches roughly 71.7% on SWE-bench Verified, a Codeforces rating of about 2727, 96.7% on AIME 2024, and 87.7% on GPQA Diamond, and it posts an unprecedented 87.5% on the ARC-AGI semi-private evaluation in its high-compute setting. o3-mini, meanwhile, introduces adjustable reasoning-effort settings that let users trade cost and latency against depth of thought.
Safety Testing First
Though o3 promises a quantum leap in AI reasoning, OpenAI is taking extra safety precautions before any public release. External researchers are invited to apply for early access to help identify potential harms, exploits, and policy loopholes. Depending on the outcome of this testing period, o3 and o3-mini are expected to become generally available early next year.
Even though o3 is not yet publicly available for widespread testing, online commentators have already begun speculating that these milestone scores mark a pivotal shift in AI capability. Some enthusiasts are calling o3 “the new boundary” of machine intelligence, going so far as to describe it as the true beginning of AGI. While OpenAI’s official messaging remains cautious, emphasizing that substantial work and safety validation lie ahead, the excitement underscores just how significant o3’s performance gains could prove to be.