Google Cloud just wrapped its Next '25 event in Las Vegas, unveiling a jaw-dropping 229 announcements, spanning everything from advanced AI models to new ways of connecting your favorite tools with Google's agentic ecosystem.
You'd think that would be enough to grab headlines on its own, but Google also teamed up with The Sphere to reimagine The Wizard of Oz using AI. And that might just be the most mind-bending demo of them all.
To unpack what happened at Next '25, I spoke with Marketing AI Institute founder and CEO Paul Roetzer, who attended the event, on Episode 144 of The Artificial Intelligence Show.
A Quick Snapshot of Google Cloud Next '25
While Google unveiled far too many new products and updates to cover in detail, here's a sampler of what dominated the conversation at Next '25:
- Gemini 2.5 Pro. Google's latest heavyweight AI model, now in public preview. It claims advanced reasoning and coding capabilities that outpace earlier generations, and it currently ranks #1 on the Chatbot Arena leaderboard, according to Google's stats.
- Gemini 2.5 Flash. A speedier, more cost-efficient variant of Gemini that Google says still packs enough punch for many tasks.
- Generative media. Major upgrades to Google's text-to-image, text-to-audio, and text-to-video models (Imagen 3, Chirp 3, Veo 2, and a new text-to-music model called Lyria). The focus? High-quality outputs and faster, more precise editing capabilities, whether you're generating images or entire video scenes.
- AI infrastructure. Google flexed its muscle in massive-scale AI training and inference. Think new GPUs, next-gen TPUs, ultra-fast networking, and storage: basically, an industrial-strength AI backbone.
- Agentspace. Google made big updates to its AI "control center," which integrates your everyday work apps and data with its most powerful models and newly improved AI agents. The idea: let you build, customize, and manage AI-driven workflows, and do it all within a single ecosystem.
Seems like enough news to last the year, right? Well, Google had one more trick up its sleeve…
Google AI Brings 1939's Wizard of Oz into the Future at The Sphere
Imagine stepping into one of the most advanced venues on the planet, only to witness The Wizard of Oz from 1939 getting an ultra-HD, 360-degree AI makeover, complete with out-of-this-world visuals, seats that rumble with thunder, and an immersive environment that puts you right in Dorothy's ruby slippers.
That's exactly what happened on the first night of Google Cloud Next '25. Roetzer was there, and according to him, it was nothing short of "crazy."
The Sphere is a massive spherical 360-degree event venue where Google previewed its work to use AI to turn The Wizard of Oz into a fully modern, immersive experience.
So how do you bring a classic film, shot in a small, rectangular frame, onto a 160,000-square-foot dome without it looking horribly stretched and blurry?
According to Roetzer, Google took multiple AI models (including versions of Veo and Imagen) and, with a large team of human professionals, pioneered three key techniques:
- Super resolution. Sharpening those old-school frames into ultra-high-definition imagery that can match The Sphere's enormous display.
- Outpainting. Filling in the gaps between scenes to seamlessly expand the original frame.
- Performance generation. Compositing characters and details that simply never existed in the original film, so the viewer sees a continuous, immersive scene in 360 degrees.
The end result is a mesmerizing preview of what's possible when you combine cutting-edge AI with the next generation of immersive experiences. The full reimagined Oz film debuts at The Sphere in late August. If you're planning a trip to Vegas, you may want to add this to your list.
"The thing I took away from it was the human-machine collaboration," says Roetzer.
This wasn't a matter of handing the entire film to Gemini and having it figure all of this out. Dozens of the top minds inside Google DeepMind and Google Cloud worked on this, pushing the limits of the models and creating entirely new techniques to make it possible.
Says Roetzer:
"They interviewed one guy from Google DeepMind and said, 'Hey, when this project started [two years ago], what did you think was impossible?' And he answered: 'Everything. There was nothing we were doing that the models at that moment could actually achieve.'"
Agentspace: The Real Star of Google Cloud Next '25
Wowing the crowd with The Wizard of Oz aside, the real star of Next '25 for many attendees was Agentspace, Google's hub for building, training, and orchestrating AI agents.
"It's a single space that allows you, in a no-code environment, to build agents to do whatever you want to do," says Roetzer. "And it connects to third-party software and data. So it basically becomes a platform where you live and do everything you need to do [with AI]."
That means everyday users can use Agentspace to build their own AI agents for tasks like:
- Researching any topic (with a "Deep Research" mode that synthesizes sources and details)
- Transforming lengthy documents into presentations
- Generating an audio overview of your reports or strategic plans
- Creating an "agent gallery" where your entire team can discover and deploy new AI helpers
Basically, it's the holy grail of letting AI handle busywork so you can stay focused on higher-level tasks.
"The vision for it is powerful," says Roetzer. "You can see how it becomes like a control panel, basically, for a knowledge worker to just have all the tools they need right there."
The one catch? Agentspace isn't widely available yet. Google is letting potential users request access, but a timeline for general availability is still unclear.
Sundar Pichai's Big AI Prediction
In a session at Next '25, Alphabet and Google CEO Sundar Pichai made a statement that got folks buzzing.
He expects the pace of model advancements to continue for at least the next 12 to 18 months, says Roetzer, with major new models arriving every three to four months.
Let that sink in: every few months, we may see leaps in capabilities, new specialized models, and expansions of existing AI frameworks, on top of everything Google just announced.
"That's just crazy to think about," says Roetzer.
If you're feeling overwhelmed, you're not alone. But you're also about to witness an era of unprecedented AI growth, especially from Google's ecosystem, if Pichai's prediction holds.