When James Dolan, CEO of Sphere Entertainment, first proposed screening The Wizard of Oz (1939) inside the massive Las Vegas Sphere, even Google’s AI engineers were skeptical. The iconic film, shot on 35mm celluloid, was never intended for a 160,000-square-foot, 16K-resolution wraparound screen—one of the most advanced displays in the world.
Yet, through groundbreaking AI upscaling, generative performance creation, and cinematic expansion, Google’s DeepMind and Cloud teams transformed the classic movie into an immersive experience unlike anything seen before. The result? A 90% AI-enhanced version of The Wizard of Oz, set to debut on August 28, 2025, at the Sphere.
The Impossible Challenge: Adapting a 1939 Film for the Sphere
The Las Vegas Sphere, which opened in 2023, is a technological marvel. Its 18,600-seat auditorium features a 20,000-speaker audio system and a 16K LED screen wrapping 270 degrees around the audience. Until now, it has only hosted live performances (U2, Phish, Dead & Company) and custom-made films like Postcard from Earth.
Showing a pre-digital, 4:3 aspect ratio film on this screen seemed infeasible.
Key Technical Hurdles:
- Resolution Limitations – The original film’s resolution (~2K when scanned) would appear blurry and pixelated on the Sphere’s ultra-high-definition display (the rough arithmetic is sketched after this list).
- Aspect Ratio Constraints – The movie was shot in 1.37:1, far narrower than the Sphere’s wraparound canvas.
- Missing Visual Data – Many scenes had cropped backgrounds, requiring AI to generate entirely new scenery and characters.
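To put the resolution gap in rough numbers (the dimensions below are approximations for illustration, not official scan or display specifications), a ~2K scan has to grow roughly eightfold along each axis to fill a 16K-wide canvas, which means the screen shows far more synthesized pixels than original ones:

```python
# Rough, back-of-the-envelope illustration of the upscaling gap.
# These dimensions are approximations for illustration only, not
# official scan or display specifications.

scan_w, scan_h = 2048, 1494          # ~2K scan of a 1.37:1 frame
display_w, display_h = 16384, 16384  # ~16K x 16K interior display (approx.)

linear_scale = display_w / scan_w
pixel_ratio = (display_w * display_h) / (scan_w * scan_h)

print(f"Linear upscale factor: ~{linear_scale:.0f}x per axis")
print(f"Pixel count ratio:     ~{pixel_ratio:.0f}x")
# -> roughly 8x per axis and ~88x more pixels than the scan contains,
#    so the vast majority of displayed pixels must be synthesized.
```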
Google’s AI Breakthroughs: How They Did It
To overcome these challenges, Google DeepMind and Google Cloud developed three revolutionary AI techniques:
1. AI Super-Resolution & Detail Reconstruction
Traditional upscaling methods (like bilinear interpolation) only interpolate between existing pixels, which produces soft, blurry results at extreme magnifications. Google instead used Gemini-powered AI models to:
- Generate new pixels based on contextual understanding of the film.
- Enhance facial details (e.g., the Scarecrow’s straw texture, Dorothy’s freckles).
- Restore lost film grain without introducing digital artifacts.
“There are scenes where the Scarecrow’s nose is like 10 pixels,” said Steven Hickson, Google DeepMind’s AI Research Director. “Making that look natural on a screen this size was one of our biggest challenges.”
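Google has not published the details of this pipeline, so the following is only a minimal, illustrative sketch of the workflow such super-resolution systems generally share: a frame is cut into overlapping tiles, each tile is upscaled (here by a trivial nearest-neighbor stand-in where a learned model would actually run), and the tiles are feather-blended back together. All function names and parameters here are hypothetical, not part of Google’s toolchain.

```python
import numpy as np

def naive_upscale(tile: np.ndarray, scale: int) -> np.ndarray:
    """Stand-in upscaler (nearest-neighbor). In a real pipeline this is
    where a learned super-resolution model would process the tile."""
    return tile.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_frame_tiled(frame: np.ndarray, scale: int = 8,
                        tile: int = 128, overlap: int = 16) -> np.ndarray:
    """Split a frame into overlapping tiles, upscale each tile, and blend
    the results with a feathered mask so tile seams are not visible."""
    h, w, c = frame.shape
    out = np.zeros((h * scale, w * scale, c), dtype=np.float64)
    weight = np.zeros((h * scale, w * scale, 1), dtype=np.float64)
    # Feathered blend weights: highest in the tile center, near zero at edges.
    ramp = np.minimum(np.linspace(0, 1, tile * scale),
                      np.linspace(1, 0, tile * scale)) + 1e-3
    mask = np.outer(ramp, ramp)[..., None]
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y0, x0 = min(y, h - tile), min(x, w - tile)
            up = naive_upscale(frame[y0:y0 + tile, x0:x0 + tile], scale)
            oy, ox = y0 * scale, x0 * scale
            out[oy:oy + tile * scale, ox:ox + tile * scale] += up * mask
            weight[oy:oy + tile * scale, ox:ox + tile * scale] += mask
    return (out / weight).astype(frame.dtype)

# Example: 8x upscale of a small synthetic frame.
frame = (np.random.rand(256, 352, 3) * 255).astype(np.uint8)
big = upscale_frame_tiled(frame)
print(frame.shape, "->", big.shape)   # (256, 352, 3) -> (2048, 2816, 3)
```

Tiling matters at this scale because a full 16K-class frame is far too large to push through a generative model in one pass; the feathered overlap is the standard trick for hiding the seams between independently processed tiles.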
2. Generative “Outpainting” – Expanding the Film’s World
Since the Sphere’s screen is wider and taller than the original frame, Google used Imagen 3 (Google’s latest text-to-image model) to:
- Extend backgrounds (e.g., adding more of the Emerald City skyline).
- Introduce off-screen characters (like Uncle Henry, who was originally unseen in Aunt Em’s scenes).
- Reconstruct missing set pieces (such as additional Munchkinland buildings).
“We had to invent new ways for AI to understand 1930s cinematography,” explained Ravi Rajamani, Google Cloud’s Head of Generative AI. “The AI had to learn from just one movie, which is extremely rare in machine learning.”
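Outpainting itself follows a simple mechanical recipe even though the generative models behind it are not: the original frame is pinned to the center of a larger canvas, a mask marks everything outside it, and a model is asked to synthesize the masked region so it continues the visible image. The sketch below only prepares that canvas and mask; `generative_fill` is a hypothetical placeholder, not Imagen 3’s actual API, and the dimensions are illustrative.

```python
import numpy as np

def prepare_outpaint_canvas(frame: np.ndarray, target_w: int, target_h: int):
    """Center the original frame on a larger canvas and build a mask
    marking the region the generative model must fill (1 = generate)."""
    h, w, c = frame.shape
    if target_w < w or target_h < h:
        raise ValueError("target canvas must be at least as large as the frame")
    canvas = np.zeros((target_h, target_w, c), dtype=frame.dtype)
    mask = np.ones((target_h, target_w), dtype=np.uint8)
    y0 = (target_h - h) // 2
    x0 = (target_w - w) // 2
    canvas[y0:y0 + h, x0:x0 + w] = frame      # keep original pixels untouched
    mask[y0:y0 + h, x0:x0 + w] = 0            # 0 = preserve, 1 = generate
    return canvas, mask

# Hypothetical model call -- not a real API. A generative model would be
# conditioned on `canvas`, constrained by `mask`, and guided by a prompt
# describing what the new scenery should contain.
def generative_fill(canvas, mask, prompt):
    raise NotImplementedError("placeholder for an image-generation model")

# Example: expand a 1.37:1 frame toward a much wider canvas.
frame = (np.random.rand(1494, 2048, 3) * 255).astype(np.uint8)
canvas, mask = prepare_outpaint_canvas(frame, target_w=6144, target_h=2048)
print(canvas.shape, mask.mean())  # fraction of pixels the model must invent
```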
3. AI-Generated Character Performances
Some scenes required entirely new character movements to fit the Sphere’s expanded canvas. Google’s Veo 2 (a video-generation AI) was trained on the original film to:
- Animate background characters (like additional Winged Monkeys).
- Adjust actor eyelines to match the Sphere’s curved perspective.
- Synthesize lip-synced dialogue for extended scenes.
To ensure authenticity, Oscar-nominated producer Jane Rosenthal (known for The Irishman) consulted on performance accuracy.
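The Sphere’s geometry adds one more well-understood step that Google has not detailed publicly: a flat, rectilinear frame has to be resampled into angular (longitude/latitude) coordinates before it can wrap around a viewer without obvious distortion. The sketch below shows a conventional rectilinear-to-equirectangular warp as one way to think about that mapping; the field-of-view and output sizes are arbitrary illustration values, not Sphere specifications.

```python
import numpy as np

def rectilinear_to_equirectangular(img: np.ndarray, out_w: int, out_h: int,
                                   hfov_deg: float = 90.0) -> np.ndarray:
    """Resample a flat pinhole-camera image into equirectangular
    (longitude/latitude) coordinates, a common first step before
    mapping imagery onto a dome or wraparound display."""
    h, w, _ = img.shape
    f = (w / 2) / np.tan(np.radians(hfov_deg) / 2)   # focal length in pixels

    # Longitude/latitude of every output pixel (here spanning +/- 90 degrees).
    lon = np.linspace(-np.pi / 2, np.pi / 2, out_w)
    lat = np.linspace(-np.pi / 2, np.pi / 2, out_h)
    lon, lat = np.meshgrid(lon, lat)

    # Direction vector for each angle, camera looking along +z.
    dx = np.cos(lat) * np.sin(lon)
    dy = np.sin(lat)
    dz = np.cos(lat) * np.cos(lon)

    # Perspective projection back into the source image plane.
    valid = dz > 1e-6
    u = np.where(valid, w / 2 + f * dx / np.maximum(dz, 1e-6), -1)
    v = np.where(valid, h / 2 + f * dy / np.maximum(dz, 1e-6), -1)

    out = np.zeros((out_h, out_w, img.shape[2]), dtype=img.dtype)
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[inside] = img[v[inside].astype(int), u[inside].astype(int)]
    return out  # pixels outside the source frustum stay black

# Example: warp a 1.37:1 frame into a 2:1 panorama segment.
frame = (np.random.rand(1494, 2048, 3) * 255).astype(np.uint8)
pano = rectilinear_to_equirectangular(frame, out_w=4096, out_h=2048)
print(pano.shape)
```

The practical point is that regions of the output falling outside the original camera frustum stay empty, which is exactly the gap that outpainting and generated performances were needed to fill.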
Why This Is a Milestone for AI in Film
This project represents a major leap in AI’s role in entertainment:
1. A New Form of “AI Remastering”
Unlike simple colorization or noise reduction, this is a full cinematic reimagining. Future classics (Casablanca, Gone with the Wind) could undergo similar transformations.
2. The Future of Immersive Cinema
The Sphere’s format could redefine moviegoing. Imagine:
- Star Wars with AI-extended space battles.
- Jurassic Park with AI-generated dinosaur vistas.
- Live concerts with AI-rendered stage expansions.
3. Ethical & Industry Implications
While groundbreaking, this raises questions:
- Should AI alter classic films without original filmmakers’ input?
- Will AI replace traditional restoration artists?
- Could studios use this tech to “remake” movies without actors?
James Dolan remains unfazed: “I can’t wait for Hollywood to see this. Their jaws will drop.”
Conclusion: A Glimpse Into Cinema’s AI Future
Google’s AI-powered Wizard of Oz is more than a technical marvel—it’s a proof of concept for the next era of filmmaking. As AI continues evolving, we may see:
- AI-enhanced IMAX versions of classic films.
- Interactive movies where AI generates scenes in real time.
- Fully AI-remastered film libraries for next-gen displays.
For now, audiences will witness a 1939 film reborn through 2025 AI—proving that even the oldest movies can find new magic in the digital age.
“This isn’t just a restoration,” says Thomas Kurian, Google Cloud CEO. “It’s a re-creation. The only other way to do this would be to go back in time and film it again.”
The Las Vegas Sphere debut on August 28 will determine if audiences agree. One thing is certain: The future of cinema will never be the same.
Additional Research & Industry Insights
- Warner Bros. granted rare access to the original film negatives for AI training.
- Disney is reportedly exploring similar AI upscaling for its classic animations.
- NVIDIA’s CEO Jensen Huang has called AI film restoration “the next frontier in computational cinematography.”
- Film preservationists debate whether AI alterations compromise artistic integrity.
This project blurs the line between restoration and reinvention—ushering in a new debate on how far AI should go in reshaping cinematic history.