Fireside Chat: Building the AI Supercloud

Overview

This chat unpacked the “AI supercloud”: a new, AI-specialized cloud built for massive scale (think gigawatt-class data centers) with tighter, denser infrastructure than yesterday’s clouds. Roman Chernin of Nebius framed three big workload buckets: frontier pre-training, post-training and fine-tuning, and inference. He argued that the platform layer is being rethought so developers can fine-tune models, run RL, and serve low-cost, high-throughput inference more easily. Open-source models are pivotal: teams often prove a use case with closed models, then switch to tuned open models for cost, data leverage, and differentiation. Nebius partners with hyperscalers (e.g., on large GPU builds) to fund broader multi-tenant services, and sees adoption patterns diverge: startups chase speed on AI-specialized clouds, while enterprises start on hyperscalers and offload AI-heavy work when capacity, performance, or economics demand it. With regulation and data sovereignty driving regional build-outs, and on-prem getting harder as chips and facilities evolve, the near-term reality looks hybrid and federated: use the provider that has capacity now, meets your compliance needs, and scales with you.

Speakers
Roman Chernin
Nebius
Moderator
Vijay Narayanan
Fellows Fund