The Godmother of AI Wants Everyone to Be a World Builder

September 14, 2024

According to market-fixated tech pundits and professional skeptics, the artificial intelligence bubble has popped, and winter’s back. Fei-Fei Li isn’t buying that. In fact, Li—who earned the sobriquet the “godmother of AI”—is betting on the contrary. She’s on a part-time leave from Stanford University to cofound a company called World Labs. While current generative AI is language-based, she sees a frontier where systems construct complete worlds with the physics, logic, and rich detail of our physical reality. It’s an ambitious goal, and despite the dreary nabobs who say progress in AI has hit a grim plateau, World Labs is on the funding fast track. The startup is perhaps a year away from having a product—and it’s not clear at all how well it will work when and if it does arrive—but investors have pitched in $230 million and are reportedly valuing the nascent startup at a billion dollars.

More than a decade ago, Li helped AI turn a corner by creating ImageNet, a bespoke database of digital images that allowed neural nets to get significantly smarter. She feels that today’s deep-learning models need a similar boost if AI is to create actual worlds, whether they’re realistic simulations or totally imagined universes. Future George R.R. Martins might compose their dreamed-up worlds as prompts instead of prose, worlds you might then render and wander around in. “The physical world for computers is seen through cameras, and the computer brain behind the cameras,” Li says. “Turning that vision into reasoning, generation, and eventual interaction involves understanding the physical structure, the physical dynamics of the physical world. And that technology is called spatial intelligence.” World Labs calls itself a spatial intelligence company, and its fate will help determine whether that term names a revolution or a punch line.

Li has been obsessing over spatial intelligence for years. While everyone was going gaga over ChatGPT, she and a former student, Justin Johnson, were excitedly gabbing in phone calls about AI’s next iteration. “The next decade will be about generating new content that takes computer vision, deep learning, and AI out of the internet world, and gets them embedded in space and time,” says Johnson, who is now an assistant professor at the University of Michigan.

Li decided to start a company early in 2023, after a dinner with Martin Casado, a pioneer in virtual networking who is now a partner at Andreessen Horowitz. That’s the VC firm notorious for its near-messianic embrace of AI. Casado sees AI on a path similar to that of computer games, which started with text, moved to 2D graphics, and now have dazzling 3D imagery. Spatial intelligence will drive that change. Eventually, “You could take your favorite book, throw it into a model, and then you literally step into it and watch it play out in real time, in an immersive way,” he says. The first step to making that happen, Casado and Li agreed, is moving from large language models to large world models.

Li began assembling a team, with Johnson as a cofounder. Casado suggested two more people—one was Christoph Lassner, who had worked at Amazon, Meta’s Reality Labs, and Epic Games. He is the inventor of Pulsar, a rendering scheme that led to a celebrated technique called 3D Gaussian Splatting. That sounds like an indie band at an MIT toga party, but it’s actually a way to synthesize scenes, as opposed to one-off objects. Casado’s other suggestion was Ben Mildenhall, who had created a powerful technique called NeRF—neural radiance fields—that transmogrifies 2D pixel images into 3D graphics. “We took real-world objects into VR and made them look perfectly real,” he says. He left his post as a senior research scientist at Google to join Li’s team.
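The trick at the heart of NeRF can be seen in miniature: a network predicts a density and a color at points sampled along each camera ray, and those samples are alpha-composited into a single pixel. Here is a minimal sketch of that compositing step in NumPy; the function name and toy inputs are illustrative, not drawn from NeRF’s actual codebase, and the neural network that would supply the densities and colors is omitted.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite color samples along one camera ray.

    sigmas: (N,) volume densities predicted at each sample point
    colors: (N, 3) RGB values predicted at each sample point
    deltas: (N,) distances between adjacent samples along the ray
    """
    # Opacity contributed by each ray segment: 1 - exp(-sigma * delta)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: how much light survives to reach each sample,
    # i.e. the product of (1 - alpha) over all earlier samples.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas   # per-sample contribution to the pixel
    return weights @ colors    # final pixel color, shape (3,)
```

With a nearly opaque sample close to the camera, the pixel simply takes that sample’s color; everything behind it is occluded, which is the transmittance term doing its job.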

One obvious goal of a large world model would be imbuing, well, world-sense into robots. That indeed is in World Labs’ plan, but not for a while. The first phase is building a model with a deep understanding of three-dimensionality, physicality, and notions of space and time. Next will come a phase where the models support augmented reality. After that the company can take on robotics. If this vision is fulfilled, large world models will improve autonomous cars, automated factories, and maybe even humanoid robots.
