OpenAI on the defensive after multiple PR setbacks in one week

May 20, 2024


Since the launch of its latest AI language model, GPT-4o, OpenAI has spent the past week on the defensive amid a string of bad news, rumors, and ridicule circulating on traditional and social media. The negative attention may signal that OpenAI has entered a new level of public visibility and is now receiving pushback against its AI approach from beyond tech pundits and government regulators.

OpenAI’s rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson’s performance as an AI companion in the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously drawn that comparison with an earlier voice interface for ChatGPT that launched in September 2023.

While that September update included a voice called “Sky” that some said sounded like Johansson, it was GPT-4o’s seemingly lifelike new conversational interface, complete with laughter and emotionally charged tonal shifts, that drew a widely circulated Daily Show segment ridiculing the demo’s perceived flirtatiousness. A Saturday Night Live joke then reinforced the implied connection to Johansson’s voice.

That must have spooked OpenAI (or perhaps the company heard from Johansson’s representatives; we don’t know), because the next day, OpenAI announced it was pausing use of the “Sky” voice in ChatGPT. The company named Sky specifically in a tweet and defensively addressed the Johansson comparison in its blog post: “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote.

Superalignment team implodes

The AI research company’s PR woes continued on Tuesday with the high-profile resignations of two key safety researchers: Ilya Sutskever and Jan Leike, who led the “Superalignment” team focused on ensuring that hypothetical, currently nonexistent advanced AI systems do not pose risks to humanity. Following his departure, Leike took to social media to accuse OpenAI of prioritizing “shiny products” over crucial safety research.

In a joint statement posted on X, Altman and OpenAI President Greg Brockman addressed Leike’s criticisms, emphasizing their gratitude for his contributions and outlining the company’s strategy for “responsible” AI development. In a separate, earlier post, Altman acknowledged that “we have a lot more to do” regarding OpenAI’s alignment research and safety culture.

Meanwhile, critics like Meta’s Yann LeCun maintained the drama was much ado about nothing. Responding to a tweet where Leike wrote, “we urgently need to figure out how to steer and control AI systems much smarter than us,” LeCun replied, “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.”

LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts [sic] that can transport hundreds of passengers at near the speed of the sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.”
