A New Group Is Trying to Make AI Data Licensing Ethical

September 5, 2024

The first wave of major generative AI tools was largely trained on “publicly available” data—basically, anything and everything that could be scraped from the internet. Now, sources of training data are increasingly restricting access and pushing for licensing agreements. With the hunt for additional data sources intensifying, new licensing startups have emerged to keep the source material flowing.

The Dataset Providers Alliance, a trade group formed this summer, wants to make the AI industry more standardized and fair. To that end, it has just released a position paper outlining its stances on major AI-related issues. The alliance is made up of seven AI licensing companies, including music-copyright-management firm Rightsify, Japanese stock-photo marketplace Pixta, and generative-AI copyright-licensing startup Calliope Networks. (At least five new members will be announced in the fall.)

The DPA advocates for an opt-in system, meaning that data can be used only after consent is explicitly given by creators and rights holders. This represents a significant departure from the way most major AI companies operate. Some have developed their own opt-out systems, which put the burden on data owners to pull their work on a case-by-case basis. Others offer no opt-outs whatsoever.

The DPA, which expects members to adhere to its opt-in rule, sees that route as the far more ethical one. “Artists and creators should be on board,” says Alex Bestall, CEO of Rightsify and the music-data-licensing company Global Copyright Exchange, who spearheaded the effort. Bestall sees opt-in as a pragmatic approach as well as a moral one: “Selling publicly available datasets is one way to get sued and have no credibility.”

Ed Newton-Rex, a former AI executive who now runs the ethical AI nonprofit Fairly Trained, calls opt-outs “fundamentally unfair to creators,” adding that some may not even know when opt-outs are offered. “It’s particularly good to see the DPA calling for opt-ins,” he says.

Shayne Longpre, lead of the Data Provenance Initiative, a volunteer collective that audits AI datasets, sees the DPA’s efforts to source data ethically as admirable, though he suspects the opt-in standard could be a tough sell, given the sheer volume of data most modern AI models require. “Under this regime, you’re either going to be data-starved or you’re going to pay a lot,” he says. “It could be that only a few players, large tech companies, can afford to license all that data.”

In the paper, the DPA comes out against government-mandated licensing, arguing instead for a “free market” approach in which data originators and AI companies negotiate directly. Other guidelines are more granular. For example, the alliance suggests five potential compensation structures to make sure creators and rights holders are paid appropriately for their data. These include a subscription-based model, “usage-based licensing” (in which fees are paid per use), and “outcome-based” licensing, in which royalties are tied to profit. “These could work for anything from music to images to film and TV or books,” Bestall says.