New platform seeks to prevent Big Tech from stealing art

Kin.art's goal is to help create a world where "becoming an artist is as straightforward a career path as becoming an accountant."

Jan 24, 2024 - 00:30

In the year since OpenAI launched ChatGPT, touching off a global race among tech corporations to develop and ship ever more powerful artificial intelligence, the fraught relationship between generative AI and art has come into sharp focus.

The past year saw numerous copyright infringement lawsuits filed against such AI companies, even as generative AI technology became a growing source of disruption for the content-creation industries.

The lawsuits, which include those brought by media companies, authors and visual artists, allege that the defendants' highly commercialized generative AI models, such as ChatGPT and Midjourney, are built on stolen content, and further, that the output of those models directly violates existing copyright law.

Related: Human creativity persists in the era of generative AI

The tech companies, in response, have largely argued that the construction of their AI models — which involves the scraping of data from the internet — is "fair use," a doctrine of copyright law that enables the limited reproduction of copyrighted materials.

The U.S. Copyright Office has not yet clarified how or if the fair use doctrine applies to the training of generative AI models.

"The AI companies are working in a mental space where putting things into technology blenders is always okay," copyright expert James Grimmelmann told TheStreet Jan. 7. "The media companies have never fully accepted that. They've always taken the view that 'if you're training or doing something with our works that generates value we should be entitled to part of it.'"

At the same time, several of these tech companies have made clear that access to copyrighted materials is a key component of constructing their models.

OpenAI said recently that "it would be impossible to train today's leading AI models without using copyrighted materials."

David Holz, the CEO of Midjourney, said in a September interview with Forbes that the company's dataset was built through a "big scrape" of the internet, adding that the company did not attempt to get consent from the creators of the images Midjourney scraped.

"There isn’t really a way to get a hundred million images and know where they’re coming from," he said. "There’s no way to find a picture on the internet, and then automatically trace it to an owner and then have any way of doing anything to authenticate it."

Matthew Butterick, one of the lawyers representing artists in a class action against Midjourney and other AI companies, said in response to the release of a court exhibit listing artists whose work was allegedly used to train Midjourney's models that he has since received interest from artists around the world in joining the class action.


Related: George Carlin resurrected – without permission – by self-described 'comedy AI'

Artist protections against scraping

Against this backdrop of content scraped to power industry-disrupting generative AI models, several tools have been introduced to the artistic community, both to protect artists' business prospects and to prevent scraping.

Nightshade, introduced last week by computer scientists at the University of Chicago, is a tool that subtly alters images so that they act as poisoned data when scraped into AI training sets.

"It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization," the researchers said.

The goal is not to destroy models, but to make the cost of unauthorized training expensive enough to force tech companies to pursue licensed training instead.
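For a rough sense of the mechanism: a poison sample starts from a perturbation small enough that a viewer barely notices it, while still changing the raw pixels a scraper ingests. The Python sketch below is only an illustration of that bounded-change idea, not the Nightshade algorithm itself, which optimizes the perturbation to shift how a model associates concepts; the function name, file names and bound are assumptions.

```python
# Illustrative sketch only -- NOT the Nightshade algorithm. Nightshade
# computes an optimized, concept-targeted perturbation; this toy version
# uses bounded random noise purely to show the general shape of the idea:
# a change too small to notice visually, but present in the raw pixels.
import numpy as np
from PIL import Image


def add_bounded_perturbation(path_in: str, path_out: str, epsilon: float = 4.0) -> None:
    """Add a per-channel perturbation bounded by `epsilon` (on a 0-255 scale)."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = np.random.uniform(-epsilon, epsilon, size=img.shape)  # stand-in for an optimized perturbation
    perturbed = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(perturbed).save(path_out)


# Hypothetical usage: file names are placeholders.
add_bounded_perturbation("artwork.png", "artwork_protected.png")
```

In the real tool, the random noise would be replaced by an optimization step that steers a model toward learning the wrong association for a concept, which is what makes the sample poisonous rather than merely noisy.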

Where Nightshade seeks to dissuade image scraping, a new tool released Tuesday by Kin.art aims to prevent scraping from happening in the first place.

Related: Senate Judiciary Committee seeks to build new framework to rein in Big Tech

Kin.art: 'The world's first AI-safe portfolio platform'

The platform uses what it calls a novel approach, developed by the company's CTO, Flor Ronsmans De Vry, designed to disrupt AI crawlers and prevent the scraping of images hosted on the platform.

AI datasets, he said, require pairs of images and labels, used in conjunction, to properly train a model. Kin.art's solution disrupts both of those inputs, using image segmentation to prevent a complete piece of artwork from being injected into a dataset, and label fuzzing to prevent the proper labels from being associated with a given image.

“This dual approach guarantees that artists who showcase their portfolios on Kin.art are fully shielded from unauthorized AI training of their work," Ronsmans De Vry said.
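Kin.art has not published its implementation, but the description maps onto two simple moves: serve the artwork in fragments so no single file contains the whole image, and attach scrambled metadata so any scraped image-text pair carries the wrong label. The Python sketch below shows one way those two ideas could look in code; the tile size, function names and sample caption are hypothetical, not Kin.art's actual method.

```python
# A minimal sketch of the two ideas described above -- not Kin.art's code.
# Tile size, function names and the sample caption are assumptions; the
# point is only to show how segmentation and label fuzzing break the
# image-label pairs a training pipeline expects.
import random
from PIL import Image


def segment_image(path: str, tile: int = 256) -> list[Image.Image]:
    """Split an image into tiles so no single served file holds the full work.
    A viewer can reassemble the tiles for display; a naive scraper gets fragments."""
    img = Image.open(path)
    w, h = img.size
    return [
        img.crop((x, y, min(x + tile, w), min(y + tile, h)))
        for y in range(0, h, tile)
        for x in range(0, w, tile)
    ]


def fuzz_label(label: str) -> str:
    """Scramble a caption so a scraped image-text pair is mislabeled."""
    words = label.split()
    random.shuffle(words)
    return " ".join(words)


# Hypothetical usage: file name and caption are placeholders.
tiles = segment_image("artwork.png")
alt_text = fuzz_label("oil painting of a lighthouse at dusk")
```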

The company said in a statement that any artist using Kin.art to host and sell their artwork will automatically benefit from the platform's AI protection features.

“You can think of Kin.art as the first line of defense for your artwork,” Ronsmans De Vry said in a statement. “While other tools such as Nightshade and Glaze try to mitigate the damage from your artwork already being included in a dataset, Kin.art prevents it from happening to begin with.”

Kin.art, according to the company, is the "world's first AI-safe portfolio platform for artists."

Related: Marc Benioff and Sam Altman at odds over core values of tech companies

Kin.art: On a mission to support artists

Kin.art began as an effort to create a streamlined platform for artists to host their portfolios, aggregate and organize commission requests, and protect and secure transactions. The team was midway through building the platform when the scale of data scraping and copyright infringement by AI models became clear, and it decided to integrate an AI shield into the product.

"Ever since we started our company, our goal has always been to create a world where becoming an artist is as straightforward a career path as becoming an accountant," Ronsmans De Vry told TheStreet. "More specifically, we use our expertise in the tech world to help artists make a living doing what they love."

The platform was developed alongside a focus group of artists.

The team, led by CEO Mai Akiyoshi, COO Ben Yu and Ronsmans De Vry, had launched several other products in the space before developing and launching Kin.art.

"Our mission has always been to support artists to the best of our abilities, and we keep that same philosophy with the Kin.art brand," Ronsmans De Vry said, adding that the list of artists whose work was used to train Midjourney's models further reaffirmed the company's belief that "we should try to protect artists at all costs."

Contact Ian with AI stories via email, [email protected], or Signal 732-804-1223.

