‘Can AI be an artist?’ A Sotheby’s auction tests the answer, while human artists protest AI training

Hello and welcome to Eye on AI. In today’s edition…The embrace (and protest) of AI art reaches a new level; Hugging Face is caught hosting thousands of malware-infected models; Anthropic’s updated Claude model marks a step toward AI assistants; and Character.AI faces a lawsuit over a 14-year-old’s suicide.

Global auction house Sotheby’s is gearing up for its first-ever auction of an artwork created by an “AI artist.”

The work—a portrait of AI pioneer Alan Turing titled “AI God”—was created by a humanoid robot using a combination of AI algorithms, cameras in her (the robot presents as female) eyes, and a robotic arm. Unlike most AI art, which is generated digitally by text-to-image models, Ai-Da, as the humanoid robot is called, actually painted the canvas as well, according to CBS News.

This isn’t the first artwork created by an AI model to go to auction with a major auction house. In 2018, Christie’s auctioned off a work called “Portrait of Edmond de Belamy” that was created by an AI model, printed to canvas, and sold for $432,500, way above initial estimates. But it is a first for Christie’s rival Sotheby’s—and the humanoid robot actually painting the portrait adds another layer to AI acting as an artist. Sotheby’s expects the work to sell for between $120,000 and $180,000.

AI-generated art has flooded the internet, and recent shows like Art Basel have included exhibitions that feature AI in some way. The former has amounted to AI slop taking over social media feeds, while these art exhibitions—including the “AI God” painting being auctioned—largely feel like stunts. Still, the Sotheby’s listing represents a major embrace of AI art by the auction house and a stance in the debate over whether AI can be credited as an artist or inventor, at a time when criticism of the concept is heating up—and when software companies are increasingly trying to cash in.

Creatives stand up against AI art

Since the beginning of the generative AI boom, artists have been launching copyright lawsuits against AI companies, denouncing the use of their work to train models, and voicing concerns that AI art will devalue their work. Yesterday, perhaps the largest collective action yet was taken, when more than 15,000 visual artists, writers, musicians, and other creatives signed an open letter against using creative works to train AI models.

Ed Newton-Rex, the former head of audio at Stability AI who resigned last year over the use of copyrighted content for model training, and who organized the open letter, told The Guardian that artists are “very worried” about the use of their works and the impact of AI art.

“There are three key resources that generative AI companies need to build AI models: people, compute, and data. They spend vast sums on the first two—sometimes a million dollars per engineer, and up to a billion dollars per model. But they expect to take the third—training data—for free,” he said.

Yesterday, a former OpenAI researcher who spent four years at the company and was responsible for gathering and organizing web data to train ChatGPT told the New York Times the company broke copyright laws. He left the company in August because he “no longer wanted to contribute to technologies that he believed would bring society more harm than benefit” and is among the first employees from a major AI company to speak out publicly against the use of copyrighted data to train models.

AI art for profit 

Beyond the model developers, there’s also a slew of companies in the digital image creation and editing space that are using these models to boost their own businesses and profits. Adobe has integrated AI tools based on these models into programs like Photoshop. Canva, the company trying to disrupt Adobe, has also leaned heavily into AI, partnering with OpenAI and adding new generative AI features throughout its product. Many of these features are included only in the paid subscription, and the company is counting on them to get users to upgrade from its free offering, especially as Canva increasingly goes after business customers.

Since art is meant to push boundaries, comment on our world, and make us question, a work like Ai-Da’s “AI God” is doing exactly what art is supposed to do. But as the artists protesting the use of their work to train models know, the experimentation with AI and art doesn’t start or end there. Just as quickly as AI art is headlining Sotheby’s, it’s also being embraced by commercial companies.

What role AI will play in art isn’t an easy question. The U.S. Patent and Trademark Office earlier this year ruled that AI cannot legally be considered an inventor, and whether AI can be considered an artist will continue to be up for debate. Either way, Newton-Rex and the artists pushing back against the use of their work to train models have a point. The other pieces of the AI supply chain are worth big money and have sent companies like Nvidia skyrocketing to unprecedented market caps. Why should the data—the works of real people—be available for free, no consent required?

And with that, here’s more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS Security researchers discover thousands of malicious models on Hugging Face. Hackers are setting up fake accounts posing as companies like Meta, SpaceX, and Ericsson to lure downloads of the infected software. For example, one model that claims to be from 23andMe installs malicious code that hunts for AWS credentials in order to steal cloud resources, according to researchers at Protect AI, which found over 3,000 malicious models. Hugging Face said it has verified the profiles of tech companies since 2022 and scans for unsafe code. You can read more in Forbes.

Anthropic releases a new capability for Claude 3.5 Sonnet that enables the model “to use computers the way people do.” Called Computer Use, the API lets Claude perceive what’s on a computer interface, scroll, move a cursor, click buttons, zoom, type, and more to perform actions. It’s a step toward the AI virtual assistants that could go off on their own and complete multi-step tasks for users, which Anthropic and other tech companies are all racing to build. The updated version of Claude 3.5 Sonnet with Computer Use is now available to developers, but Anthropic notes in its blog post that the capability is still “experimental.”
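For developers curious to experiment, a request looks roughly like the sketch below. It assumes the Anthropic Python SDK, and the model ID, tool type, and beta flag strings are taken from Anthropic’s launch-era documentation, so they may have changed; treat it as an illustration rather than a definitive integration.

```python
# Minimal sketch of a Computer Use request via the Anthropic Python SDK
# (pip install anthropic). The model ID, tool type, and beta flag reflect
# the launch announcement and may have changed since.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[
        {
            "type": "computer_20241022",   # grants screenshot, mouse, and keyboard actions
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    messages=[{"role": "user", "content": "Open a browser and search for Sotheby's."}],
    betas=["computer-use-2024-10-22"],     # opt-in flag for the experimental capability
)

# Claude replies with tool_use blocks (e.g. take a screenshot, click, type);
# the calling application is responsible for executing those actions on a real
# or sandboxed machine and returning the results in the next message.
print(response.content)
```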

Google makes its tech for watermarking and detecting AI-generated text generally available. Called SynthID Text, the technology was previously only available to developers but can now be downloaded by anyone from Hugging Face or Google’s Responsible GenAI Toolkit. SynthID Text has been integrated with Google’s Gemini models since this spring, but the company warns it still has limitations and does not perform as well on short text, translated text, or responses to factual questions. You can read more from TechCrunch.
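For those who want to kick the tires, the sketch below shows roughly how the open-source release plugs into Hugging Face’s transformers library. It assumes a recent transformers version that ships SynthIDTextWatermarkingConfig; the model name and watermarking keys are placeholders for illustration, not recommendations.

```python
# Minimal sketch of generating watermarked text with SynthID Text, assuming a
# transformers release (roughly v4.46+) that includes SynthIDTextWatermarkingConfig.
# The model and keys below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

model_id = "google/gemma-2-2b-it"  # placeholder; any causal LM usable with generate()
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The keys define the watermark and should be kept private by whoever applies it.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer("Write a short note about AI art.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # biases sampling to embed the statistical watermark
    do_sample=True,
    max_new_tokens=60,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A separate detector configured with the same keys then scores text for the watermark, which helps explain Google’s caveat that short or heavily rewritten passages are harder to flag.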

A Florida mother plans to sue Character.AI after her 14-year-old son became obsessed with a chatbot before his suicide. Sewell Setzer III spent months talking to a chatbot on the platform and formed an emotional attachment, messaging the bot dozens of times a day with updates on his life, according to the New York Times. His family and friends saw him get sucked deeper into his phone, lose interest in his previous hobbies, and go straight to his room, where he chatted with the bot for hours at a time. His parents brought him to several sessions with a therapist, but it was to the chatbot that he confided his thoughts of suicide. Setzer killed himself while chatting with the bot in February.

Longtime OpenAI policy researcher resigns to “publish freely” in the non-profit sector. Miles Brundage, OpenAI’s head of policy and senior advisor for AGI readiness, announced on X that he’s leaving the company after six years to pursue independent policy research and advocacy and that the team working on preparedness for AGI will be disbanded. In his post, Brundage encouraged employees at the company to “remember that their voices matter.” “OpenAI has a lot of difficult decisions ahead, and won’t make the right decisions if we succumb to groupthink,” he wrote. His departure marks yet another high-level resignation for the company. You can read more from Fortune’s David Meyer here.

FORTUNE ON AI How Europe’s tech-shy Fortune 500 is embracing AI — By Ryan Hogg

SoftBank, Mastercard, and Anthropic cyber chiefs sound alarms on AI phishing and deepfakes—but those aren’t the only things keeping them up at night — By Sharon Goldman

Tim Cook makes another trip to China as local users gripe about Apple’s AI delays in the world’s largest smartphone market — By Lionel Lim

Chipotle just released an AI recruiting tool to gain an edge in the ‘competitive labor market’ — By Brit Morse and Emma Burleigh

AI CALENDAR Oct. 28-30: Voice & AI, Arlington, Va.

Nov. 19-22: Microsoft Ignite, Chicago

Dec. 2-6: AWS re:Invent, Las Vegas

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia

Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)

EYE ON AI NUMBERS $1,000 That’s how much people found guilty of violating Singapore’s new law banning the use of deepfakes and AI-manipulated media in online election advertisements could be fined. Alternatively, they could receive 12 months of jail time. Social media companies can be fined up to $1,000,000. The law covers both deceptive audio and video, spanning a wide range of manipulations, from entirely fabricated media to subtle alterations like changing the pauses in a candidate’s speech.

This is the online version of Eye on AI, Fortune’s weekly newsletter on how AI is shaping the future of business. Sign up for free.
