Technology

What Are Art and Intellectual Property in the Age of Generative AI?


Much of the discussion and argument about generative artificial intelligence revolves around the technology’s potential to extinguish the human race. The reality is likely more mundane but represents a more serious threat, one that is real rather than theoretical.

In late May, the Center for AI Safety published this statement on AI risk, signed by hundreds of researchers and “other notable figures”:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

While this is good for headlines, the statement focuses on a doomsday scenario, using terminology like “existential risk” to gin up fear, along with an implied promise that none of the signatories will allow it to happen.

This sideshow masks the “real threat from artificial intelligence,” according to Emily Bender and Alex Hanna:

Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.

Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.

Another real threat Bender and Hanna mention is the current actors’ and writers’ strikes, “where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.”

At the same time, traditional media companies in newspapers, books and television are seeking payment for the use of their copyrighted material to train generative AI models like ChatGPT and Bard. On Sunday, tech analyst Benedict Evans published an essay on AI and intellectual property, drawing a distinction between training a large language model (LLM) on copyrighted material and infringing that material’s copyright protection.

A website may be included in the sample used to train a program like ChatGPT, “but the training data is not the model.” Evans continues:

LLMs are not databases. They deduce or infer patterns in language by seeing vast quantities of text created by people – we write things that contain logic and structure, and LLMs look at that and infer patterns from it, but they don’t keep it. So ChatGPT might have looked at a thousand stories from the New York Times, but it hasn’t kept them.
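
Evans’s distinction can be made concrete with a toy sketch. What follows is not how ChatGPT works internally, just the crudest possible stand-in: a bigram model that reduces a tiny, invented corpus to word-transition counts. The corpus and all identifiers here are hypothetical, but the underlying point carries over: what survives training is statistics about language, not the documents themselves.

```python
# A toy illustration of Evans's point: a miniature "language model" keeps
# statistics derived from text, not the text itself. The corpus and all
# names here are hypothetical, invented for this sketch.
import random
from collections import Counter, defaultdict

corpus = [
    "the model learns patterns from text",
    "the model does not store the text",
    "patterns in language repeat across documents",
]

# Count word-to-next-word transitions: a bigram model, the crudest possible
# stand-in for what an LLM's billions of parameters encode.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current][following] += 1

def generate(start, length=6):
    """Sample a short word sequence from the transition counts."""
    word, output = start, [start]
    for _ in range(length):
        if word not in transitions:
            break  # dead end: this word never preceded another word
        nexts = transitions[word]
        word = random.choices(list(nexts), weights=list(nexts.values()))[0]
        output.append(word)
    return " ".join(output)

# The "model" is just the counts; the sentences themselves were not kept.
print(dict(transitions["the"]))   # e.g. {'model': 2, 'text': 1}
print(generate("patterns"))       # e.g. "patterns in language repeat ..."
```

Run it a few times and it assembles short, sometimes garbled sentences from the counts. The transitions table records only how often one word followed another; in that limited sense, as Evans puts it of the trained model, it “hasn’t kept” the texts it learned from.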

Limits on how AI is used in the areas Bender and Hanna describe need to be set. Actors and writers present a different problem: their words and work may or may not have been included in the data used to train LLMs, and as Evans points out, an LLM does not need any single, specific book, play or movie to generate its model.

So, while there is little evidence yet that generative AI can create art, that day is coming. What is the intellectual property status of that art? Who owns it? Does it matter? Another big argument is on its way.

 
