Terry College: AI Art May Lead to More Lawsuits

Merritt Melancon

Friday, May 9th, 2025

Where’s the line between homage and copyright infringement?

It may depend on whether the artist is human or an artificial intelligence system.

Mike Schuster, an associate professor of legal studies at the Terry College of Business, argues that a pervasive public bias against art created with generative artificial intelligence may lead to more copyright lawsuits and legal awards for copyright plaintiffs.  

“There is a robust literature on whether AI-generated work should be protected by copyright,” Schuster said. “There’s also a robust literature on whether copyrighted work can be used as training data for AI … We’re asking something different. We wanted to study the downstream effects of AI use. Specifically, are people more likely to find that I’m infringing on another artist’s copyrights if I use AI to create art?”

Schuster, who studies patent and copyright law, worked with his co-author, University of Miami assistant professor Joseph Avery, who studies the public’s perception of AI, to see how lay people reacted to copyright infringement cases involving AI-generated art. They found people were more willing to see references to other artists’ styles as copyright infringements when the image was generated by AI. 

“The idea is that people may apply the law differently when AI is involved — even if AI’s use is legally irrelevant,” Schuster said. “We were able to document that these biases exist, and then taken to their logical downstream effect, we can expect additional lawsuits to be filed where they wouldn’t otherwise be. And we can further expect people to disproportionately lose lawsuits they might have won if it weren’t for these biases.”

Their findings will be published in the University of California Irvine Law Review this summer.

To date, there haven’t been enough cases brought against AI-generated artwork to see a clear trend in practice, so he and Avery turned to an experimental model.  

“In theory, this is something that we’d like to look up in historical precedent or case law, but it’s just not there yet,” Schuster said. “This is something completely new.”

To determine if people saw AI and human-generated art differently, Schuster and Avery created an experimental scenario in which an up-and-coming artist sues a large brand for copying some of her work in an advertising campaign.

One-half of the 397 participants were told the brand used a generative AI system to create ads in the young artist’s style; the other half were told the brand hired a human graphic designer to create the ads.

The group told the brand used AI to design the ads was significantly more likely to say the artist should bring a lawsuit than the group told a human graphic designer created them. The AI group was also more likely to say that, if they were on the jury, they would find the brand infringed on the artist’s copyright.

Participants were also willing to levy heftier legal damages against the brand that used AI to create the offending ads, awarding an average of $7,719 compared to $5,931 against the brand that used a human graphic designer to violate copyright.

Usually, for a court to find copyright infringement, a work’s artistic expression must be copied from a copyrighted work. However, the offending ads in the experiment only copied the artist’s style. Still, the majority of participants told the ads were AI-created were ready to find the company liable for copyright infringement.

That’s something companies need to think about as they move forward, Schuster said. He expects that if the number of copyright suits brought against AI creators increases and the courts end up finding for plaintiffs, people will slow their use of AI-generated images for marketing.

“Is it bad for society if people stop using cost-efficient methods of creative expression because they’re afraid they’re going to get sued?” Schuster said. “I don’t know. If you ask somebody who is a human author, yeah, they’re going to say, ‘Oh, that’s a great thing. We don’t want machines replacing us.’ But if you ask a technophile, they’ll say, ‘Oh, no, that’s terrible.’”

“We’re not trying to take any stance on whether using AI to create images is good policy or bad policy, but we are saying these findings should inform policy moving forward,” Schuster said.