It’s sue season as AI firms face lawsuits
- AI firms continue to face mounting lawsuits
- As AI capabilities improve, some feel the technology is merely copying their work or using it without their permission.
Ever since AI started generating content, the door has been open for authors, writers and other creators to sue technology companies for copyright infringement. Even before AI, failing to attribute, acknowledge or get permission to use existing content resulted in some heavy lawsuits.
While there is no denying that AI can create content almost as good as the work of real talent, the technology can only do so by learning from that talent. For many, learning from the best and then profiting from it is not ethically acceptable.
As AI capabilities improve, more authors, publishers and content creators are beginning to feel that the technology is merely copying their work or using it without their permission. Over the past few months alone, more AI firms have faced lawsuits, not just from individual authors but also from publishing companies and even Elon Musk himself.
Last year, AI firms announced plans to watermark AI content for safety purposes. The watermark would also serve as an identifier for AI-generated content. However, the approach remains contested, especially since AI generates content based on images and text previously produced by real talent.
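To illustrate the identifier idea only, here is a minimal, hypothetical sketch of how a provider might attach a signed "AI-generated" provenance tag to model output. The function names and signing key are invented for illustration; real schemes, such as statistical text watermarks or C2PA-style content credentials, are considerably more sophisticated.

```python
import hashlib
import hmac
import json

# Hypothetical provider-held signing key; a real deployment would manage keys securely.
SECRET_KEY = b"provider-signing-key"


def tag_generated_content(text: str, model_name: str) -> dict:
    """Wrap generated text in a signed provenance record labelling it as AI output."""
    record = {
        "content": text,
        "generator": model_name,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_tag(record: dict) -> bool:
    """Recompute the signature so a platform can check the AI-generated label is intact."""
    claimed = record.get("signature", "")
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)


if __name__ == "__main__":
    tagged = tag_generated_content("Example model output.", "example-llm")
    print(tagged["ai_generated"], verify_tag(tagged))  # True True
```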
Here’s a look at some of the most recent lawsuits filed against AI firms.
NVIDIA
Reuters reported that Nvidia has been sued by three authors who said it used their copyrighted books without permission to train its NeMo AI platform. Brian Keene, Abdi Nazemian and Stewart O’Nan said their works were part of a dataset of about 196,640 books that helped train NeMo to simulate ordinary written language, before the dataset was taken down in October “due to reported copyright infringement.”
OpenAI
OpenAI is already facing several copyright infringement lawsuits from various individuals. The most recent was filed by Elon Musk. According to a report by Bloomberg, Musk accuses OpenAI of breaching its founding agreement by prioritizing profits over the benefit of humanity.
Apart from Musk, OpenAI is also being sued by the Authors Guild. In a class-action suit, renowned writers claim the company’s large language models (LLMs) engage in systematic theft on a mass scale.
Journalist and nonfiction author Julian Sancton has also sued OpenAI, claiming the company used his work without permission to train its generative AI tools.
Media companies Raw Story Media Inc., The Intercept Media Inc. and AlterNet Media Inc. have also filed lawsuits. The news organizations claim OpenAI and co-defendant Microsoft violated the 1998 Digital Millennium Copyright Act by stripping away copyright management information when they trained ChatGPT.
Microsoft
Both Microsoft and OpenAI are being sued by the New York Times, which claims the AI firms used millions of Times articles to build their AI tools. The suit alleges chatbots like ChatGPT “seek to free-ride” on the Times’s content and threaten to stifle its revenue.
Journalist Nicholas Gage and author Nicholas Basbanes also filed a proposed class-action lawsuit against Microsoft and OpenAI in January, claiming the companies wrongfully used their work to train AI models. Gage has written investigative stories for the New York Times and the Wall Street Journal, while Basbanes has written books about the history of publishing.
Google
In Europe, more than 30 media organizations sued Google in the Netherlands, seeking US$2.3 billion and accusing the search giant’s advertising business of violating antitrust laws.
Anthropic
Amazon-backed Anthropic is being sued by a group of top music publishers. The complaint alleges the AI firm used copyrighted lyrics from at least 500 songs and that its Claude AI chatbot disseminates them on its platform.
How can AI firms deal with this?
While the lawsuits are ongoing, the reality is that AI will only get more sophisticated and create even better content as LLMs continue to learn and develop. Just as AI learns to improve productivity at work, these models may eventually learn to create content based on their own output.
However, users need to remember that beneath all the content generated by AI, the foundational work and original ideas were designed and created by real humans using their own imagination. That is something the technology cannot replicate, at least for now.
AI firms continue to claim that their models train on publicly available data. The reality is that publicly available does not mean free to use: much of this data is already copyright protected, and using it without proper attribution or permission exposes firms to legal risk.
At the end of the day, if AI-generated content does not properly credit its sources, it can be considered plagiarism, and lawsuits may be inevitable.