AI Incident Database

Report 3199

Associated Incidents

Incident 561 · 3 Reports
Lawsuit Alleges OpenAI Violated Users' Privacy Rights by Training AI on Private Info Without Informed Consent

Class Action Complaint
clarksonlawfirm.com · 2023

Introduction

On October 19, 2016, University of Cambridge Professor of Theoretical Physics Stephen Hawking predicted, “Success in creating AI could be the biggest event in the history of our civilization. But it could also be the last, unless we learn how to avoid the risks.” Professor Hawking described a future in which humanity would choose to either harness the huge potential benefits or succumb to the dangers of AI, emphasizing “the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”

The future Professor Hawking predicted has arrived in just seven short years. Using stolen and misappropriated personal information at scale, Defendants have created powerful and wildly profitable AI and released it into the world without regard for the risks. In so doing, Defendants have set off an AI arms race in which they and other Big Tech companies are onboarding society onto a plane that over half of surveyed AI experts believe has at least a 10% chance of crashing and killing everyone on board. Humanity now faces the two Frostian roads Professor Hawking predicted we would have to choose between: one leads to sustainability, security, and prosperity; the other leads to civilizational collapse.

This class action lawsuit arises from Defendants’ unlawful and harmful conduct in developing, marketing, and operating their AI products, including ChatGPT-3.5, ChatGPT-4.0, Dall-E, and Vall-E (the “Products”), which use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge. Furthermore, Defendants continue to unlawfully collect and feed additional personal data from millions of unsuspecting consumers worldwide, far in excess of any reasonably authorized use, in order to continue developing and training the Products.

Defendants’ disregard for privacy laws is matched only by their disregard for the potentially catastrophic risk to humanity. Emblematic of both the ultimate risk—and Defendants’ open disregard—is this statement from Defendant OpenAI’s CEO Sam Altman: “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

Defendants’ Products, and the technology on which they are built, undoubtedly have the potential to do much good in the world, like aiding life-saving scientific research and ushering in discoveries that can improve the lives of everyday Americans. With that potential in mind, Defendant OpenAI was originally founded as a nonprofit research organization with a single mission: to create artificial intelligence and to ensure it would be used for the benefit of humanity. But in 2019, OpenAI abruptly restructured itself, developing a for-profit business that would pursue commercial opportunities of staggering scale.

As a result of the restructuring, OpenAI abandoned its original goals and principles, electing instead to pursue profit at the expense of privacy, security, and ethics. It doubled down on a strategy to secretly harvest massive amounts of personal data from the internet, including private information and private conversations, medical data, information about children—essentially every piece of data exchanged on the internet it could take—without notice to the owners or users of such data, much less with anyone’s permission.

Without this unprecedented theft of private and copyrighted information belonging to real people, communicated to unique communities, for specific purposes, targeting specific audiences, the Products would not be the multi-billion-dollar business they are today. OpenAI used the stolen data to train and develop the Products using large language models (LLMs) and deep language algorithms to analyze and generate human-like language for a wide range of applications, including chatbots, language translation, and text generation. The Products’ sophisticated natural language processing capabilities allow them to, among other things, carry on human-like conversations with users, answer questions, provide information, generate text on demand, create art, and connect emotionally with people, all like a “real” human.
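For context, the text-generation technique described above can be sketched in a few lines. The example below uses the open-source GPT-2 model via the Hugging Face transformers library; that choice is an assumption made purely for illustration, not a description of Defendants' actual models, data, or training pipeline.

```python
# Minimal sketch: generating human-like text with a pretrained LLM.
# GPT-2 via Hugging Face transformers is an illustrative stand-in,
# not Defendants' proprietary models or training data.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The model extends the prompt with statistically likely next tokens,
# which is what produces the "human-like" language described above.
print(outputs[0]["generated_text"])
```

The fluency of the output comes entirely from patterns learned from the training corpus, which is why the provenance of that corpus is central to the complaint.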

Once trained on stolen data, Defendants saw the immediate profit potential and rushed the Products to market without implementing proper safeguards or controls to ensure that they would not produce or support harmful or malicious content and conduct that could further violate the law, infringe rights, and endanger lives. Without these safeguards, the Products have already demonstrated their ability to harm humans, in real ways.

A nontrivial number of experts claim the risks to humanity presented by the Products outweigh even those of the Manhattan Project’s development of nuclear weapons. Historically, the unchecked release of new technologies without proper safeguards and regulations has caused chaos. Now again, we face imminent and unreasonable risks of the very fabric of our society unraveling, at the hands of profit-driven, multibillion-dollar corporations.

Powerful companies, armed with unparalleled and highly concentrated technological capabilities, have recklessly raced to release AI technology with disregard for the catastrophic risk to humanity in the name of “technological advancement.” As the National Security Commission on Artificial Intelligence noted in its Final Report, “the U.S. government is a long way from being ‘AI-ready.’”

Experts believe that without immediate legal intervention this will lead to scenarios where AI can act against human interests and values, exploit human beings without regard for their well-being or consent, and/or even decide to eliminate the human species as a threat to its goals. As Geoffrey Everest Hinton—the seminal figure in the development of the technology on which the Products run—put it: “The alarm bell I’m ringing has to do with the existential threat of them taking control… I used to think it was a long way off, but now I think it’s serious and fairly close.” He is not alone.

While the downsides are nearly unimaginable, the upsides are similarly archetype-shattering. Defendant OpenAI’s technology is already valued at tens of billions of dollars, and its reach into every public and private industry continues apace. The Products only reached the level of sophistication they have today by training on stolen, misappropriated data, and Defendants continue to misappropriate data, scraping it from the internet without any notice or consent, as well as taking personal information from the Products’ 100+ million registered users without their full knowledge and consent.

Additionally, the Products are increasingly being incorporated into an ever-expanding roster of applications and websites through APIs or plug-ins. By integrating Defendants’ AI into nearly every possible product and industry, Defendants have created and continue to create economic dependency within our society, deploying the technology directly into the hands of society and embedding it into fundamental infrastructure as quickly as possible. As Center for Humane Technology co-founders Tristan Harris and Aza Raskin posed in their carefully crafted critique of the rapid deployment of AI, “Do you think that once [these industries] discover some problem that they [will] just withdraw or retract it from society? No, increasingly, the government, militaries [and others], are rapidly building their whole next systems and raising venture capital to build on top of this layer of society… That’s not testing it with society, that is onboarding humanity onto an untested plane… It’s one thing to test, it’s another thing to create economic dependency.”
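To make the integration pathway concrete, the sketch below shows how an application might call a hosted model through an HTTP API of the kind OpenAI publishes for chat completions. The endpoint and model name follow OpenAI's public chat-completions API, but the function name and the OPENAI_API_KEY environment variable are illustrative assumptions, not a description of any particular integrator's code.

```python
# Minimal sketch: embedding a hosted LLM in an application via an HTTP API.
# The endpoint and model name follow OpenAI's public chat-completions API;
# OPENAI_API_KEY is an assumed environment variable for this illustration.
import os
import requests

def ask_model(user_message: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": user_message}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Any product that routes user input through a call like this has made the
# model part of its core request path.
print(ask_model("Summarize this support ticket for me."))
```

Once a call like this sits inside a product's request path, removing the model means removing the feature, which is the dependency dynamic Harris and Raskin describe.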

The head of OpenAI’s alignment team directly acknowledges these risks, postulating, “before we scramble to deeply integrate large language models everywhere in the economy, can we pause and think whether it is wise to do so? This is quite immature technology, and we don’t understand how it works. If we are not careful, we are setting ourselves up for a lot of correlated failures.”

Such aggressive deployment of Defendants’ AI, without the proper safeguards in place, is reckless. “No matter how tall the skyscraper of benefits that AI assembles for us… if those benefits land in a society that does not work anymore, because banks have been hacked, and people’s voices have been impersonated, and cyberattacks have happened everywhere and people don’t know what’s true [… or] what to trust, […] how many of those benefits can be realized in a society that is dysfunctional?”

Through their AI Products, integrated into every industry, Defendants collect, store, track, share, and disclose Private Information of millions of users (“Users”), including: (1) all details entered into the Products; (2) account information users enter when signing up; (3) name; (4) contact details; (5) login credentials; (6) emails; (7) payment information for paid users; (8) transaction records; (9) identifying data pulled from users’ devices and browsers, like IP addresses and location, including geolocation of the users; (10) social media information; (11) chat log data; (12) usage data; (13) analytics; (14) cookies; (15) keystrokes; and (16) typed searches, as well as other online activity data. Defendants, through the Products, unlawfully obtain access to and intercept this information from the individual users of applications and devices that have integrated ChatGPT-4—including but not limited to user locations and image-related data obtained through Snapchat, user financial information through Stripe, musical tastes and preferences through Spotify, user patterns and private conversation analysis through Slack and Microsoft Teams, and even private health information obtained through the management of patient portals such as MyChart.
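As a purely hypothetical illustration, the enumerated categories could be bundled into a single per-user record along the lines sketched below. Every type and field name here is invented for the example; the complaint does not disclose any actual schema used by Defendants.

```python
# Hypothetical illustration of the data categories enumerated above,
# gathered into one per-user record. All names are invented for this
# example; no actual schema is alleged or known.
from dataclasses import dataclass, field

@dataclass
class UserTelemetry:
    account_info: dict   # name, contact details, login credentials, emails
    payment: dict        # payment information and transaction records (paid users)
    device: dict         # IP address, geolocation, browser and device identifiers
    chat_logs: list      # all details entered into the Products
    usage: dict          # usage data, analytics, cookies
    keystrokes: list     # keystrokes and typed searches
    integrations: dict = field(default_factory=dict)  # data reached via Snapchat, Stripe, Spotify, Slack, MyChart, etc.
```

The point of the sketch is the breadth: a single record keyed to one user can join account, device, payment, behavioral, and third-party-integration data.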

All of this personal information is captured in real time. Together with Defendants’ scraping of our digital footprints—comments and conversations we had online yesterday as well as 15 years ago—Defendants now have enough information to create our digital clones, including the ability to replicate our voice and likeness and to predict and manipulate our next move using the technology on which the Products were built. They can also misappropriate our skill sets and encourage our own professional obsolescence. This prospect would obliterate privacy as we know it, and it highlights the importance of the privacy, property, and other legal rights this lawsuit seeks to vindicate.

Defendants must not only be enjoined from their ongoing violations of the privacy and property rights of millions, but they must also be required to take immediate action to implement proper safeguards and regulations for the Products, their users, and all of society, such as:

(i) Transparency: OpenAI should open the “black box” and clearly and precisely disclose the data it is collecting, including where and from whom, in clear and conspicuous policy documents that are explicit about how this information is to be stored, handled, protected, and used;

(ii) Accountability: The developers of ChatGPT and the other AI Products should be responsible for Product actions and outputs and barred from further commercial deployment absent the Products’ ability to follow a code of human-like ethical principles and guidelines and respect for human values and rights, and until Plaintiffs and Class Members are fairly compensated for the stolen data on which the Products depend;

(iii) Control: Defendants must allow Product users and everyday internet users to opt out of all data collection; must otherwise stop the illegal taking of internet data and delete (or compensate for) any ill-gotten data and the algorithms built on it; and, before any further commercial deployment, must add technological safety measures to the Products to prevent the technology from surpassing human intelligence and harming others.

