AI Incident Database

Report 6866

Associated Incidents

Incident 13244 Report
Pieces Technologies' Clinical AI Systems Allegedly Marketed With Misleading Performance Claims

Attorney General Ken Paxton Reaches Settlement in First-of-its-Kind Healthcare Generative AI Investigation
texasattorneygeneral.gov · 2024

Texas Attorney General Ken Paxton secured a first-of-its-kind settlement with a Dallas-based artificial intelligence healthcare technology company called Pieces Technologies. The agreement resolves allegations that the company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of its products. 

At least four major Texas hospitals have been providing their patients’ healthcare data in real time to Pieces so that its generative AI product can “summarize” patients’ condition and treatment for hospital staff. An investigation conducted by the Texas Attorney General found that Pieces made deceptive claims about the accuracy of its healthcare AI products, putting the public interest at risk. Pieces developed a series of metrics to claim that its healthcare AI products were “highly accurate,” including advertising and marketing the accuracy of its products and services by claiming an error rate or “severe hallucination rate” of “<1 per 100,000.”

Attorney General Paxton’s investigation found that these metrics were likely inaccurate and may have deceived hospitals about the accuracy and safety of the company’s products. 

As part of the settlement, Pieces has agreed to accurately disclose the extent of its products’ accuracy. Pieces has also agreed to ensure that the hospital staff using its generative AI products to treat patients understand the extent to which they should or should not rely on its products.

“AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible and unnecessarily puts Texans’ safety at risk,” said Attorney General Paxton. “Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly.”

