AI Incident Database
Entities

Legal professionals

Incidents Harmed By

Incident 704 (2 Reports)
Study Highlights Persistent Hallucinations in Legal AI Systems

2024-05-23

Stanford University’s Human-Centered AI Institute (HAI) conducted a study in which researchers designed a "pre-registered dataset of over 200 open-ended legal queries" to test AI products from LexisNexis (creator of Lexis+ AI) and Thomson Reuters (creator of Westlaw AI-Assisted Research and Ask Practical Law AI). The researchers found that these legal models hallucinate on 1 out of 6 (or more) benchmarking queries.
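
As an illustration of how such a benchmark figure can be tallied, the sketch below computes a hallucination rate as the share of graded queries labeled as hallucinated. It is a minimal sketch with hypothetical data and a hypothetical GradedResponse structure, not the study's actual evaluation code or rubric.

```python
# Minimal sketch: computing a hallucination rate over benchmark queries.
# The labels below are hypothetical placeholders, not data from the
# Stanford HAI study; the study's own grading methodology is more involved.

from dataclasses import dataclass


@dataclass
class GradedResponse:
    query_id: str
    hallucinated: bool  # True if the graded response contained a fabricated or misgrounded claim


def hallucination_rate(responses: list[GradedResponse]) -> float:
    """Fraction of benchmark queries whose responses were labeled hallucinated."""
    if not responses:
        return 0.0
    return sum(r.hallucinated for r in responses) / len(responses)


# Hypothetical example: 1 hallucination out of 6 graded queries is roughly 16.7%,
# the "1 out of 6 (or more)" floor reported in the summary above.
sample = [GradedResponse(f"q{i}", hallucinated=(i == 0)) for i in range(6)]
print(f"{hallucination_rate(sample):.1%}")  # -> 16.7%
```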

Incidents involved as Deployer

Incident 704 (2 Reports)
Study Highlights Persistent Hallucinations in Legal AI Systems

2024-05-23

Related Entities
Other entities involved in the same incidents. For example, if this entity is the developer in an incident and another entity is the deployer, the deployer is marked as a related entity.

Entity

Law firms

Incidents involved as Deployer
  • Incident 704
    2 Reports

    Study Highlights Persistent Hallucinations in Legal AI Systems

Entity

Organizations requiring legal research

Incidents involved as Deployer
  • Incident 704
    2 Reports

    Study Highlights Persistent Hallucinations in Legal AI Systems

Entity

Thomson Reuters

Incidents involved as Developer
  • Incident 704
    2 Reports

    Study Highlights Persistent Hallucinations in Legal AI Systems

Entity

LexisNexis

Incidents involved as Developer
  • Incident 704
    2 Reports

    Study Highlights Persistent Hallucinations in Legal AI Systems

Entity

Clients of lawyers

Incidents Harmed By
  • Incident 704
    2 Reports

    Study Highlights Persistent Hallucinations in Legal AI Systems

Entity

Legal system

Incidents Harmed By
  • Incident 704
    2 Reports

    Study Highlights Persistent Hallucinations in Legal AI Systems

