AI Incident Database

Report 4170

Associated Incidents

Incident 811 · 6 Reports
AI-Powered Transcription Services Allegedly Leak Confidential Workplace Discussions

Some AI Assistants Have This Big Flaw: They Talk Too Much
inc.com · 2024

The weird thing about AI tech is that it's so new and so exciting that it still feels fun and novel to fold it into everyday experiences, like drafting a social media post, and into everyday business processes, like generating content or automating boring office tasks. It's so new, in fact, that it's easy to overlook that AI isn't perfect and has numerous flaws. A recent Washington Post report highlights a particularly terrifying one that may never have occurred to new users: AI can leak confidential information to people who shouldn't have access to it. And not just any information, but key secrets about your company, its plans, its finances, and, perhaps, exactly what you think of Steve from accounts and all his annoying habits.

The report centers on how AI is replacing workplace tasks that would previously have been done by office assistants. It seems like a perfect solution, really: digital AI assistants may be more reliable, they don't take vacations, and you don't necessarily have to pay much to use them. Taking notes in meetings is one task frequently delegated to assistants, and now a horde of AI tools can take on the job.

The Post relates the story of a researcher and engineer named Alex Bilzerian, who recently used one of these tools, Otter.ai, during a Zoom meeting with some investors. When the meeting was done, Otter automatically emailed him an AI-generated transcript of the conversation. That sounds incredibly useful: no need for note-taking, to-do lists, or memos. But Bilzerian was stunned to discover that the transcript also captured the investors' conversation after he'd left the meeting, including discussion of their "strategic failures and cooked metrics." The investors apologized when he raised the issue, but their candid criticism led Bilzerian to kill the deal.

In a response to Bilzerian's post about the incident on X, Otter.ai explained how its privacy controls can be adjusted to change how information is shared. The post was a simple bit of corporate, ah, behind-covering, and it lacked any sincere apology. More significantly, the episode highlighted that, unlike established digital office tools, AI tools can resemble a digital Wild West: largely unregulated, wide open, and able to do things that surprise you, both positively and negatively. And when a leader rushes to embrace AI as "the next big thing to boost your business," they can be naive about its risks.

Meeting transcription services like Otter are one obvious way an AI tool can learn confidential information and then share it in unexpected and possibly compromising ways. But even chatbots, or tools like Microsoft's new AI Recall system, can leak information if you've given them permission to train on your data, a provision that might be buried in a tool's myriad terms and conditions.

That data can then, under some circumstances, pop up later and be displayed to a totally different user. More than that, by giving an AI tool permission to access your data (which might be needed if, say, you're asking it to analyze and digest scads of financial information), your information may end up lodged with more than just that tool's provider. The Post notes that Otter.ai, for example, "shares user information with third parties, including AI services that provide back-end support for Otter, advertising partners and law enforcement agencies when required."

There's an old saying about never putting anything on the internet you wouldn't feel comfortable letting your grandma see. Maybe we can update it for the AI-powered office of 2024: never share information with an AI tool that you wouldn't feel comfortable seeing reported in the pages of the Washington Post.

Read the Source
