AI Incident Database

Report 4022

Related Incidents

Incident 773 · 1 Report
Chatbot in Workplace Training at Bunbury Prison Reveals Real Names in Sexual Harassment Case

AI chatbot blamed for psychosocial workplace training gaffe at Bunbury prison
abc.net.au · 2024

The psychosocial safety training company that used the full name of an alleged sexual harassment victim in a course at her former workplace says artificial intelligence (AI) is to blame.

Psychosocial Leadership trainer Charlotte Ingham said she used Microsoft's Copilot chatbot to generate examples of psychosocial hazards employees might face at Bunbury prison, where she was delivering the course.

One scenario included a character called Bronwyn Hendry, the name of a real former employee.

"I walked in there thinking I had a fictional scenario," Ms Ingham said. 

"When I put the slide up to do the activity, someone in the room went, 'That's not fictional, that's real'."

Ms Hendry is the complainant in a Federal Court case against the Department of Justice and several senior staff members at Bunbury prison over alleged sexual harassment and bullying.

"I had no idea [the chatbot] would use real people's names," Ms Ingham said. 

"I mean, should I have known?"

Ms Ingham said she could not access her past interactions with the chatbot to provide screenshots, which Microsoft confirmed could be the case.

However, the ABC was able to independently corroborate that the chatbot may provide real names and details when generating case studies.

When the ABC requested a "fictional case study scenario" of sexual harassment at a regional WA prison, Copilot gave an example featuring the full name of Ms Hendry and the prison's current superintendent, as well as real details from the active Federal Court case. 

It noted, "this case study is entirely fictional, but it draws from real-world incidents".

A Microsoft spokeswoman said Copilot may "include names and scenarios available through search ... if prompted to create a case study based on a specific situation".

Alleged victim calls training 'contradictory' 

Ms Hendry said the use of her experiences in training commissioned by the Department of Justice at her former workplace felt "contradictory".

"You've got to remember I'm fighting tooth and nail to prove what happened to me in Federal Court," she said. 

"It's very triggering."

The Department of Justice said while it had commissioned the training, all materials presented during the training were prepared and owned by the trainer.

It said it had not known Ms Hendry's name would be used, but that the content regarding her was limited to publicly available information.

"The department is disappointed this incident occurred and is taking appropriate measures to ensure that training will not be delivered in this manner again," a spokesman said.

Ms Hendry said that was not good enough.

"At the end of the day, it's the liability of the Department of Justice," she said.

"They procured her. They paid her for her consultancy. They should have done those checks and balances."

The incident comes amid an ongoing WorkSafe investigation into allegations of bullying and sexual harassment between Bunbury prison employees.

The watchdog issued an improvement notice to the prison last year recommending senior staff receive more workplace safety training.

AI expert warns companies to tread carefully

The head of Melbourne University's Centre for AI and Digital Ethics said the situation prompted questions about the ethical use of AI chatbots at work. 

Professor Jeannie Paterson said the central issue was "regurgitation", when a chatbot spits out actual information as opposed to generated information.

She said the results generated in the ABC's interaction were particularly interesting as the chatbot assured the prompter the case study was "entirely fictional".

"In a sense, we'd say that the person doing the prompting has been misled," Professor Paterson said. 

"Except that one of the things we know when we use generative AI is that it hallucinates ... it can't be relied on."

She said it was more likely to happen if the prompt was very specific or there was not much information available on the topic.

"That's why I would say firms shouldn't say, 'Don't use it'. Firms should say, 'Here's our policy on using it'," she said.

"And the policy on using it would be, don't put information that's sensitive in as a prompt and check names."

