AI Incident Database

Report 5349

Associated Incidents

Incident 1109 (5 Reports)
Year-long AI Surveillance Pilot in Two South Australian Aged Care Facilities Reportedly Overwhelmed Staff with False Positives

South Australia's aged care AI trial produced 12,000 false alarms
itnews.com.au · 2022

The Australian-first project was intended to pilot the use of cameras and AI to aid monitoring of residents under care, with a view to making the lives of staff easier.

However, a review of the pilot by PwC [pdf] showed the technology produced false positives at such a rate that staff developed alert fatigue, and at least one actual incident - a resident falling over - received no response.

The technology was programmed to detect four key incident types, defined as "falls, assist, call for help and/or screams".

However, PwC found there were concerns from the outset "that the way in which these events had been programmed were not well aligned to the common movement patterns of residents at the sites."

In addition, the system was tuned to be overly sensitive to noise levels in facilities, and was unable to distinguish between inanimate objects and people until it was patched.

The end result was a flood of "false alerts" that overwhelmed onsite staff and facility managers.

PwC said that "a threshold of 10 false alerts per day were anticipated by SA Health and the pilot sites".

On average, the number of false alerts a day was triple that amount, and exceeded 12,000 across two sites over the year-long trial.
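A quick back-of-envelope check shows those figures are consistent with one another (a sketch, assuming a 365-day pilot and that the reported per-day average covers both sites combined; neither assumption is stated in the article):

```python
# Back-of-envelope check of the reported false-alert figures.
# Assumptions (not stated in the article): a 365-day pilot, and that
# the "triple the threshold" average covers both sites combined.
ANTICIPATED_PER_DAY = 10      # threshold anticipated by SA Health and the pilot sites
TOTAL_FALSE_ALERTS = 12_000   # reported total across the two sites
PILOT_DAYS = 365              # year-long trial

average_per_day = TOTAL_FALSE_ALERTS / PILOT_DAYS
multiple = average_per_day / ANTICIPATED_PER_DAY

print(f"average false alerts/day: {average_per_day:.1f}")  # ~32.9
print(f"multiple of threshold:    {multiple:.1f}x")        # ~3.3x, i.e. roughly triple
```

At roughly 33 alerts per day combined, the average is indeed about three times the anticipated threshold of 10.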

"A high percentage of these alerts were sit-fall events which involved staff performing a bend to knee (crouching) motion" to mobilise a resident, the review found.

Across the trial, the AI algorithm flagged "movements or sounds that are reasonably expected in residential care" as problematic and repeatedly raised alerts.

While the algorithm did become better at detecting actually problematic events, "it still initiated a high number of false alerts per month across the two sites" even as the 12-month pilot wound down, PwC found.

"In these final months of the pilot, staff were no longer able to respond to every alert," it said. 

"There was at least one instance where staff did not respond to an alert that turned out to be a 'true' resident fall event."

PwC's review urges caution about the conclusions that should be drawn from its work.

For example, it did not review the capability of the technology itself, and it adds that the approach taken to piloting the technology may have underestimated the time needed to make it suitable for an aged care setting.

"More comprehensive, contextual testing prior to piloting the technology across an entire residential care site may support improved implementation," the consultancy advised.

It added that, "if using AI as part of the surveillance system, then the time taken to train the AI to the context of use should not be underestimated."

It also noted that residents were generally unconcerned by the false positives, did not find them disruptive, and in some cases were comforted by the additional attention.

However, PwC said the end of the 12-month trial was ultimately inconclusive as to whether the technology made any material difference to the quality and safety of aged care.

In televised comments coinciding with the publication of the PwC report, South Australia's Minister for Health and Wellbeing Chris Picton said, "It was an absolute[ly] botched rollout of this trial that happened over the past year."
