AI Incident Database

Report 1429

Associated Incidents

Incident 1115 Report
Amazon Flex Drivers Allegedly Fired via Automated Employee Evaluations

Amazon Is Using Algorithms And A.I. To Terminate Flex Drivers By E-mail
jalopnik.com · 2021

A lengthy Bloomberg report says that Amazon Flex drivers are supervised, and often fired, by algorithms and e-mails rather than human beings. If you’re not familiar with it, Amazon Flex is essentially Postmates for Amazon packages: you sign up to deliver packages with your own vehicle.

Flex started in 2015 as a way for Amazon to get packages to local customers the same day (and, of course, to use gig workers instead of hiring—ED); it has since grown to include same-day grocery delivery. Algorithms monitor how Flex drivers do their jobs, grading them on a rating system that drivers obsess over, and for good reason. From Bloomberg:

But the moment they sign on, Flex drivers discover algorithms are monitoring their every move. Did they get to the delivery station when they said they would? Did they complete their route in the prescribed window? Did they leave a package in full view of porch pirates instead of hidden behind a planter as requested? Amazon algorithms scan the gusher of incoming data for performance patterns and decide which drivers get more routes and which are deactivated. Human feedback is rare. Drivers occasionally receive automated emails, but mostly they’re left to obsess about their ratings, which include four categories: Fantastic, Great, Fair or At Risk.

Drivers say there is often no way to challenge a bad rating or dismissal, or to actually speak to anyone about it; the e-mails are automated. A dismissed driver can appeal through arbitration, but it costs $200, a price that discourages many from going through with it.

Drivers say that the algorithms don’t take into account the real-world problems they face, such as being unable to deliver a package to a building they’re locked out of. Internally, the company considers the system a success. An Amazon spokesperson told Bloomberg:

“We have invested heavily in technology and resources to provide drivers visibility into their standing and eligibility to continue delivering, and investigate all driver appeals.”

The report is worth reading; take a look.

Read the Source

