Flagged Incident List
The following incident reports have been flagged by users and are pending review by editors.
title: Is Starbucks shortchanging its baristas?
authors: Aimee Picchi
submitters: Catherine Olsson
Some employees at the coffee chain say it isn't living up to promises to improve the company's labor practices
For Starbucks (SBUX) barista Kylei Weisse, working at the coffee chain helps him secure health insurance and some extra money while he studies at Georgia Perimeter College. What it doesn't provide is the kind of stable schedule that the company promised its workers last year. "It's the wild inconsistency" of the hours that's a problem, Weisse, 32, said. "We're supposed to get them 10 days in advance, which often happens, but there's no guarantee. If our manager doesn't get it to us on time, we just have to deal with it." That became a problem recently when Weisse's manager gave him only a few days' notice on his work hours, which ended up conflicting with an anatomy and physiology exam at his college. Weisse ended up paying another worker $20 to take his shift so he could take the exam.
The short notice is especially frustrating because of Starbucks' vow last year to post employees' schedules at least 10 days in advance, as well as the company's insistence that workers provide at least one month's notice when they need to take a day off.
Weisse isn't alone in complaining that Starbucks isn't living up to its promises to overhaul its labor practices for its roughly 130,000 baristas. That vow followed an article last year by The New York Times that detailed how workers were struggling to manage childcare and other obligations when the company provided only a few days' notice about their schedules. About half of roughly 200 Starbucks baristas in a recent survey said they are still receiving their schedules with less than one week's notice. Others also reported being asked to handle "clopens," shifts in which employees work a closing shift late into the evening and then an early opening shift the following morning. The company last year promised to end the practice.
Of course, Starbucks isn't alone in using "just-in-time" scheduling, with the retail and restaurant industries increasingly turning to software that allows them to change work schedules at the last minute, depending on whether business picks up or slows down. But it is Starbucks that has become a lightning rod on the issue, given its vows to improve how it treats employees and its own emphatic claims to valuing workers, whom it labels "partners."
"Starbucks has the values and wants to do right by their employees," said Carrie Gleason, director of the Fair Workweek Initiative at the Center for Popular Democracy, an advocacy group focused on workers' rights, and a co-author of the group's new report on the company's labor practices. "However, since last year when the company recognized there was a serious problem with the way it scheduled workers and pledged to reform, still so many of the same issues persist."
Starbucks didn't respond to requests for comment on the study or on baristas' reports of labor practices that are failing to meet the company's stated goals. In an internal memo this week published by Time, Starbucks executive Cliff Burrows wrote that the company couldn't validate the survey, but added that "the findings suggest, contrary to the expectations we have in place, that some partners are receiving their schedules less than one week in advance and that there is a continuing issue with some partners working a close and then an opening shift the following morning." He asked store managers "to go the extra mile to ensure partners have a consistent schedule."
To be sure, some Starbucks workers are receiving at least 10 days' notice on their work hours, with the survey finding that about one-third receive two weeks' notice and another 18 percent get their schedules three weeks in advance.
But that leaves almost half of workers who receive only one week's notice, making it more difficult for them to manage other obligations, such as school, family commitments or other jobs. Clopens remain a problem, as well. About 60 percent of workers who have to handle a clopen receive seven or fewer hours of rest between a closing and an opening shift, the study found. That's prompted one former Starbucks employee to start a petition to end the practice of scheduling clopens. Ciara Moran noted in her petition that she sometimes was able to get only four or five hours of sleep on the days she was scheduled for clopens. She said she quit her job because she doubted whether it was possible to get ahead given the demands on workers. Even if Starbucks stuck with its policy of providing eight hours between shifts, that's not enough time, especially given that many workers in the service sector have long commutes, the study said. Another issue singled out by the report is Starbucks' practices on sick time. Since paid time off is only available to workers with at least a year on the job, about 40 percent of employees in the survey said they had dealt with barriers in taking sick days. In a perfect world, Weisse said he'd like to receive his schedule either a month or a...
title: Why Google 'Thought' This Black Woman Was a Gorilla
authors: Manoush Zomorodi, Jen Poyant, Jackie Snow, Ariana Tobin, Miranda Katz
This is a story about an incident that happened to 22-year-old web developer Jacky Alciné, the racist slur that caught him off-guard, and the machines behind it.
Google had a major PR disaster on its hands thanks to "deep learning." (Jacky Alcine/Twitter)
title: City Council News: Motions Introduced to Curb Navigational Apps such as WAZE
authors: Bel Air Association
submitters: Catherine Olsson
Another Motion to curb traffic navigation apps has been introduced out of concern that the apps have been clogging side streets with drivers and sending them into evacuated areas during emergencies. Councilmember Paul Koretz (CD5) introduced a Motion immediately after December's Skirball Fire that would direct the LAFD and Department of Transportation to report on efforts to coordinate with navigation app developers to prevent directing drivers into evacuated areas. Last Tuesday, Councilmember Paul Krekorian (CD2) introduced a Motion asking the Department of Transportation to explain the current City partnership with these navigational apps and what efforts the app companies have made to address neighborhood concerns. The Motion was seconded by Councilmember Paul Koretz.
title: Who is Stanislav Petrov, the man who literally saved the world from nuclear...
authors: Atlanta Journal-Constitution
submitters: Roman Yampolskiy
Stanislav Petrov, 77, passed away on May 19, 2017. His death in his home in the Moscow suburbs was little noted at the time. Petrov, however, is one of a few humans who can say they literally saved the world. The veteran was a lieutenant colonel of the Soviet Union Air Defense Forces. On Sept. 26, 1983, the Soviets' early-warning nuclear missile detection system was triggered. The system said the United States had fired five ballistic missiles at the Soviet Union. Petrov was the duty officer at the co...
title: 2010 Flash Crash
authors: Wikipedia Editors
[Figure: The DJIA on May 6, 2010 (11:00 AM - 4:00 PM EDT)]
The May 6, 2010, Flash Crash, also known as the Crash of 2:45, the 2010 Flash Crash or simply the Flash Crash, was a United States trillion-dollar stock market crash, which started at 2:32 p.m. EDT and lasted for approximately 36 minutes. Stock indices, such as the S&P 500, Dow Jones Industrial Average and Nasdaq Composite, collapsed and rebounded very rapidly. The Dow Jones Industrial Average had its second-biggest intraday point drop (from the opening) up to that point, plunging 998.5 points (about 9%), most within minutes, only to recover a large part of the loss. It was also the second-largest intraday point swing (the difference between intraday high and intraday low) up to that point, at 1,010.14 points. The prices of stocks, stock index futures, options and exchange-traded funds (ETFs) were volatile, and trading volume spiked. A 2014 CFTC report described it as one of the most turbulent periods in the history of financial markets. When new regulations put in place following the 2010 Flash Crash proved to be inadequate to protect investors in the August 24, 2015 flash crash—"when the price of many ETFs appeared to come unhinged from their underlying value"—ETFs were put under greater scrutiny by regulators and investors. On April 21, 2015, nearly five years after the incident, the U.S. Department of Justice laid "22 criminal counts, including fraud and market manipulation" against Navinder Singh Sarao, a trader. Among the charges was the use of spoofing algorithms; just prior to the Flash Crash, he placed orders for thousands of E-mini S&P 500 stock index futures contracts which he planned on canceling later. These orders, amounting to about "$200 million worth of bets that the market would fall," were "replaced or modified 19,000 times" before they were canceled. Spoofing, layering, and front running are now banned.
The Commodity Futures Trading Commission (CFTC) investigation concluded that Sarao "was at least significantly responsible for the order imbalances" in the derivatives market which affected stock markets and exacerbated the flash crash. Sarao began his alleged market manipulation in 2009 with commercially available trading software whose code he modified "so he could rapidly place and cancel orders automatically." Traders Magazine journalist John Bates argued that blaming a 36-year-old small-time trader who worked from his parents' modest stucco house in suburban west London for sparking a trillion-dollar stock market crash is "a little bit like blaming lightning for starting a fire" and that the investigation was lengthened because regulators used "bicycles to try and catch Ferraris." Furthermore, he concluded that as of April 2015, traders could still manipulate and impact markets in spite of regulators' and banks' new, improved monitoring of automated trade systems. As recently as May 2014, a CFTC report concluded that high-frequency traders "did not cause the Flash Crash, but contributed to it by demanding immediacy ahead of other market participants." Some recent peer-reviewed research shows that flash crashes are not isolated occurrences, but have occurred quite often. Gao and Mizrach studied US equities over the period of 1993–2011. They show that breakdowns in market quality (such as flash crashes) have occurred in every year they examined and that, apart from the financial crisis, such problems have declined since the introduction of Reg NMS. They also show that 2010, while infamous for the Flash Crash, was not a year with an inordinate number of breakdowns in market quality. Background: On May 6, 2010, U.S. stock markets opened and the Dow was down, and trended that way for most of the day on worries about the debt crisis in Greece.
At 2:42 p.m., with the Dow down more than 300 points for the day, the equity market began to fall rapidly, dropping an additional 600 points in 5 minutes for a loss of nearly 1,000 points for the day by 2:47 p.m. Twenty minutes later, by 3:07 p.m., the market had regained most of the 600-point drop. At the time of the Flash Crash, in May 2010, high-frequency traders were taking advantage of unintended consequences of the consolidation of U.S. financial regulations into Regulation NMS, designed to modernize and strengthen the United States National Market System for equity securities. Reg NMS, promulgated and described by the United States Securities and Exchange Commission, was intended to assure that investors received the best price executions for their orders by encouraging competition in the marketplace, but it also created attractive new opportunities for high-frequency traders. Activities such as spoofing, layering and front-running were banned by 2015. This rule was designed to give investors the best possible price when dealing in stocks, even if that price was not on th...
title: 2018 in Review: 10 AI Failures
submitters: Catherine Olsson
Last December Synced compiled its first “Artificial Intelligence Failures” recap of AI gaffes from the previous year. AI has achieved remarkable progress, and many scientists dream of creating the Master Algorithm proposed by Pedro Domingos — which can solve all problems envisioned by humans. It’s unavoidable however that researchers, fledgling technologies and biased data will also produce blunders not envisioned by humans. That’s why a review of AI failures is necessary and meaningful: The aim of the article is not to downplay or mock research and development results, but to take a look at what went wrong with the hope we can do better next time. Here are Synced's 10 AI failures of 2018.
Chinese billionaire’s face identified as jaywalker: Traffic police in major Chinese cities are using AI to address jaywalking. They deploy smart cameras using facial recognition techniques at intersections to detect and identify jaywalkers, whose partially obscured names and faces then show up on a public display screen. The AI system in the southern port city of Ningbo, however, recently embarrassed itself when it falsely “recognized” a photo of Chinese billionaire Mingzhu Dong on an ad on a passing bus as a jaywalker. The mistake went viral on Chinese social media and Ningbo police apologized. Dong was unfazed, posting on Weibo: “This is a trivial matter. Safe travel is more important.” CloudWalk Deep Learning Researcher Xiang Zhou told Synced the algorithm’s lack of live detection was the likely problem. “Live detection at this distance is challenging, recognizing an image as a real person is pretty common now.” [Image: Chinese billionaire Mingzhu Dong’s face on a public display screen]
Uber self-driving car kills a pedestrian: In the first known autonomous vehicle-related pedestrian death on a public road, an Uber self-driving SUV struck and killed a female pedestrian on March 28 in Tempe, Arizona. The Uber vehicle was in autonomous mode, with a human safety driver at the wheel. So what happened?
Uber discovered that its self-driving software decided not to take any action after the car’s sensors detected the pedestrian. Uber’s autonomous mode disables Volvo’s factory-installed automatic emergency braking system, according to the US National Transportation Safety Board’s preliminary report on the accident. In the wake of the tragedy Uber suspended self-driving testing in North American cities, and Nvidia and Toyota also stopped their self-driving road tests in the US. Eight months after the accident Uber announced plans to resume self-driving road tests in Pittsburgh, although the company’s self-driving future remains uncertain. [Image: ABC 15 screenshot of the deadly Uber accident]
IBM Watson comes up short in healthcare: “This product is a piece of shit,” wrote a doctor at Florida’s Jupiter Hospital regarding IBM’s flagship AI program Watson, according to internal documents obtained by Stat. Watson originated as a question-answering machine, and IBM has been exploring its AI capabilities across a broad range of applications and processes, including healthcare. In 2013 IBM developed Watson’s first commercial application for cancer treatment recommendation, and the company has secured a number of key partnerships with hospitals and research centers over the past five years. But Watson AI Health has not impressed doctors. Some complained it gave wrong recommendations on cancer treatments that could cause severe and even fatal consequences. After spending years on the project without significant advancements, IBM is reportedly downsizing Watson Health and laying off more than half the division’s staff.
Amazon AI recruiting tool is gender biased: Amazon HR reportedly used AI-enabled recruiting software between 2014 and 2017 to help review resumes and make recommendations. The software was, however, found to be more favorable to male applicants because its model was trained on resumes submitted to Amazon over the past decade, when many more male candidates were hired.
The software reportedly downgraded resumes that contained the word “women” or implied the applicant was female, for example because they had attended a women’s college. Amazon has since abandoned the software. The company did not deny using the tool to produce recommendations, but said it was never used to evaluate candidates.
DeepFakes reveals AI’s unseemly side: Last December several porn videos appeared on Reddit “featuring” top international female celebrities. User “DeepFakes” employed generative adversarial networks to swap celebrities’ faces with those of the porn stars. While face-swapping technology has been under development for years, DeepFakes’ method showed that anyone with enough facial images could now produce their own highly convincing fake videos. Realistic-looking fake videos of well-known people flooded the Internet through 2018. While the method is not technically a “failure,” its potential dangers are serious and far-reaching: if video evidence is no longer credible, this could further enc...
title: Facial recognition catches China’s air con queen Dong Mingzhu jaywalking, but turns out to be ...
authors: South China Morning Post
submitters: Catherine Olsson
While China has moved ahead of the rest of the world in making facial recognition technology a part of people’s everyday life, there are systems in the country that need tweaking. Making a compelling case for change is the recent experience of Dong Mingzhu, chairwoman of China’s biggest maker of air conditioners Gree Electric Appliances, who found her face splashed on a huge screen erected along a street in the port city of Ningbo that displays images of people caught jaywalking by...
title: Security guard robot ends it all by throwing itself into a watery grave
authors: Sebastian Anthony
Knightscope K5 security bot shows your job is probably safe from automation. For now.
The automation revolution, where most of our jobs are replaced by robots and we spend the rest of our days floating around on rubber rings sipping piña coladas, has hit a snag: a Knightscope K5 security bot appears to have fallen down some stairs and drowned itself in a water feature. The scene, which took place at the mixed-use Washington Harbour development in Washington DC, was captured by Bilal Farooqui on Twitter. One local office worker reported that the K5 robot had only been patrolling the complex for a few days. Knightscope said in a statement that the "isolated incident" was under investigation, and that a new robot would be delivered to Washington Harbour this week for free. We first wrote about the Dalek-like K5 back in 2014. The first bots were deployed on campuses and shopping complexes near the company's headquarters in Mountain View, California. By December 2016 the company had 15 security bots actively deployed in California, and earlier in 2017 it disclosed that it is "presently deploying its technology with 24 clients" in "five states and three time zones." The K5, which is equipped with lots and lots of sensors, is ostensibly an interesting piece of high-tech kit. It has a 360-degree video camera array, sensitive microphones, air quality sensors, and even thermal imaging capabilities. The cameras can apparently scan up to 1,500 car number plates per minute; the microphones can detect gun shots and other notable sounds. Autonomous mobility is provided by a mix of lidar, radar, and the video camera array, but given that it missed the steps down into the Washington Harbour water feature, perhaps the software needs tweaking. Knightscope's K5 can't yet make arrests, taser a criminal, or wade into a burning building, but it can phone the cops or fire off a loud siren.
Basically, if your shopping centre or office complex had five patrolling security guards, the idea is to replace four of them with robots and leave the fifth in a central office that can respond to any issues. In practice, the K5 has had a fairly rough few years. In 2016, one of the robots, which weigh over 300 pounds (135kg), ran over a 16-month-old child at the Stanford Shopping Center. Knightscope called it a "freakish accident," suspended the bots while it carried out an investigation, and formally apologised to the kid's family. In April this year, right outside the Knightscope offices, a drunk man beat up one of the bots as it was patrolling the car park. "Security guard," along with fast-food and factory workers, is fairly high up the list of jobs that will eventually be replaced by autonomous systems. That the K5 fell into a fountain after three years of commercial use is a little disappointing from a technological standpoint. And now, I leave you with some more photos from the Washington Harbour security bot, which decided to end it all after just a few days of service. RIP.
"It's a fun day here at @gmmb. The super high-tech security robot at our office complex has had a mishap. pic.twitter.com/nhRshrJA9w" - Greg Pinelo (@gregpinelo) July 17, 2017
"It's ok security robot. It's a stressful job, we've all been there. pic.twitter.com/LQbnntbCRm" - Sparkle Ops (@SparkleOps) July 17, 2017
title: The 'suicidal robot' that drowned in a fountain didn't kill itself after all
authors: Fiona Keating
A security robot called Steve did not commit suicide by falling into a fountain in Washington DC; it was the victim of a tragic accident. After retrieving data from the roboguard’s black box, the machine’s manufacturer, Knightscope, determined that the mishap was caused not by suicide but by skidding on a “loose brick surface”. A technical error led to the K5 robot’s demise when an algorithm did not detect the uneven surface, leading to Steve tumbling into the fountain and drowning. A replacement machine was sent to the office complex within 48 hours. Washington Harbour also received a month’s credit as well as an all-inclusive service policy including maintenance costs, according to Reuters. The manufacturer of the security robot downplayed the matter, saying it was an “isolated” incident which would lead to improvements. “Developing state-of-the-art autonomous technology must be done in real-world environments,” said the Silicon Valley-based maker. “It is not commercially reasonable to be developed in constrained laboratory settings.” According to the company, the K5 Autonomous Data Machine, which is about five feet tall, was on a mission to map out the grounds of the complex when it tumbled down some stairs and toppled into the fountain. The incident went viral on Twitter after an employee at the Washington Harbour complex, Bilal Farooqui, posted a picture of the robot with news of Steve’s ill-fated dip in the fountain. “We were promised flying cars, instead we got suicidal robots,” he wrote. Photos of the submerged robot were shared, showing bemused staff wading into the water. A woman in a black dress looks on pensively. Other tweets suggested that the robot had quit its job and it was a “win” for humans. The K5 has a 360-degree video camera, several microphones, air quality sensors and thermal imaging capabilities.
The Dalek-like machine can scan up to 1,500 number plates per minute, while the microphones can detect gun shots, according to Ars Technica.
[Slideshow: Meet the robots - the strange creations of Boston Dynamics. 1/6 The LS3 is a four-legged, intelligent robot with an array of cameras and sensors that allow it, for example, to follow a leader over all kinds of terrain. 2/6 The eeriest of all the Boston Dynamics robots, the PetMan could - among other things - test chemical suits. 3/6 PetMan's big brother, Atlas may be used as a first response to disaster situations. 4/6 RHex has been developed to cross the trickiest terrain and can be controlled from 700 metres away. 5/6 Built for speed, Cheetah can run at 28.3mph - and can prove it on the treadmill. 6/6 The remote-control car with a difference - it can jump 30ft in the air. Images: Boston Dynamics via YouTube; Reuters]
In 2016, the K5 was involved in an accident when it ran over a 16-month-old child at the Stanford Shopping Centre, leaving him bruised and shaken. "The robot hit my son's head and he fell face down on the floor. The robot did not stop, it kept moving forward," Tiffany Cheng, the toddler's mother, told ABC 7. "He was crying like crazy." Ironically, Knightscope said that Steve was being repaired and is “tentatively scheduled to patrol of all places, a major aquarium.”
title: Security Robot Commits Suicide In Fountain Because The World Is Terrible
authors: IFL Science
The robot was a Knightscope K5, employed as a security robot at communications agency GMMB in Washington DC.
In this day and age of climate change-denying presidents, lack of healthcare, and anti-vaxxers, it’s easy to get downtrodden about the world. For one robot, things got a bit too much, and it appears to have thrown itself into a fountain in a somber act of robot suicide. The robot was a Knightscope K5, employed as a security robot at communications agency GMMB in Washington DC. These robots are equipped with cameras to provide a more physical presence than a regular security camera. They’re also designed to move autonomously, which seems to be how this robot ended up in a fountain. A few posts on Twitter documented the final moments of this K5 robot, as it ended up face down in a watery grave. RIP K5. We barely knew ye. Or maybe it was just being baptized. On the company’s website, they highlight some of the key features of the robot. These include “forensic capabilities”, such as recording license plates and something to do with wireless signals. It also boasts an upcoming “gun detection” feature, however useful that might be. The K5 is getting compared to R2-D2 in a lot of places, but we like to think of it more as those security droids in Wall-E. Or perhaps some sort of Dalek. Hey, Doctor Who is pretty popular at the moment, right? This isn’t the first time the K5 has run into trouble. In April, a drunk man managed to knock down one of the 135-kilogram (300-pound) robots. And last year, a K5 accidentally ran over a 16-month-old toddler and then drove off, in what’s surely one of the first robotic hit and runs. "The robot hit my son's head and he fell down facing down on the floor and the robot did not stop and it kept moving forward," the child’s mom, Tiffany Teng, told ABC7 news back then. Knightscope hasn’t responded to this latest incident yet, so it’s unclear what the damage to the robot will be, and whether it’s salvageable.
We do know that these robots are employed at minimum wage though (we’re not joking, check out the video below), so hopefully a replacement can be found. Yesterday we also had Elon Musk warning us about the problem artificial intelligence poses to the future of human civilization. Based on this latest evidence, we might not have too much to worry about just yet. But just remember, no matter how bad the world might seem, there’s always someone for you to talk to. Even if you’re a robot.
title: These Algorithms Look at X-Rays—and Somehow Detect Your Race
authors: Tom Simonite
Millions of dollars are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these algorithms can also see something doctors don’t look for on such scans: a patient’s race. The study authors and other medical AI experts say the results make it more crucial than ever to check that health algorithms perform fairly on people with different racial identities. Complicating that task: The authors themselves aren’t sure what cues the algorithms they created use to predict a person’s race. Evidence that algorithms can read race from a person’s medical scans emerged from tests on five types of imagery used in radiology research, including chest and hand x-rays and mammograms. The images included patients who identified as Black, white, and Asian. For each type of scan, the researchers trained algorithms using images labeled with a patient’s self-reported race. Then they challenged the algorithms to predict the race of patients in different, unlabeled images. Radiologists don’t generally consider a person’s racial identity—which is not a biological category—to be visible on scans that look beneath the skin. Yet the algorithms somehow proved capable of accurately detecting it for all three racial groups, and across different views of the body. For most types of scan, the algorithms could correctly identify which of two images was from a Black person more than 90 percent of the time. Even the worst performing algorithm succeeded 80 percent of the time; the best was 99 percent correct. The results and associated code were posted online late last month by a group of more than 20 researchers with expertise in medicine and machine learning, but the study has not yet been peer reviewed. 
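The train-on-labeled, predict-on-unlabeled protocol described above is standard supervised classification. As a purely illustrative sketch (synthetic feature vectors and a toy nearest-centroid classifier, not the study's deep models or real scans), the workflow looks like this:

```python
import random

def nearest_centroid_fit(X, y):
    """Training step: average the feature vectors for each label."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def nearest_centroid_predict(centroids, features):
    """Prediction step: assign the label whose centroid is closest."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Synthetic "scans": two hypothetical groups whose feature means differ slightly.
random.seed(0)
train = [([random.gauss(mu, 1.0) for _ in range(8)], label)
         for mu, label in [(0.0, "A"), (1.5, "B")] for _ in range(100)]
centroids = nearest_centroid_fit([x for x, _ in train], [y for _, y in train])

# Evaluate on held-out samples the model never saw during training.
test = [([random.gauss(mu, 1.0) for _ in range(8)], label)
        for mu, label in [(0.0, "A"), (1.5, "B")] for _ in range(50)]
accuracy = sum(nearest_centroid_predict(centroids, x) == y
               for x, y in test) / len(test)
```

The surprise in the study is not this protocol but that it worked at all: the held-out accuracy was high even though radiologists see no such signal in the images.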
The results have spurred new concerns that AI software can amplify inequality in health care, where studies show Black patients and other marginalized racial groups often receive inferior care compared to wealthy or white people. Machine-learning algorithms are tuned to read medical images by feeding them many labeled examples of conditions such as tumors. By digesting many examples, the algorithms can learn patterns of pixels statistically associated with those labels, such as the texture or shape of a lung nodule. Some algorithms made that way rival doctors at detecting cancers or skin problems; there is evidence they can detect signs of disease invisible to human experts. Judy Gichoya, a radiologist and assistant professor at Emory University who worked on the new study, says the revelation that image algorithms can “see” race in internal scans likely primes them to also learn inappropriate associations. Medical data used to train algorithms often bears traces of racial inequalities in disease and medical treatment, due to historical and socioeconomic factors. That could lead an algorithm searching for statistical patterns in scans to use its guess at a patient’s race as a kind of shortcut, suggesting diagnoses that correlate with racially biased patterns from its training data, not just the visible medical anomalies that radiologists look for. Such a system might give some patients an incorrect diagnosis or a false all-clear. An algorithm might suggest different diagnoses for a Black person and white person with similar signs of disease. “We have to educate people about this problem and research what we can do to mitigate it,” Gichoya says. 
Her collaborators on the project came from institutions including Purdue, MIT, Beth Israel Deaconess Medical Center, National Tsing Hua University in Taiwan, University of Toronto, and Stanford. Previous studies have shown that medical algorithms have caused biases in care delivery, and that image algorithms may perform unequally for different demographic groups. In 2019, a widely used algorithm for prioritizing care for the sickest patients was found to disadvantage Black people. In 2020, researchers at the University of Toronto and MIT showed that algorithms trained to flag conditions such as pneumonia on chest x-rays sometimes performed differently for people of different sexes, ages, races, and types of medical insurance. Paul Yi, director of the University of Maryland’s Intelligent Imaging Center, who was not involved in the new study showing algorithms can detect race, describes some of its findings as “eye opening,” even “crazy.” Radiologists like him don’t typically think about race when interpreting scans, or even know how a patient self-identifies. “Race is a social construct and not in itself a biological phenotype, even though it can be associated with differences in anatomy,” Yi says. Frustratingly, the authors of the new study could not figure out how exactly their models could so accurately detect a patient’s self-reported race. They say that will likely make it harder to pick up biases in such algorithms. Follow-on experiments showed that the algorithms were not making predictions based on particular patches of anatomy, or visual features that might be associated with race due to social and environmental factors such as body mass index or bone density. Nor did age, sex, or specific diagnoses that are associated with certain demographic groups appear to be functioning as clues. 
The fact that algorithms trained on images from a hospital in one part of the US could accurately identify race in images from institutions in other regions appears to rule out the possibility that the software is picking up on factors unrelated to a patient’s body, says Yi, such as differences in imaging equipment or processes. Whatever the algorithms were seeing, they saw it clearly. The software could still predict patient race with high accuracy when x-rays were degraded so that they were unreadable to even a trained eye, or blurred to remove fine detail. Luke Oakden-Rayner, a coauthor on the new study and director of medical imaging research at Royal Adelaide Hospital, Australia, calls the AI ability the collaborators uncovered “the worst superpower.” He says that despite the unknown mechanism, it demands an immediate response from people developing or selling AI systems to analyze medical scans. A database of AI algorithms maintained by the American College of Radiology lists dozens for analyzing chest imagery that have been approved by the Food and Drug Administration. Many were developed using standard data sets used in the new study that trained algorithms to predict race. Although the FDA recommends that companies measure and report performance on different demographic groups, such data is rarely released. Oakden-Rayner says that such checks and disclosures should become standard. “Commercial models can almost certainly identify the race of patients, so companies need to ensure that their models are not utilizing that information to produce unequal outcomes,” he says. Yi agrees, saying the study is a reminder that while machine-learning algorithms can help human experts with practical problems in the clinic, they work differently than people. “Neural networks are sort of like savants, they’re singularly efficient at one task,” he says. 
“If you train a model to detect pneumonia, it’s going to find one way or another to get that correct answer, leveraging whatever it can find in the data.”
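The degradation result is less mysterious if the telltale signal is a global image statistic rather than a local anatomical detail: blurring averages neighboring pixels, which destroys fine structure but preserves aggregate statistics such as overall brightness. The toy illustration below uses synthetic data and an invented global signal; it is not the study's actual experiment.

```python
import random

random.seed(1)

def make_scan(label, n=256):
    # toy global signal: group-1 scans are, on average, very slightly brighter
    return [random.gauss(0.3 * label, 1.0) for _ in range(n)]

def blur(pixels, width=16):
    """Heavy box blur: replace each block of `width` pixels by its average,
    wiping out fine detail the way the study degraded its x-rays."""
    return [sum(pixels[i:i + width]) / width
            for i in range(0, len(pixels), width)]

def predict(pixels):
    # threshold on the global mean, a statistic blurring cannot remove
    return 1 if sum(pixels) / len(pixels) > 0.15 else 0

labels = [random.randint(0, 1) for _ in range(500)]
scans = [make_scan(y) for y in labels]

acc_sharp = sum(predict(s) == y for s, y in zip(scans, labels)) / len(labels)
acc_blurred = sum(predict(blur(s)) == y for s, y in zip(scans, labels)) / len(labels)
print(f"sharp: {acc_sharp:.2f}  blurred: {acc_blurred:.2f}")
```

The classifier is just as accurate on the blurred scans, because the quantity it keys on survives the degradation. That said, the study's authors ruled out several candidate global cues (body mass index, bone density, age, sex), so whatever the real models use remains unidentified.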
title: Discrimination through optimization: How Facebook's ad delivery can lead to skewed outcomes
authors: Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, Aaron Rieke
The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Although researchers and journalists have found many ways that advertisers can target---or exclude---particular groups of users from seeing their ads, comparatively little attention has been paid to the implications of the platform's ad delivery process, comprising the platform's choices about which users see which ads. It has been hypothesized that this process can "skew" ad delivery in ways that the advertisers do not intend, making some users less likely than others to see particular ads based on their demographic characteristics. In this paper, we demonstrate that such skewed delivery occurs on Facebook, due to market and financial optimization effects as well as the platform's own predictions about the "relevance" of ads to different groups of users. We find that both the advertiser's budget and the content of the ad significantly contribute to the skew of Facebook's ad delivery. Critically, we observe significant skew in delivery along gender and racial lines for "real" ads for employment and housing opportunities despite neutral targeting parameters. Our results demonstrate previously unknown mechanisms that can lead to potentially discriminatory ad delivery, even when advertisers set their targeting parameters to be highly inclusive. This underscores the need for policymakers and platforms to carefully consider the role of the ad delivery optimization run by ad platforms themselves---and not just the targeting choices of advertisers---in preventing discrimination in digital advertising.
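The delivery-skew mechanism the abstract describes can be simulated in a few lines: two advertisers target every user with equal bids, yet the platform's per-group "relevance" predictions alone decide who sees which ad. All numbers, group labels, and ad names below are invented for illustration and are not measurements from the paper.

```python
import random

random.seed(2)

# Two advertisers target ALL users (neutral targeting, equal budgets/bids).
USERS = ["A" if random.random() < 0.5 else "B" for _ in range(10_000)]
BIDS = {"jobs_ad": 1.0, "retail_ad": 1.0}

# Hypothetical per-group click-rate predictions the platform has learned.
RELEVANCE = {
    ("jobs_ad", "A"): 0.08, ("jobs_ad", "B"): 0.04,
    ("retail_ad", "A"): 0.05, ("retail_ad", "B"): 0.07,
}

# The platform shows each user the ad with the highest expected value:
# bid * predicted relevance. Targeting never enters the decision.
shown = {"jobs_ad": {"A": 0, "B": 0}, "retail_ad": {"A": 0, "B": 0}}
for group in USERS:
    winner = max(BIDS, key=lambda ad: BIDS[ad] * RELEVANCE[(ad, group)])
    shown[winner][group] += 1

for ad, counts in shown.items():
    total = (counts["A"] + counts["B"]) or 1  # guard against an ad never winning
    print(ad, {g: round(c / total, 2) for g, c in counts.items()})
```

In this toy market the jobs ad is delivered exclusively to group A and the retail ad exclusively to group B, even though both advertisers targeted everyone identically: the optimization layer alone produces total skew.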
title: Microsoft made an AI algorithm to predict pregnancies of specific preteen girls
authors: Lewis White
In one of the most WTF uses of artificial intelligence yet, Microsoft has created one of the most bizarre algorithms ever. In an effort to show off its advancements in AI, the computer software company has crafted an AI to predict the impregnation of teenage girls. Yeah, bet you wish you never read that.

Microsoft AI predicts specific teenage pregnancy
------------------------------------------------

Via [Wired](https://www.wired.com/story/argentina-algorithms-pregnancy-prediction/), Microsoft presented an AI algorithm to the Argentinian Congress back in 2018 that could predict teenage pregnancies. Presented during a period when the government was debating the decriminalisation of abortion, the tech giant created an AI forged in the fires of dystopia. The algorithm was developed to predict the lives of lower-income “future teens”. Microsoft’s AI would take the names and addresses of preteen girls and predict the next “five or six years” of their lives. Microsoft's data would be used to determine which girls were “86% predestined to have an adolescent pregnancy”. The AI’s database was built on the data of “200,000 residents in the city of Salta, including 12,000 women and girls between the ages of 10 and 19.” Wired reports that the tech giant sent “territorial agents” to citizens’ houses to question them, including young girls. These agents asked questions, took photos and recorded GPS locations of the participants. The surveys covered low-income families in Argentina. Additionally, a large section of the database consisted of migrant families that had moved to the region from elsewhere in South America.

This is viewed as a success
---------------------------

The Microsoft pregnancy prediction algorithm is viewed as a success by the company. According to the report, spokespeople for the company claimed that the Argentinian project was “one of the pioneering cases in the use of AI data”. That may indeed be the case. 
However, it's also an example of AI algorithms being used in an incredibly creepy and dangerous way. Talk of this algorithm has been kept on the down low, likely because it's an off-putting foray into eugenics for the Big Tech company. Even now, years after its inception, there's no word on whether or not the project has been terminated. Nor is there any data on what the Argentinian government is planning to do with the girls who have been marked for “predestined” teenage pregnancies. Only one update has happened since the use of this algorithm: abortion has been decriminalised in Argentina. This means that those who do end up facing teenage pregnancy have a way out if they choose to take it. However, we don't know whether Microsoft's systems influenced this change.
title: Two Years On, A Father Is Still Fighting Tesla Over Autopilot And His Son's Fatal Crash
authors: Ryan Felton
In early 2016, 48-year-old Gao Jubin had high hopes for the future, with plans to ease up on running a logistics company and eventually turn over control of the business to his son, Gao Yaning. But his plans were abruptly altered when Yaning died at the wheel of a Tesla Model S that crashed on a highway in China while the car—Jubin believes—was traveling in Autopilot, the name for Tesla’s suite of driving aids. Now, Jubin says, “Living is worse than death.” The last time Jubin saw Yaning, it was Jan. 20, 2016, days before the Chinese New Year. Jubin’s family had just left a wedding reception, and his son was in high spirits. The ceremony was for his girlfriend’s brother, so he’d spent the day keeping busy and helping out when needed. Given that the wedding involved his potential future in-laws, Chinese tradition meant Yaning should enthusiastically help out with planning the ceremony, Jubin told Jalopnik this month, and he did. “He did a lot of logistics of the wedding, preparing transport and accommodating the guests,” Jubin said. Following the ceremony, Yaning arranged transportation for his parents to get home. He took Jubin’s Tesla Model S and went to meet some friends. Before departing, Jubin said, his son offered up a word of caution: “Be careful driving.” In retrospect, it was an omen. On Yaning’s way home, the Model S crashed into the back of a road-sweeping truck while traveling on a highway in the northeastern province of Hebei. At the time, his family believes, the car was driving in Autopilot, which allows those cars to automatically change lanes after the driver signals, manage speed on roads and brake to avoid collisions. Yaning, 23, died shortly after the crash. Local police [reported](https://www.wsj.com/articles/family-of-driver-killed-in-tesla-crash-in-china-seeks-court-investigation-1474351855) there was no evidence the car’s brakes were applied. The crash in China came at a precarious time for Tesla. 
In the summer of 2016, the automaker and its semi-autonomous system were facing intense scrutiny after it revealed a Florida driver had died earlier that year [in a crash involving a Model S cruising in Autopilot](https://jalopnik.com/first-fatal-tesla-autopilot-crash-sparks-nhtsa-investig-1782916450). An investigation by U.S. auto regulators was already underway. But no one knew about Yaning’s death—which happened months before the Florida crash—until that September, when [Jubin first went public about a lawsuit he’d filed](https://jalopnik.com/the-first-fatal-tesla-autopilot-crash-may-have-happened-1786626985#_ga=2.221012888.1248647947.1519132051-427793550.1518120488) against Tesla over the crash. Police had concluded Yaning was to blame for the crash, but Jubin’s suit accused Tesla and its sales staff of exaggerating Autopilot’s capabilities. He asked a local court in Beijing to [authorize an independent investigation](https://www.wsj.com/articles/family-of-driver-killed-in-tesla-crash-in-china-seeks-court-investigation-1474351855) to officially conclude that Autopilot was engaged. The suit [sought an apology from Tesla](https://www.ft.com/content/80c45ad6-7ef0-11e6-bc52-0c7211ef3198) for how it promoted the feature. Multiple U.S. news outlets covered Jubin’s case after [_Reuters_ wrote about the suit](https://www.reuters.com/article/us-tesla-crash/chinese-man-blames-tesla-autopilot-function-for-sons-crash-idUSKCN11K232) in 2016, but interest almost immediately faded. The case, however, is still ongoing, according to Jubin, who spoke with Jalopnik this month by Skype in his first interview with a U.S. media outlet. He hopes the suit will bring more attention to the system’s limited capabilities and force Tesla to change the way it deploys the technology before it’s refined. (A federal lawsuit filed last year in the U.S. 
echoed that concern, [alleging the automaker uses drivers](https://jalopnik.com/angry-owners-sue-tesla-for-using-them-as-beta-testers-o-1794503348) as “beta testers of half-baked software that renders Tesla vehicles dangerous.” Tesla called the suit “inaccurate” and [said](https://jalopnik.com/angry-owners-sue-tesla-for-using-them-as-beta-testers-o-1794503348) it was a “disingenuous attempt to secure attorney’s fees posing as a legitimate legal action.”) In September 2016, [the company said](https://www.wsj.com/articles/family-of-driver-killed-in-tesla-crash-in-china-seeks-court-investigation-1474351855) the extensive damage made it impossible to determine whether Autopilot was engaged. Following the incident, Tesla updated the system so that if a driver ignores repeated warnings to resume control of the vehicle, they are locked out from using Autopilot for the rest of the trip. Even if the system was engaged during the collision, Tesla told the _Wall Street Journal_, Autopilot warns drivers to keep their hands on the steering wheel, which is reinforced by repeated warnings to “take over at any time.” Yaning, [Tesla said](https://www.wsj.com/articles/family-of-driver-killed-in-tesla-crash-in-china-seeks-court-investigation-1474351855), took no action even though the road sweeper “was visible for nearly 20 seconds.” A Tesla spokesperson said in a statement to Jalopnik that “We were deeply saddened to learn that the driver of the Model S lost his life in the incident.” A police investigation, the spokesperson said, found the “main cause of the traffic accident” was Yaning’s failure “to drive safely in accordance with operation rules,” while the secondary cause was that the street-sweepers had “incomplete safety facilities.” “Since then, Tesla has been cooperating with an ongoing civil case into the incident, through which the court has required that a third-party appraiser review the data from the vehicle,” the statement said. 
“While the third-party appraisal is not yet complete, we have no reason to believe that Autopilot on this vehicle ever functioned other than as designed.” Within the first year of introducing Autopilot, in October 2015, Tesla faced intense criticism for labeling the feature as such. [Regulators across the globe expressed concern that the name is misleading](https://www.usatoday.com/story/money/cars/2016/10/17/misleading-tesla-under-fire-over-autopilot-name/92304978/). Tesla has always maintained that Autopilot is only a driving aid—not a replacement—and that [motorists must pay attention to the road while it’s engaged](https://jalopnik.com/tesla-driver-who-slammed-into-parked-firetruck-on-calif-1822332668). But Tesla’s Autopilot messaging has, at times, been [criticized as conflicting and ultimately confusing](https://www.consumerreports.org/tesla/tesla-autopilot-too-much-autonomy-too-soon/). And Tesla drivers continue to push the system to its limits, with some producing [videos](https://jalopnik.com/smoking-weed-while-your-tesla-drives-on-autopilot-isn-t-1820390972#_ga=2.190671787.1248647947.1519132051-427793550.1518120488) [that show Autopilot](https://jalopnik.com/stop-doing-this-shit-with-semi-autonomous-cars-1822090627#_ga=2.237257216.1248647947.1519132051-427793550.1518120488) being used in ways the automaker wouldn’t officially endorse. In the months following the crash, concerns over Autopilot seemed to dissipate, as carmakers ratcheted up efforts to develop [semi-autonomous driving aids of their own](https://jalopnik.com/cadillacs-super-cruise-makes-the-self-driving-future-se-1819153551#_ga=2.194913069.1248647947.1519132051-427793550.1518120488) that could challenge Tesla. Autonomy, the auto industry’s way of thinking went, was the way of the future. 
The National Highway Traffic Safety Administration, in early 2017, also gave Tesla some peace of mind, after it [cleared the automaker and Autopilot in the fatal Florida crash](https://jalopnik.com/feds-close-the-tesla-autopilot-death-investigation-with-1791378911#_ga=2.245667396.1174511965.1519656602-427793550.1518120488). But the U.S. National Transportation Safety Board later pinned blame partially on [Autopilot and an over-reliance by the motorist on Tesla’s driving aids](https://jalopnik.com/limits-of-teslas-autopilot-and-driver-error-cited-in-fa-1803806982#_ga=2.245667396.1174511965.1519656602-427793550.1518120488). (The NTSB can only make safety recommendations, while NHTSA’s authorized to order recalls or issue fines.) In the Florida crash, NTSB Chairman Robert Sumwalt [said](https://www.bloomberg.com/news/articles/2017-09-12/tesla-probe-focuses-on-restricting-autopilot-to-certain-roads), “Tesla’s system worked as designed. But it was designed to perform limited tasks in a limited range of environments. The system gave far too much leeway to the driver to divert his attention to something other than driving.” There are few known crashes [involving Tesla drivers that had Autopilot engaged](https://www.theguardian.com/technology/2017/apr/03/the-customer-is-always-wrong-tesla-lets-out-self-driving-car-data-when-it-suits), but given how relatively new autonomous technology is for commercially available cars, regulators have taken interest in how they function, [even for minor accidents](https://jalopnik.com/a-human-driver-crashed-into-las-vegass-driverless-shutt-1820269207). The legal system has yet to seriously weigh in. 
A federal lawsuit [over Tesla’s Autopilot rollout is pending](https://jalopnik.com/angry-owners-sue-tesla-for-using-them-as-beta-testers-o-1794503348); the first case over an accident involving an autonomous car—filed by a motorcyclist [against General Motors](https://jalopnik.com/motorcyclist-sues-gm-over-crash-with-self-driving-chevy-1822358606#_ga=2.248272327.1174511965.1519656602-427793550.1518120488)—emerged only in January. Tesla [previously said](https://electrek.co/2016/10/07/tesla-issues-thorough-response-following-harsh-critique-of-autopilot-by-german-authorities/) that “any vehicle can be misused.” “For example, no car prevents someone from driving at very high speeds on roads where that is not appropriate or from using cruise control on city streets,” [the automaker said](https://electrek.co/2016/10/07/tesla-issues-thorough-response-following-harsh-critique-of-autopilot-by-german-authorities/). “In contrast, Tesla has taken many steps to prevent a driver from using Autopilot improperly.” But Tesla owners have continued to post examples of the system being misused, raising concerns that some either don’t understand Autopilot’s limitations, or rely on it far too much. Those concerns were rekindled after a [Tesla driver in January slammed into the back of a parked firetruck](https://jalopnik.com/tesla-driver-who-slammed-into-parked-firetruck-on-calif-1822332668#_ga=2.70750800.1174511965.1519656602-427793550.1518120488) on a California freeway while his Model S was reportedly driving in Autopilot. In response, the NTSB and NHTSA [both launched new investigations into the use of Autopilot](https://jalopnik.com/tesla-driver-who-slammed-into-parked-firetruck-on-calif-1822332668#_ga=2.70750800.1174511965.1519656602-427793550.1518120488), highlighting some of the criticisms Jubin first raised after his son’s crash two years ago. In particular, does Tesla do enough to ensure drivers won’t misuse Autopilot? 
Officially, it hasn’t been concluded if Autopilot was engaged at the time of Yaning’s crash. [Tesla claimed](https://www.nytimes.com/2016/09/15/business/fatal-tesla-crash-in-china-involved-autopilot-government-tv-says.html) Jubin’s car’s damage made it physically incapable of transmitting log data, so the company had “no way of knowing whether or not Autopilot was engaged at the time of the crash.” But Jubin believes he has more than enough evidence—recorded in-car video footage, a report from an expert who examined the clips, anecdotal comments from other Tesla owners—to prove it. The formal inspection hasn’t been completed, but Jubin’s attorney, Cathrine Guo, told Jalopnik on Tuesday that Tesla’s U.S. headquarters has turned over a document that decodes data produced on an SD memory card installed in the Model S. The document, Guo said by email, recorded the actions of both the car and driver and shows that Autopilot was on at the time of the crash. When asked for a response, Tesla referred to the spokesperson’s statement, which said: “While the third-party appraisal is not yet complete, we have no reason to believe that Autopilot on this vehicle ever functioned other than as designed.” In the dashcam clips, the Model S appears to drive smoothly, centered in lanes. “Even when the road was rugged and bumpy,” Jubin said, “it didn’t change course.” Jubin also spoke with Tesla owners and experts, who he said agreed that Autopilot must’ve been engaged. [And at an initial court hearing](https://www.ft.com/content/80c45ad6-7ef0-11e6-bc52-0c7211ef3198), Jubin’s attorney said the speed of the Model S remained consistent for eight minutes before the collision. Jubin said he found a professor at Beijing Jiaotong University who’s an expert in autonomous driving. After reviewing the video and accompanying documents, Jubin said, “The professor felt that ... 
Autopilot had to be the cause of the accident.” Jubin’s attorney also said that Yaning hadn’t been drinking that day. A judge from the Chaoyang District People’s Court of Beijing recently granted Jubin’s request for a third-party inspection to confirm whether Autopilot was engaged, and the probe could begin as early as this month, attorney Guo said. “Based on all the information I have,” Jubin said through his attorney, who translated the conversation, “I have no doubt that the accident \[was\] caused by the Autopilot.” Throughout the nearly two-hour interview with Jalopnik, Jubin had to pause several times to compose himself. Yaning was a “very kind, selfless, altruistic” individual, he said. “When he was a kid, when he was playing with his friends, he always took snacks from home and shared them,” Jubin said. “When some of the kids didn’t get it, he would come back home and get more.” “He had been very understanding of us parents, helping us feed his little sister, or washing her clothes,” he went on. “Every time the four of us went out, it was like three parents with a kid... it was a really happy, beautiful family, and all of a sudden it was gone, like that.” Yaning’s family grew up in Handan city, in the Hebei Province of China, where Jubin worked in the coal trade. Eventually, Jubin said, he started a logistics company and took on some public service work. Jubin said his son spent two years in the army. He returned home and, eventually, applied to a local college to study business administration. The plan, Jubin said, was for him to one day take over the logistics firm. “I did not expect everything was in vain,” he said. Yaning’s last moments alive were captured on a camera that was apparently installed on the dashboard of the Model S. The lens looked out through the windshield and onto the road. 
Jubin’s attorney provided Jalopnik about two-and-a-half minutes of footage that was [first published in September 2016](https://jalopnik.com/the-first-fatal-tesla-autopilot-crash-may-have-happened-1786626985#_ga=2.88614168.2004418494.1519654358-1646309174.1503498445) by Chinese state broadcaster CCTV. The video [shows](https://youtu.be/1rsIek9AFug) the white sedan cruising along a four-lane highway in relatively clear weather. At one point, Yaning can be heard jubilantly singing, as the car merges from the middle to the left lane. The car continues along the road, appearing to travel at the same speed. Moments later, a car ahead of the Model S moves into the center lane, leaving an orange street sweeper straddling the road’s left shoulder directly in Yaning’s path. Autopilot is designed to adjust speeds using adaptive cruise control. If, say, an object appears in the car’s path, an escalating series of visual and audio warnings is supposed to go off, signaling for the driver to resume control of the wheel. Based on the video, no warning alert went off and the Model S never slowed before Yaning slammed into the truck. Tesla vehicles come equipped with automatic emergency braking technology [regardless of whether Autopilot is turned on](https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF), which is supposed to alert the driver of a potential hazard. If the driver doesn’t react in time, the car should automatically apply its brakes. But Tesla’s owner manual states that Autopilot isn’t adept when it comes to recognizing stationary vehicles at high speeds, as [_Wired_ notes](https://www.wired.com/story/tesla-autopilot-why-crash-radar/). “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead,” according to the manual. 
Jubin had been home at the time of the crash. In the interview, Jubin said a Tesla customer service representative called and told him “we detected your airbag exploded.” “I said, ‘My son was driving today. I don’t know the details, you should hang up and call my son, and find out what is going on,’” Jubin said. Shortly after, Yaning’s friend, who had been driving near him in a separate car at the time of the crash, called Jubin and said Yaning had been in a serious accident. Jubin and his wife immediately left and drove to the site of the accident. When they arrived, Jubin discovered first responders addressing a gruesome scene. “I got there and saw the car was in pieces, and a lot of blood was streaming down to the ground from the car,” he said. “The blood covered a big area on the ground. We felt very sad. It was a very bad feeling, and my wife \[was\] praying in my car for my son’s safety. We got to the hospital and the doctor told me they tried and couldn’t save him.” The intervening months took a toll. He suspended work at the logistics company and let every employee go. Jubin’s now focused solely on the lawsuit and taking care of his family. “He’s gone and we don’t know how to carry on,” Jubin said. “Living is worse than death.” Jubin’s main complaint is what he called Tesla’s misleading advertising of Autopilot. Since introducing the system in 2015, the automaker has [taken flak from consumer advocates](https://www.consumerreports.org/tesla/tesla-autopilot-too-much-autonomy-too-soon/), as videos showed Tesla drivers [reading or even sleeping](https://www.reuters.com/article/us-tesla-autopilot-social/sleeping-driver-terrified-granny-among-tesla-autopilot-users-on-youtube-idUSKCN0ZH5R8?il=0) while their cars were in motion. Jubin said Tesla is to blame for how some customers have perceived the capabilities of Autopilot. In particular, he pointed to a conversation he had with Yaning after purchasing the Model S. 
Yaning, he said, explained that a Tesla salesperson told him that Autopilot can virtually handle all driving functions. “If you are on Autopilot you can just sleep on the highway and leave the car alone; it will know when to brake or turn, and you can listen to music or drink coffee,” Jubin said, summarizing the salesperson’s purported remarks. This tracks with reporting after Yaning’s death went public. Some of Tesla’s Chinese sales staff, for instance, took their hands off the wheel during Autopilot demonstrations, according to a report [from _Reuters_](https://jalopnik.com/tesla-takes-away-autopilot-term-from-its-chinese-websit-1785292163#_ga=2.185413993.1174511965.1519656602-427793550.1518120488). (Tesla’s Chinese sales staff were [later told to make the limitations of Autopilot clear](https://www.wsj.com/articles/family-of-driver-killed-in-tesla-crash-in-china-seeks-court-investigation-1474351855).) But Jubin said his son was “misled” by salespeople who oversold Autopilot’s capabilities. It continued even after Yaning’s death, he claimed. “When I was at a Tesla retail store, they were still advertising, and online too, how you can sleep or drink coffee and everything,” he said. After Jubin initially filed his suit in July 2016, [Tesla removed Autopilot and a Chinese term for “self-driving”](https://jalopnik.com/tesla-takes-away-autopilot-term-from-its-chinese-websit-1785292163) from its China website and marketing materials. The phrase _zi dong jia shi_ means the car can drive itself, the _Wall Street Journal_ [reported at the time](https://www.wsj.com/articles/family-of-driver-killed-in-tesla-crash-in-china-seeks-court-investigation-1474351855). Tesla changed that to _zi dong fu zhu jia shi_, meaning a driver-assist system. “When you hear autonomous driving, or self-driving, when you hear it described as that, as safe, especially on expressways, it’s totally different from the description of assisted autonomous driving,” said Guo, Jubin’s attorney. 
“That’s one of the reasons we sued Tesla.” Automakers are currently testing fully autonomous cars, but no one in the industry expects them to be available to buy for years to come. Jubin’s supportive of the movement toward autonomy, but he urged drivers around the world to be cautious and to fully understand the technology’s limitations. “I hope more Tesla owners become aware of it,” he added, “and avoid accidents like this.” The suit initially asked for about $2,000 in compensation for the family’s grief over Yaning’s death. But the complaint has since been amended, and now asks for 5 million yuan (roughly $750,000). If he prevails, Jubin said, he plans to use some of the money to start a charity fund “to warn more Tesla owners not to use Autopilot.” “We hope there would be no more tragic families like ours,” he said. Jubin believes the industry’s technological advancement toward fully autonomous driving is certain, but he feels Autopilot was released prematurely. “Tesla should release the feature after it’s fully developed,” Jubin said, “not in the process of perfecting it.” **UPDATE:** In a response to this story sent after publishing, a Tesla spokesperson said that the driver’s father told Tesla personnel that his son knew Autopilot very well and had read the owner’s manual for the Model S “over and over again.” Furthermore, the automaker asked Jalopnik to note that the car warns that Autosteer is in beta, requires hands on the steering wheel, should not be used on roads with sharp turns or questionable lane markings, and refers drivers to the manual, which additionally states the driver is responsible for minding the system.