Incident 121: Autonomous Kargu-2 Drone Allegedly Remotely Used to Hunt down Libyan Soldiers

Suggested citation format

Perkins, Kate. (2020-03-27) Incident Number 121. in McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 121
Report Count: 2
Incident Date: 2020-03-27
Editors: Sean McGregor, Khoa Lam

Incident Reports

A screenshot from a promotional video advertising the Kargu drone. In the video, the weapon dives toward a target before exploding.

Last year in Libya, a Turkish-made autonomous weapon—the STM Kargu-2 drone—may have “hunted down and remotely engaged” retreating soldiers loyal to the Libyan General Khalifa Haftar, according to a recent report by the UN Panel of Experts on Libya. Over the course of the year, the UN-recognized Government of National Accord pushed the general’s forces back from the capital Tripoli, gaining the upper hand in the Libyan conflict. But the Kargu-2 may mark something of broader global significance: a new chapter in autonomous weapons, one in which they are used to fight and kill human beings based on artificial intelligence.

The Kargu is a “loitering” drone that can use machine learning-based object classification to select and engage targets, with swarming capabilities in development to allow 20 drones to work together. The UN report calls the Kargu-2 a lethal autonomous weapon. Its maker, STM, touts the weapon’s “anti-personnel” capabilities in a grim video showing a Kargu model in a steep dive toward a target in the middle of a group of manikins. (If anyone was killed in an autonomous attack, it would likely represent a historic first known case of artificial intelligence-based autonomous weapons being used to kill. The UN report heavily implies they were, noting that lethal autonomous weapons systems contributed to significant casualties of the manned Pantsir S-1 surface-to-air missile system, but is not explicit on the matter.)

Many people, including Stephen Hawking and Elon Musk, have said they want to ban these sorts of weapons, arguing they can’t distinguish between civilians and soldiers. Others say they’ll be critical in countering fast-paced threats like drone swarms and may actually reduce the risk to civilians because they will make fewer mistakes than human-guided weapons systems. Governments at the United Nations are debating whether new restrictions on combat use of autonomous weapons are needed. What the global community hasn’t done adequately, however, is develop a common risk picture. Weighing risk vs. benefit trade-offs will turn on personal, organizational, and national values, but determining where risk lies should be objective.

It’s just a matter of statistics.

At the highest level, risk is a product of the probability and consequence of error. Any given autonomous weapon has some chance of messing up, but those mistakes could have a wide range of consequences. The highest risk autonomous weapons are those that have a high probability of error and kill a lot of people when they do. Misfiring a .357 magnum is one thing; accidentally detonating a W88 nuclear warhead is something else.
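
To make that arithmetic concrete, here is a minimal sketch in Python. The scenarios and numbers are purely illustrative assumptions, not figures drawn from the UN report or any real system; the point is only that expected harm scales with both the error rate and the payload.

```python
# Minimal sketch: risk as probability of error times consequence of error.
# The scenario names and numbers below are illustrative assumptions only.

def expected_harm(error_probability: float, casualties_per_error: float) -> float:
    """Expected casualties per engagement under this simple risk framing."""
    return error_probability * casualties_per_error

scenarios = {
    "small-arms payload, well-tested targeting": (0.01, 1),
    "large warhead, brittle targeting model": (0.10, 500),
}

for name, (p_err, consequence) in scenarios.items():
    print(f"{name}: expected harm per engagement = {expected_harm(p_err, consequence):.2f}")
```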

There are at least nine questions that are important to understanding where the risks are when it comes to autonomous weapons.

How does an autonomous weapon decide who to kill? Landmines—in some sense an extremely simple autonomous weapon—use pressure sensors to determine when to explode. The firing threshold can be varied to ensure the landmine does not explode when a child picks it up. Loitering munitions like the Israeli Harpy typically detect and home in on enemy radar signatures. As with landmines, the sensitivity can be adjusted to separate civilian from military radar. And thankfully, children don’t emit high-powered radio waves.

But what has prompted international concern is the inclusion of machine learning-based decision-making of the kind used in the Kargu-2. These types of weapons operate on software-based algorithms “taught” through large training datasets to, for example, classify various objects. Computer vision programs can be trained to identify school buses, tractors, and tanks. But the datasets they train on may not be sufficiently complex or robust, and an artificial intelligence (AI) may “learn” the wrong lesson. In one case, a company was considering using an AI to make hiring decisions until management determined that the computer system believed the most important qualification for job candidates was being named Jared and playing high school lacrosse. The results wouldn’t be comical at all if an autonomous weapon made similar mistakes. Autonomous weapons developers need to anticipate the complexities that could cause a machine learning system to make the wrong decision. The black-box nature of machine learning, in which how a system reaches its decisions is often opaque, adds extra challenges.
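
The “Jared and lacrosse” failure is an instance of spurious correlation. The toy Python sketch below (synthetic data, scikit-learn, all names and numbers hypothetical) shows the same pattern: a classifier that latches onto a shortcut feature present only in its training data scores well in training and then degrades sharply when the shortcut disappears.

```python
# Toy sketch of a model "learning the wrong lesson" from a biased dataset.
# Entirely synthetic and hypothetical; it only illustrates spurious correlation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000

# Weak true signal: feature 0 is a noisy version of the label.
# Spurious signal: feature 1 matches the label 99% of the time in the
# training data purely because of how the data were collected.
y_train = rng.integers(0, 2, n)
true_feature = y_train + rng.normal(0, 2.0, n)
spurious_feature = np.where(rng.random(n) < 0.99, y_train, 1 - y_train)
X_train = np.column_stack([true_feature, spurious_feature])

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# At deployment the shortcut is gone: feature 1 is now unrelated to the label.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 2.0, n), rng.integers(0, 2, n)])

print("training accuracy (shortcut available):", model.score(X_train, y_train))
print("deployment accuracy (shortcut gone):", model.score(X_test, y_test))
```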

What role do humans have? Humans might be able to watch for something going wrong. In human-in-the-loop configurations, a soldier monitors autonomous weapon activities, and, if the situation appears to be headed in a horrific direction, can make a correction. As the Kargu-2’s reported use shows, a human-off-the-loop system simply does its thing without a safeguard. But having a soldier in the loop is no panacea. The soldier may trust the machine and fail to adequately monitor its operation. For example, Missy Cummings, the director of Duke University’s Human and Autonomy Laboratory, finds that when it comes to autonomous cars, “drivers who think their cars are more capable than they are may be more susceptible to increased states of distractions, and thus at higher risk of crashes.”

Of course, a weapon’s autonomous behavior may not always be on—a human might be in, on, or off the loop based on the situation. South Korea has deployed a sentry weapon along the demilitarized zone with North Korea called the SGR-A1 that reportedly operates this way. The risk changes based on how and when the fully autonomous function is flipped on. Autonomous operation by default obviously creates more risk than autonomous operation restricted only to narrow circumstances.
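
A minimal, hypothetical sketch in Python of the in-, on-, and off-the-loop configurations described above. The function names, confidence threshold, and confirmation step are illustrative assumptions, not the interface of the Kargu-2 or any fielded system; the point is only where a human sits relative to the final decision.

```python
# Hypothetical sketch of human-in-, on-, and off-the-loop engagement logic.
# Names, thresholds, and the operator steps are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def operator_confirms(detection: Detection) -> bool:
    """Stand-in for human-in-the-loop review: a real system would present
    sensor data to an operator and wait for an explicit decision."""
    answer = input(f"Engage {detection.label} ({detection.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def operator_vetoes(detection: Detection) -> bool:
    """Stand-in for an on-the-loop abort channel the operator may miss."""
    return False  # assume no veto arrives in time; the machine proceeds

def should_engage(detection: Detection, mode: str) -> bool:
    if detection.confidence < 0.9:     # classifier unsure: never engage
        return False
    if mode == "in-the-loop":          # machine proposes, human must approve
        return operator_confirms(detection)
    if mode == "on-the-loop":          # machine acts unless the human vetoes
        return not operator_vetoes(detection)
    if mode == "off-the-loop":         # no human safeguard at all
        return True
    raise ValueError(f"unknown mode: {mode}")
```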

What payload does an autonomous weapon have? Accidentally shooting someone is horrible, but vastly less so than accidentally detonating a nuclear warhead. The former might cost an innocent his or her life, but the latter may kill hundreds of thousands. Policymakers may focus on the larger weapons, recognizing the cost of a mistake, and in doing so potentially reduce the risks autonomous weapons pose. However, exactly what payloads autonomous weapons will have is unclear. In theory, autonomous weapons could be armed with guns, bombs, missiles, electronic warfare jammers, lasers, microwave weapons, computers for cyber-attack, chemical weapons agents, biological weapons agents, nuclear weapons, and everything in between.

What is the weapon targeting? Whether an autonomous weapon is shooting a tank, a naval destroyer, or a human matters. Current machine learning-based systems cannot effectively distinguish a farmer from a soldier. Farmers might hold a rifle to defend their land, while soldiers might use a rake to knock over a gun turret. Even adequate classification of a vehicle is difficult, because various factors may inhibit an accurate decision. For example, in one study, obscuring the wheels and half of the front window of a bus caused a machine learning-based system to classify the bus as a bicycle. A tank’s cannon might make it easy to distinguish from a school bus in an open environment, but not if trees or buildings obscure key parts of the tank, like the cannon itself.

How many autonomous weapons are being used? More autonomous weapons means more opportunities for failure. That’s basic probability. But when autonomous weapons communicate and coordinate their actions, such as in a drone swarm, the risk of something going wrong increases. Communication creates risks of cascading error, in which an error by one unit is shared with another. Collective decision-making also creates the risk of emergent error, in which individually correct interpretations add up to a collective mistake. To illustrate emergent error, consider the parable of the blind men and the elephant. Three blind men hear that a strange animal, an elephant, has been brought to town. One man feels the trunk and says the elephant is thick like a snake. Another feels the legs and says it’s like a pillar. A third feels the elephant’s side and describes it as a wall. Each one perceives physical reality accurately, if incompletely, but their individual and collective interpretations of that reality are incorrect. Would a drone swarm conclude the elephant is an elephant, a snake, a pillar, a wall, or something else entirely?
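
The parable translates directly into a swarm’s aggregation step. In the hypothetical Python sketch below, each unit’s partial report is locally accurate, yet a simple majority vote over those reports still settles on the wrong answer, which is the emergent-error risk described above.

```python
# Hypothetical sketch of emergent error: accurate partial reports, wrong consensus.
from collections import Counter

# Each unit's locally accurate but partial interpretation of what it sensed.
local_reports = {
    "drone_1": "snake",   # sensed only the trunk
    "drone_2": "pillar",  # sensed a leg
    "drone_3": "pillar",  # sensed another leg
    "drone_4": "pillar",  # sensed a third leg
    "drone_5": "wall",    # sensed one flank
    "drone_6": "wall",    # sensed the other flank
}

consensus, count = Counter(local_reports.values()).most_common(1)[0]
print(f"swarm consensus: {consensus} ({count}/{len(local_reports)} votes)")
print("ground truth: elephant -> every report was reasonable, the consensus is not")
```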

Where are autonomous weapons being used? An armed, autonomous ground vehicle wandering a snow-covered Antarctic glacier has almost no chance of killing innocent people. Not much lives there and the environment is mostly barren with little to obstruct or confuse the vehicle’s onboard sensors. But the same vehicle wandering the streets of New York City or Tokyo is another matter. In cities, the AI system would face many opportunities for error: trees, signs, cars, buildings, and people all may jam up correct target assessment.

Sea-based autonomous weapons might be less prone to error simply because, with fewer obstructions at sea, distinguishing a military ship from a civilian ship may be easier than distinguishing a school bus from an armored personnel carrier. Even the weather matters. One recent study found foggy weather reduced the accuracy of an AI system used to detect obstacles on roads to 58 percent, compared to 92 percent in clear weather. Of course, bad weather may also hinder humans in effective target classification, so an important question is how AI classification compares to human classification.

How well tested is the weapon? Any professional military would verify and test whether an autonomous weapon works as desired before putting soldiers and broader strategic goals at risk. However, the military may not test for all the complexities that may confound an autonomous weapon, especially if those complexities are unknown. Testing will also be based on anticipated uses and operational environments, which may change as the strategic landscape changes. An autonomous weapon robustly tested in one environment may break down when used in another. Seattle has a lot more foggy days than Riyadh, but far fewer sandstorms.

How have adversaries adapted? In a battle involving autonomous weapons, adversaries will seek to confound operations, which may not be very difficult. OpenAI—a world-leading AI company—developed a system that can classify an apple as a Granny Smith with 85.6 percent confidence. Yet, tape a piece of paper that says “iPod” on the apple, and the machine vision system concludes with 99.7 percent confidence the apple is an iPod. In one case, AI researchers changed a single pixel on an image, causing a machine vision system to classify a stealth bomber as a dog. In war, an opponent could just paint “school bus” on a tank or, more maliciously, “tank” on a school bus and potentially fool an autonomous weapon.
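
The apple-to-iPod and one-pixel results reflect a general property of learned classifiers: near a decision boundary, a small, targeted nudge to the input can flip the output. The toy Python sketch below shows this for a random linear classifier; it is an illustrative, assumption-laden demo, not a reconstruction of the attacks mentioned above.

```python
# Toy sketch of adversarial fragility: a tiny, targeted input change flips the label.
# Purely illustrative; this is a random linear classifier, not a fielded system.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)   # weights of a toy linear classifier
x = rng.normal(size=100)   # an input it currently classifies one way

def predict(features):
    return "class A" if features @ w > 0 else "class B"

original = predict(x)

# Nudge each feature just far enough, in the worst-case direction, to cross
# the decision boundary (the same idea behind gradient-sign attacks).
epsilon = abs(x @ w) / np.sum(np.abs(w)) * 1.01
step = epsilon * np.sign(w)
x_adv = x - step if original == "class A" else x + step

print("original prediction:", original)
print("per-feature perturbation size:", round(float(epsilon), 4))
print("prediction after the nudge:", predict(x_adv))
```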

How widely available are autonomous weapons? States and non-state actors will naturally vary in their risk tolerance, based on their strategies, cultures, goals, and overall sensitivity to moral trade-offs. The easier it is to acquire and use autonomous weapons, the more the international community can expect the weapons to be used by apocalyptic terrorist groups, nefarious regimes, and groups that are just plain insensitive to the error risk. As Stuart Russell, a professor of computer science at the University of California, Berkeley, likes to note: “[W]ith three good grad students and possibly the help of a couple of my robotics colleagues, it will be a term project to build a weapon that could come into the United Nations building and find the Russian ambassador and deliver a package to him.” Fortunately, technical acumen, organization, infrastructure, and resource availability will limit how sophisticated autonomous weapons are. No lone wolf terrorist will ever build an autonomous F-35 in his garage.

Autonomous weapon risk is complicated, variable, and multi-dimensional—the what, where, when, why, and how of use all matter. On the high-risk end of the spectrum are autonomous nuclear weapons and the use of collaborative, autonomous swarms in heavily urban environments to kill enemy infantry; on the low end are autonomy-optional weapons used in unpopulated areas as defensive weapons, and only when death is imminent. Where states draw the line depends on how their militaries and societies balance the risk of error against military necessity. But to draw a line at all requires a shared understanding of where risk lies.

Was a flying killer robot used in Libya? Quite possibly

A military drone that attacked soldiers during a battle in Libya’s civil war last year may have done so without human control, according to a recent report commissioned by the United Nations.

The drone, which the report described as a “lethal autonomous weapons system,” was powered by artificial intelligence and used by forces backed by the government based in Tripoli, the capital, against enemy militia fighters as they ran away from rocket attacks.

The fighters “were hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems,” according to the report, which did not say whether there were any casualties or injuries.

The weapons systems, it said, “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect a true ‘fire, forget and find’ capability.”

The United Nations declined to comment on the report, which was written by a panel of independent experts. The report has been sent to a U.N. sanctions committee for review, according to the organization.

The drone, a Kargu-2, was used as soldiers tried to flee, the report said.

“Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems,” according to the report, which was written by the U.N. Panel of Experts on Libya and released in March. The findings about the drone attack, described briefly in the 548-page document, were reported last month by The New Scientist and by the Bulletin of the Atomic Scientists, a nonprofit organization.

Human-operated drones have been used in military strikes for over a decade. President Barack Obama for years embraced drone strikes as a counterterrorism strategy, and President Donald J. Trump expanded the use of drones in Africa.

Nations like China, Russia and Israel also operate drone fleets, and drones were used in the war between Azerbaijan and Armenia last year.

Experts were divided about the importance of the findings in the U.N. report on Libya, with some saying it underscored how murky “autonomy” can be.

Zachary Kallenborn, who studies drone warfare, terrorism and weapons of mass destruction at the University of Maryland, said the report suggested that for the first time, a weapons system with artificial intelligence capability operated autonomously to find and attack humans.

“What’s clear is this drone was used in the conflict,” said Mr. Kallenborn, who wrote about the report in the Bulletin of the Atomic Scientists. “What’s not clear is whether the drone was allowed to select its target autonomously and whether the drone, while acting autonomously, harmed anyone. The U.N. report heavily implies, but does not state, that it did.”

But Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations, said that the report does not say how independently the drone acted, how much human oversight or control there was over it, and what specific impact it had in the conflict.

“Should we talk more about autonomy in weapon systems? Definitely,” Ms. Franke said in an email. “Does this instance in Libya appear to be a groundbreaking, novel moment in this discussion? Not really.”

She noted that the report stated the Kargu-2 and “other loitering munitions” attacked convoys and retreating fighters. Loitering munitions, which are simpler autonomous weapons that are designed to hover on their own in an area before crashing into a target, have been used in several other conflicts, Ms. Franke said.

“What is not new is the presence of loitering munition,” she said. “What is also not new is the observation that these systems are quite autonomous. How autonomous is difficult to ascertain — and autonomy is ill-defined anyway — but we know that several manufacturers of loitering munition claim that their systems can act autonomously.”

The report indicates that the “race to regulate these weapons” is being lost, a potentially “catastrophic” development, said James Dawes, a professor at Macalester College in St. Paul, Minn., who has written about autonomous weapons.

“The heavy investment militaries around the globe are making in autonomous weapons systems made this inevitable,” he said in an email.

So far, the A.I. capabilities of drones remain far below those of humans, said Mr. Kallenborn. The machines can easily make mistakes, such as mistaking a farmer holding a rake for an enemy soldier holding a gun, he said.

Human rights organizations are “particularly concerned, among other things, about the fragility or brittleness of the artificial intelligence system,” he said.

Professor Dawes said countries may begin to compete aggressively with each other to create more autonomous weapons.

“The concern that these weapons might misidentify targets is the least of our worries,” he said. “More significant is the threat of an A.W.S. arms race and proliferation crisis.”

The report said the attack happened in a clash between fighters for the Tripoli-based government, which is supported by Turkey and officially recognized by the United States and other Western powers, and militia forces led by Khalifa Hifter, who has received backing from Russia, Egypt, the United Arab Emirates, Saudi Arabia and, at times, France.

In October, the two warring factions agreed to a cease-fire, raising hopes for an end to years of shifting conflict.

The Kargu-2 was built by STM, a defense company based in Turkey that describes the weapon as “a rotary wing attack drone” that can be used autonomously or manually.

The company did not respond to a message for comment.

Turkey, which supports the government in Tripoli, provided many weapons and defense systems, according to the U.N. report.

“Loitering munitions show how human control and judgment in life-and-death decisions is eroding, potentially to an unacceptable point,” Mary Wareham, the arms advocacy director at Human Rights Watch, wrote in an email. She is a founding coordinator of the Campaign to Stop Killer Robots, which is working to ban fully autonomous weapons.

Ms. Wareham said countries “must act in the interest of humanity by negotiating a new international treaty to ban fully autonomous weapons and retain meaningful human control over the use of force.”

Libyan Fighters Attacked by a Potentially Unaided Drone, UN Says