A United Nations report published in March of this year details a 2020 incident in Libya in which a ‘killer drone’ activated itself to hunt a human target, without any prior command from an operator, during an armed conflict between government forces and a rebel military faction.
According to the British news outlet The Star, the drone, a Kargu-2 quadcopter produced by Turkish military technology company STM, was deployed in March 2020 during a conflict between Libyan government forces and a dissident military faction led by Khalifa Haftar, commander of the Libyan National Army.
The drone is equipped with explosives and can track a target and attack it by detonating on impact.
The Kargu-2 can operate in a ‘highly effective’ autonomous mode and can thus engage without the need for a human command, the report explained.
As the military faction retreated from the site, the drone pursued and ‘hunted’ them from behind. However, the report does not indicate whether the attack caused any casualties or material damage.
“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the report says.
Experts voiced concern over the existence of these ‘autonomous’ drones, as they cannot always identify or interpret visual target data correctly.
Zak Kallenborn, a terrorism expert based in Maryland in the U.S., said that this could well be the first time drones have autonomously attacked humans.
“How brittle is the object recognition system, and how often does it misidentify targets?” Kallenborn asked, alluding to the danger of continuing to build robots that can seemingly attack humans by ‘choice.’
According to The New York Post, in 2018 parties to the United Nations Convention on Certain Conventional Weapons met to consider banning the production and use of these AI-equipped ‘killer drones,’ but the United States and Russia blocked the initiative, and the talks collapsed.
Two years later came the first reported incident of a drone hunting a human without being ordered to do so.
The question of whether to use ‘self-sufficient’ drones for armed conflict is not a new discussion.
Being able to attack a military base, a ship, or outright win a war without sacrificing human lives seems like something any military would want.
However, this incident raises the question of what happens when artificial intelligence ‘becomes independent’ and no longer needs humans: could it turn against them?