MCViewPoint

Opinion from a Libertarian ViewPoint

Coming soon to the battlefield: Robots that can kill – Center for Public Integrity

Posted by M. C. on September 4, 2019

So far, U.S. military officials haven’t given machines full control, and they say there are no firm plans to do so. 

So far…

The key remaining issue is whether military commanders will let robots decide to kill, particularly at moments when communication links have been disrupted — a likely occurrence in wartime.

With soldiers out of the danger zone, attacks become more likely. No more restraint on dodgy encounters of limited or nonexistent military value. Not that there is much restraint now.

More shows of force to impress the sheeple, congressional enablers and money people.

Also more likely is a great increase in civilian mortality. There are already more civilian casualties than military ones in modern warfare. This will only get worse.

https://publicintegrity.org/national-security/future-of-warfare/scary-fast/ai-warfare/

Zachary Fryer-Biggs

Wallops Island — a remote, marshy spit of land along the eastern shore of Virginia, near a famed national refuge for horses — is mostly known as a launch site for government and private rockets. But it also makes for a perfect, quiet spot to test a revolutionary weapons technology.

If a fishing vessel had steamed past the area last October, the crew might have glimpsed half a dozen or so 35-foot-long inflatable boats darting through the shallows, and thought little of it. But if crew members had looked closer, they would have seen that no one was aboard: The engine throttle levers were shifting up and down as if controlled by ghosts. The boats were using high-tech gear to sense their surroundings, communicate with one another, and automatically position themselves so, in theory, .50-caliber machine guns that can be strapped to their bows could fire a steady stream of bullets to protect troops landing on a beach.

The secretive effort — part of a Marine Corps program called Sea Mob — was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm. It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield.

Lethal, largely autonomous weaponry isn’t entirely new: A handful of such systems have been deployed for decades, though only in limited, defensive roles, such as shooting down missiles hurtling toward ships. But with the development of AI-infused systems, the military is now on the verge of fielding machines capable of going on the offensive, picking out targets and taking lethal action without direct human input…

“The problem is that when you’re dealing [with war] at machine speed, at what point is the human an impediment?” Robert Work, who served as the Pentagon’s No. 2 official in both the Obama and Trump administrations, said in an interview. “There’s no way a human can keep up, so you’ve got to delegate to machines.”

Every branch of the U.S. military is currently seeking ways to do just that — to harness gargantuan leaps in image recognition and data processing for the purpose of creating a faster, more precise, less human kind of warfare.

The Navy is experimenting with a 135-ton ship named the Sea Hunter that could patrol the oceans without a crew, looking for submarines it could one day attack directly. In a test, the ship has already sailed the 2,500 miles from Hawaii to California on its own, although without any weapons.

Meanwhile, the Army is developing a new system for its tanks that can smartly pick targets and point a gun at them. It is also developing a missile system, called the Joint Air-to-Ground Missile (JAGM), that has the ability to pick out vehicles to attack without human say-so; in March, the Pentagon asked Congress for money to buy 1,051 JAGMs, at a cost of $367.3 million.

And the Air Force is working on a pilotless version of its storied F-16 fighter jet as part of its provocatively named “SkyBorg” program, which could one day carry substantial armaments into a computer-managed battle.

Until now, militaries seeking to cause an explosion at a distant site have had to decide when and where to strike; use an airplane, missile, boat, or tank to transport a bomb to the target; direct the bomb; and press the “go” button. But drones and systems like Sea Mob are removing the human from the transport, and computer algorithms are learning how to target. The key remaining issue is whether military commanders will let robots decide to kill, particularly at moments when communication links have been disrupted — a likely occurrence in wartime…

And so officials in the military services have begun the thorny, existential work of discussing how and when and under what circumstances they will let machines decide to kill.

Be seeing you
