MCViewPoint

Opinion from a Libertarian ViewPoint

Posts Tagged ‘killer robots’

Ex-Google Engineer Fears AI ‘Killer Robots’ Could Perpetrate Unintended Mass Atrocities – Collective Evolution

Posted by M. C. on September 21, 2019

…after more than 3,000 of its employees signed a petition in protest against the company’s involvement. It should be indicative to all of us that these big corporate giants do not make ethical decisions on their own, since they are fundamentally amoral, and continue to require concerned human beings to speak up and take action in order for humanity’s interests to be considered.

Killer robots. Seems like a great 5G application.

https://www.collective-evolution.com/2019/09/20/ex-google-engineer-fears-ai-killer-robots-could-perpetrate-unintended-mass-atrocities/

In Brief

  • The Facts: An ex-Google software engineer warns that the industrial development of AI could produce ‘killer robots’ with the autonomy to decide whom to kill, without the safeguard of human intervention.
  • Reflect On: Can we see that events such as the potential creation of ‘killer robots’ ultimately stem from the projection of our collective consciousness, and that we as awakened individuals are empowered to change course?

We have entered a time in our history in which advanced technologies based on Artificial Intelligence (AI) may become increasingly prone to unintended actions that threaten the safety and autonomy of human beings. And those of us who believe in the safety and autonomy of human beings (trust me, most at the top of the current power pyramid don’t) need to become increasingly aware of, and vigilant about, this growing threat.

The arguments for and against the unfettered development of AI and its integration into military capabilities are as follows: those in favor simply point to the increased efficiency and accuracy bestowed by AI applications. However, their unrestrained zeal tends to rest on a rather naive (or feigned) trust that government, corporations and military intelligence will police themselves to ensure that AI is not unleashed into the world in any way that is harmful to human individuals. The other side grounds its fundamental mistrust of current AI development in the well-documented observation that our current corporate, governmental and military leaders each operate according to their own narrow agendas, which give little regard to the safety and autonomy of human beings.

Nobody is arguing against the development of Artificial Intelligence as such, applied in ways that will clearly and incontestably benefit humanity. However, as always, the big money seems to be made available in support of WAR, of one group of humans having dominance and supremacy over another, rather than for applications that will benefit all of humanity and actually help to foster peace on the planet…

Be seeing you



Coming soon to the battlefield: Robots that can kill – Center for Public Integrity

Posted by M. C. on September 4, 2019

So far, U.S. military officials haven’t given machines full control, and they say there are no firm plans to do so. 

So far…

The key remaining issue is whether military commanders will let robots decide to kill, particularly at moments when communication links have been disrupted — a likely occurrence in wartime.

With soldiers out of the danger zone, attack becomes more likely. No more restraint on dodgy encounters of limited or non-existent military value. Not that there is much restraint now.

More shows of force to impress the sheeple, congressional enablers and money people.

Also more likely is a great increase in civilian mortality. There are already more civilian casualties than military ones in modern warfare. This will only get worse.

https://publicintegrity.org/national-security/future-of-warfare/scary-fast/ai-warfare/

Zachary Fryer-Biggs

Wallops Island — a remote, marshy spit of land along the eastern shore of Virginia, near a famed national refuge for horses — is mostly known as a launch site for government and private rockets. But it also makes for a perfect, quiet spot to test a revolutionary weapons technology.

If a fishing vessel had steamed past the area last October, the crew might have glimpsed half a dozen or so 35-foot-long inflatable boats darting through the shallows, and thought little of it. But if crew members had looked closer, they would have seen that no one was aboard: The engine throttle levers were shifting up and down as if controlled by ghosts. The boats were using high-tech gear to sense their surroundings, communicate with one another, and automatically position themselves so, in theory, .50-caliber machine guns that can be strapped to their bows could fire a steady stream of bullets to protect troops landing on a beach.

The secretive effort — part of a Marine Corps program called Sea Mob — was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm. It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield.

Lethal, largely autonomous weaponry isn’t entirely new: A handful of such systems have been deployed for decades, though only in limited, defensive roles, such as shooting down missiles hurtling toward ships. But with the development of AI-infused systems, the military is now on the verge of fielding machines capable of going on the offensive, picking out targets and taking lethal action without direct human input…

“The problem is that when you’re dealing [with war] at machine speed, at what point is the human an impediment?” Robert Work, who served as the Pentagon’s No. 2 official in both the Obama and Trump administrations, said in an interview. “There’s no way a human can keep up, so you’ve got to delegate to machines.”

Every branch of the U.S. military is currently seeking ways to do just that — to harness gargantuan leaps in image recognition and data processing for the purpose of creating a faster, more precise, less human kind of warfare.

The Navy is experimenting with a 135-ton ship named the Sea Hunter that could patrol the oceans without a crew, looking for submarines it could one day attack directly. In a test, the ship has already sailed the 2,500 miles from Hawaii to California on its own, although without any weapons.

Meanwhile, the Army is developing a new system for its tanks that can smartly pick targets and point a gun at them. It is also developing a missile system, called the Joint Air-to-Ground Missile (JAGM), that has the ability to pick out vehicles to attack without human say-so; in March, the Pentagon asked Congress for money to buy 1,051 JAGMs, at a cost of $367.3 million.

And the Air Force is working on a pilotless version of its storied F-16 fighter jet as part of its provocatively named “SkyBorg” program, which could one day carry substantial armaments into a computer-managed battle.

Until now, militaries seeking to cause an explosion at a distant site have had to decide when and where to strike; use an airplane, missile, boat, or tank to transport a bomb to the target; direct the bomb; and press the “go” button. But drones and systems like Sea Mob are removing the human from the transport, and computer algorithms are learning how to target. The key remaining issue is whether military commanders will let robots decide to kill, particularly at moments when communication links have been disrupted — a likely occurrence in wartime…

And so officials in the military services have begun the thorny, existential work of discussing how and when and under what circumstances they will let machines decide to kill.

Be seeing you



5 Scary Things About Artificial Intelligence That Worry Military Brass | Military.com

Posted by M. C. on September 7, 2018

The only thing we know for sure about the military and government: the many laws they ignore will include Isaac Asimov’s.

Guess who will serve the sentence for disobeying those laws.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

And the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

https://www.military.com/daily-news/2018/09/07/5-scary-things-about-artificial-intelligence-worry-military-brass.html

By Gina Harkins

1. Killer robots.

We might be a ways off from a “Terminator”-style nightmare in which a self-thinking computer wages war on the planet. But as the military experiments with more autonomous vehicles and robots, experts are thinking about ways to keep them in check…


Where the Government Fear-Porn Propaganda Industry is Headed – The Daily Bell – and a bonus comment regarding the modern Illuminati

Posted by M. C. on December 3, 2017

It is all too much for us peasants to comprehend and deal with. So we just need to hand over control of our lives–and wallets–to the government. That will keep us safe.

Where the Government Fear-Porn Propaganda Industry is Headed

Sometimes I wish I had no conscience or morals. Then I could go to work for the government writing fear propaganda.

Killer robots are the latest terror, and there are so many takes on them! It’s Joseph Goebbels’ dream: so much material to work with. All the “possibilities” and “predictions” just make you want to curl up in a ball and let the government handle everything!

A spokesman for the Campaign to Stop Killer Robots then warned that deadly tech winding up in the wrong hands would have catastrophic consequences.

