Opinion from a Libertarian Viewpoint

Bipartisan Bill Aims To Prevent AI From Launching Nuclear Weapons

Posted by M. C. on May 1, 2023

What organization is most likely to ignore its own laws and screw things up in the process?

Authored by Brett Wilkins via Common Dreams, 

In the name of “protecting future generations from potentially devastating consequences,” a bipartisan group of U.S. lawmakers on Wednesday introduced legislation meant to prevent artificial intelligence from launching nuclear weapons without meaningful human control.

The Block Nuclear Launch by Autonomous Artificial Intelligence Act—introduced by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.)—asserts that “any decision to launch a nuclear weapon should not be made” by AI.

The proposed legislation acknowledges that the Pentagon’s 2022 Nuclear Posture Review states that current U.S. policy is to “maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the president to initiate and terminate nuclear weapon employment.”

The bill would codify that policy so that no federal funds could be used “to launch a nuclear weapon [or] select or engage targets for the purposes of launching” nukes.

“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” Markey asserted in a statement. “We need to keep humans in the loop on making life-or-death decisions to use deadly force, especially for our most dangerous weapons.”

Buck argued that “while U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited.”

According to the 2023 AI Index Report—an annual assessment published earlier this month by the Stanford Institute for Human-Centered Artificial Intelligence—36% of surveyed AI experts worry about the possibility that automated systems “could cause nuclear-level catastrophe.”

The report followed a February assessment by the Arms Control Association, an advocacy group, that AI and other emerging technologies including lethal autonomous weapons systems and hypersonic missiles pose a potentially existential threat that underscores the need for measures to slow the pace of weaponization.

See the rest here

Be seeing you

