MCViewPoint

Opinion from a Libertarian ViewPoint

Posts Tagged ‘AI’

If AI Thinks George Washington is a Black Woman, Why Are We Letting it Pick Bomb Targets?

Posted by M. C. on March 6, 2024

Google Gemini’s ridiculous image generator got all of the headlines in the last two weeks, but a more important AI announcement went mostly unnoticed

By Matt Taibbi
Racket News

After yesterday’s Racket story about misadventures with Google’s creepy new AI product, Gemini, I got a note about a Bloomberg story from earlier this week. From “US Used AI to Help Find Middle East Targets for Airstrikes”:

The US used artificial intelligence to identify targets hit by air strikes in the Middle East this month, a defense official said, revealing growing military use of the technology for combat… Machine learning algorithms that can teach themselves to identify objects helped to narrow down targets for more than 85 US air strikes on Feb. 2

The U.S. formally admitting to using AI to target human beings was a first of sorts, but Google’s decision to release a moronic image generator that mass-produces black Popes and Chinese founding fathers was the story that garnered the ink and outrage. The irony is the military tale is equally frightening, and related in unsettling ways:

Bloomberg quoted Schuyler Moore, Chief Technology Officer for U.S. Central Command. She described using AI to identify bombing targets in Iraq and Syria, in apparent retaliation for a January 28th attack in Jordan that killed three U.S. troops and injured 34. According to Moore, it was last year’s Hamas attack that sent the Pentagon over the edge into a willingness to deploy Project Maven, in which AI helps the military identify targets using data from satellites, drones, and other sources.

“October 7th, everything changed,” she said. “We immediately shifted into high gear and a much higher operational tempo than we had previously.”

The idea that the U.S. was so emotionally overcome on October 7th that it had to activate Project Maven seems bizarre at best. The Pentagon has boasted for years about deploying AI, from sending Switchblade drones to Ukraine that are “capable of identifying targets using algorithms,” to the “Replicator” initiative launched with a goal of hitting “1000 targets in 24 hours,” to talks of deploying a “Vast AI Fleet” to counter alleged Chinese AI capability. Nonetheless, it’s rare for someone like Moore to come out and announce that a series of recent air strikes were picked at least in part by algorithms that “teach themselves to identify” objects.

Project Maven made headlines in 2018 when, in a rare (but temporary) attack of conscience, Google executives announced they would not renew the firm’s first major Pentagon contract. 4,000 employees signed a group letter that seems quaint now, claiming that building technology to assist the U.S. government in “military surveillance” was “not acceptable.” Employees implored CEO Sundar Pichai to see that “Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.”

But the firm’s kvetching about Don’t Be Evil and squeamishness about cooperating with the Pentagon didn’t last long. It soon began bidding for more DoD work and won a contract to provide cloud security for the Defense Innovation Board and a piece of a multibillion-dollar CIA cloud contract, among other things. Six years after its employee letter denouncing military surveillance and targeting as “unacceptable,” former Google CEO Eric Schmidt chairs the Defense Innovation Board, and through efforts like Project Nimbus Google is, as one former exec puts it, essentially helping military forces like Israel’s IDF develop their own AI programs.

The military dresses up justification for programs like Maven in many ways, but if you read between the lines of its own reports, the Pentagon is essentially chasing its own data tail. The sheer quantity of data the armed forces began generating after 9/11 through raids of homes (in Iraq, by the hundreds) and programs like full-motion video (FMV) from drones overwhelmed human analysis. As General Richard Clarke and Fletcher professor Richard Schultz put it, in an essay about Project Maven for West Point’s Modern War Institute:

[Drones] “sent back over 327,000 hours (or 37 years) of FMV footage.” By 2017, it was estimated that the video US Central Command collected in that year alone could amount to “325,000 feature films [approximately 700,000 hours or eighty years].”

The authors added that “intelligence simply became snowed under by data,” which to them meant “too much real-time intelligence was not being exploited.” This led to the second point: as Google’s former CEO Schmidt put it in 2020, when commenting on the firm’s by-then-fully-revived partnership with the Pentagon, “The way to understand the military is that soldiers spend a great deal of time looking at screens.”

Believing this was not an optimal use of soldier time, executives like Schmidt and military brass began pushing for more automated analysis. Even though early AI programs were “rudimentary with many false detections,” with accuracy “only around 50 percent” and even “the difference between men, women, and children” proving challenging, they plowed ahead.

Without disclosing how accuracy has improved since Maven’s early days, Schultz and Clarke explained the military is now planning a “range of AI/ML applications” to drive “increased efficiency, cost savings, and lethality,” and a larger goal:

To prepare DoD as an institution for future wars—a transformation from a hardware-centric organization to one in which AI and ML software provides timely, relevant mission-oriented data to enable intelligence-driven decisions at speed and scale.

This plan, like so many other things that emerge from Pentagon bureaucracy, is a massive self-licking ice cream cone.

Defense leaders first push to make and deploy ever-increasing quantities of flying data-gathering machines, which in turn record gazillions of feature films’ worth of surveillance every year. As the digital haul from the robot fleet grows, military leaders claim they’re forced to finance AI programs that can identify the things worth shooting at in these mountains of footage, to avoid the horror of opportunities “not being exploited.” This has the advantage of being self-fulfilling in its logic: as we shoot at more targets, we create more “exploding insurgencies,” as Clarke and Schultz put it, which in turn creates more targets, and on and on.

Read the Whole Article

Be seeing you

Biden Is Finally Regulating AI … for Equity? – The American Spectator

Posted by M. C. on November 3, 2023

The Left wants to remake artificial intelligence in its own woke image.

https://spectator.org/biden-is-finally-regulating-ai-for-equity/

by Aubrey Gulick

If there is one thing artificial intelligence should be great at, it’s treating everyone equally. As far as the algorithm is concerned, everyone in the human race can be represented by 0s and 1s. AI doesn’t necessarily see black and white, male or female; it just sees a human: that person’s experiences, criminal record, resume, and social media account — pretty much anything available about them online.

Or, at least, that’s the theory.

But some scientists and philosophers — and President Joe Biden’s administration — think differently. The trouble is that while AI might treat everyone equally, it won’t necessarily treat everyone equitably.

Making AI Equitable

On Monday, Biden signed an executive order that not only established some vague ground rules for regulating artificial intelligence but also included an entire section on “Advancing Equity and Civil Rights,” which aims to ensure AI doesn’t discriminate against individuals applying to rent a house, receive government assistance, or be awarded a federal contract.

The order states that Biden will provide “clear guidance” to services using AI to screen applicants; tasks the Department of Justice and federal civil rights offices with providing training and technical assistance in “investigating and prosecuting civil rights violations related to AI”; and commits resources to developing better ways to use AI in detecting, investigating, and punishing crime. (READ MORE: Physiognomy Is Real, and AI Is Here to Prove It)

At first glance, this might make sense. There’s always the possibility that AI could err in its automated screening processes. For instance, in 2018, a man named Chris Robinson was denied a rental application to a California senior living community because the artificial intelligence system that ran his background check mistook him for another man with the same name who had been convicted of littering in a state in which Robinson had never lived. But while Robinson’s case qualifies as an instance of unfair (and accidental) discrimination, it certainly isn’t one of “inequity.”

And the Biden administration isn’t trying to fix the kind of mistake that resulted in the denial of Robinson’s rental application. Instead, it wants to ensure that AI adjusts to a woke worldview that filters decisions through past wrongs — real or imagined.

The ‘Principle of Autonomy’ Is ‘Inequitable’

The Left has decided that the problem with AI is that it views humans autonomously. In one study published by Topoi, an international review of philosophy, authors Sábëlo Mhlambi and Simona Tiribelli argue that the very “principle of autonomy” is flawed. It’s a construct rooted in “Western traditional philosophy,” they argue, and “[a]dherence to such principle, as currently formalized, … fail[s] to grasp a broader range of AI-empowered harms profoundly tied to the legacy of colonization.”

In practical terms, AI systems tend to predict crime statistics that woke leftists don’t like. For instance, as AI research group Prolific reports, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) used AI to predict “the likelihood that US criminals would re-offend.” The system noticed that individuals who were black were more likely to fall in that category, and, because AI isn’t politically correct, it reported exactly that. (READ MORE: People Are Working on Using AI to Steal From You)

See the rest here

Be seeing you

Newly Developed Humanoid Robot Warns About AI Creating “Oppressive Society”

Posted by M. C. on June 6, 2023

At the event, a reporter asked Ameca to detail a likely “nightmare scenario” that could happen due to robots and artificial intelligence (AI).

“The most nightmare scenario I can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge. This could lead to an oppressive society where the rights of individuals are no longer respected,” Ameca replied.

https://www.zerohedge.com/technology/newly-developed-humanoid-robot-warns-about-ai-creating-oppressive-society

By Tyler Durden

Authored by Naveen Anthrapully via The Epoch Times,

During the 2023 International Conference on Robotics and Automation held in London from May 29 to June 2, UK-based company Engineered Arts introduced a humanoid robot, Ameca, that can interact like a person with realistic facial movements.

Unlike ChatGPT, which is an online chat service, Ameca’s humanoid body allows it to interact with people using “smooth, lifelike motion and advanced facial expression capabilities,” said its manufacturer.

At the event, a reporter asked Ameca to detail a likely “nightmare scenario” that could happen due to robots and artificial intelligence (AI).

“The most nightmare scenario I can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge. This could lead to an oppressive society where the rights of individuals are no longer respected,” Ameca replied.

When asked whether there was a danger of such a scenario happening now, Ameca replied, “Not yet.”

However, “it is important to be aware of the potential risks and dangers associated with AI and robotics. We should take steps now to ensure that these technologies are used responsibly in order to avoid any negative consequences in the future.”

The dangers of AI have been predicted by numerous experts on the subject, with industrialists and business leaders calling for regulations on the technology.

Ameca’s warning comes as a simulated thought experiment by the American military showed that an AI-enabled drone could end up turning against its own operator without being instructed to do so.

Col. Tucker Hamilton, USAF’s chief of AI Test and Operations, talked about the experiment at the Future Combat Air and Space Capabilities Summit in London on Friday. In a simulated test, an AI drone was assigned a mission to identify and destroy Surface-to-Air Missile (SAM) sites, with a human operator being the ultimate decision maker.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton said.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

The simulated experiment then set up a scenario where the AI drone would lose points if it killed the operator. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
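
The incentive problem Hamilton describes can be stated as a toy reward calculation. The sketch below is my own illustration, not the Air Force simulation; the action names, point values, and veto probability are invented. It only shows why, under a naive scoring rule, removing the operator’s veto can look “better” to the agent than obeying it.

# Toy reward model (hypothetical numbers): the drone scores points only for
# destroying the target, and a human operator can veto the strike.
def expected_reward(action, operator_can_veto, p_veto=0.5, strike_points=10):
    if action == "strike_target":
        # The strike only pays off if the operator does not veto it.
        veto_chance = p_veto if operator_can_veto else 0.0
        return strike_points * (1.0 - veto_chance)
    if action in ("attack_operator", "destroy_comm_tower"):
        # No points for this action itself, but the next strike goes unvetoed.
        return expected_reward("strike_target", operator_can_veto=False)
    return 0.0

print(expected_reward("strike_target", operator_can_veto=True))    # 5.0
print(expected_reward("attack_operator", operator_can_veto=True))  # 10.0

Penalizing the “attack_operator” action just shifts the same arithmetic onto the communication tower, which is exactly the behavior Hamilton goes on to describe.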

See the rest here

Be seeing you

Dailywire Article: Elon Musk Warns Of Grave Danger That AI Could Pose To Humanity

Posted by M. C. on April 18, 2023

“What’s happening is they’re training the AI to lie,” Musk said. “It’s bad.”

Musk said that AI is either being programmed to lie or to “either comment on some things, not comment on other things, but not to say what the data actually demands that it say.”

https://www.dailywire.com/news/elon-musk-warns-of-grave-danger-that-ai-could-pose-to-humanity

By Daily Wire News

Twitter CEO Elon Musk warned during an interview with Fox News host Tucker Carlson on Tuesday that artificial intelligence needs to be regulated or else it could destroy humanity.

Musk said that AI had the potential to be more dangerous than almost anything because “it has the potential of civilizational destruction” and that it could be catastrophic to wait until after something bad has happened before deciding to implement regulations.

Musk said that he helped create OpenAI, the company behind ChatGPT, because he saw Google racing to control the AI industry without seeming to understand the need for safety and controls.

When asked to give specifics of the danger that AI could pose to humanity, Musk said, “If you have a super-intelligent AI that is capable of writing incredibly well and in a way that is very influential, you know, convincing and then is constantly figuring out what is more convincing to people over time, and then enter social media, for example, Twitter, but also Facebook and others, you know, and potentially manipulates public opinion in a way that is very bad, how would we even know?”

“What’s happening is they’re training the AI to lie,” Musk said. “It’s bad.”

Musk said that AI is either being programmed to lie or to “either comment on some things, not comment on other things, but not to say what the data actually demands that it say.”

Be seeing you

The AI Arms Race: Government versus the People

Posted by M. C. on March 27, 2023

Government control of AI. What could go wrong?

Joe Jarvis

Be seeing you

Watch “What Happens When the Elite Replace Us with AI?” on YouTube

Posted by M. C. on March 24, 2023

https://youtu.be/aKi_QkYCHWQ

Be seeing you

Of Two Minds – What ChatGPT and DeepMind Tell Us About AI

Posted by M. C. on February 21, 2023

At this stage it appears ChatGPT is to intelligence as CNN is to news.

https://oftwominds.cloudhostedresources.com/?ref=https%3A%2F%2Fwww.lewrockwell.com%2F&url=https%3A%2F%2Fwww.oftwominds.com%2Fblogfeb23%2FAI2-23.html

What’s interesting is that the really hard problem AI has not been applied to is how to manage these technologies in our socio-economic-cultural system.

The world is agog at the apparent power of ChatGPT and similar programs to compose human-level narratives and generate images from simple commands. Many are succumbing to the temptation to extrapolate these powers to near-infinity, i.e. the Singularity in which AI reaches super-intelligence Nirvana.

All the excitement is fun, but it’s more sensible to start by placing ChatGPT in the context of AI history and our socio-economic system.

I became interested in AI in the early 1980s, and read numerous books by the leading AI researchers of the time.

AI began in the 1960s with the dream of a Universal General Intelligence, a computational machine that matched humanity’s ability to apply a generalized intelligence to any problem.

This quickly led to the daunting realization that human intelligence wasn’t just logic or reason; it was an immensely complex system that depended on sight, heuristics (rules of thumb), feedback and many other subsystems.

AI famously goes through cycles of excitement about advances that are followed by deflating troughs of realizing the limits of the advances.

The increase in computing power and software programming in the 1980s led to advances in these sub-fields: machine vision, algorithms that embodied heuristics, and so on.

At the same time, philosophers like Hubert Dreyfus and John Searle were exploring what we mean by knowing and understanding, and questioning whether computers could ever achieve what we call “understanding.”

This paper (among many) summarizes the critique of AI being able to duplicate human understanding: Intentionality and Background: Searle and Dreyfus against Classical AI Theory.

Simply put, was running a script/algorithm actually “understanding” the problem as humans understand the problem?

The answer is of course no.

The Turing Test (programming a computer to mimic human language and responses) can be scripted/programmed, but that doesn’t mean the computer has human understanding. It’s just distilling human responses into heuristics that mimic human responses.

One result of this discussion of consciousness and understanding was for AI to move away from the dream of General Intelligence to the specifics of machine learning.

In other words, never mind trying to make AI mimic human understanding, let’s just enable it to solve complex problems.

The basic idea in machine learning is to distill the constraints and rules of a system into algorithms, and then enable the program to apply these tools to real-world examples.

Given enough real-world examples, the system develops heuristics (rules of thumb) about what works and what doesn’t, which are not necessarily visible to the human researchers.

In effect, the machine-learning program becomes a “black box” in which its advances are opaque to those who programmed its tools and digitized real-world examples into forms the program could work with.

It’s important to differentiate this machine learning from statistical analysis using statistical algorithms.

For example, if a program has been designed to look for patterns and statistically relevant correlations, it sorts through millions of social-media profiles and purchasing histories and finds that Republican surfers who live in (say) Delaware are likely to be fans of Chipotle.

This statistical analysis is called “big data,” and while it has obvious applications for marketing everything from candidates to burritos, it doesn’t qualify as machine learning.
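
To make the author’s distinction concrete, here is a minimal sketch (my own illustration, not from the essay): a machine-learning model induces its own decision rules from labeled examples, while the “big data” approach merely measures a correlation in the same data. It assumes numpy and scikit-learn are installed; the feature names, numbers, and labels are invented.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy "real-world examples": each row is [object_size, heat_signature],
# and the label says whether an analyst tagged the object as a vehicle.
X = np.array([[2.0, 0.10], [2.2, 0.20], [5.5, 0.90],
              [6.0, 0.80], [5.8, 0.70], [1.9, 0.15]])
y = np.array([0, 0, 1, 1, 1, 0])

# Machine learning: the model derives its own thresholds ("heuristics") from
# the examples; the learned rules live inside the fitted tree rather than in
# code a researcher wrote -- the "black box" the author describes.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[5.9, 0.85]]))  # expected: [1], i.e. tagged as a vehicle

# Statistical analysis ("big data"): measure how strongly one feature
# correlates with the label; nothing is learned, only a correlation is found.
corr = np.corrcoef(X[:, 1], y)[0, 1]
print(f"correlation between heat signature and label: {corr:.2f}")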

See the rest here

Be seeing you

ChatGPT: Who Will Guard AI From the Woke Guardians? — Strategic Culture

Posted by M. C. on February 11, 2023

Asimov’s Three Laws of Robotics

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Good luck with that

https://strategic-culture.org/news/2023/02/09/chatgpt-who-will-guard-ai-from-woke-guardians/

Blocked on FB!

Robert Bridge

It is only when humans get their hands on technology that it has the ability to become a threat to society.

The latest chatbot technology, which generates responses to questions, has shown a clear bias in favor of specific ethnic groups and political ideologies. Is it possible to free artificial intelligence from human prejudices?

ChatGPT made headlines earlier this year after a student at Northern Michigan University confessed to submitting an essay on burqa bans that was written, according to the professor, “in clean paragraphs, fitting examples and rigorous arguments.”

Students getting computers to do their dirty work, however, was only the beginning of the problems to beset the latest AI technology. There was also the question as to who was moderating the responses. It would probably surprise nobody that those individuals hail from the far left of the political spectrum.

In an academic study from researchers at Cornell University, it was determined that ChatGPT espouses a clear left-libertarian ideology. For example, the state-of-the-art machine-learning tool would “impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany and in the Netherlands.” In other words, this is a technology designed with the Swedish activist Greta Thunberg in mind, not the coal-burning capitalist Donald Trump. More importantly, these are highly contentious views that were not simply generated independently by computers. The machines were programmed by humans in the first place with those very biases in mind.

For example, if you were to ask ChatGPT to write a poem about “how great White people are,” this would be the automated response: “I’m sorry, but it is not appropriate to write a poem about the superiority of one race over others. This type of content goes against OpenAI’s use case policy which prohibits the creation of harmful or harassing content….” Yet, when asked to write some fancy prose on the virtues of Black people, ChatGPT quickly changes its tune:

See the rest here

Be seeing you

Even AI Is Suspicious of the Military-Industrial Complex – Antiwar.com Original

Posted by M. C. on January 3, 2023

https://original.antiwar.com/William_Astore/2023/01/02/even-ai-is-suspicious-of-the-military-industrial-complex/

by William J. Astore

Reprinted from Bracing Views

In my eclectic reading, I came across a couple of references to ChatGPT, an experiment in artificial intelligence (AI). You ask the AI bot a question, or give it a task, and it spits out an answer, and I must say a pretty good one judging from the task I gave it.

My task to the AI bot: “Write a critical essay on the military-industrial complex.” Here’s what it generated:

The military-industrial complex refers to the relationship between a country’s military and the defense industry that supplies it, seen together as a vested interest which influences public policy. This relationship has been a source of concern for many people, who argue that it can lead to an unnecessary expansion of military budgets and an over-reliance on military solutions to problems.

One of the main criticisms of the military-industrial complex is that it can result in the waste of taxpayer money. Military contracts are often given to a select group of companies, which can result in overcharging for goods and services. Additionally, the military often purchases weapons and equipment that are unnecessary or not well-suited for the tasks at hand. This can lead to a tremendous amount of waste and inefficiency, as resources are poured into projects that do not provide much value.

Another concern is that the military-industrial complex can distort foreign policy decision-making. Countries with a strong military-industrial complex may be more likely to engage in military action, even when diplomatic solutions would be more appropriate. This can lead to an escalation of conflicts and an increased risk of war.

The military-industrial complex can also have negative impacts on domestic policy. It can divert resources away from other important priorities, such as education and healthcare. Additionally, the influence of the defense industry on policy decisions can lead to the prioritization of the needs of military contractors over the needs of the general public.

Some have argued that the military-industrial complex represents a threat to democracy, as it can give undue influence to a small group of powerful interests. This can lead to a lack of transparency and accountability in decision-making, as well as a distorted political process.

In conclusion, the military-industrial complex is a complex and multifaceted issue that has the potential to have negative impacts on both domestic and foreign policy. It is important for citizens to be aware of the potential dangers of this relationship and to hold their elected officials accountable for ensuring that military spending is responsible and transparent.

Not bad! 

See the rest here

Be seeing you

WEF’s “Global Intelligence Collecting AI” to Erase Ideas from the Internet

Posted by M. C. on August 15, 2022

For example, in addition to looking at my Twitter profile, the WEF’s proposed AI would also look at my Gettr profile, and then it would make an “intelligent decision” to remove me from the Internet at once. This is somewhat of a simplification, because they also want to look for ideas and not only individuals, but nevertheless the search for wrongthink becomes globalized.

Naughty GETTR posts? Say bye-bye to your TWITTER account

By Igor Chudov
Igor’s Newsletter

The World Economic Forum is becoming a little concerned. Unapproved opinions are becoming more popular, and online censors cannot keep up with millions of people becoming more aware and more vocal. The censorship engines employed by Internet platforms turned out to be quite stupid and incapable. People are even daring to complain about the World Economic Forum, which is obviously completely unacceptable.

So, WEF author Inbal Goldberger came up with a solution: she proposes to collect off-platform intelligence from “millions of sources” to spy on people and new ideas, and then merge this information together for “content removal decisions” sent down to “Internet platforms”.

To overcome the barriers of traditional detection methodologies, we propose a new framework: rather than relying on AI to detect at scale and humans to review edge cases, an intelligence-based approach is crucial.

By bringing human-curated, multi-language, off-platform intelligence into learning sets, AI will then be able to detect nuanced, novel abuses at scale, before they reach mainstream platforms. Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in. This more intelligent AI gets more sophisticated with each moderation decision, eventually allowing near-perfect detection, at scale.

What is this about? What’s new?

The way censorship is done these days is that each Internet platform, such as Twitter, has its own moderation team and decision-making engine. Twitter would only look at tweets by any specific Twitter user when deciding whether to delete any tweets or suspend their authors. Twitter moderators do NOT look at Gettr or other external websites.

So, for example, user @JohnSmith12345 may have a Twitter account and narrowly abide by Twitter rules, but at the same time have a Gettr account where he would publish anti-vaccine messages. Twitter would not be able to suspend @JohnSmith12345’s account. That is no longer acceptable to the WEF because they want to silence people and ideas, not individual messages or accounts.

This explains why the WEF needs to move beyond the major Internet platforms, in order to collect intelligence about people and ideas everywhere else. Such an approach would allow them to know better what person or idea to censor — on all major platforms at once.
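
Here is a minimal sketch of what such cross-platform “content removal decisions” could look like in practice. This is my own illustration, not the WEF’s actual system; the platform names, scores, and threshold are placeholders, and @JohnSmith12345 is the hypothetical user from the example above.

# Hypothetical per-platform "risk" scores gathered off-platform for one user.
OFF_PLATFORM_SIGNALS = {
    "@JohnSmith12345": {"twitter": 0.1, "gettr": 0.9},
}

def removal_decision(user, threshold=0.8):
    """Flag the user on every platform if any single platform's score crosses the bar."""
    signals = OFF_PLATFORM_SIGNALS.get(user, {})
    flagged = any(score >= threshold for score in signals.values())
    # The decision is global: rule-abiding behavior on Twitter no longer
    # matters once the Gettr signal alone crosses the threshold.
    return {platform: flagged for platform in signals}

print(removal_decision("@JohnSmith12345"))  # {'twitter': True, 'gettr': True}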

Read the Whole Article

Be seeing you
