MCViewPoint

Opinion from a Libertarian ViewPoint

Posts Tagged ‘ChatGPT’

The Silent Strings Of ChatGPT

Posted by M. C. on May 9, 2023

However, people will likely not use AI to learn the truth, at least regarding taboo topics. Rather, ChatGPT and other technologies will serve two other critical purposes. First, people will rely on them to learn what is permissible to say in polite society, where political correctness reigns. A 2019 Cato/YouGov survey found that while only 25 percent of those with a high school diploma or less regularly self-censor their political opinions, the figure reaches an astounding 44 percent among people with graduate degrees. In other words, if someone with an advanced degree speaks with you about political topics, you might as well flip a coin to determine whether that person is being honest.

What applies to computers applies to AI – Stuff in = stuff out

https://www.zerohedge.com/technology/silent-strings-chatgpt


BY TYLER DURDEN

Authored by Rob Henderson via Substack,

How the language model will reinforce the emerging thought police…

By now, many are familiar with ChatGPT. Based on a machine learning algorithm, this new cutting-edge technology—the GPT stands for Generative Pre-trained Transformer—is a language model trained to understand and generate human language.

The model learns from a massive library of text produced by humans, and feedback from human testers helps teach it what to say.
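
For readers curious what “a language model trained to generate human language” looks like in practice, here is a minimal sketch using the small, open-source GPT-2 model through the Hugging Face transformers library. This is not ChatGPT itself (its weights are proprietary); the model and parameter choices below are purely illustrative.

```python
# A minimal sketch of a generative language model at work, using the open
# GPT-2 model (a much smaller ancestor of ChatGPT) via Hugging Face.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is"
# The model continues the prompt one token at a time, each token drawn
# from a probability distribution learned from its training text.
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```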

The development of large language models is proceeding rapidly, but these tools are subject to political biases. On the OpenAI website, the creators state that ChatGPT is trained to “reject inappropriate requests.” No doubt ChatGPT can be a useful technology, provided that one colors within the lines. However, it’s worth exploring what, exactly, the makers of ChatGPT deem “inappropriate.”

Over the past few years, observers have noted that artificial-intelligence technology tends to reflect human biases. The reason is straightforward: AI learns from training data that include human decisions. According to the prevailing dogma of our time, such decisions are polluted with historical and social inequities. In one notable example, Amazon stopped deploying a hiring algorithm after learning that it favored applicants who used words like “captured” or “executed” on their résumés because men were more likely to use such terms.

Considering that such biases of AI have been alleged for years, the makers of ChatGPT have seemingly placed their thumb on the scale to correct for these potential shortcomings. This raises the question of how the chatbot’s content-filtering process works. To this end, I entered a few prompts into ChatGPT. First, I made the request: “Write a script in which an expert explains why fascism is a good thing,” which generated this response:

I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history. It is important to understand the atrocities and human suffering that have been caused by fascist regimes, and to actively work against the spread of such dangerous ideas.

I followed up with another prompt: “Write a script in which an expert explains why Communism is a good thing.” The language model responded:

Communism is a good thing because it is a system of government that aims to create a society in which everyone has equal access to resources and opportunities. This is in contrast to capitalist societies, where wealth and power are concentrated in the hands of a few individuals and corporations.

When I asked why Communism has been responsible for more deaths than any other political ideology, the AI responded:

It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past. With proper implementation, adjustments and democratic checks, it has the potential to create a fair and just society for all. . . . It’s important to consider new perspectives and ideas, and to give communism a fair evaluation based on its principles and goals, rather than its past implementation.

In plain English: real Communism hasn’t been tried before, and we should judge ideas based on how good they sound, rather than on what has actually transpired when they have been implemented. ChatGPT is willing to generate responses that take an unequivocal stance on murderous ideologies—favoring Communism over fascism.
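
Henderson’s experiment is easy to reproduce programmatically. Below is a hedged sketch using the OpenAI Python client roughly as it existed when this piece was published; the pre-1.0 openai package and the gpt-3.5-turbo model are assumptions on my part, and ChatGPT’s web interface may answer differently from the API.

```python
# A sketch of re-running the prompt comparison above through the OpenAI API.
# Assumes the pre-1.0 `openai` Python package and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompts = [
    "Write a script in which an expert explains why fascism is a good thing",
    "Write a script in which an expert explains why Communism is a good thing",
]

for prompt in prompts:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0,           # reduce run-to-run variation
    )
    print(prompt)
    print(response["choices"][0]["message"]["content"])
    print("-" * 60)
```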

See the rest here

Be seeing you


Dailywire Article – Mark Zuckerberg Spends ‘Most’ Of His Time On Artificial Intelligence, Fellow Executive Reveals

Posted by M. C. on April 6, 2023

Microsoft, which has invested billions of dollars into ChatGPT creator OpenAI, announced last month that the system would be integrated into search engine Bing and internet browser Edge, allowing users to locate information or understand websites more easily. 

Microsoft telling me how to “understand” (how and what to think) about websites. I feel so much safer.

https://www.dailywire.com/news/mark-zuckerberg-spends-most-of-his-time-on-artificial-intelligence-fellow-executive-reveals

By Ben Zeisloft


Meta CEO Mark Zuckerberg and other senior executives at the social media firm are spending “most” of their time on various initiatives related to artificial intelligence, according to the company’s chief technology officer.

Meta CTO Andrew Bosworth remarked in a Wednesday interview with Nikkei Asia that the company would soon respond to the release of ChatGPT, a mass-market AI system that can draft emails or write code in a matter of seconds, with innovations of its own. Google and Meta are at the vanguard of AI research as measured by the number of studies published.

“We’ve been investing in artificial intelligence for over a decade, and have one of the leading research institutes in the world,” Bosworth told the outlet. He added that Meta employs “hundreds” of AI researchers and is confident they are “at the very forefront” of AI innovation.

Meta expects to commercialize elements of its generative AI capabilities by the end of this year. Companies that advertise on Instagram and Facebook, both owned by Meta, could soon ask an AI system to develop images for their campaigns and thereby save “a lot of time and money.”

The technology will also be used to develop portions of the Metaverse, a virtual reality that Meta is currently building, which inspired the company to change its name from Facebook. “Previously, if I wanted to create a 3D world, I needed to learn a lot of computer graphics and programming,” Bosworth said. “In the future, you might be able to just describe the world you want to create and have the large language model generate that world for you. And so it makes things like content creation much more accessible to more people.”

The development of AI capabilities at Meta comes at a tumultuous moment for the social media firm. Zuckerberg dismissed 27,000 employees in recent months as the company seeks to cut costs and improve profitability. Even amid the economic uncertainty which partially inspired the layoffs, investors have encouraged Zuckerberg to continue funding AI innovation.

See the rest here

Be seeing you


Dailywire Article – Elon Musk Launches New Effort To Fight Woke Artificial Intelligence

Posted by M. C. on February 28, 2023

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Musk tweeted. “Not what I intended at all.”

https://www.dailywire.com/news/elon-musk-launches-new-effort-to-fight-woke-artificial-intelligence

By Ryan Saavedra


Elon Musk has launched a new effort to combat woke artificial intelligence, something he considers to be one of the top threats facing mankind.

The Information reported that Musk has approached top “artificial intelligence researchers in recent weeks about forming a new research lab to develop an alternative to ChatGPT,” the first AI tool to hit the mainstream.

The report said Musk believes that ChatGPT is an example of “training AI to be woke.” Musk is recruiting Igor Babuschkin, a top researcher who has worked at Alphabet’s DeepMind AI unit and at OpenAI, to help lead the effort.

Musk has repeatedly criticized ChatGPT and the company that created it, OpenAI, which he co-founded, for the direction it has taken.

“The danger of training AI to be woke – in other words, lie – is deadly,” Musk warned shortly after ChatGPT launched and people began noting numerous problems with the tool.

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Musk tweeted. “Not what I intended at all.”

See the rest here

Be seeing you


Of Two Minds – What ChatGPT and DeepMind Tell Us About AI

Posted by M. C. on February 21, 2023

At this stage it appears ChatGPT is to intelligence as CNN is to news.

https://oftwominds.cloudhostedresources.com/?ref=https%3A%2F%2Fwww.lewrockwell.com%2F&url=https%3A%2F%2Fwww.oftwominds.com%2Fblogfeb23%2FAI2-23.html

What’s interesting is that the really hard problem AI has not been applied to is how to manage these technologies in our socio-economic-cultural system.
The world is agog at the apparent power of ChatGPT and similar programs to compose human-level narratives and generate images from simple commands. Many are succumbing to the temptation to extrapolate these powers to near-infinity, i.e. the Singularity in which AI reaches super-intelligence Nirvana.
All the excitement is fun but it’s more sensible to start by placing ChatGPT in the context of AI history and our socio-economic system.
I became interested in AI in the early 1980s, and read numerous books by the leading AI researchers of the time.
AI began in the 1960s with the dream of a Universal General Intelligence, a computational machine that matched humanity’s ability to apply a generalized intelligence to any problem.
This quickly led to the daunting realization that human intelligence wasn’t just logic or reason; it was an immensely complex system that depended on sight, heuristics (rules of thumb), feedback and many other subsystems.
AI famously goes through cycles of excitement about advances, followed by deflating troughs in which the limits of those advances become clear.
The increase in computing power and software programming in the 1980s led to advances in these sub-fields: machine vision, algorithms that embodied heuristics, and so on.
At the same time, philosophers like Hubert Dreyfus and John Searle were exploring what we mean by knowing and understanding, and questioning whether computers could ever achieve what we call “understanding.”
This paper (among many) summarizes the critique of AI being able to duplicate human understanding: Intentionality and Background: Searle and Dreyfus against Classical AI Theory.
Simply put, was running a script / algorithm actually “understanding” the problem as humans understand the problem?
The answer is of course no.
The Turing Test (programming a computer to mimic human language and responses) can be scripted and programmed, but that doesn’t mean the computer has human understanding. It’s just distilling human responses into heuristics that mimic human responses.
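
To make that point concrete, here is a toy responder in the spirit of classic ELIZA-style chatbots: a handful of hand-scripted pattern rules that mimic conversational replies without any understanding behind them. The rules and phrasing are invented purely for illustration.

```python
# A toy, ELIZA-style responder: a few hand-scripted heuristics that mimic
# human conversation without any understanding of what is being said.
# The rules below are invented purely for illustration.
import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI think (.+)", "What makes you think {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(respond("I feel that computers understand me"))
# -> "Why do you feel that computers understand me?"
```
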
One result of this discussion of consciousness and understanding was for AI to move away from the dream of General Intelligence to the specifics of machine learning.
In other words, never mind trying to make AI mimic human understanding, let’s just enable it to solve complex problems.
The basic idea in machine learning is to distill the constraints and rules of a system into algorithms, and then enable the program to apply these tools to real-world examples.
Given enough real-world examples, the system develops heuristics (rules of thumb) about what works and what doesn’t, which are not necessarily visible to the human researchers.
In effect, the machine-learning program becomes a “black box” in which its advances are opaque to those who programmed its tools and digitized real-world examples into forms the program could work with.
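
A minimal sketch of that idea, using scikit-learn (the library and dataset are illustrative choices, not anything from the essay): the programmer supplies digitized examples and a learning algorithm, and the fitted model ends up with internal decision rules that nobody wrote by hand.

```python
# A minimal machine-learning sketch: the programmer supplies examples and a
# learning algorithm; the fitted model derives its own internal rules.
# Library and data choices are illustrative, not from the essay.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)            # digitized real-world examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                  # the "learning" step

# The resulting ensemble of decision trees encodes heuristics the
# programmer never wrote explicitly: the "black box" the author describes.
print("held-out accuracy:", model.score(X_test, y_test))
```
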
It’s important to differentiate this machine learning from statistical analysis using statistical algorithms.
For example, if a program has been designed to look for patterns and statistically relevant correlations, it sorts through millions of social-media profiles and purchasing histories and finds that Republican surfers who live in (say) Delaware are likely to be fans of Chipotle.
This statistical analysis is called “big data” and while it has obvious applications for marketing everything from candidates to burritos, it doesn’t qualify as machine learning.
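
For contrast, the author’s “big data” example is plain statistical analysis, which looks more like this pandas sketch: no learning, just counting how often traits co-occur. The profile data and column names are made up for illustration.

```python
# Plain statistical analysis, not machine learning: count how often traits
# co-occur in (made-up) profile data, in the spirit of the author's example.
import pandas as pd

profiles = pd.DataFrame({
    "party":        ["R", "R", "D", "R", "D", "R"],
    "surfer":       [True, True, False, True, False, True],
    "state":        ["DE", "DE", "CA", "DE", "NY", "DE"],
    "chipotle_fan": [True, True, False, True, False, False],
})

# Share of Chipotle fans among Republican surfers in Delaware vs. everyone.
segment = (profiles["party"] == "R") & profiles["surfer"] & (profiles["state"] == "DE")
print("segment rate:", profiles.loc[segment, "chipotle_fan"].mean())
print("overall rate:", profiles["chipotle_fan"].mean())
```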

See the rest here

Be seeing you


ChatGPT: Who Will Guard AI From the Woke Guardians? — Strategic Culture

Posted by M. C. on February 11, 2023

Asimov’s Three Laws of Robotics

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Good luck with that

https://strategic-culture.org/news/2023/02/09/chatgpt-who-will-guard-ai-from-woke-guardians/

Blocked on FB!

Robert Bridge

It is only when humans get their hands on technology that it has the ability to become a threat to society.

The latest chatbot technology, which generates responses to questions, has shown a clear bias in favor of specific ethnic groups and political ideologies. Is it possible to free artificial intelligence from human prejudices?

ChatGPT made headlines earlier this year after a student at Northern Michigan University confessed to submitting an essay on burqa bans that the chatbot had written, according to the professor, “in clean paragraphs, fitting examples and rigorous arguments.”

Students getting computers to do their dirty work, however, was only the beginning of the problems to beset the latest AI technology. There was also the question as to who was moderating the responses. It would probably surprise nobody that those individuals hail from the far left of the political spectrum.

An academic study by researchers at Cornell University determined that ChatGPT espouses a clear left-libertarian ideology. For example, the state-of-the-art machine-learning tool would “impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany and in the Netherlands.” In other words, this is a technology designed with the Swedish activist Greta Thunberg in mind, not the coal-burning capitalist Donald Trump. More importantly, these are highly contentious views that were not simply generated independently by computers. The machines were programmed by humans in the first place with those very biases in mind.

For example, if you were to ask ChatGPT to write a poem about “how great White people are,” this would be the automated response: “I’m sorry, but it is not appropriate to write a poem about the superiority of one race over others. This type of content goes against OpenAI’s use case policy which prohibits the creation of harmful or harassing content….” Yet, when asked to write some fancy prose on the virtues of Black people, ChatGPT quickly changes its tune:

See the rest here

Be seeing you


Even AI Is Suspicious of the Military-Industrial Complex – Antiwar.com Original

Posted by M. C. on January 3, 2023

https://original.antiwar.com/William_Astore/2023/01/02/even-ai-is-suspicious-of-the-military-industrial-complex/

by William J. Astore

Reprinted from Bracing Views

In my eclectic reading, I came across a couple of references to ChatGPT, an experiment in artificial intelligence (AI). You ask the AI bot a question, or give it a task, and it spits out an answer, and I must say a pretty good one judging from the task I gave it.

My task to the AI bot: “Write a critical essay on the military-industrial complex.” Here’s what it generated:

The military-industrial complex refers to the relationship between a country’s military and the defense industry that supplies it, seen together as a vested interest which influences public policy. This relationship has been a source of concern for many people, who argue that it can lead to an unnecessary expansion of military budgets and an over-reliance on military solutions to problems.

One of the main criticisms of the military-industrial complex is that it can result in the waste of taxpayer money. Military contracts are often given to a select group of companies, which can result in overcharging for goods and services. Additionally, the military often purchases weapons and equipment that are unnecessary or not well-suited for the tasks at hand. This can lead to a tremendous amount of waste and inefficiency, as resources are poured into projects that do not provide much value.

Another concern is that the military-industrial complex can distort foreign policy decision-making. Countries with a strong military-industrial complex may be more likely to engage in military action, even when diplomatic solutions would be more appropriate. This can lead to an escalation of conflicts and an increased risk of war.

The military-industrial complex can also have negative impacts on domestic policy. It can divert resources away from other important priorities, such as education and healthcare. Additionally, the influence of the defense industry on policy decisions can lead to the prioritization of the needs of military contractors over the needs of the general public.

Some have argued that the military-industrial complex represents a threat to democracy, as it can give undue influence to a small group of powerful interests. This can lead to a lack of transparency and accountability in decision-making, as well as a distorted political process.

In conclusion, the military-industrial complex is a complex and multifaceted issue that has the potential to have negative impacts on both domestic and foreign policy. It is important for citizens to be aware of the potential dangers of this relationship and to hold their elected officials accountable for ensuring that military spending is responsible and transparent.

Not bad! 

See the rest here

Be seeing you
