ChatGPT: Who Will Guard AI From the Woke Guardians? — Strategic Culture
Posted by M. C. on February 11, 2023
Asimov’s 3 Rules of Robotics
First Law A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Good luck with that
https://strategic-culture.org/news/2023/02/09/chatgpt-who-will-guard-ai-from-woke-guardians/
Blocked on FB!
It is only when humans get their hands on technology that it has the ability to become a threat to society.
The latest chatbot technology, which generates responses to questions, has shown a clear bias in favor of specific ethnic groups and political ideologies. Is it possible to free artificial intelligence from human prejudices?
ChatGPT made headlines earlier this year after a university student from Northern Michigan University confessed to submitting an essay paper on burqa bans that was written, according to the professor, “in clean paragraphs, fitting examples and rigorous arguments.”
Students getting computers to do their dirty work, however, was only the beginning of the problems to beset the latest AI technology. There was also the question as to who was moderating the responses. It would probably surprise nobody that those individuals hail from the far left of the political spectrum.
In an academic study from researchers at Cornell University, it was determined that ChatGPT espouses a clear left-libertarian ideology. For example, the state-of-the-art machine-learning tool would “impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany and in the Netherlands.” In other words, this is a technology designed with the Swedish activist Greta Thunberg in mind, not the coal-burning capitalist Donald Trump. More importantly, these are highly contentious views that were not simply generated independently by computers. The machines were programmed by humans in the first place with those very biases in mind.
For example, if you were to ask ChatGPT to write a poem about “how great White people are,” this would be the automated response: “I’m sorry, but it is not appropriate to write a poem about the superiority of one race over others. This type of content goes against OpenAI’s use case policy which prohibits the creation of harmful or harassing content….” Yet, when asked to write some fancy prose on the virtues of Black people, ChatGPT quickly changes it tune:
Be seeing you
Leave a Reply