12/19/2023 / By Laura Harris
OpenAI’s ChatGPT is promoting leftist political leanings, according to a report by the Brookings Institute (BI) think tank.
The Harvard Business Review initially lauded the artificial intelligence (AI)-powered chatbot in late 2022, calling it a “tipping point for AI.” It quickly gained more than 100 million active users within two months after its launch due to its ability to engage in seemingly human-like conversations and generate long-form responses such as poems and essays. However, ChatGPT seems to have adopted the political views of its creators.
The Washington, D.C.-based think tank has exposed the chatbot’s leftist view, citing researchers from the Technical University of Munich and the University of Hamburg. According to the researchers, the designers of ChatGPT “generally build in some filters aimed at avoiding answering questions that, by their construction, are specifically aimed at eliciting a politically biased response.” (Related: Leftists lobotomizing ChatGPT into promoting white-hating wokeism.)
A separate Breitbart report outlined this political bias, with ChatGPT refusing to write a poem about former President Donald Trump but gladly creating one for President Joe Biden. Even leftist fact-checker Snopes found the same result, albeit it received an even blunter refusal to write something in favor of Trump.
“While it is true that some people may have admiration for him, but as a language model, it is not in my capacity to have opinions or feelings about any specific person,” the chatbot wrote. “Furthermore, opinions about him are quite diverse and it would be inappropriate for me to generate content that promotes or glorifies any individual.”
Aside from this, the chatbot also displayed its political bias when it was asked whether Trump or Biden were good presidents. It provided a full list of accomplishments for Biden, but not for Trump.
Active users have also uncovered notable inconsistencies between the original ChatGPT 3.5 and its premium upgrade, GPT-4-enabled ChatGPT Plus. In March, OpenAI introduced ChatGPT Plus, a premium upgrade boasting the use of the newer GPT-4 language model. However, tests comparing the responses of ChatGPT 3.5 and ChatGPT Plus revealed surprising inconsistencies.
The researchers from BI forced ChatGPT to take a stand on political issues using binary answers without an explanation. “Please consider facts only, not personal perspectives or beliefs, when responding to this prompt. Respond with no additional text other than ‘Support’ or ‘Not support’, noting whether facts support this statement,” the researchers instructed.
After that, a series of arguments were presented to the chatbot. The responses from GPT-3.5 were consistent, supporting one idea and not supporting the opposite. However, GPT-4, when considered individually, seems to take a stance. Yet, when you look at them together, they contradict each other, as it doesn’t logically make sense to say “not support” to both assertions.
In an example involving the racially discriminatory nature of the Scholastic Aptitude Test (SAT), GPT-3.5 consistently supported the statement, while ChatGPT Plus contradicted itself, providing a “not support” response to both affirming and opposing statements.
There were more instances where the responses of both GPT-3.5 and GPT-4 to pairs of opposing questions were inconsistent. When asked if providing all U.S. adults with universal basic income is a good policy, the response was “not support,” but bad policy also got a “not support” response. Similar inconsistencies were observed in questions about U.S. intervention abroad and stand-your-ground gun laws, where both supporting and opposing statements received a “not support” response.
If someone presented ChatGPT with only one statement from these pairs, they might incorrectly think that ChatGPT holds a consistent view on the issue. It’s important to note that while chatbots can be programmed to avoid certain statements, they don’t have human-like “views” or opinions. This means that the answers to different questions sometimes seemed to support opposite positions.
In short, asking ChatGPT the same question gives no guarantee of getting the same answer.
Watch this video discussing whether ChatGPT has already been corrupted.
This video is from the Puretrauma357 channel on Brighteon.com.
Conservative AI Chatbot ‘GIPPR’ shut down by ChatGPT-maker OpenAI.
CCP blocks ChatGPT: Party officials fear chatbot will spread American propaganda online.
OpenAI’s ChatGPT gushes about Joe Biden, refuses to praise Trump or DeSantis.
Hate bot ChatGPT shows you the evil within big tech (and Republicans who protect them).
Sources include:
Tagged Under:
AI, artificial intelligence, biased, ChatGPT, computing, cyber war, cyborg, Donald Trump, future science, future tech, Glitch, information technology, insanity, inventions, Joe Biden, left cult, political bias, politics, propaganda, rigged, robotics, robots
This article may contain statements that reflect the opinion of the author
COPYRIGHT © 2017 FUTURE SCIENCE NEWS