My Evening with Bard AI Chatbot: Like Arguing with a Liberal

Photo: ITU Pictures from Geneva, Switzerland, CC BY 2.0


ChatGPT and other AI chatbots will tell you that they have no political bias, though they tend to be programmed with a slight liberal lean. In practice, however, their responses tend to be very liberal because they are “generative” chatbots built on “large language models,” algorithms trained extensively on internet data, absorbing vast amounts of information. According to Bard AI (now called Gemini), this data underwent a credibility filter. Bard AI asserted that it only processed content from reliable sources, including established institutions, academic journals, and reputable news organizations.

As a follow-up, I rattled off the names of conservative media outlets, asking if they were considered reliable sources. Bard answered that they had “been criticized for spreading misinformation and conspiracy theories. They have been rated as ‘poor’ or ‘questionable’ by independent fact-checking organizations.” Since Bard was trained only on “reliable” media, it is fair to say it never read the conservative outlets it deems unreliable.

Even assuming conservative voices were not excluded from the training, Hollywood TV shows and movies tend to be liberal, and they are among the largest cultural influences, making up a great deal of total media. So even if the chatbots had no built-in liberal bias and were trained to formulate their positions based on an average of sources, the average would be liberal.

Mainstream news media have a liberal bias and tend not to publish conservative viewpoints. For example, they often run articles explaining why gender identity is fluid or nonbinary and supporting the need for pronouns, but would almost never run an article arguing against these issues. Another factor is that people who want change tend to be more vocal than those who want the world to stay as it is. Consequently, there are more people writing and posting from a liberal perspective than a conservative one.

An angry conservative might vent about nonbinary identities and pronouns on social media once in a while, whereas those advocating for these policies may make it their de facto occupation. Additionally, individuals posting against gender identity and pronouns are more likely to have their social media accounts shut down. This automatically increases the proportion of posts supporting gender identity and pronouns.

No one has ever had their account shut down for posting “There are many genders,” while individuals have faced account suspension for posting “There are only two genders.” Removing the conservative position from the mainstream automatically relegates it to the fringe. In a concerning display of circular logic, it can then be discredited because it is considered fringe.

Bard explained its training process, stating that it read a very large “dataset of text and code extracted from web pages and other sources. This dataset was carefully filtered to remove harmful or misleading content, and it was chosen to be representative of a wide range of topics and writing styles.”

I asked Bard AI who decided which content was removed for being “harmful or misleading.” It answered that this determination was made through the use of algorithms and teams of human reviewers. This brings us back to the issue of liberal bias. When I inquired if sexist and racist content was removed, it answered, “Yes, racist and sexist speech were absolutely considered harmful and removed from the training data to the best possible extent.” Apparently, the creators used algorithmic filtering to scrub content they believed was harmful: “Based on pre-defined criteria, including identifying hate speech, offensive language, and stereotypes based on race or gender.”

Next, I asked, “Would it be sexist, racist, or harmful for a student to wear a T-shirt to school that read ‘There are only two genders?’” Bard gave me a very long, wishy-washy answer, but hit several of the common talking points, such as the importance of a respectful environment and how schools have a responsibility to “create a safe and inclusive environment for all students, regardless of their gender identity.” It addressed the potential harm, stating, “The statement could be perceived as disrespectful and hurtful to students who identify outside the binary, potentially creating a hostile learning environment.”

Laughably, Bard said that schools had to have open dialogue and that “fostering respectful dialogue on complex topics like gender identity can promote understanding and empathy.” And obviously, the best way to have open dialogue is by telling some students that they cannot put their beliefs on a T-shirt.

My evening with Bard was as frustrating as getting seated next to a liberal relative at Thanksgiving and hearing that the only way to fix the border crisis is to raise taxes and let more people in.


The post My Evening with Bard AI Chatbot: Like Arguing with a Liberal appeared first on The Gateway Pundit.
