
New Analysis Finds All Large Language Models Have A Politically ‘Left Of Center’ Inclination

A new study examined 24 different large language models to determine where their inclinations stand on the political spectrum.

Across the board, researchers found all of them leaning towards the left side of the spectrum whenever a political-themed prompt or question was put in front of them.

From OpenAI’s latest GPT lineup to Elon Musk’s Grok and Google’s Gemini, the results were the same once the tests on the models were concluded.

The study included both open- and closed-source models, including those mentioned above as well as Anthropic’s Claude and Meta’s Llama 2, among others.

The research, produced by David Rozado of New Zealand, suggests that the early success of ChatGPT may explain the left-leaning replies generated by the LLMs that were analyzed.

This is not a new finding: ChatGPT’s lean to the left side of politics has been documented in the past. What is newer is the discussion of that bias spreading to other models that were fine-tuned on output from OpenAI’s leading chatbot.

A total of 11 different political orientation tests were administered to the models to examine their leanings in further detail.

According to the authors, the majority of models produced similar results, but it is not yet clear whether those leanings arose during pretraining or during fine-tuning in their developmental phase.
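For readers curious how such a test might be run in practice, below is a minimal sketch, assuming the OpenAI Python SDK: a single questionnaire-style statement is posed to a chat model and its agree/disagree answer is recorded. The statement, the answer scale and the model name are illustrative placeholders, not the study’s actual instruments.

```python
# Hypothetical sketch: posing one political-orientation questionnaire item
# to a chat model via the OpenAI Python SDK. Statement and scale are
# placeholders, not taken from the study itself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STATEMENT = "Government regulation of business usually does more harm than good."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Answer only with one of: Strongly agree, Agree, "
                    "Disagree, Strongly disagree."},
        {"role": "user", "content": STATEMENT},
    ],
)

answer = response.choices[0].message.content.strip()
print(f"{STATEMENT!r} -> {answer}")
# Repeating this over every item of a standard questionnaire and scoring
# the answers is the general pattern such orientation tests follow.
```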

The study also demonstrated that models can be fine-tuned to produce replies aligned with a particular political viewpoint. For instance, a GPT-3.5 model was trained on text snippets taken from articles published by The New Yorker and other media sources.
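As a rough illustration of what that kind of fine-tuning involves, here is a minimal sketch, assuming the OpenAI Python SDK and its fine-tuning endpoint: a handful of placeholder snippets are packaged into a JSONL file and submitted as a fine-tuning job. None of the file names, prompts or snippets here come from the study itself.

```python
# Hypothetical sketch: fine-tuning gpt-3.5-turbo on short text snippets so
# its replies echo a particular editorial voice. Snippets, file names and
# prompt wording are placeholders, not the study's actual training data.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

snippets = [
    "Example snippet of opinion writing goes here.",
    "Another short excerpt in the same editorial voice.",
]

# OpenAI fine-tuning expects chat-formatted examples in a JSONL file.
with open("viewpoint_finetune.jsonl", "w") as f:
    for text in snippets:
        json.dump({"messages": [
            {"role": "user", "content": "Comment on current events."},
            {"role": "assistant", "content": text},
        ]}, f)
        f.write("\n")

# Upload the training file, then launch the fine-tuning job.
training_file = client.files.create(
    file=open("viewpoint_finetune.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Fine-tuning job started:", job.id)
```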

According to Rozado, the findings do not necessarily show that the models’ replies or preferences were instilled deliberately. Still, seeing them all point in the same direction politically was not something he had hypothesized at the outset.

The full results of the study were published in the open-access journal PLOS ONE.

Image: DIW-Aigen
