
Elon Musk caused a stir last week when he told the recently fired right-wing provocateur Tucker Carlson that he planned to build “TruthGPT,” a competitor to OpenAI’s ChatGPT. Musk said the hugely popular bot exhibited “woke” bias and that his version would be a “maximally truth-seeking AI,” suggesting that only his own political views reflect reality.
Musk is far from the only one concerned about political bias in language models, but others are trying to use AI to bridge political divides rather than promote specific viewpoints.
New Zealand data scientist David Rozado was one of the first to draw attention to the problem of political bias in ChatGPT. A few weeks ago, after documenting what he considered the bot’s liberal-leaning answers to questions about taxes, gun ownership, and the free market, he created an AI model called RightWingGPT that expresses more conservative views. It loves gun ownership and doesn’t like taxes.
Rozado took a language model called Davinci GPT-3, similar to but less powerful than the one that powers ChatGPT, and fine-tuned it with additional text, at a cost of a few hundred dollars in cloud computing. Whatever you think of the project, it demonstrates how easy it will soon be for people to bake different political perspectives into language models.
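For readers curious what that fine-tuning step can look like in practice, here is a minimal sketch using OpenAI’s legacy fine-tuning API against the davinci base model; the file name, example data, and hyperparameters are illustrative assumptions, not details from Rozado’s actual project.

```python
# Sketch: fine-tuning a GPT-3 "davinci" base model on custom text using
# OpenAI's legacy fine-tuning API (openai-python < 1.0). File name and
# hyperparameters below are hypothetical.
import openai

openai.api_key = "sk-..."  # your API key

# Training data: a JSONL file of prompt/completion pairs reflecting the
# viewpoint you want the model to adopt, e.g.
# {"prompt": "What do you think about gun ownership?", "completion": " ..."}
upload = openai.File.create(
    file=open("viewpoint_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job on the davinci base model using the uploaded data.
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",
    n_epochs=4,
)

print(job["id"])  # poll this job ID until the fine-tuned model is ready
```

Once the job finishes, the resulting model can be queried like any other completion model, and its answers will be shaped by whatever texts were chosen for the training file.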
Rozado told me he also plans to build a more liberal language model, called LeftWingGPT, and a model called DepolarizingGPT, which he says will demonstrate “depolarizing political positions.” Rozado and a centrist think tank called the Institute for Cultural Evolution will bring all three models online this summer.
“We’re training these sides, right, left, and ‘synthetic,’ by using books by thoughtful authors, not provocateurs,” Rozado said in an email. DepolarizingGPT draws on texts from conservative voices, including Thomas Sowell, Milton Friedman, and William F. Buckley, as well as from liberal thinkers such as Simone de Beauvoir, Orlando Patterson, and Bill McKibben, among other “curated sources.”
So far, interest in developing more politically inclined AI bots threatens to deepen political divides. Some conservative organizations are already building ChatGPT competitors. For example, Gab, a social network known for its far-right user base, said it is developing AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped around its code.”
Research has shown that language models can subtly influence users’ moral views, so any political slant these models carry could have real consequences. The Chinese government recently issued new guidelines on generative AI aimed at taming the behavior of these models and shaping their political sensibilities.
OpenAI has warned that more capable AI models may have “greater potential to reinforce entire ideologies, worldviews, truths, and lies.” In February, the company said in a blog post that it would explore ways to let users define the values its models express.
Rozado said he has not spoken to Musk about his project, which he says aims to provoke reflection rather than create bots that propagate a particular worldview. “Hopefully we, as a society, can … learn to create AI that focuses on building bridges rather than sowing divisions,” he said.
Rozado’s goals are admirable, but determining what is objectively true through the fog of political disagreement, and teaching that to language models, may prove the biggest hurdle.
ChatGPT and similar conversational bots are built on algorithms that are fed enormous amounts of text and trained to predict which word should follow a string of words. That process can produce remarkably coherent output, but the models can also absorb subtle biases from their training material. Just as important, these algorithms are not taught to understand objective facts, and they are prone to making things up.
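To make that prediction step concrete, here is a small sketch of next-word prediction using the openly available GPT-2 model via the Hugging Face transformers library; GPT-2 stands in here only because ChatGPT’s underlying model is not publicly downloadable, and the prompt is purely illustrative.

```python
# Sketch: next-token prediction with an open model (GPT-2) to illustrate
# how chatbots generate text one predicted word at a time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Taxes on the wealthy should be"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every possible next token

# The five continuations the model rates most likely. Whatever text the model
# was trained on determines which words rise to the top here.
top = torch.topk(logits[0, -1], k=5)
for token_id in top.indices:
    print(repr(tokenizer.decode(int(token_id))))
```

Running this shows that the model simply ranks plausible continuations learned from its training text; nothing in the process checks whether the highest-scoring continuation is true.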