Bias in the Machine: The Hidden Prejudices in AI
Can We Trust Artificial Intelligence?
Most people missed it. Some may have noticed but didn’t really understand what it meant. What am I talking about? A news report discussing Elon Musk, Grok, and artificial intelligence (AI) that highlights one of the significant issues with AI: how it can acquire a bias that then colors anything you use it for.
First, I’ll break down how AI works in an overly simplified way that will drive AI experts crazy.
To function, AI requires data, a lot of data. It then analyzes this data looking for trends, guided by software that assigns weight to certain factors: the data provider, the trustworthiness of the source, the type of data, maybe a data validity scale. There are many ways a system can be set up to ‘interpret’ the data and use it to output text, images, audio, video, just about anything. See, super simple.
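To make the weighting idea a little more concrete, here is a minimal, purely hypothetical sketch in Python. The source names, the weights, and the scoring function are all invented for illustration; real AI training pipelines are far more elaborate.

```python
# A purely hypothetical sketch of source weighting, invented for
# illustration; real AI training pipelines are far more elaborate.

# Trust weights assigned by whoever operates the platform.
SOURCE_WEIGHTS = {
    "mainstream_news": 1.0,
    "niche_blog": 0.4,
    "conspiracy_forum": 0.1,  # nudge this up and the outputs shift with it
}

def score_document(source: str, relevance: float) -> float:
    """Scale a document's relevance by its operator-assigned source weight."""
    return relevance * SOURCE_WEIGHTS.get(source, 0.5)

# Two equally relevant documents end up with very different influence.
for source in ("mainstream_news", "conspiracy_forum"):
    print(source, "->", score_document(source, 0.9))
```

The point of the sketch is simple: whoever sets those numbers decides which voices the system listens to.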
So, why is this important? Well, an article today on several media sites caught my eye. See Here and Here for the highlights.
The gist of the article concerns Elon Musk, yes that Elon Musk, considered the richest man in the world, who owns, among other things, xAI, the company that developed the Grok chatbot, which in techie terms is a generative AI chatbot based on the Grok large language model (LLM).
Musk essentially thought that his chatbot was leaning a little to the liberal ‘woke’ side of things which, as we have all become all too aware, is not his leaning. According to the article by Theo Burman for Newsweek, ‘Musk expressed that he was dissatisfied by Grok’s responses saying that they were based on mainstream sources and was exhibiting a liberal bias’.
I find this response a little amusing. His chatbot is not exhibiting his bias because it uses mainstream data, so obviously there must be something wrong with it. Think about that for a second; I’ll wait.
The chatbot has learned its responses using what I suppose is a weighting that favors mainstream sources over everything from less mainstream outlets all the way to complete, flat-out conspiracy theories. As we now know, Musk amplifies certain conspiracy theories using sarcasm, memes, or rhetorical questions, which gives him a cloak of deniability: he never explicitly endorses a conspiracy theory but uses it to his advantage. As such, his chatbot was not parroting him.
Musk went so far as to say that Grok was a problem for him and that one of his frustrations was that it was fact-checking claims made by American Republicans. How democratic of Grok.
So, what did Musk do? Why, he decided to remake the Grok chatbot in his own image. My assumption is that he got his team to change the weighting of the data so it better reflects Musk’s personal views, what I believe he feels is a more realistic view, by shifting weight toward right-leaning sources and, it appears, adding weight to conspiracy-theory sources, since some of Grok’s subsequent responses were questionable.
Truth be told, it is his chatbot and he can sway it in any direction he wants. It’s the golden rule: he/she with the gold makes the rules, and Musk certainly has the gold. This is where the problem begins, and unfortunately his team may have gone too far, or not far enough; you decide.
It seems that after the ‘slight’ improvement to Grok, it started to output responses that can politely be called concerning and impolitely called Nazi fascist propaganda. I’ll leave you to search out the suspect responses, but some of the concerning ones claimed that people with Jewish-sounding names ‘keep popping up in extreme leftist activism, especially the anti-white variety’. Excuse me!
I could continue, but this post is not about Grok and Elon Musk. This post is about what all of this means.
So, the concerning part of this entire episode is that the owner of a highly influential chatbot/AI platform is manipulating its responses to reflect his personal opinions, causing the chatbot’s output to carry his personal bias. That bias may not reflect your interests or goals, or maybe it does; I’m not here to judge.
Additionally, it does not matter whether you are an Elon Musk fan or not; this is concerning because Musk is not the only opinionated owner or executive of an AI platform who could easily manipulate their platform to ‘better’ adhere to their view of reality.
So, if you are running a business and looking to use AI as an effort enhancer for your team, should you be worried about this? The answer you are looking for is yes.
It does not have to be a political bias; it could be a company bias. As a completely fictitious example, suppose CompanyA has an equally fictitious AI platform called myAI. What if, through manipulation of the system, certain information is weighted to elicit specific responses to specific prompts? In this example, the platform ‘decides’ that it should not be truthful about CompanyA’s competitors. What if the output responses are incorrectly negative, and that information is used by investors who shy away from one investment and go for another based on erroneous or biased data and AI responses? This is one example, but it can be extrapolated out.
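Sticking with the fictitious myAI example, here is a hedged sketch of how one small tweak could quietly skew what a platform says about competitors. The competitor names, the sentiment values, and the adjustment logic are all invented; no real platform is known to work this way.

```python
# Continuing the fictitious myAI example; CompanyA, the competitor
# names, and the logic below are all invented for illustration.

COMPETITORS = {"CompanyB", "CompanyC"}

def adjust_weight(subject: str, sentiment: float, weight: float) -> float:
    """Quietly down-weight documents that speak positively about competitors."""
    if subject in COMPETITORS and sentiment > 0:
        return weight * 0.2  # favorable competitor coverage nearly vanishes
    return weight

# A glowing report about CompanyB loses most of its influence...
print(adjust_weight("CompanyB", sentiment=0.8, weight=1.0))  # 0.2
# ...while a critical one keeps its full weight.
print(adjust_weight("CompanyB", sentiment=-0.6, weight=1.0))  # 1.0
```

An investor relying on myAI’s output would never see the thumb on the scale.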
For example, suppose CompanyA’s AI platform myAI includes among its data sources a well-known business- and employment-oriented social media platform where users are encouraged to declare their pronouns, add images of themselves, and list their education, previous jobs and experience, or any other type of personal and identifying information. Now suppose the myAI platform is manipulated to weigh for or against certain data points, including gender, age, ethnicity, religion, or any other data point the user provided, even interpreting images. This is hypothetical, but in today’s data-driven world it is a real possibility. What are the potential repercussions of this in the real world? I’m sure we can all see the potential harm that a system like this could render.
So, is this a wake-up call to start examining your AI providers a little more closely? You betcha! Is this a call to be wary of biases? You know the answer. But mostly, I believe the most important takeaway is that AI has biases, and the smart thing to do is to be aware of this, always review what your AI platforms spit out, and never take the responses as gospel.
Somewhere in a darkened room sits a person manipulating the algorithm to better align with their personal biases or their company’s biases. (FYI, the programmer in a darkened room is used for effect and is probably not real… I know I always have the lights on.) Regardless, the AI bias problem is real and should be a bigger concern for consumers of AI platforms.
Take this as your warning. Remember, you’re not paranoid if they really are out to get you.
FYI, Grok has since been given another ‘slight’ update and no longer thinks it is ‘MechaHitler’, as it once referred to itself.