When an AI doesn’t know history, you can’t blame the AI. It always comes down to the data, programming, training, algorithms, and every other bit of built-by-humans technology. It’s all of that, plus our own perceptions of the AI’s “intentions” on the other side of the screen.
When Google‘s recently rechristened Gemini (formerly Bard) started spitting out people of color to represent Caucasian historical figures, people quickly assessed that something was off. For its part, Google acknowledged the error and pulled all people-generation capabilities from Gemini until it could work out a solution.
It wasn’t too hard to figure out what happened here. Since the early days of AI, and by that I mean 18 months ago, we’ve been talking about inherent, baked-in AI biases that often arrive unintentionally at the hands of programmers who train large language and large image models on data that reflects their experiences and, perhaps, not the world’s. Sure, you’ll have a smart chatbot, but it’s likely to have significant blind spots, especially when you consider that the majority of programmers are still male and white (one 2021 study put the percentage of white programmers at 69% and found that just 20% of all programmers were women).
Still, we’ve learned enough about the potential for bias in training and in AI results that companies have become far more proactive about getting ahead of the issue before such biases appear in a chatbot or in generated images. Adobe told me earlier this year that it’s programmed its Firefly generative AI tool to take into account where someone lives and the racial makeup and diversity of their region to ensure that image results reflect their reality.
Doing too much right
Which brings us to Google. It likely programmed Gemini to be racially sensitive but did so in a way that overcompensated. If there were a weighting system for historical accuracy versus racial sensitivity, Google put its thumb on the scale for the latter.
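To make that thumb-on-the-scale analogy concrete, here is a purely hypothetical sketch. It is not how Gemini actually works, and every name, score, and weight below is invented for illustration. It simply shows how overweighting one score in a blended ranking flips which candidate image wins:

```python
# Purely illustrative sketch -- NOT Google's implementation.
# Assumes each candidate image already carries two hypothetical scores in [0, 1].

def rank_candidates(candidates, accuracy_weight=0.5, sensitivity_weight=0.5):
    """Rank candidate images by a weighted blend of two hypothetical scores."""
    def blended(candidate):
        return (accuracy_weight * candidate["historical_accuracy"]
                + sensitivity_weight * candidate["diversity_sensitivity"])
    return sorted(candidates, key=blended, reverse=True)

candidates = [
    {"id": "period-accurate", "historical_accuracy": 0.95, "diversity_sensitivity": 0.40},
    {"id": "diverse-but-anachronistic", "historical_accuracy": 0.30, "diversity_sensitivity": 0.90},
]

# Balanced weights favor the period-accurate image...
print(rank_candidates(candidates)[0]["id"])            # period-accurate
# ...but a thumb on the sensitivity side flips the ranking.
print(rank_candidates(candidates, 0.2, 0.8)[0]["id"])  # diverse-but-anachronistic
```

The point isn’t the math; it’s that a small, well-intentioned change in weights can quietly swap a historically accurate output for a historically impossible one.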
The example I’ve seen tossed about is Google Gemini returning a multicultural picture of the US’s Founding Fathers. Sadly, men and women of color were not represented in the group that penned the US Declaration of Independence. In fact, we know some of those men were enslavers. I’m not sure how Gemini could’ve accurately depicted these white men while adding that footnote. Still, the programmers got the bias training wrong, and I applaud Google for not leaving Gemini’s people-image-generation capabilities out there to further upset people.
However, I think it is worth exploring the significant backlash Google received for this blunder. On X (the dumpster fire formerly known as Twitter), people, including X’s owner Elon Musk, decided this was Google trying to enforce some sort of anti-white bias. I know, it’s ridiculous. Pushing a bizarro political agenda would never serve Google, which is home to the search engine for the masses, regardless of your political or social leanings.
What people don’t understand, despite how often developers get it wrong, is that these are still very early days in the generative AI cycle. The models are incredibly powerful and, in some ways, are outstripping our ability to understand them. We’re running mad-scientist experiments every day with very little idea of the sorts of results we’ll get.
When developers push a new generative AI model out into the world, I think they only understand about 50% of what it might do, partly because they can’t account for every prompt, conversation, and image request.
More wrong ahead – until we get it right
If there’s one thing that separates humans from AIs, it’s that we have almost boundless and unpredictable creativity. An AI’s creativity is based solely on what we feed it, and while we might be surprised by the results, I think we’re more capable of surprising the programmers and the AI with our prompts.
This is, though, how AI and the developers behind it learn. We have to make these mistakes. AI has to create a hand with eight fingers before it can learn that we only have five. AI will sometimes hallucinate, get the facts wrong, and even offend.
If and when it does, though, that’s no cause to pull the plug. The AI has no emotions, intentions, opinions, political stands, or axes to grind. It’s trained to give you the best possible result. It won’t always be the right one, but eventually it will get far more right than it gets wrong.
Gemini produced a bad result, which was a mistake by the programmers, who will now go back and push and pull various levers until Gemini understands the difference between political correctness and historical accuracy.
If they do their job well, the future Gemini will offer us a perfect picture of the all-white Founding Fathers with that crucial footnote about where they stood on the enslavement of other humans.