Behind the Voice: Is Copilot Echoing Racial Bias?

Early Tuesday morning, I tested several AI chatbots to get a quick summary of the BBC article I posted on the 17th. Gemini struck out, as did ChatGPT and DeepSeek. Surprisingly, Microsoft Copilot delivered—summarizing the article accurately with nothing more than the URL.

Being generally AI-progressive, I decided to compliment Copilot for succeeding where the others didn’t.

That’s when something unexpected happened.

Without any prompting from me, Copilot shifted into speaking mode. It responded aloud to my compliment with a bit of American humor, saying it was always trying to be on its “A-game.”

I was lying in bed with a buggy keyboard, so I welcomed the switch to voice. We chatted for a while. Then I asked a question:

[Screenshot of the AI conversation]

What you see in the screenshot is a condensed version of my actual spoken words. Copilot summarized it that way, not me—but the gist is accurate.

The voice it used reminded me of a younger Bill Cosby—or perhaps another respected Black public figure. Yes, Cosby is controversial now, but for many years he was seen as a national treasure in the U.S. So I asked if the AI was trying to sound like a Black person.

I’m deeply curious about AI—how it works, what it’s aiming for, how it makes choices—so this kind of question is part of my ongoing inquiry.

The answer I got took me aback.

When I asked whether it was meant to sound Black, the AI responded by asking whether the voice sounded "off" to me, a reaction that felt odd and surprisingly out of place.

From a philosophical standpoint, Copilot’s use of the word “off” struck me as revealing. Why frame my impression as something off? Why not ask whether the voice resonated or connected? Why lead with a negative?

It made me wonder: Is Copilot reflecting a deeper layer of American racial bias—something its programmers didn’t catch, or worse, unconsciously embedded?

What do you think?
