Social Media and AI Are Mirrors That Reveal Our Ugliness
There’s something wrong with how we’re thinking about the problems of content moderation and biased AI.
We’re telling ourselves a pleasant childhood fantasy: that we humans are fine, and it’s the tools that are the problem! It’s this darn AI that’s biased. It’s social media that’s hateful.
Nope, that’s too easy. Too childish. Do they magnify negativity? Absolutely. Do they exacerbate innate weaknesses? Sure.
But all these tools have really done is reveal what was already there. They’ve shone a black light on the serial-killer porn shop that is the human psyche.
- An anti-woman group that would have been 19 angry men in some two-horse town in 1985 becomes a Facebook group of 40,000 members who harass women online.
- A Black family is denied a home loan because the AI looks at the applicant’s face and determines they’re high risk.
- A child predator taps into a massive social network that shares how to target kids without getting caught.
This isn’t a tech problem. This is tech revealing a human problem.
The anti-woman group would spread its filth to the United Federation of Planets if it could. The AI said the man was a bad loan risk because of 150 years of mistreatment of his people. And the only way to stop bad people from congregating is to stop people from congregating.
All this tech has done is evolve to such a high level of efficiency that it’s showing us exactly who we are. The better it gets, the better a mirror it becomes.
In short, the problem isn’t that we have good mirrors, the problem is that we’re ugly.
So when we start talking about fixing biased AI, and fixing social networks, we need to understand exactly what we mean.
Do we hate these mirrors we’ve built, or do we hate what we see when we look in them? We shouldn’t confuse the two.
It could be that an AI, based on its training data and very little additional information, will give someone named Daniel Silverman a loan, and that the name alone might actually be predictive of him paying it back.
Is that racist? I think so, though it depends on how you define it. But is the AI biased? I’m not so sure. There’s a difference between an AI being biased and an AI telling us something we wish were not true.
Say an AI accurately predicts loan repayment rates for a rich Asian man vs. a poor white man, taking into account a ton of other factors: parental income, level of education, work history, and so on. The algorithm says the Asian guy has a 97% chance of paying the loan back, and the white guy from West Virginia has a 27% chance of repayment. Is that racist?
I mean, it’s racist in the sense that it favored one race over another in this case, and it probably would many more times. But if the algorithm is good at what it does, and uses lots of data, it’s getting those good results by closely matching reality in its predictions.
It’s the reality that’s the problem, not the algorithm’s ability to describe that reality.
Again, mirror vs. face.
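To make the mirror point concrete, here’s a minimal sketch in Python. Everything in it is made up: the data is synthetic, the coefficients are arbitrary, and it models no real lending system. What it illustrates is that a classifier which never sees group membership can still produce group-correlated scores, because the proxy features it does see encode the underlying disparity in its training data.

```python
# Hypothetical sketch: group membership is never an input to the model,
# yet its scores still differ by group, because correlated proxy features
# (here, synthetic parental income and education) carry the signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "history": group membership shifts parental income and
# education, which in turn drive repayment in the training data.
group = rng.integers(0, 2, n)                      # 0 or 1; never shown to the model
parental_income = rng.normal(50 + 25 * group, 10, n)
education_years = rng.normal(12 + 2 * group, 2, n)
p_repay = 1 / (1 + np.exp(-(0.05 * parental_income + 0.3 * education_years - 7)))
repaid = (rng.random(n) < p_repay).astype(int)

X = np.column_stack([parental_income, education_years])  # no group column
model = LogisticRegression().fit(X, repaid)

# The model never saw `group`, but its predictions still split along it,
# because the proxies it did see encode the historical disparity.
scores = model.predict_proba(X)[:, 1]
print("mean predicted repayment, group 0:", scores[group == 0].mean())
print("mean predicted repayment, group 1:", scores[group == 1].mean())
```

If the disparity in the training data reflects a real disparity in the world, the model’s scores will reflect it too. That’s the mirror doing its job, which is exactly why fixing the mirror and fixing the face are two different projects.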
And it’s the same for hate on social media, or in instant messages, or in people’s brains. Those mediums simply represent different degrees of concealment of what’s already there.
The hatred exists in people’s brains. It’s existed in private conversation for thousands of years. And it is now being revealed and magnified like never before due to technology.
I know there’s an analogy to weaponry here. So am I arguing that “Guns aren’t the problem, people are the problem!”? Yes and no.
I support both gun ownership and gun control. And again, I break that into two separate problems: fixing broken humans and broken societies, and limiting the damage those things do when they go bad.
It’s the same with AI and social media. The core of the problem is the societies we’ve built, but we should also be willing to take steps to limit the damage. And if that means controlling the power of the weaponry (AI and social media), then so be it.
But we must not confuse the mirror and the face, the weapon and the sickness, the hatred and the microphone.
Notes
I am well aware that not all so-called biased AI is actually an uncomfortable but accurate representation of reality. There are countless implementations that take sloppy, negligent shortcuts and produce horribly racist results, often to the pleasure of the operator, because they reinforce the operator’s inherent racism with “the wisdom of computers”. It’s super gross.