I’ve thought for a long time that public video feed monitoring would become ubiquitous. My basis for this was looking at what humans ultimately desire, not at the tech itself.
When I hear crazy long-term predictions I always think two things: either the prediction is going to be obvious, or it’s going to be wrong.
I think my approach is different in a subtle and powerful way. Rather than predicting the exact form of the exact tech, in the exact order that it’ll emerge, I’m taking a reverse-engineering approach.
Specifically, instead of starting with tech and seeing where it’s going, I’m starting with humans and what they seek, need, and desire. In other words, I think we can predict the future of technology through a strong understanding of what humans ultimately want as a species.
The Real Internet of Things, January 2017
Just yesterday I tweeted that the COVID-19 situation was going to finally make large-scale video surveillance endemic to our society.
We’re about to see AI companies offering algorithms that monitor video feeds for sick people.— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ (@DanielMiessler) March 21, 2020
They’ll be used with other sensors (heat, smell) for individual checks, and in mass video feeds to flag people for additional testing.
Buses, train stations, airports, etc.
Governments and various industries have been trying to do this for a long time, but they’ve been opposed on the grounds of protecting freedom and privacy.
They were going to lose that fight anyway—eventually—because the features of connecting algorithms to sensors are simply too compelling to our ingrained human desires, but this angle of sickness monitoring is game over.
Watching people en masse for terrorism, when people can easily see that there isn’t that much terrorism actually happening, is a harder fight. But when people remember the Great Depression of 2020, caused by a pandemic, nobody will lift a finger to stop video surveillance tech that claims to be able to spot sick people.
And sure enough—not one day after that tweet—someone sent me this link.
New: AI/surveillance company claims it's deploying 'coronavirus-detecting' cameras in the United States. Says rolling out to customers such as airports, government agencies, Fortune 500. Essentially detects a fever and sends an alert. Similar used in China https://t.co/f2uX5pD2WY— Joseph Cox (@josephfcox) March 17, 2020
This is from a company that already sells “gun detection” video monitoring, which is another guaranteed winner.
Machine Learning is getting so good that we can bounce WiFi off of someone’s body and read their heartbeat, read their facial expressions, and estimate their emotional state.
There are thousands of projects like this, where you point an algorithm at a video feed and it tells you if something is happening in the scene. And it’s easy to see where it’s going.
- Show me wanted criminals.
- Show me people who might be sick.
- Show me people who look like they’re carrying a weapon.
- Show me people who look like they’re concealing explosives.
- Show me people who look anxious.
- Show me people who look dangerous.
- Show me people who we should interrogate further.
- Show me people who might commit a crime in the future.

This last one might sound familiar.
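The core pattern behind every one of these queries is the same, and it’s worth seeing how simple it is. Here’s a minimal sketch of the generic “point an algorithm at a feed, flag what crosses a threshold” loop. Everything in it is hypothetical: the `detect` function is a stand-in for whatever trained model a vendor plugs in (fever, weapon, emotion), and the fake “frames” are just dicts of sensor readings used for illustration.

```python
# Hypothetical sketch of a feed-monitoring pipeline.
# A real deployment would replace detect() with a trained model
# running on actual video frames; the surrounding loop is the point.

from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    label: str
    confidence: float

def detect(frame, frame_id):
    """Stand-in for a real model: scores a frame for one condition."""
    # Toy heuristic: scale an (invented) skin-temperature reading to [0, 1].
    score = min(1.0, frame.get("skin_temp_c", 36.5) / 38.0)
    return Detection(frame_id, "possible_fever", score)

def monitor(feed, threshold=0.99):
    """Flags every frame whose detection confidence crosses the threshold."""
    alerts = []
    for i, frame in enumerate(feed):
        d = detect(frame, i)
        if d.confidence >= threshold:
            alerts.append(d)
    return alerts

# Three fake frames; only the middle one should trip the alert.
feed = [{"skin_temp_c": 36.6}, {"skin_temp_c": 38.4}, {"skin_temp_c": 37.0}]
alerts = monitor(feed)
```

Swap the stand-in model for a weapon classifier or an anxiety estimator and nothing else in the loop changes, which is exactly why every one of the queries above is just a configuration choice away.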
This is amazing stuff. In a world without evil, power-hungry people it would be glorious. But in our world it’s a quick path to discrimination and dystopia.
But my point isn’t that this stuff is bad and we should stop it. That’s silly. We can’t stop it. It’s too compelling. Stopping terrorists and dangerous people—and now sick people—is simply too deep of a desire in too much of the population.
Our only hope in this is to keep people educated on the tradeoffs.
We must understand what we give up when we enable tech like this. For every person who’s creating it or implementing it because they believe it will help people, there is at least one more who sees that deployment as an onramp to profit and control.
That awareness of the tradeoffs is crucial, and that’s what gets tossed first when people are injected with fear.
If there were another 9/11 in the US or Europe, for example, video surveillance would blossom with minimal resistance.
COVID-19 is going to cause a worldwide economic depression unlike anything we’ve ever seen. And that will cause PTSD equivalent to many 9/11s.
Someone will say, “We need these cameras everywhere, hooked up to dozens of algorithms looking for threats”, and people will say, “What about privacy?”
And then they’ll say, “It will let us know if anyone is sick, so we can isolate them and prevent the next 2020 Depression.”
And that will be the end of the conversation.
Basically, the fear of pandemics just permanently opened the door to ubiquitous video surveillance, and our next hope of opposing it won’t come until after we’ve completely recovered from the economic fallout of COVID-19.