The Real Internet of Things (tRIOT) Series: Reputation Infrastructure
[ This is part of my Real Internet of Things (tRIOT) Series—an extension of my book The Real Internet of Things—about my perspectives on the future of technology and its intersection with society. ]
The topic of this entry is Reputation Infrastructure, which I expect to be the most objected-to chapter in my new book. The chapter talks about how a major part of the future of tech will be reputation systems for hundreds of different components of a person, and how those various scores will be used in different ways by different people.
Sounds straightforward enough, except S3E01 of Black Mirror—a show about various versions of tech dystopia—shows how bad this can go. I’ll tell you what it’s about because knowing won’t spoil much, and you should still watch it anyway (it’s excellent).
In the episode there is a single score for every person, and that score is used to determine if you can live in certain neighborhoods, how much you pay for things, which lines you can stand in, etc.
So what inevitably happens is people start obsessing over raising their scores. They act super fake to raise them, they’re super nice to people with higher scores, and they disrespect people with lower scores because they don’t want to be associated with them and lose points. Etc. It’s sickening.
A lot of people will read that chapter and think the Black Mirror episode effectively showed why such a system will always end badly. It’s not true.
What is true is that this is a real danger of a system like this. China actually went even more crazy with it and created an entire gamification and incentive management system for their authoritarian state, complete with ratings and scores.
That’s clearly bad as well.
But the idea of people rating other people, and making those ratings more available to others, is 1) as old as people, and 2) doesn’t have to always be bad.
More importantly, as I talk about in my Afterword, it’s going to happen. It’s already happening. There are credit scores. Those are shared. There’s LinkedIn. There are job references. Etc. And all this is only going to get easier to share and use. It’s the pee-in-the-pool problem.
Even more fundamentally, it’s just too damn useful for business. And things that are useful will find a way to happen, like water breaking stone.
Anyway, there are ways to counter the negatives of these kinds of systems (of which there will be many). First, there will be ratings for genuineness. Ratings for honesty. Ratings for “Down to Earth”. Ratings for “Has Your Back”. Etc. People will create these ratings because those are the handles they use in real life, and companies will build what people want to use.
So the way Reputation Infrastructure happens in the episode of Black Mirror isn’t actually a consequence of technology or of reputation itself being digitized. Having a single score that everyone uses and everyone sees is more of a China problem. Meaning, it’s one group running all the tech and deciding what’s in it—the government.
That’s an authoritarian regime. That’s 1984. And that’s the problem.
In a more democratic and capitalist world, and indeed in the world that I predict in the book, people will see different ratings when they look at people because they value different things. And many will see none at all because they’d rather have a more natural experience.
But musicians might see people who’ve recently written a lot of music, or who perform a lot. Artists might see painters in a crowd. Media might see famous people. Lonely people will see single people. Etc. Everyone seeing one score is just…unnatural and scary.
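To make the idea concrete, here’s a minimal sketch of what per-viewer rating selection could look like, assuming a person carries many independent rating dimensions and each viewer subscribes only to the ones they value. Every name and number here is illustrative, not a real system.

```python
# Hypothetical sketch: a person has many rating dimensions, and each
# viewer profile chooses which dimensions (if any) to display.
person_ratings = {
    "honesty": 4.6,
    "musicianship": 4.9,
    "recent_performances": 37,
}

viewer_profiles = {
    "musician": ["musicianship", "recent_performances"],
    "privacy_minded": [],  # prefers a natural, rating-free view
}

def visible_ratings(ratings, viewer):
    """Return only the dimensions this viewer has chosen to see."""
    wanted = viewer_profiles.get(viewer, [])
    return {k: v for k, v in ratings.items() if k in wanted}

print(visible_ratings(person_ratings, "musician"))
# → {'musicianship': 4.9, 'recent_performances': 37}
print(visible_ratings(person_ratings, "privacy_minded"))  # → {}
```

The point of the design is that the filter lives on the viewer’s side, not in a single central score everyone is forced to see.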
[ NOTE: I think an interesting feature of future algorithms would be to show people much higher or much lower scores than their actuals and see how they react. If they rate them more like the displayed value, you know they’re gaming the system; if they rate them more like the actual scores, you know they’re likely going by the actual experience. ]
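That detection idea from the note above can be sketched in a few lines. This is a hypothetical heuristic, not a real algorithm: show a rater a deliberately skewed score, then check whether their rating tracks the displayed (fake) value or the person’s actual one.

```python
# Hypothetical sketch of the gaming-detection idea: a rater whose score
# tracks the displayed (fake) value is rating the number, not the person.
def likely_gaming(actual, displayed, given):
    """True if the rater's score is closer to the displayed value
    than to the actual one."""
    return abs(given - displayed) < abs(given - actual)

actual_score = 2.1     # the person's real aggregate
displayed_score = 4.8  # deliberately inflated for this rater

print(likely_gaming(actual_score, displayed_score, given=4.7))  # → True: tracked the fake
print(likely_gaming(actual_score, displayed_score, given=2.3))  # → False: rated the experience
```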
There will be several countermeasures to “too much reputation”, including one I talk about where people are simply given clean slates on a regular basis. And the receiver can decide how much they honor clean slates. So if someone has a low score in some area, but their recent scores are much higher because they’ve been trying, you can simply assume they’re becoming that new person.
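A minimal sketch of that receiver-controlled clean slate, under the assumption that it’s just a blend between old and recent scores. The “forgiveness” knob is a made-up parameter for illustration: 1.0 ignores history entirely, 0.0 weighs old and recent eras equally against each other via the blend below.

```python
# Hypothetical sketch: the receiver chooses how heavily to discount
# someone's old scores in favor of their recent ones.
def effective_score(old_score, recent_score, forgiveness):
    """Blend old and recent scores by the receiver's forgiveness setting
    (0.0 = all old score, 1.0 = full clean slate)."""
    return forgiveness * recent_score + (1 - forgiveness) * old_score

print(effective_score(old_score=2.0, recent_score=4.5, forgiveness=1.0))  # → 4.5
print(effective_score(old_score=2.0, recent_score=4.5, forgiveness=0.5))  # → 3.25
```

The key design choice is that the blend happens on the receiver’s side, so honoring (or ignoring) a clean slate is a personal setting rather than a policy imposed by the rating provider.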
It’ll be up to people to decide how they want the systems to work because there’ll be thousands or millions of companies competing to provide the perfect rating algorithms (and the associated AR displays for them) for every situation.
A core concept here is that in a functioning model, the tech works for the people—not the other way around. So yes, there will be some seedy shit happening with ratings that make all of us squirm, but we can expect that kind of stuff to get stamped out by better versions due to competition in the marketplace.
There is an assumption here that more tech doesn’t just turn us into a massive police state like China, but I’m comfortable making that assumption because if it happens, it’s going to happen no matter what we do. It’s not as if we can see that tech will make it happen and apply the brakes to prevent it. The tech is coming regardless of what is done to speed or slow its arrival.
You can get The Real Internet of Things from Amazon.