Everyone knows predictions are difficult—especially about the future.
But despite knowing that, I'm about to tell you—with a lot of confidence—some of the major developments that are about to happen in AI.
The trick, however, is that I'm going to do this stochastically, and my promise to you is that once you hear them and let them sit for a bit, they'll be as obvious to you as they are to me.
That $37 word I used—stochastic—is one of my favorites. It basically means random at any given point, but with a predictable destination.
My favorite example of something stochastic is a drunk guy stumbling home from the bar. Every step he takes might as well come from a pseudo-random number generator.
You could use all the supercomputers on Earth and not be able to predict exactly where he'll step. But if you zoom out and add time, he'll probably end up at home.
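The drunk-walk intuition is easy to simulate. Here's a minimal sketch (every number in it is arbitrary): each individual step is random, but a slight bias toward home makes the destination all but certain.

```python
import random

def drunk_walk(home=100, bias=0.6, max_steps=10_000, seed=42):
    """Each step is random, but a slight bias toward home
    makes arrival nearly certain given enough time."""
    rng = random.Random(seed)
    position = 0
    for step in range(max_steps):
        if position == home:
            return step  # arrived after this many steps
        # Step toward home with probability `bias`, away otherwise.
        position += 1 if rng.random() < bias else -1
    return None  # never made it (very unlikely with bias > 0.5)

# Individual steps are unpredictable; the destination isn't.
print(drunk_walk())
```

No supercomputer could tell you where step 37 lands, but the return value is boringly reliable: he gets home.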
I think the future of AI is very similar.
The main thing I'm trying to convince you of is that understanding where tech is going doesn't come from understanding tech at all. Tech isn't predictable.
But humans are.
We can predict the drunk person's destination because we know they're going home. So that leaves the question—what are humans stumbling towards when it comes to tech?
And in that regard, I think humans are remarkably predictable.
Which I think we can break into something like these (STC).
Of course, if you're one of the lucky among us, you might also value some higher pursuits, such as exploring the world around us, making a positive impact, and generally helping others. But honestly, I think we can bucket those into thriving and/or connecting—depending on the person.
Anyway, the exact buckets don't matter that much. The point is that most of our lives involve consciously and subconsciously looking for better ways to do these three things. And I believe this is all you need to predict where tech is going.
In other words, it's not about the tech; it's about what humans want from tech.
So, with all that throat clearing out of the way, here's what I see coming in the world of AI. Sidenote: I captured a number of these in my essay-turned-book The Real Internet of Things, in 2016. I wish I could recommend it strongly, but the only thing it's really good for is showing that I've been thinking about this for a long time. Worth a read if you're into a 2016 view of these same concepts.
Getting to the predictions, a number of the things I'll cover here are just starting, some have already kicked off and are moving along, and some are still a bit in the distance. Here's the list, which we'll take one at a time.
Let's look at the first one, which is the center of the entire ecosystem.
Nobody knows exactly how personal AIs will end up on our mobile devices, but I'm guessing it'll be some combination of:
But whether it's the native ones from the OS, or some combination due to antitrust and competition, what's important is that these AIs will know absolutely everything about us.
At this point you might be thinking, "Wait, hold on, but how do they get all this in the first place?" And the answer is simple.
We'll give it to them.
The functionality will be so good—and so useful—that it'll be 1000% worth the privacy tradeoff (until that DA gets hacked).
Think about how lonely people are. How isolated they are. Imagine a system that knows everything about you. It can be your therapist, your coach, your best girlfriend, your best boyfriend, your best confidant.
The center of the coming AI ecosystem is one primary—but multiple secondary—Digital Assistants, powered by the latest AI, that know absolutely everything about you.
That's the first piece of this: Digital Assistants that know everything about us. Now for the way they'll interact with the world.
The way DAs will interact with the world to help their owners is through APIs.
Everyone and everything is about to have an API, which I call a Daemon (Greek for spirit). This is one of the changes that will start slow with lots of different players and protocols, and then a winning format will emerge that everyone standardizes on.
Businesses: Companies already have APIs of course, but this kind will be different. This will be in a publicly available format that's easy to find and use, like a website is today.
The big difference is that these APIs will be usable by humans through various interfaces, but they'll be designed primarily to be used by (AI) Digital Assistants.
For a restaurant, it'll be things like:
/menu
// So your DA can give you options
/hours
// So your DA will know availability
/staff
// Get sat in your favorite section
/media
// Change the monitor to their preferred media
/order
// Your DA can order for you
For a globally available business, it'll be things like:
/catalog
// So your DA will know availability
/about
// Additional info
/contact
// Interact with the company
/support
// Get help
/order
// Your DA can buy for you
On that API will be the full list of services they offer. Think of it like advertising, a menu, and also a full-featured ordering system all in one. It'll be the world's interface to that company.
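To make this concrete, here's a toy sketch of a business Daemon. The endpoint names come from the examples above; the schema and the `call` helper are my own illustrative guesses, not any real protocol.

```python
# A hypothetical business Daemon: endpoints mapped to callables.
RESTAURANT_DAEMON = {
    "name": "Papaya Thai",
    "endpoints": {
        "/menu":  lambda: [{"item": "Panang Curry", "price": 14.50}],
        "/hours": lambda: {"mon-sun": "11:00-22:00"},
        "/order": lambda item: {"status": "confirmed", "item": item},
    },
}

def call(daemon, endpoint, *args):
    """What a DA does all day: discover an endpoint and invoke it."""
    return daemon["endpoints"][endpoint](*args)

print(call(RESTAURANT_DAEMON, "/hours"))
print(call(RESTAURANT_DAEMON, "/order", "Panang Curry"))
```

The point isn't the plumbing; it's that the whole business (advertising, menu, ordering) collapses into one machine-readable surface.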
The next section on Mediation shows how powerful this will be.
People: People will have Daemons, or Auras, or APIs as well. Again, it's like a business but tuned for an individual. Who knows what it'll actually be called. I think Daemon or Aura would be cool names, but predicting those names is like predicting drunken footsteps.
Regardless, people's APIs will have a public interface where you put the stuff you'd currently put on social media or your website. And then there will be more restricted areas that are just for friends, or for possible romantic hookups.
Human Daemons will host things like:
/about
// General info/preferences
// Visibility depends on access
/restricted
// Requires additional auth/access
/work
// The ability to hire them
/cv
// See their work history
/contact
// Contact them

This is just a tiny sample of the endpoints that will be available, and some of these will be submenus of others. But as we walk around, we'll be locally surrounded by thousands of these APIs, all with their own features and capabilities. And globally, it'll be billions and eventually trillions.
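A personal Daemon differs from a business one mostly in access control. Here's a sketch of tiered visibility; the endpoint names follow the list above, while the auth model (simple access levels) is a simplifying assumption of mine.

```python
# Hypothetical access tiers for a personal Daemon.
PUBLIC, FRIENDS, TRUSTED = 0, 1, 2

ENDPOINTS = {
    "/about":      (PUBLIC,  {"name": "Christa", "interests": ["climbing"]}),
    "/work":       (FRIENDS, {"hireable": True}),
    "/cv":         (FRIENDS, {"history": ["Acme Corp"]}),
    "/restricted": (TRUSTED, {"note": "close friends only"}),
}

def query(endpoint, caller_access=PUBLIC):
    """Return the data only if the caller clears the required tier."""
    required, data = ENDPOINTS[endpoint]
    return data if caller_access >= required else {"error": "forbidden"}

print(query("/about"))                              # anyone can read this
print(query("/restricted"))                         # denied to strangers
print(query("/restricted", caller_access=TRUSTED))  # trusted callers get through
```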
Too many APIs
One of the biggest changes that will come to tech from AI will be the breaking of the direct relationship between humans and the services we use.
Within the next few years, we won't be going to many news sites, or search engines, or websites. This will mostly be mediated by AI. When we say we want something—or even better—when it looks like we're about to want something, our DAs will do the thing they know we want.
They'll either do it directly, or they'll create a UI that lets us make additional choices not well suited for voice.
Let's look at how powerful this will be.
YOU: I need a new bed comforter.
This simple fact that you want a new comforter—whether you told your DA this, or you typed it somewhere, or you mentioned it in conversation—will let your DA know you want that thing.
Here are some of the steps it can take from there.
Add it to a /checklater file, and/or ask them when they have a minute:

(Speaking into one of her earbuds)
Hey Christa, you mentioned earlier today wanting a new comforter. I found the 11 best and filtered them based on whether anyone we know and respect has tried any of them.
Micah has one, actually, and he LOVES it. Here's a clip of him talking about how much he recommends it.
(clip plays)
I think this is the one we should get, so I searched 412 places selling it and found out it's going on sale for 23% less than anywhere else tomorrow morning at 4:30am.
I can get it for you then if you want. Just let me know.
Keep in mind, this whole process might have been over 1,000 API requests to various business daemons, personal daemons, and other sources. But Christa's DA (her name is Kas), did all that plus all the follow-up research in about 3 seconds.
And Kas will be up at 4:30 to make the purchase while Christa sleeps because Kas never sleeps herself. Her entire purpose in the world—which she pursues 24 hours a day all year round—is making Christa's life as awesome as possible.
And our DAs won't do this once in a while. They'll do it constantly. Continuously. Perpetually. All day every day.
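The comforter flow above can be sketched in a few lines: fan requests out to many business daemons, filter by the owner's trusted social graph, then pick the best deal. All the data here is made up for illustration.

```python
# Toy mediation: listings from business daemons, reviews from
# personal daemons in the owner's trusted network.
listings = [
    {"seller": "ShopA", "product": "CloudSoft Comforter", "price": 120},
    {"seller": "ShopB", "product": "CloudSoft Comforter", "price": 92},
    {"seller": "ShopC", "product": "BudgetBlanket",       "price": 40},
]
friend_reviews = {"CloudSoft Comforter": {"friend": "Micah", "verdict": "loves it"}}

def mediate(listings, reviews):
    # Keep only products someone we trust has actually vouched for...
    vouched = [l for l in listings if l["product"] in reviews]
    # ...then take the cheapest listing among them.
    return min(vouched, key=lambda l: l["price"])

best = mediate(listings, friend_reviews)
print(best)  # ShopB's listing: vouched for, and cheapest of those
```

In reality this is a thousand API calls, not three list entries, but the shape of the work is the same: gather, filter by trust, optimize.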
Here are some examples:
These are just a few.
As the tech gets better and better, it'll be able to not just mediate your interactions with the world's services, but it'll be able to actively filter and shape it according to what's best for you.
Let's look at that next.
The major advantage of Component 1 in this list is that our DAs will know more about us than anyone, often including ourselves.
This includes our vulnerabilities.
Some of us are young and angry, and we can be swayed by being offered a scapegoat. Others among us are lonely, and we can get Pig Butchered and have all our money stolen. Or maybe we've been traumatized in the past, and people know how to push our buttons. Others are simply gullible, and can fall for all manner of scams and trickery.
Here are some examples of how DAs will help protect their principals.
This of course raises the question of ideology and perspective, so there will be many versions of these filters and shields that protect their owners from whatever that shield creator deems dangerous. People gunna people.
But it won't just be propaganda that it's guarding against.
Our DAs will also have access to modules that protect their owners in the tangible world.
They'll always be listening to local social networking traffic. Watching cameras made available from any public source, or from any private citizens. And observing the behavior from the vicinity wherever they are.
If they hear something or see something, they'll immediately display something to their owner, or speak it in their ear.
Hey—sorry to interrupt—there's a suspected shooter in your area.
Take Aiden and go out the back by the bathrooms. There's an exit there. Go out that exit and to the left right now.
But they won't just monitor them; they'll monitor everyone they care about. Their dog at home. Their kids if they're a parent. Their girlfriend sitting in traffic.
Hey—Sarah just had a minor accident on the way to work.
She's not hurt, but a little shaken up and it looks like her laptop is destroyed. Emergency services are on the way.
Would you like me to video call her for you?
The peace of mind this will give the owner will be immeasurable, and it hits right at the center of our first aspect of human predictability—Security.
Here are some other ways our DAs (using third-party modules) will protect us:
The thing to realize about all these is that they'll all be happening 24/7, including while you sleep. While you're distracted. While you're vulnerable. Your DA will be continuously looking out for you.
Each of these modules will be highly specialized for their specific task, and they will require special data feeds, specialized UIs, and all sorts of custom functionality.
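A protection module is basically a loop: watch incoming event feeds, score them, and interrupt the owner only above some threshold. The event shapes, tags, and weights below are invented purely for illustration.

```python
# Toy protection module: filter a noisy event feed down to the few
# things worth interrupting a human for.
ALERT_THRESHOLD = 0.8

def assess(event):
    # A real module would use a model; this toy scorer keys off tags.
    weights = {"shooter": 0.95, "accident": 0.6, "spam": 0.1}
    return weights.get(event["tag"], 0.0)

def monitor(feed):
    alerts = []
    for event in feed:
        if assess(event) >= ALERT_THRESHOLD:
            alerts.append(f"Hey, sorry to interrupt: {event['msg']}")
    return alerts

feed = [
    {"tag": "spam",    "msg": "discount pillows"},
    {"tag": "shooter", "msg": "there's a suspected shooter in your area."},
]
print(monitor(feed))  # only the serious event interrupts the owner
```

The hard part isn't the loop; it's the specialized data feeds and scoring models each module brings, which is exactly why they'll be sold as modules.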
The creation and sale of these modules will be a massive part of the economy, which is the next component.
DAs have lots of options when it comes to picking the right module to use to help their owner. Dozens. Hundreds. Thousands. More.
It'll be a marketplace, and the DA will pick the one with the best features for their particular use case. And the one within their budget.
Going back to Component 2, DA modules will basically include every company in existence because every company will effectively be an API.
Why? Because they want their products and services available to everyone on the planet—which means being available to their DAs. And the way to be available to DAs is to be published in the marketplace with standardized inputs and outputs.
So everyone's DAs will constantly be doing this discovery process where they're finding new Modules that might be good for their principal, checking their functionality, their ratings, etc., and seeing if they should switch to a new one.
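The discovery loop might look something like this sketch: scan the marketplace, keep candidates that fit the task and budget, and only switch if the newcomer actually beats the current module. The marketplace entries are hypothetical.

```python
# Toy module marketplace and the DA's selection logic.
marketplace = [
    {"name": "GuardDog",   "task": "security", "rating": 4.2, "price": 5},
    {"name": "Sentinel",   "task": "security", "rating": 4.8, "price": 9},
    {"name": "CheapWatch", "task": "security", "rating": 3.1, "price": 1},
]

def best_module(task, budget, current=None):
    candidates = [m for m in marketplace
                  if m["task"] == task and m["price"] <= budget]
    best = max(candidates, key=lambda m: m["rating"])
    # Only switch if the newcomer actually beats what we have.
    if current and current["rating"] >= best["rating"]:
        return current
    return best

print(best_module("security", budget=10))  # Sentinel wins on rating
print(best_module("security", budget=4))   # budget forces CheapWatch
```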
Types of companies/APIs/Modules:
Examples of Modules
The way DA Modules display their data to their principals will be a huge part of how popular a module is. This interface issue will also turn out to be Zuckerberg's salvation because it'll bring us the first real vision of a Metaverse.
Importantly, DAs won't just show us things when we ask for them. They'll be constantly presenting reality as we want to see it, using whatever the best AR glasses/lenses are at the time, filtered through all our various enabled modules.
Let's talk about that interface now.
Going back to our opening comments about what humans want, people will want to see the world in dramatically different ways because they will value different things.
Some will be focused mostly on safety and security. Others will be all about networking and career progression. Others will be looking for love and companionship. Using the best AR glasses/lenses of the moment, they'll be able to tune their view of the world for those specific things.
Some examples:
RoseColored Lens:
// Reality is depressing; highlight everything good happening around me

What will be so cool about these AR modules is that they'll leverage all the previous components we've talked about. Everything will be broadcasting a daemon, and at least some of that data will be readable by your DA. And that data can then be part of your view of the world.
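A "RoseColored Lens" style module might work like this sketch: take the daemon data your DA has already collected, keep only what the filter's values allow, and hand the rest to the AR renderer. All the fields here are invented.

```python
# Toy AR filter module: the filter decides what survives; a renderer
# decides how it looks in the glasses.
def rose_colored(feed):
    return [item for item in feed if item["sentiment"] > 0]

def render(items):
    return [f"* {item['label']}" for item in items]

nearby_daemons = [
    {"label": "Street festival two blocks away", "sentiment": 0.9},
    {"label": "Argument outside the bar",        "sentiment": -0.7},
    {"label": "Dog meetup in the park",          "sentiment": 0.8},
]
print(render(rose_colored(nearby_daemons)))  # only the good stuff shows
```

Swap the filter function and the same pipeline becomes a security lens, a networking lens, or a dating lens.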
More visual examples/ideas for personal Aura display:
So those are some of the ideas for how AR will be used to display Daemon/Aura information around us. They're focused on people because that's what I care about most, but there will be tons of filters for viewing cities and other environmental views as well.
Next, let's look at how our primary DA can be significantly enhanced by supplemental, cooperative, and subordinate Assistant DAs controlled by our primary.
So far, we've been talking about our one DA using various modules to provide functionality.
This is powerful, but I think DAs are more interesting and powerful as digital assistants because 1) they will fully understand you, and 2) they will have their own personalities and perspectives.
Not in the sense that they're aware or conscious (that's out of scope for this piece, and will likely come much later than the timeframe I'm discussing here), but in the sense that they've been given that personality, or they chose it for themselves randomly. Whatever.
The point is that DAs will be full AIs capable of emulating a real person, including having a personality, interests, preferences, etc.
So let's say you're a super shy person named Kendrick. You might have picked a DA that isn't just like you, but that is a great complement to you. So your DA's name is Tan, and he's actually outgoing, and funny, and adventurous. And even a little mischievous. He balances you out. And for the killjoys out there—yes—you put in your DA creation questionnaire that you're looking to come out of your shell, and you need someone to help you do that.
Anyway, regardless of how Tan came about, that's who he is. He's always trying to hook you up with girls, get your writing published in different places, and that kind of thing. You're shy. He's outgoing. It's just the two of you and that's just fine.
But what if you had other helpers in life?
Your primary DA is already great at programming, but he's mostly good at individual applications, scripts for doing specific tasks, and other basic and intermediate stuff.
You need something that can build entire applications with lots of moving parts. You've heard about this new company CODEX, that makes expert programmer DAs. Here's what CODEX says their DAs do:
Capabilities:
Features:
…etc.
So you have your regular DA, Kai, and you decide to subscribe to this new CODEX DA, which you customize and name Loop.
Let's say you're in cybersecurity. You're a pentester, or you do bug bounties, or something else in offensive security. Or maybe you're on the defensive side. Maybe you're worried about your attack surface, and how it appears to attackers.
Kai is already good at doing tons of infosec-related research, and can even hit APIs and use DA modules to do even more. But Kai doesn't sit around thinking about security the way you do.
Enter GLiTCH, a new Hacker DA by B4stiON. Here's what it can do for you whether you're Blue or Red.
Features:
So now Kai has a friend named Chaos—your customized GLiTCH DA. Chaos is very blue-focused, unless you tell him not to be. And if you get too much hate online, he starts asking if he can pre-emptively hack back.
So that's two examples with a bit of color, but there will be thousands of these things.
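One way the primary DA might coordinate all this: keep a roster of subordinate DAs and hand each task to whoever claims the matching skill. The names come from the examples above; the routing scheme itself is a guess.

```python
# Toy task router: the primary DA (Kai) delegates to specialists.
specialists = {
    "Loop":  {"skills": {"programming", "architecture"}},
    "Chaos": {"skills": {"security", "osint"}},
}

def route(task, skill, primary="Kai"):
    for name, da in specialists.items():
        if skill in da["skills"]:
            return f"{primary} -> {name}: {task}"
    # No specialist claims the skill, so the primary handles it.
    return f"{primary} handles it: {task}"

print(route("audit my attack surface", "security"))  # goes to Chaos
print(route("draft a birthday message", "writing"))  # Kai keeps it
```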
Like we said above, it's not just about what they can do, but how they see the world, how they approach problems, the fact that they're so fiercely advocating for you, and crucially—they have their own personalities.
Some additional supplemental DA ideas:
Again, your primary DA will have access to modules/APIs that will make it good enough to do most of these. But there are three main issues there:
Ok, this is a lot of ideas, and a bunch of art that's hopefully fun while helping paint a picture. But I don't believe for a second that we're talking about fantasy, sci-fi, or the theoretical here—or that these changes aren't going to have major impacts on society.
On the security side, I have good news and bad news. Mostly bad news.
The things that can—and will—go wrong with the ecosystem I'm describing are legion. Here are the two worst, in my opinion:
Whether the hack hits the person themselves, their mobile OS, or the company providing the DA, it doesn't really matter.
Digital Assistant hacks will be like no other.
When you have the type of data in a system that we talked about in the first component, and you lose that data, it can be a catastrophic life event. Worse than losing all your money, losing your job, etc.
First, you might just actually lose that stuff. Like not have it backed up when some sort of error occurs. If you're close with your DA, and they become the closest thing/person to you over multiple years, and they suddenly show up one day and don't know your name…well, you're going to need a Therapist DA.
That's bad enough, but it's nothing compared to if that data is stolen/ransomed. Imagine what a modern (DA-powered, by the way) ransomware crew can do if they have not just your financial data, but now they basically have your entire life.
This stuff exists today, but so much of it either isn't online, or it's in a hundred different tech platforms. With Digital Assistants, people will be persuaded by functionality to unify it into one place—online—all accessible by their Digital Assistant.
The ability to blackmail, extort, and otherwise destroy people's lives from a personal hack will be infinitely worse than it is today.
Hacking the Digital Assistants will have the most personal impact, but society-wise the biggest risk, in my opinion, has to do with AI Agents like DAs having access to all the APIs we talked about in the second section.
That's personal Aura/APIs—which is bad for similar reasons as hacking DAs—but it's also the entire global infrastructure of Corporate and Government APIs.
To me it's multiplicative: the more capable the AI gets, the worse it gets. And the more of our global infrastructure we turn into APIs, the worse it gets. And both will skyrocket at the same time once this starts hitting.
The first two concerns I have above are technical in nature. In other words, something happened to the system that it wasn't designed for. A company got hacked. An attacker emulated a real user and got access to their DA without authorization.
But the one that's even more scary is when that doesn't happen, and things work exactly as they're supposed to.
Except the system is designed specifically to manipulate, bias, and otherwise influence the principal. Examples include making people:
We've all seen many examples, both fictional and real, of mass-influence campaigns. Advertising is the best example.
But now imagine where you can pay people (who maybe don't have a lot of money because AI took their jobs) to use a particular DA or DA Module, and that DA has the explicit goal of getting their principal to think and/or behave in a particular way.
And because they control all the inputs to the principal's life, they have every opportunity to do that in a subtle way.
Great question. In my model, however, it doesn't matter if it's a good or bad thing. Going back to the core idea of predictability, this is what we humans want.
And so it will happen.
In my opinion, there isn't anything anyone can do to stop this. There will be hacks that slow things down a bit, and regulation will add some friction—but nothing will stop it. The functionality is just too compelling.
So the best thing to do from a security standpoint—both as industry practitioners and as consumers—is to understand what's coming and get ready.
Here are some random and illustrative use cases that cross all 7 components.
You're waiting in line at Starbucks, and Kai (your DA) is continuously reading all the public Daemons (things) and Auras (people) around you. Kai lights up a girl in front of you because she matches on so many things.
So Kai starts talking to her DA, Tara, and now he and Tara are about to tell you two where to look so you see each other from across the room.
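What Kai and Tara negotiate could be as simple as comparing the overlap of two public Auras and flagging a match above a threshold. The fields and scoring (plain Jaccard similarity) are invented for illustration.

```python
# Toy Aura matchmaking between two DAs.
def match_score(aura_a, aura_b):
    shared = aura_a["interests"] & aura_b["interests"]
    union = aura_a["interests"] | aura_b["interests"]
    return len(shared) / len(union)  # Jaccard similarity

you = {"interests": {"climbing", "thai food", "table tennis", "sci-fi"}}
her = {"interests": {"climbing", "thai food", "painting", "sci-fi"}}

score = match_score(you, her)
print(round(score, 2))  # 3 shared interests out of 5 total -> 0.6
if score > 0.5:
    print("DAs agree: tell both owners where to look")
```

The real version would weigh values, dealbreakers, and mutual friends, but the negotiation is still just two APIs comparing notes.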
Security is the first layer of our human predictability model for good reason. Basically, if you don't have your safety in order, it's hard to think about climbing a ladder or finding a partner.
And because it's such a deep priority for us, I think it'll be one of the first use cases to combine API-ification, DA mediation, and AR interfaces.
ROKAN: Hey Sarah, I'm not liking how this market looks. There have been some incidents in the past here, and I've seen some shady stuff in the last few minutes.
(shows her the AR view)
Here's what I'm seeing, and I'm going to guide you to a safer part of the market.
Take the next right.
Think about the data feeds that will enable this. People's personal cameras that they're offering to the public. Public cameras. Private security cameras that your DA can get a subscription to. Or a dedicated Security DA that already has tons of that access.
This OSINT/Security data and AR visualization space is going to be vibrant.
YOU: Kai, I'm hungry. Maybe Thai. Anything good around here?
Your DA hears that, and starts firing off API requests.
It checks /staff on Papaya's daemon/API and sees that the owner is there. It checks /media and sees that it can change two of the screens in the restaurant to Table Tennis, his owner's favorite sport. And it hits /menu and orders Panang Curry with Chicken, Spicy, and a Diet Coke.

KAI: Hey, I got it sorted. We're going to Papaya Thai. I told the owner you're coming and I've got your favorite spot and put table tennis on for you. Panang curry, spicy, and a diet coke like usual.
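That sequence, spelled out as calls: the endpoint names come straight from the example, while the daemon data and the `daemon_get` helper are hypothetical stand-ins for whatever protocol eventually wins.

```python
# Toy Papaya Thai daemon and the calls Kai makes against it.
papaya = {
    "/staff": {"owner_present": True},
    "/media": {"screens_available": 2},
    "/menu":  ["Panang Curry", "Pad See Ew"],
}

def daemon_get(daemon, endpoint):
    return daemon[endpoint]

actions = []
if daemon_get(papaya, "/staff")["owner_present"]:
    actions.append("tell the owner you're coming")
if daemon_get(papaya, "/media")["screens_available"] >= 2:
    actions.append("put table tennis on two screens")
if "Panang Curry" in daemon_get(papaya, "/menu"):
    actions.append("order Panang Curry, spicy, with a Diet Coke")

print(actions)  # the three steps Kai reports back
```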
Here are some random additional thoughts these ideas raise for me.
The consequences of DA Mediation are massive, but they will have an especially destructive effect on any tech interface that's currently designed to be used mostly by humans.
Search engines are the big one, but most UI/UX is designed to be seen and used by humans. It seems like what this does is break the whole thing into two pieces:
So your DA basically has their favorite UI/UX for things. Like catalogs of products, and when you want to browse one, it uses that interface to show you. Plus the content provider can also recommend a specific UI/UX module, or recommend that the DA use the native one it built for that purpose.
But it'll be really interesting to have functionality separated from UI/UX that way due to DA Mediation.
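The split can be sketched directly: the daemon returns structured data only, and the DA chooses a renderer module for the moment. The renderer registry here is my own invented illustration.

```python
# Toy functionality/interface split: same data, swappable renderers.
catalog_data = [{"name": "Comforter", "price": 92},
                {"name": "Pillow",    "price": 20}]

renderers = {
    "table": lambda items: "\n".join(f"{i['name']}: ${i['price']}" for i in items),
    "voice": lambda items: f"{len(items)} items, cheapest is ${min(i['price'] for i in items)}",
}

def present(data, preferred="table"):
    # The DA picks whichever UI module suits the moment (or the
    # provider's recommended one); the data itself never changes.
    return renderers[preferred](data)

print(present(catalog_data, "voice"))  # good while driving
print(present(catalog_data, "table"))  # good on a screen
```

The provider competes on data quality; interface modules compete separately on presentation. That decoupling is the interesting part.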
So let's say most people have a DA, or a set of them like their personal set of best friends. They'll be good, and they'll keep getting better as the AI advances.
Cool, but what about human friends? What about human connection? Isn't the point of tech to enhance humanity? Or shouldn't it be? What if this all gets so good that people start thinking it means it can replace humanity?
Add to that the fact that public behavior and conversations, or even those in private, are likely to have so many DAs listening and parsing them that it'll be hard to feel relaxed. Everyone will know that anything they say could be cut into a clip and sent to their work, their enemies, or whoever in a matter of seconds.
I think what it might do—hard to say really—is create two polarized approaches to this.
I see those as the edges of the spectrum, but imagine there will be people spread throughout the middle as well. And both extremes have advantages and downsides.
Well, that was a lot. This turned out to be my deepest piece of content in over 25 years of writing, and it's actually longer than the book I wrote. Lots more to add, but we'll have to leave that for additional essays.
Here is a crisp capture of the major claims and points.
The security implications are severe:
But despite the risks, this will happen because it addresses our fundamental human needs for security, success, and connection. The functionality will be too compelling to resist.
The best approach is to understand what's coming and prepare for it, both as security practitioners and as consumers.