Predictions

A set of stochastic predictions about security, tech, and society

This is where I maintain a list of my predictions about technology and society—along with their accuracy over time.

I do this because I’m obsessed with improving my model for how the world works, and I’ve found no better way to do this than making clear predictions and watching them fall apart over time (or not).

The goal is to find systemic errors in my thinking that I can then correct.

Primary, Topical Predictions

Tech future

Predicting the Emerging Tech Stack

In 2016 I wrote a (somewhat crappy) book called The Real Internet of Things where I said the future of tech was 1) AI-powered Digital Assistants that perfectly understand us as principals, 2) everything getting an API, and 3) our DAs showing us contextual information through Augmented Reality interfaces.

I believe this is exactly what we’ve been seeing happen since late 2022. OpenAI and many other companies are moving directly towards Digital Assistants as I described them. MCP is now enabling the API-ification of everything. And Meta is now first to market with AR-enabled glasses.

Digital Assistants as Primary Interface

I specifically said AI-powered digital assistants would become our primary interface with technology. Not just as voice assistants, but as tireless advocates working 24/7 on our behalf to further our goals.

Everything Gets an API

The second main idea in the book was that everything would get an API, including businesses, people, and objects, and that our Digital Assistants would interact with these daemons on our behalf, becoming the main consumers of all the daemons in the world. This is now happening with MCP (Model Context Protocol) where AI agents can directly connect to business services.
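
To make the API-ification idea concrete, here’s a minimal sketch of a business exposing one capability as an MCP tool, assuming the official Python MCP SDK and its FastMCP helper. The coffee-shop daemon and its check_availability tool are hypothetical examples, not a real integration.

```python
# A minimal sketch of a "business daemon" exposing one capability over MCP.
# Assumes the official Python MCP SDK (`pip install mcp`); the coffee-shop
# service and its data are made up for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("coffee-shop")

@mcp.tool()
def check_availability(party_size: int) -> str:
    """Report whether the shop can seat a party of the given size right now."""
    open_seats = 4  # a real daemon would pull this from the business's own systems
    if party_size <= open_seats:
        return f"Yes, we can seat {party_size} right now."
    return f"Sorry, only {open_seats} seats are open at the moment."

if __name__ == "__main__":
    # Runs over stdio so an AI agent (a DA) can discover and call the tool.
    mcp.run()
```

A Digital Assistant pointed at this server could then call check_availability the same way it calls any other daemon, which is exactly the consumption pattern described above.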

AR Displays for DA-controlled Contextual Overlays

The third core prediction from the book was that Augmented Reality would become the display layer for all this data. Basically, our DAs will read all the APIs around us and present us the most relevant data at that moment through our AR displays.
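
As a toy illustration of that selection step, here’s a sketch in plain Python: assume the DA has already polled a few nearby daemons and scored each reading for relevance to the user’s current context, and it now picks the top few to push to the AR display. All names and data here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DaemonReading:
    """One piece of data the DA pulled from a nearby daemon's API (hypothetical)."""
    source: str
    payload: str
    relevance: float  # 0.0-1.0, scored by the DA against the user's current context

def select_overlay_items(readings: list[DaemonReading], max_items: int = 3) -> list[DaemonReading]:
    """Pick the few most relevant readings to surface in the AR display."""
    return sorted(readings, key=lambda r: r.relevance, reverse=True)[:max_items]

# Example: three nearby daemons, scored against the user's goals for this moment.
readings = [
    DaemonReading("coffee-shop", "2 seats open, quiet until 3pm", 0.81),
    DaemonReading("transit-stop", "Next bus arrives in 4 minutes", 0.92),
    DaemonReading("retail-store", "Sale on items you never buy", 0.12),
]

for item in select_overlay_items(readings):
    print(f"[AR overlay] {item.source}: {item.payload}")
```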

You can read more about these in the full book blog post, and in this fully-illustrated breakdown of the book.

Humans and Society

Pinker and Thriving

In 2018, I critiqued Steven Pinker’s Enlightenment Now, arguing that despite his great stats on progress, things were actually getting worse for people and that this would accelerate. I even made a diagram describing the cycle I saw us heading towards.


Current Predictions Table

Here’s a more complete table of my predictions:

| Prediction | Confidence | Date Predicted | Status |
|---|---|---|---|
| Recession-like shock caused by AI job loss by 2027 | Chances About Even | July 2025 | 🔄 |
| Russia will significantly return to normal trading status by 2027 | Chances About Even | March 2023 | 🔄 |
| We’ll have AGI between 2025 and 2028 | Almost Certain | March 2023 | 🔄 |
| Open-source models will largely catch up to closed-source models | Probable | February 2023 | 🔄 |
| Apple is about to go from the worst AI implementation to the best | Probable | January 2025 | 🔄 |
| Everything, including people, will have an API | Almost Certain | December 2016 | 🔄 |
| Personal daemons will broadcast preferences | Almost Certain | March 2014 | 🔄 |
| Venues personalize based on customer preferences | Almost Certain | March 2014 | 🔄 |
| People will lack meaning and look for it in games | Almost Certain | July 2006 | 🔄 |
| Massive jumps in AI will be made through slack-in-the-rope advancements, i.e., “tricks” | Almost Certain | August 2024 | ✅ |
| Extreme liberals will get Trump re-elected in 2024 | Almost Certain | Mid 2020 | ✅ |
| Trump will officially end the war in Ukraine by April 1, 2025 | Likely | March 2023 | ❌ |
| People will return to Twitter within 6 months | Almost Certain | December 2022 | ❌ |

Correct Predictions

| Prediction | Confidence | Date Predicted | Status |
|---|---|---|---|
| Extreme liberals will get Trump re-elected in 2024 | Almost Certain | Mid 2020 | ✅ |
| Massive jumps in AI will be made through slack-in-the-rope advancements, i.e., “tricks” | Almost Certain | August 2024 | ✅ |

Missed Predictions

| Prediction | Confidence | Date Predicted | Status |
|---|---|---|---|
| Trump will officially end the war in Ukraine by April 1, 2025 | Likely | March 2023 | ❌ |
| People will return to Twitter within 6 months | Almost Certain | December 2022 | ❌ |

Analysis

This is where I’ll try to look at misses and see what I can learn from them.

Observation 1: Trump and Russia

One thing I seem to have been most wrong about is the behavior of Trump in relation to Russia. I think this is a good learning opportunity for me.

My assumption was that Trump’s greed would make him a giant target, and that he would be easily manipulated into capitulating completely to Russia and against Ukraine. What has happened instead is a mush: a little bit of opposition to Russia and a little bit of support for Ukraine. This is not what I expected, and that means an opportunity for growth.

I will continue thinking about what I can learn from this. Perhaps one lesson is simply that Trump himself is not very predictable. But that doesn’t feel like a lesson I can do much with, unless I can figure out how to spot that kind of unpredictability in other people ahead of time.

I’m really looking for a more universal lesson to learn.

Observation 2: Elon Going Astray

I don’t have a specific prediction for this, but I generally did not see Elon going far-right-ish in 2023-2024. Looking back, I guess it’s rather obvious given his child and the trans issue, but I still didn’t expect someone I thought was very centrist and humanist to turn that mean, angry, and hateful.

Perhaps the bigger observation applies to both him and Trump. Perhaps my thinking flaw is believing I’ve nailed (or that it’s even possible to nail) someone’s personality type, and thinking that I can therefore predict their behavior. It’s like I’m assuming I’m so good at modeling that I now “understand” them. This is dangerous, and it’s exactly the type of thinking this resource exists to catch and learn from.

Takeaways

Stop Thinking You “Get” Complex People

  • You’ve been wrong multiple times about people like Elon and Trump, and you generally have a bias towards thinking you have a better theory of mind than you actually do. Watch this carefully.

Notes

  1. Once I make a prediction here I will not materially change it. The whole point of this is to lock it in so I can see my mistakes and tally my record over time.

  2. AI is great for projects like this because you can feed your whole list to a model and have it tell you the biases that are causing your errors (see the sketch after these notes). I do this frequently within my TELOS file and it’s quite powerful.

  3. I have omitted a number of predictions I’ve made that came true just because they seem so obvious at this point, e.g., how deepfakes would make it so people can’t tell reality from fiction. They just don’t seem worth mentioning.
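
For note 2, here’s a minimal sketch of that workflow, assuming the official OpenAI Python SDK and an API key in the environment. The model name, file name, and prompt are illustrative; any capable model or provider would work.

```python
# A minimal sketch: feed the full prediction list to a model and ask for bias patterns.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# The predictions table above, exported as plain text (hypothetical file name).
predictions = open("predictions.md").read()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a forecasting coach."},
        {
            "role": "user",
            "content": (
                "Here is my full list of predictions with confidence levels, dates, "
                "and outcomes. Identify recurring biases or systemic errors in my "
                "thinking:\n\n" + predictions
            ),
        },
    ],
)

print(response.choices[0].message.content)
```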