This is where I maintain a list of my predictions about technology and society—along with their accuracy over time.
I do this because I’m obsessed with improving my model for how the world works, and I’ve found no better way to do this than making clear predictions and watching them fall apart over time (or not).
The goal is to find systemic errors in my thinking that I can then correct.
In 2016 I wrote a (somewhat crappy) book called The Real Internet of Things, where I said the future of tech was 1) AI-powered Digital Assistants that perfectly understand us as principals, 2) everything getting an API, and 3) our DAs showing us contextual information through Augmented Reality interfaces.
I believe this is exactly what we’ve been seeing happen since late 2022. OpenAI and many other companies are moving directly towards Digital Assistants as I described them. MCP is now enabling the API-ification of everything. And Meta is now first to market with AR-enabled glasses.
I specifically said AI-powered digital assistants would become our primary interface with technology. Not just as voice assistants, but as tireless advocates working 24/7 on our behalf to further our goals.
The second main idea in the book was that everything would get an API, including businesses, people, and objects, and that our Digital Assistants would interact with these daemons on our behalf, becoming the main consumers of all the daemons in the world. This is now happening with MCP (Model Context Protocol) where AI agents can directly connect to business services.
The third core prediction from the book was that Augmented Reality would become the display layer for all this data. Basically, our DAs will read all the APIs around us and present us the most relevant data at that moment through our AR displays.
You can read more about these in the full book blog post, and in this fully illustrated breakdown of the book.
In 2018, I critiqued Steven Pinker’s Enlightenment Now, arguing that despite his great stats on progress, I thought things were actually getting worse for people, and that this would accelerate. I even made a diagram describing the cycle I saw us heading towards.
Here’s a more complete table:
| Prediction | Confidence | Date Predicted | Status |
|---|---|---|---|
| Recession-like shock caused by AI job loss by 2027 | Chances About Even | July 2025 | 🔄 |
| Russia will significantly return to normal trading status by 2027 | Chances About Even | March 2023 | 🔄 |
| We’ll have AGI between 2025 and 2028 | Almost Certain | March 2023 | 🔄 |
| Open-source models will largely catch up to closed-source models | Probable | February 2023 | 🔄 |
| Apple is about to go from the worst AI implementation to the best | Probable | January 2025 | 🔄 |
| Everyone, including people, will have an API | Almost Certain | December 2016 | 🔄 |
| Personal daemons will broadcast preferences | Almost Certain | March 2014 | 🔄 |
| Venues will personalize based on customer preferences | Almost Certain | March 2014 | 🔄 |
| People will lack meaning and look for it in games | Almost Certain | July 2006 | 🔄 |
| Massive jumps in AI will be made through slack-in-the-rope advancements, i.e., “tricks” | Almost Certain | August 2024 | ✅ |
| Extreme liberals will get Trump re-elected in 2024 | Almost Certain | Mid 2020 | ✅ |
| Trump will officially end the war in Ukraine by April 1, 2025 | Likely | March 2023 | ❌ |
| People will return to Twitter within 6 months | Almost Certain | December 2022 | ❌ |
This is where I’ll try to look at misses and see what I can learn from them.
One thing I seem to have been most wrong about is the behavior of Trump in relation to Russia. I think this is a good learning opportunity for me.
My assumption was that Trump’s greed would be a giant target and that he would be easily manipulated into capitulating completely against Ukraine and completely for Russia. What has happened instead is a kind of mush: a little bit of Russian opposition and a little bit of support for Ukraine. This is not what I expected, and that means an opportunity for growth.
I will continue thinking about what I can learn from this. Perhaps one thing is simply that Trump himself is not very predictable. But that doesn’t feel like a lesson I can do much with, unless I could figure out how to determine this again in the future for others.
I’m really looking for a more universal lesson to learn.
I don’t have a specific prediction for this, but I generally did not see Elon going far-right-ish in 2023–2024. Looking back, I guess it’s rather obvious given his child and the trans issue, but I still didn’t see someone I thought was very centrist and humanist becoming that mean, angry, and hateful.
Perhaps the bigger observation can apply to both him and Trump. Perhaps my thinking flaw is believing I’ve nailed (or that it’s even possible to nail) someone’s personality type, and thinking that I can therefore predict their behavior. It’s like I’m assuming I’m so good at modeling that I now “understand” them. This is dangerous, and it’s the exact type of thinking I have this resource here to catch and learn from.
Once I make a prediction here I will not materially change it. The whole point of this is to lock it in so I can see my mistakes and tally my record over time.
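Tallying the record can be done mechanically. Here’s a minimal sketch, assuming the predictions live in a markdown table like the one above, with ✅ for hits, ❌ for misses, and 🔄 for still-open calls (the function name and row format here are illustrative, not from any existing tool):

```python
def tally(markdown_rows):
    """Count hits, misses, and open predictions from markdown table rows."""
    counts = {"hit": 0, "miss": 0, "open": 0}
    for row in markdown_rows:
        # Split "| a | b | c | d |" into its cells.
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        # Skip the header row and the |---|---| separator row.
        if len(cells) < 4 or cells[0] == "Prediction" or set(cells[0]) == {"-"}:
            continue
        status = cells[-1]
        if "✅" in status:
            counts["hit"] += 1
        elif "❌" in status:
            counts["miss"] += 1
        else:
            counts["open"] += 1
    return counts

rows = [
    "| Extreme liberals will get Trump re-elected in 2024 | Almost Certain | Mid 2020 | ✅ |",
    "| Trump will officially end the war in Ukraine by April 1, 2025 | Likely | March 2023 | ❌ |",
    "| We’ll have AGI between 2025 and 2028 | Almost Certain | March 2023 | 🔄 |",
]
print(tally(rows))  # {'hit': 1, 'miss': 1, 'open': 1}
```

Because the table itself is the locked-in record, a script like this only counts; it never rewrites a row.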
AI is great for projects like this because you can feed your whole list to a model and have it tell you the biases that are causing your errors. I do this frequently within my TELOS file and it’s quite powerful.
I have omitted a number of predictions I’ve made that came true just because they seem so obvious at this point, e.g., how deepfakes would make it so people can’t tell reality from fiction. They just don’t seem worth mentioning.