Every year I put out a piece called FrontView Mirror that looks at some trends I think could be developing. It’s something like a predictions piece, except I don’t really believe in predictions because they’re usually either obvious or wrong.
So think of it more like Potential Trend Analysis.
Here are some dynamics I see developing that I think will take shape over the next 6-36 months.
Harari has a concept in one of his books where he says government styles and social constructs, such as feudalism, democracy, etc., are all phases based on human maturity at a given point in time.
He basically says something like Feudalism works for a particular period, and then it doesn’t. Same with religion. Same with democracy.
His analysis, I believe, was that we might be heading into an Authoritarian phase. I think he’s right about that, and AI in the hands of groups like the CCP will accelerate it. But I think there’s another construct, like Feudalism and Democracy, whose phase is ending.
Corporate workers. I think the concept of corporate workers might have hit its peak.
Why? Because it doesn’t benefit both sides the way it used to. From the ’40s to the ’00s, there was a clean trade in place: corporations had full control over the worker, and the worker got stability. They had a career. Benefits. Pension. Vacation. Did I say stability?
Corporations don’t offer stability anymore. Corporate employment is far too similar to freelance or contract work.
The other thing is that corporate workers used to be obsessed with the job. Dedicated to it in keeping with the military origin of the word "company". Hierarchy. Rank. Dedication.
That too has been diluted. It’s basically bankrupt in both directions. Corporate workers are largely checked out, and corporations are willing to hire and fire like they’re running a pop-up flea market.
The Future of Work is Everyone Running the "Work" App
I wrote about this back in 2014:
danielmiessler.com/p/the-future-of-work-is-everyone-running-the-work-app
I think what replaces it is smaller startup companies that have the full religious buy-in of previous big companies. Powered by AI, of course, so they can get more done. But even those employees are really independent players dedicated to a joint cause for those few years. They’re not corporate workers; they’re mercenaries on a project.
For individual workers, the game will become (and it’s already starting) how well you can advertise your skills.
You are not your corporate job. You are the problems you can solve. You are the solutions you can create. You are the outcomes you can produce. Those are advertised in your personal API, on your website, and on federated work services like LinkedIn that connect people who need work done and the people who can do the work.
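One way to make the "personal API" idea concrete is to imagine what such an endpoint might actually return. This is purely a hypothetical sketch: the field names (`problems_solved`, `outcomes`, `availability`), the person, and the URLs are all invented for illustration, not any existing standard or the author’s actual implementation.

```python
import json

# Hypothetical "personal API" payload -- every field name and value here
# is illustrative, not a real schema or a real person.
PERSONAL_API = {
    "name": "Jane Example",  # placeholder person
    "problems_solved": [
        "cloud security architecture reviews",
        "incident response playbook design",
    ],
    "outcomes": [
        "reduced mean time to detect on a previous engagement",
    ],
    "availability": "contract / project-based",
    "links": {
        "website": "https://example.com",  # placeholder URLs
        "linkedin": "https://www.linkedin.com/in/example",
    },
}

def serve_profile() -> str:
    """Return the profile as JSON, as a /profile endpoint might."""
    return json.dumps(PERSONAL_API, indent=2)

print(serve_profile())
```

The point isn’t the format; it’s that problems, solutions, and outcomes become machine-readable, so services that match work to workers can consume them directly.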
The 2014 piece linked above explores this idea in more depth.
- Start thinking of yourself this way now
- Capture yourself properly on your website
- Your website and LinkedIn become your offer of value to the world
- Help your loved ones (especially young ones) get ready for this world
We can no longer rely on corporations, and corporate work, to be our sense of identity. We must define ourselves, in our own way, and present that to the world.
It’s getting harder and harder to know who to trust.
Joe Rogan is extraordinarily clear-minded, and liberal, on about 80% of topics. But he’s also lost (in my opinion) on vaccine efficacy. Jordan Peterson is the same. Same with Marc Andreessen.
I think I’m a pretty good filter, but you no doubt trust me more on some topics than you do on others. And there are some things like Apple vs. Android where I have clear biases that shape my analysis.
Meanwhile, COVID happens and the global health authorities can’t deliver a consistent, logic-based approach to keeping us safe. Then the WHO comes out and says Aspartame causes cancer and the FDA says "nuh-uh".
What are we supposed to do when we follow all these "experts" and "authority" figures, and they’re either inconsistent with themselves or with each other?
I think the answer will be AI-powered agents helping you triangulate truth. Note: That’s another bias I have: thinking so many things will be solvable with AI. Also Note: I think I’m right about that, though. Also Note: But that’s what I would say.
Basically, once we’re fully enrolled in our AI Assistant, it’ll know what we like and don’t like, and we can tell it we’re trying to triangulate the truth. We’ll tell it that we know our experts are somewhere between 60 and 95% right, or whatever. And that when they say something whacky, it’s the assistant’s job to go find out what the consensus is, and what our other favored experts say, and then triangulate from there.
Then it can give us some sort of score or rating of how likely this whacky idea is to be novel and brilliant, vs. kooky and wrong.
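A minimal sketch of what that scoring could look like, assuming the simplest possible model: weight each source’s verdict on a claim by how reliable you believe that source to be, and return the weighted share that endorses it. The sources, reliability numbers, and verdicts below are all made up for illustration.

```python
# Toy "triangulation" score: weight each source's verdict on a claim
# by your own estimate of how often that source is right.
# All names and numbers here are invented for illustration.

def triangulate(verdicts: dict, reliability: dict) -> float:
    """Return a 0..1 plausibility score for a claim.

    verdicts:    source -> True if the source endorses the claim
    reliability: source -> your estimate (0..1) of how often they're right
    """
    total = sum(reliability[s] for s in verdicts)
    if total == 0:
        return 0.5  # no signal at all: stay agnostic
    support = sum(reliability[s] for s, agrees in verdicts.items() if agrees)
    return support / total

# Hypothetical example: two favored experts back a claim the consensus rejects.
score = triangulate(
    verdicts={"expert_a": True, "expert_b": True, "consensus": False},
    reliability={"expert_a": 0.8, "expert_b": 0.6, "consensus": 0.9},
)
print(round(score, 2))  # -> 0.61
```

A real assistant would do something far richer (evidence retrieval, calibration over time), but the core move is the same: treat each source as probabilistic and let the weights do the arguing.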
- Think about all your sources
- Remember that they’re wrong some percentage of the time
- Figure out how to use them (and the wisdom of the masses/experts) to check each other
- Treat most truth as probabilistic and flawed, like models vs. canon
I had an epiphany over the last few months: you can tell something is mature when it’s mostly run by process instead of people. Or, more pointedly: the more mature a discipline, the more human intervention in a process is considered a bad thing.
The military is one example. Another is a company like ExxonMobil. If you read books about that company, you can see that it, like the military, is run by processes. The people are there to steward and improve the processes. And to follow them. Not to work outside them.
Security is largely the inverse of that. Which is why it’s so fun.
The early form of any discipline is chaotic and dangerous and exciting. Navigating the oceans. Hacking computers. Whatever. Nobody knows what they’re doing. All innovation moves the state of the art. Practitioners are wizards more than anything else.
It’s alchemy, not chemistry.
Then, as things settle down, over the decades, the science starts to seep in. It’s more efficient to use science. It saves money. It yields more predictable results.
But it starts to get boring. Think accounting.
Well, AI is going to bring accounting to infosec. It’s going to map the inputs to the outputs, pushed by insurance companies. They want to know more than anyone, and they’ll be the first to construct these maps.
Once that happens, it’s going to get harder and harder to be a wizard.
That means fewer charisma-based security leaders, and more CFO types who use data to drive outcomes.
- Start thinking of your security efforts in terms of processes, i.e., inputs and outputs
- Find every situation in your org where heroism/wizardry is required, and try to replace it with better process
- This doesn’t mean there isn’t a place for wizards; they should be dedicated to breaking and improving process
- Train everyone you care about (especially the young ones) on how to build great processes, and how to be wizard enough to find and fix flaws in them
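To make the inputs-to-outputs framing tangible, here’s a toy sketch of turning one "wizardry" decision (vulnerability triage) into an explicit process: a pure function from defined inputs to a defined output. The severity levels, asset tiers, and SLA days are invented for illustration, not any real standard.

```python
# Toy example: triage as a process instead of a judgment call.
# The table below (severities, asset tiers, SLA days) is made up.

SLA_DAYS = {
    # (severity, asset_tier) -> days allowed to remediate
    ("critical", "crown_jewel"): 1,
    ("critical", "standard"): 7,
    ("high", "crown_jewel"): 7,
    ("high", "standard"): 30,
    ("medium", "crown_jewel"): 30,
    ("medium", "standard"): 90,
}

def triage(severity: str, asset_tier: str) -> int:
    """Map a finding to a remediation deadline -- no hero required."""
    try:
        return SLA_DAYS[(severity, asset_tier)]
    except KeyError:
        # Gaps in the table are exactly where the wizards should work:
        # find the flaw, then extend the process to cover it.
        raise ValueError(f"no rule for {severity}/{asset_tier}; escalate")

print(triage("critical", "standard"))  # -> 7
```

The interesting part is the `KeyError` branch: the process handles the known cases, and the wizard’s job shrinks to finding the cases it doesn’t cover yet.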