On the Science and Philosophy of DEVS
I think DEVS is probably the best show I’ve seen since Game of Bad Endings. And DEVS had a great ending, so that puts it on top.
This piece contains spoilers.
I am particularly excited when a project can combine great ideas, story, and acting in a single effort. Upload is a great show, for example, but it wants for two of the three.
DEVS hits all three for me. But rather than talk about how great it is, which is kind of boring, I’m going to talk about what I took issue with.
1. They never talk about how to get around the complexity of simulating that many variables
I get that the God in the Machine is the secret-sauce quantum tech they developed, but I didn’t see them address how that’s supposed to allow them to simulate entire universes.
The complexity is mind-blowing for doing such a thing for a single moment in time. Even if the processing were possible using something magically quantum, I can’t fathom the memory and storage that’d be required.
They seemed to perform a magician’s trick by talking about whether the universe is singular vs. the multiverse model, but that doesn’t do it for me. It’s a clear distraction from the level of resources that would be required—even if you could somehow run the computations (because quantum).
That must be some compression algorithm.
Perhaps I simply don’t grok enough of the relevant science here.
In short, none of this gets around the problem of needing a universe to store a universe. Sure, maybe you have an amazing compression algorithm (space is full of space, after all), but still…
We’re talking about storing a universe for every instant in the universe, which, even if you were only saving changes, would be colossal.
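To put rough numbers on that storage problem, here's a back-of-envelope sketch. The figures are loose, commonly cited estimates (not from the show), and the "one bit per particle" assumption is absurdly generous in the simulation's favor:

```python
# Back-of-envelope: storage for ONE snapshot of the observable universe.
# Rough, commonly cited estimates -- purely illustrative:
PARTICLES = 10**80        # ~particles in the observable universe
ATOMS_ON_EARTH = 10**50   # ~atoms making up the Earth

# Absurdly optimistic assumption: 1 bit of state per particle.
bits_per_snapshot = PARTICLES * 1

# Even if every atom on Earth could somehow store one bit,
# how many Earths would a single snapshot need?
earths_needed = bits_per_snapshot // ATOMS_ON_EARTH
print(f"{earths_needed:.0e} Earths of atom-scale storage per snapshot")
# -> 1e+30 Earths of atom-scale storage per snapshot
```

And that's one frame. Multiply by a snapshot (or even just a delta) for every instant of the universe's history and the numbers stop meaning anything.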
Again, it’s sci-fi. And they did show us the one artifact/idea that we’re supposed to accept and move on from. But because they were so technical with many parts of the science, I would have loved to hear a theoretical explanation for how they solved the problem of saving that much state data.
2. They messed up the hard determinism bit when they showed people what they’d do seconds, minutes, or days in the future
The scene where the guy was seeing himself react about a second in the future was spectacular. The reason it worked, and felt realistic, is that he didn’t have time to review the prediction and adjust.
This doesn’t work, even under hard determinism, when you show someone what they’ll do an hour from now. Let’s look at why.
Even if you take away true, absolute free will from the equation, humans are processing engines capable of arriving at a desired path of execution.
If you show me that I’ll pick heads 10 times in a row when you flip a coin, and you offer to pay me $1,000 if I pick tails, my brain can calculate that I want to defy your prediction, or that I want the money. So I can easily end up saying the word “tails” the next time the coin is in the air.
That’s not breaking determinism. That’s processing a flow of inputs that includes you giving me a reason to not say heads. That flow is deterministic, but it doesn’t look like it unless you 1) can see all the variables, or 2) can look backward at the choice that was made.
Second-order chaos is chaos that responds to prediction.
This is the difference between first- and second-order chaos. We can predict the weather even though it’s extremely complex, to the point of being pseudo-random. But we can’t predict the stock market, because it depends on how people react to the predictions themselves.
Weather is predictable because it doesn’t change its behavior when a prediction has been made. Humans can. And that doesn’t require breaking determinism. It’s just a stream of inputs that we react to.
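The coin-flip scenario above can be sketched in a few lines. This is purely illustrative (a hypothetical toy, not anything from the show): the agent is a fully deterministic function, but feeding its own prediction back in as an input changes the output.

```python
def agent(prediction=None):
    """A toy deterministic agent: identical inputs always give the
    identical choice -- but a prediction of its own behavior can be
    one of those inputs."""
    choice = "heads"  # baseline behavior: always call heads
    # If the agent is shown a prediction of its own choice (and has an
    # incentive to defy it), it deterministically flips its answer.
    if prediction == choice:
        choice = "tails"
    return choice

print(agent())         # no prediction in the input stream -> heads
print(agent("heads"))  # the prediction itself is a new input -> tails
```

Nothing non-deterministic happens here. The “prediction” is just one more input, which is exactly why showing someone their own future an hour in advance breaks the forecast without breaking determinism.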
The real issue here is another sci-fi concept that wasn’t explored, which is that of the paradox. John Connor sending someone back to protect him, which resulted in him being born, for example.
If time is determined, and you go back and change the inputs at some point in the stream, that changes everything after that point. As soon as the inputs change, the outputs do as well. That’s determinism.
Showing a rational agent a future outcome they want to avoid (the result of choosing option A) allows them to take option B instead. That’s not free will; that’s just processing what you think will happen vs. what you want to happen, based on modeling various scenarios.
Change the information you have and your processing might “choose” a different path. And that’s what showing the future does: it changes the inputs. That’s second-order chaos.
3. The breaking of determinism by having the hero make a “true choice” was the weakest part of the whole show
This was super annoying. You either have a show based on hard determinism or you don’t.
The magic was the quantum computer that made this all possible. You can’t add new magic in the last episode. It’s against the sci-fi rules.
She’s the first human to make a free choice? Really? Out of all the billions of universes? Over their billions of years of existence? A single person did something that nobody else has?
How? Why? That’s actually far less realistic than a massive quantum computer that can calculate shit.
I would have preferred her doing something innovative based on her knowledge that everything is determined, combined with the fact that we can run simulations that are essentially alternative realities.
Work within the rules you’ve established. There’s tons of room for creativity there.
It doesn’t require true choice.
In fact, that’s kind of the whole point.
These points did bother me, but they only brought the show from a 10/10 down to an 8/10 in my opinion.
There is still expansive ground to cover with the maintaining of those various realities, and the fact that you can simply select a branch and drop people into it.
I hope there’s a second season.
16.05.20: I lowered it from 9/10 to an 8/10 after talking this over with my friend Saša.
16.05.20: Saša had a further point about the letdown at the end. He equated it to Copperfield making the Luxor disappear and then, as the finale, pulling a coin from behind your ear.