My approach to AI is different from most people's.
I'm using AI to do something I think humans should be doing in general, without AI. AI just makes that thing possible for the first time, because of the complexity and scale involved.
---
Most AI talk is about the models. I think about it differently.
Every task has the same shape: you're somewhere, you want to be somewhere else. Current state to ideal state. The whole game is naming the destination clearly enough that you can tell when you've arrived.
Most people never name it. "Make the page better." "Fix the bug." "Write the post." Those are vibes. AI will chase a vibe forever and you'll never be done.
David Deutsch says a real explanation is one that's hard to vary. Every piece is load-bearing. Move one and the whole thing breaks. Bad explanations are easy to vary — swap the gods, the rituals, the names, and nothing changes.
That's about physics. The interesting part is that the same shape holds for things that aren't physics.
A great website is hard to vary. Move a color, change a font weight, shift the spacing — and it gets worse. Every choice is doing work. A mediocre website is easy to vary because nothing was earning its place to begin with.
A great product concept is hard to vary the same way. Pull out a feature and the whole thing collapses. Pull a feature from a mediocre product and it's about the same.
So whether you're explaining the universe or designing a homepage, the test is the same: could you swap the parts and end up somewhere equally good? If yes, you don't have an ideal state. You have a wish.
The frame unifies the two things people think are separate. Verifiable work — code, research, analysis — climbs toward an explanation of reality. Experiential work — design, art, product — climbs toward an explanation of why this and not something else. Both are hard-to-vary structures. Both have ideal states. The substrate is the same.
That's what the Algorithm is. A process for hill climbing on any task: name current state, name ideal state as a hard-to-vary spec, iterate with verifiable steps.
Verifiable is the part that matters. AI without verification just feels good — fluent output that pattern-matches to what you asked for, a little dopamine hit, something half-cooked shipped. Every step has to be checkable against the spec. If you can't check it, you didn't do it.
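The loop in those two paragraphs can be sketched in code. This is a minimal illustration, not anything from the essay itself: `climb`, `score`, and `propose` are made-up names, and the toy task (walking an integer to a target) stands in for any real spec.

```python
def climb(state, score, propose, max_steps=1000):
    """Hill climb toward an ideal state.

    score(state) measures distance from the spec; 0 means the spec is met.
    A proposed step is kept only if the check confirms it moved closer;
    an unverifiable step is discarded, not trusted.
    """
    best = score(state)
    for _ in range(max_steps):
        if best == 0:               # current state meets the ideal state
            return state
        candidate = propose(state)
        s = score(candidate)
        if s < best:                # verifiable improvement: keep it
            state, best = candidate, s
    return state                    # out of budget; best checked state so far

# Toy spec: the ideal state is the number 42.
target = 42
score = lambda x: abs(x - target)                   # checkable distance from the spec
propose = lambda x: x + (1 if x < target else -1)   # one candidate step

result = climb(0, score, propose)
```

The point of the sketch is the `if s < best` line: every step is checked against the spec before it counts, which is the "if you can't check it, you didn't do it" rule in miniature.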
The check looks different across domains but works the same. For code, you check the work against tests and reality. For a design, you check it against the response in your chest when you look at it. Both are real signals. Both can be wrong. Neither is optional.
That response in your chest has a name.
You know it worked when the answer surprises you with its rightness. You didn't see it coming. You can't unsee it now. The bug fix clicks. The homepage clicks. The sentence clicks. Same feeling.
It's what happens when a hard-to-vary structure meets something novel. Surprise from the novelty. Recognition from the fit. Together, joy.
I call it euphoric surprise, and it's the only metric I trust — because it's the same metric on both sides. The fix that makes you grin and the layout that finally feels right are the same event from the inside.
If you have to convince yourself it's right, it isn't.
Name the ideal. Make it hard to vary. Climb until the answer surprises you.
Works for code. Works for design. Works for everything in between.
The rest is tooling.