I have a new concept I'm using everywhere in my AI engineering called Bitter Lesson Engineering (BLE).
The idea comes from Richard Sutton's essay, "The Bitter Lesson".
The essay argues that our human attempts to control, modify, and enhance AI are largely not worth it: raising the AI's raw capability, through more hardware or better algorithms, improves results far more than anything our handcrafted approaches contribute.
Actually, the claim is stronger than that: not only does our help fail to make things better, it will likely make them far worse.
Essentially, we should avoid poisoning the AI's native capabilities with our supposedly superior guidance, because that guidance is not actually superior.
Some other quotes from the essay:
"The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."
"We want AI agents that can discover like we can, not which contain what we have discovered."
"We should build in only the meta-methods that can find and capture this arbitrary complexity."
"Building in our discoveries only makes it harder to see how the discovering process can be done."
My takeaways:
So my simple BLE rule for myself when building AI systems is:
Don't confuse the "what" with the "how".
Be extremely specific about what you want, and then give the best tools you have to the best AI you have, and let it figure out how to execute.
This means that as the AI gets smarter, our scaffolding becomes more about preferences than execution, ultimately making the entire system meta-upgradeable instead of BLE-hobbled.
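To make the what/how split concrete, here is a minimal sketch of the rule. All names are hypothetical, and a stub stands in for a real model: the caller supplies only the goal and the tools (the "what"), and the model decides which tool to apply (the "how").

```python
# BLE sketch: specify the "what" (goal + available tools), not the "how".
# `pick_tool` is a stub standing in for a real model; all names are illustrative.

def word_count(text: str) -> int:
    """A tool the model may choose to use."""
    return len(text.split())

def shout(text: str) -> str:
    """Another available tool."""
    return text.upper()

TOOLS = {"word_count": word_count, "shout": shout}

def pick_tool(goal: str, tools: dict) -> str:
    """Stub model: a real system would let the model choose freely here.
    Note the caller never dictates the execution path."""
    return "word_count" if "count" in goal else "shout"

def run_agent(goal: str, text: str):
    # BLE-aligned: state the goal, hand over the tools,
    # and let the model (stub) figure out how to execute.
    tool_name = pick_tool(goal, TOOLS)
    return TOOLS[tool_name](text)

print(run_agent("count the words", "the bitter lesson"))  # → 3
```

The BLE-hobbled version of this would hard-code the tool sequence in `run_agent` itself; as the model improves, that hard-coding becomes the ceiling.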