I have a new concept I'm using everywhere in my AI engineering called Bitter-Pilled Engineering (BPE).
The idea comes from Richard Sutton's essay, "The Bitter Lesson".
The essay argues that our human attempts to control, modify, and enhance AI are largely not worth it: increasing the raw intelligence of the AI, through more hardware or better general algorithms, improves results far more than anything our hand-crafted approaches can add.
Actually, the claim is stronger than that: not only does our "help" fail to improve things, it will likely make them far worse.
Essentially, we should avoid poisoning AI's native capabilities with our supposedly superior guidance, because it's not actually superior.
Some quotes from the essay:
"The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."
"We want AI agents that can discover like we can, not which contain what we have discovered."
"We should build in only the meta-methods that can find and capture this arbitrary complexity."
"Building in our discoveries only makes it harder to see how the discovering process can be done."
My takeaways:
So my BPE rule for myself when building AI systems is:
Don't over-engineer scaffolding around your pet "smart" ideas; instead, make sure any scaffolding you build is robust, even anti-fragile, to the underlying AI getting smarter.
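One way to picture the rule, as a minimal sketch. Everything here is hypothetical: `call_model` is a stand-in stub, not any real provider's API, and the sentiment task is just an illustrative example. The contrast is between scaffolding that encodes our own heuristics (which a smarter model makes obsolete or harmful) and scaffolding that only validates structure (which survives model upgrades):

```python
import json

# Hypothetical stand-in for a real model call; swap in your provider's API.
def call_model(prompt: str) -> str:
    # Stubbed response so the sketch runs without network access.
    return '{"sentiment": "positive", "confidence": 0.9}'

# Brittle scaffolding: hand-rolled keyword rules that "help" the model.
# Every rule here is a bet against the model getting smarter.
def classify_with_rules(text: str) -> str:
    negative_words = {"bad", "terrible", "awful"}
    if any(w in text.lower() for w in negative_words):
        return "negative"
    return "positive"  # everything else falls through the crude heuristic

# BPE-style scaffolding: delegate the judgment to the model and keep
# only a thin validation layer that checks shape, not content.
def classify_with_model(text: str) -> str:
    raw = call_model(
        f"Classify the sentiment of: {text!r}. "
        "Reply as JSON with keys 'sentiment' and 'confidence'."
    )
    data = json.loads(raw)  # structural check only
    if data["sentiment"] not in {"positive", "negative", "neutral"}:
        raise ValueError(f"unexpected sentiment: {data['sentiment']}")
    return data["sentiment"]

if __name__ == "__main__":
    text = "The plot was bad but the acting redeemed it completely."
    print(classify_with_rules(text))   # keyword rule misfires on nuance
    print(classify_with_model(text))   # model judges; we only validate
```

The keyword version misreads the nuanced sentence because "bad" appears in it; the model-delegating version carries no such opinion of its own, so when the underlying model improves, this scaffolding improves for free.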
Call it the AI-steering rule.