Dwarkesh Patel is one of my favorite thinkers right now. I just love the intensity of his curiosity. I love how broad his interests are.
He's like a young Tyler Cowen, and I watch everything he puts out. But lately he's been on a media cycle talking about why he doesn't think AGI is forthcoming in the next 1-3 years, and I think he's wrong about that.
It's not so much the timeline that I disagree with—or at least that I'm writing about here. It's actually his reasoning that's bothering me.
He starts by saying that after traveling for a number of weeks outside the Bay Area, he's come to believe that we AI people are:
...getting high on their own supply (with regard to AGI timelines).

(From his interview with Chris Williamson)
Sure, I can see that. But he goes on to say that the reason he doesn't think it's imminent is that he's spent around 100 hours trying to get AI to do basic tasks, and it still has major problems.
First of all...100 hours? To quote Matthew McConaughey in The Wolf of Wall Street:
Most people I know who are into AI (myself included) are doing 100 hours every 1-2 weeks, and have been for at least 3 years now. We have well over 10,000 hours at this game over the last few years.
But let's continue.
Then he gives some examples and we start to see more of the problem.
I've probably spent on the order of 100 hours trying to build these little tools, the kinds I'm sure you've also tried to build: rewrite auto-generated transcripts for me to make them sound the way a human would write them.

Find clips for me to tweet out, write essays with me, co-write them passage by passage, these kinds of things.

And what I found is that it's actually very hard to get human-like labor out of these models, even for tasks like these, which should be dead center in the repertoire of these models, right?

(From his interview with Chris Williamson)
This tells me he's just not using the tools fully and/or correctly. I agree that AI can't fully replace a high-end video editor yet, and some related tasks are still a bit out of reach. But others are pretty close to trivial now.
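Take transcript rewriting, since it's one of the tasks he names. Here's a minimal sketch of that kind of tool, assuming the OpenAI Python SDK; the model name, and a `style_guide.md` file capturing your voice, are placeholder assumptions, not his actual setup:

```python
# Sketch: rewrite an auto-generated transcript in your own voice.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# "style_guide.md" is a hypothetical file of your writing preferences.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def rewrite_transcript(transcript_path: str, style_path: str = "style_guide.md") -> str:
    style = Path(style_path).read_text()
    transcript = Path(transcript_path).read_text()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # The style guide rides along on every call, so your
            # preferences never depend on a single chat session.
            {"role": "system", "content": f"Rewrite this transcript so it reads the way a human would write it, in this voice:\n{style}"},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(rewrite_transcript("episode_raw.txt"))
```

The point isn't this exact script. It's that once your preferences live in a file instead of a chat window, the "sound like me" part becomes plumbing, not a model limitation.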
Then there's this quote that was another major tell.
...this whole subtle understanding of my preferences and style is lost by the end of the session.

(Also from the Chris Williamson conversation)
Losing knowledge of your preferences and style after each session? This sounds like he's using web-based chatbots, and then closing his browser and starting over each time. This is Level 0 of the AI skill tree.
That doesn't happen if you use even the built-in features like Memory and Personalization, and it definitely doesn't apply if you use any one of the countless other ways to maintain context across reloads.
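The simplest of those "countless other ways" is just persisting the conversation yourself. A hedged sketch, with the file name and message format as illustrative assumptions rather than any product's API:

```python
# Sketch: keep context across sessions by persisting it to disk yourself.
# "chat_history.json" and the message format here are illustrative.
import json
from pathlib import Path

HISTORY = Path("chat_history.json")

def load_history() -> list[dict]:
    # Start every session from the saved history instead of a blank slate.
    if HISTORY.exists():
        return json.loads(HISTORY.read_text())
    return [{"role": "system", "content": "You are my writing assistant. "
             "My preferences: short sentences, no filler, dry humor."}]

def save_history(messages: list[dict]) -> None:
    HISTORY.write_text(json.dumps(messages, indent=2))

# Usage: load, append the new exchange, send the full list to the model, save.
messages = load_history()
messages.append({"role": "user", "content": "Find the best clip to tweet."})
# ... call your model of choice with `messages`, append its reply ...
save_history(messages)
```

The hosted Memory features are a fancier version of the same idea, but even this crude approach means your preferences and style survive a browser reload.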
The picture this paints for me is what's known in the current parlance as a Skill Issue. A little snarky, but it fits perfectly here.
Again, I think the world of the guy, and I will continue to watch everything he puts out.
But on this issue he's talking about limitations that only exist for people who aren't fully using the tech.
And with only 100 hours of tinkering, that's to be expected.