There are many who don’t see what GPTs are doing as actual “understanding”.
To them, ChatGPT is interesting—and possibly useful—but it’s ultimately nothing more than a text completion parlor trick, and it should not be confused with real understanding.
I think I can convince most good-faith holdouts otherwise in less than 60 seconds. Ready? Go.
⏱️ 60 seconds starting now…
First, let’s define terms.
- Data: Raw facts or figures without context
- Information: Data that has been processed, organized, or structured
- Knowledge: Information that has been combined to become useful
- Understanding: The ability to apply knowledge in new situations and contexts
Or in compact form:
- Data is raw
- Information is organized
- Knowledge is useful
- Understanding is applied
So the definition we’re using is:
The ability to apply knowledge in new situations and contexts.
And a fair definition is one that allows non-human entities to meet the bar, rather than defining understanding as something only humans can do.
So, can GPTs do this? Let’s answer with an example. Here’s a request for the AI to write a short story out of a nearly impossible maze of complex human ideas.
I left everything uncapitalized on purpose so I wouldn’t give it any hints.
Write a 2000-word faustian hero’s journey short story about a socratic sister and a machiavellian mother that has a shibboleth as its main plot point. There’s also a magical scarf and a dog with 3 legs. The story is set in a dystopian setting but it has a ted lasso type ending.
If that sounds like a bunch of fancy words strung together, that’s precisely what it is! I purposely engineered this prompt to be extremely difficult for anyone, human or machine, to turn into a coherent story, because it combines so many deep concepts. Let’s look at all the traps.
- Faustian: A Faustian parable tells the cautionary tale of an ambitious individual who makes a pact with the devil, exchanging their soul for worldly knowledge, power, or pleasure, ultimately leading to their downfall.
- Hero’s Journey: A hero’s journey involves a call to adventure, trials, mentor guidance, transformation, atonement, and return, ultimately leading to personal growth and the hero’s triumph.
- Socratic: A Socratic individual engages in thoughtful dialogue, asking probing questions to stimulate critical thinking, self-examination, and intellectual growth, embodying humility and a love for wisdom.
- Machiavellian: A Machiavellian person is cunning, manipulative, and deceitful, prioritizing personal gain over ethics. They strategically exploit others, using charm and persuasion to achieve their goals.
- Shibboleth: A shibboleth is a custom, phrase, or linguistic feature distinguishing a group, often used to identify outsiders. It can signify cultural, social, or religious differences.
- Dystopian: Dystopian fiction explores imagined societies characterized by oppression, suffering, and injustice, often serving as cautionary tales reflecting contemporary social and political issues.
- Ted Lasso: In a heartwarming Ted Lasso-style ending, the underdog team overcomes adversity, unites through genuine camaraderie, and achieves an unexpected victory, inspiring hope and positivity in all.
- Scarf and Dog: I added these items randomly just to make it extra difficult.
That’s a lot of concepts, and real humans have a hard time writing about even one of them. But we’re not asking for a story about one of them, or even three of them. We’re asking for a story about all seven.
Here’s what it wrote:
Try it yourself and add new contexts/situations to the mix.
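If you’d rather run the experiment from code than from the chat UI, here’s a minimal sketch using the OpenAI Python SDK. The SDK choice and model name are assumptions for illustration, not what I used here; any capable chat model will do.

```python
# Minimal sketch: send the story prompt to a chat model via the
# OpenAI Python SDK (pip install openai). The model name is a
# hypothetical choice; substitute whatever model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a 2000-word faustian hero's journey short story about a "
    "socratic sister and a machiavellian mother that has a shibboleth "
    "as its main plot point. There's also a magical scarf and a dog "
    "with 3 legs. The story is set in a dystopian setting but it has "
    "a ted lasso type ending."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Swap in your own concepts and see whether the story still honors all of them at once.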
Notice that I didn’t even mention Star Wars or forbidden love in the prompt! It understood the meanings of “Luke and Leia” and “love”, and it inferred that the love was forbidden because it knows siblings aren’t supposed to feel that way about each other.
A lot of the confusion about GPTs and whether they “understand” things comes from conflating understanding with experiencing.
Understanding fell out of (emerged from) the system unexpectedly.
Do GPTs understand things? Yes. The magic of the tech is that GPTs basically have to accidentally learn concepts in a deep way in order to properly predict the next token (roughly, the next word fragment) in a sequence. They can then apply those concepts in new situations.
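To make “predicting the next token” concrete, here’s a minimal sketch using the small open GPT-2 model via the Hugging Face transformers library; both are stand-ins for illustration, since ChatGPT’s own weights aren’t publicly inspectable this way.

```python
# Minimal sketch of next-token prediction with GPT-2
# (pip install torch transformers).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "A Faustian bargain is a deal with the"
inputs = tokenizer(prompt, return_tensors="pt")

# The model scores every vocabulary token at every position; we only
# care about the last position, i.e., what comes after the prompt.
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```

For a prompt like this, the top candidate should be something like “ devil”, and a model can only make that prediction reliably if it has internalized something about the Faust story. Get good enough at this game across all of human writing, and concept-learning falls out as a side effect.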
If you argue that you must feel something to understand it, then you’re saying understanding requires consciousness.
But do GPTs know what it feels like to love? Or to contemplate the universe? Or human mortality? No. They haven’t a clue. They don’t have feelings. They’re not conscious. They don’t experience things one little bit.
But remember: we’re not asking GPTs to experience things. We’re not asking if they feel things. The question is whether they can generalize the concepts they’ve learned, i.e., apply knowledge to new situations and contexts.
⏱️ Timer stopped.
That’s understanding. And yes, they do it astonishingly well.