GPT-based AI is about to give us unprecedented public transparency. Imagine being able to input a public figure’s name and instantly access everything they’ve ever said on any given topic. That’s cool, right? Well, it’s just the beginning.
We’re about to have “Me Too Search Engines”.
The true power lies in the ability to query a comprehensive dataset on an individual, about anything. For example, you could track the evolution of someone’s political views over their entire online presence, or assess the accuracy of their predictions throughout their career.
It’ll be used to attack people, to research their contributions, and to construct remarkable narratives about how they evolved as people over time. But mostly, at least at first, it’ll be used to expose people.
> The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’ becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.
>
> Paul Krugman, 1998
Consider influential figures like Paul Krugman, who has made numerous predictions from his prominent position at the New York Times. With AI, we could evaluate every prediction he’s ever made and score his overall track record in terms of confidence and accuracy.
The software architecture that will power this will be something like SPQA.
The real significance of this technology is not in any specific application, but rather in the unprecedented transparency it offers to any use case. AI enables us to view an entire body of information on a subject and ask targeted questions, providing unparalleled insight and understanding.
I’m going to add timestamps to keep myself honest.
Transparency applications
- Prediction Evaluation: Look at every prediction a public figure has made and give them a score based on 1) how important the topic was, 2) how strong the claim was, 3) how confident they were that they were right, and 4) how wrong or right they turned out to be. (A rough scoring sketch follows below.)
Keep in mind this will draw on all publicly accessible accounts, anywhere, ever.
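To make the rubric concrete, here’s a minimal sketch of how those four factors might combine into a single score once a model has extracted them from the record. The field names, the weighting, and the example values are all hypothetical placeholders, not a real system:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    topic_importance: float  # 1) how important the topic was (0-1)
    claim_strength: float    # 2) how strong the claim was (0-1)
    confidence: float        # 3) how confident they were (0-1)
    accuracy: float          # 4) how right they turned out to be (0-1)

def prediction_score(p: Prediction) -> float:
    """Reward accurate, confident calls on important topics; punish
    confident misses. Accuracy of 0.5 is a wash (it maps to 0)."""
    stakes = p.topic_importance * p.claim_strength
    return stakes * p.confidence * (2 * p.accuracy - 1)

def track_record(predictions: list[Prediction]) -> float:
    """Average score across everything the person has ever predicted."""
    if not predictions:
        return 0.0
    return sum(prediction_score(p) for p in predictions) / len(predictions)

# A confident, wrong call on a big topic scores deeply negative:
internet_fax_call = Prediction(topic_importance=0.9, claim_strength=0.9,
                               confidence=0.9, accuracy=0.05)
print(prediction_score(internet_fax_call))  # about -0.66
```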
- The Me Too Search Engine: Look at everything a public person has said and find every instance where they were racist, sexist, or otherwise outside the lines of what’s currently acceptable in society.
- The ‘That’s Not Me Anymore’ Redemption Engine: A system that can read the same corpus of data as the Me Too Search Engine and come up with reasons this person shouldn’t be canceled into oblivion. It’ll look at good things they’ve done, progress over time as they got older, etc., and it’ll put together a corresponding set of public campaigns to counter the MTSE attacks.
- The Corruption Detector: For every public government official, find every donation ever made, by every donor. Find every piece of legislation they voted on. Fully analyze all the different ways it would help different groups. Find all the votes they made on that legislation. Find the full list of donors and rate their biases and goals based on their track records as donors. Finally, produce a corruption score for each representative based on how often they voted according to the money or benefits they received. (A scoring sketch follows this list.)
- The Hiring Helper: If you’re hiring for a teaching or church position, maybe you don’t want people who have expressed certain views in the past, unless they’ve properly evolved out of those views. Software will be developed that looks at the entire arc of a public person’s contributions and estimates their moral character, and this will be used to inform decisions about all sorts of things, including hiring. Will this be illegal? Maybe. Probably. In lots of places. But it’ll still be used.
- The Match Maker: Sticking with hiring and extending to dating, what if everyone perfectly described what they were about, what they wanted to do, what they’d be happiest doing, and what they’d be best at doing? This would be helped by AI as well, of course. Then we’d throw all of those people together in a giant salad bowl of millions of people and ask, “Which of these people would make the best lifelong partners together? The best business partners? The best employers and employees? The best local acquaintances?” AI will be really good at that because it has the wisdom of every psychology study, every dating expert, every business expert, etc., all built into it. It’s the perfect match maker. All it needs is the right context for each person and entity, and for us to ask it the right questions. Hell, we can just describe our goals and it’ll ask the right questions itself. (A matching sketch follows this list.)
- The Risk Adjuster: Insurance has always been a context game. The more they know about you, the better they can determine how much risk you pose to their bottom line. We already see insurance companies giving people discounts for sharing their health data. Now imagine they have your life history as well, plus your social connection network and a stream of your public writings. There will be a much larger split between people who are safe to insure and those who should pay super-high premiums or not get a policy at all. This applies to everything from e-bike insurance to insuring the cybersecurity readiness of a Fortune 500 company.
- The New Detection/Response Model: What if you knew the current context of every host, application, dataset, and system in the company, along with the context of every user? The biggest part of detection and response is knowing all the things. That’s what good IR people do. They track things down. They figure out what the source and destination systems are. They connect dots. Humans suck at that, especially in massive, complex environments. Thousands of systems. Thousands of edge cases. You know what doesn’t suck at that? LLMs. LLMs are the big brains of connecting dots. It’s their favorite thing. So, it’s 2:47AM PST and Julie’s system just made a connection to fileshare Y. Is that malicious? Can you tell from what I just wrote? No, you can’t. And neither can an IR specialist. They have to go research. An LLM with context on every user and every system in the company won’t have to research. No, it’s not malicious, because Julie said in Slack 3 hours ago that she’d be connecting once she landed home in Japan, where she also went to college, and where she’s been living since she moved 6 months ago. The LLM knows that because it has the context for everyone at this 49,000-person company. The new IR employee, Rishi, didn’t know that about Julie. Rishi started yesterday. (A triage sketch follows below.)
Spoiler: I’m building this one right now.
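Here’s a minimal sketch of what that alert-plus-context triage could look like. The `triage` function, the `llm` parameter (a stand-in for any chat-completion call), and the alert and context strings are all invented for illustration, not a real product:

```python
# Hypothetical triage flow: put the alert plus everything known about
# the user and the asset into one prompt, then ask the question an IR
# analyst would ask.

def triage(alert: str, user_context: str, asset_context: str, llm) -> str:
    prompt = (
        "You are an incident-response analyst with full company context.\n\n"
        f"ALERT:\n{alert}\n\n"
        f"USER CONTEXT (Slack, HR, travel, history):\n{user_context}\n\n"
        f"ASSET CONTEXT (owner, data, normal access patterns):\n{asset_context}\n\n"
        "Is this malicious? Answer benign/suspicious/malicious and explain."
    )
    return llm(prompt)

# The Julie scenario from above, with a stubbed-out model:
verdict = triage(
    alert="2:47AM PST: Julie's laptop connected to fileshare Y from a Tokyo IP",
    user_context=("Julie said in Slack 3 hours ago she'd connect once she "
                  "landed home in Japan, where she moved 6 months ago."),
    asset_context="Fileshare Y: engineering docs; Julie's team uses it daily.",
    llm=lambda prompt: "benign: access matches announced travel and role",
)
print(verdict)
```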
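For the Corruption Detector, assuming the model has already done the hard extraction work described above and produced structured vote-and-donation records, the final score could be as simple as dollar-weighted alignment. The field names and weighting here are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    bill_id: str
    position: str          # "yea" or "nay"
    benefits_donor: bool   # did this position go the interested donors' way?
    donor_dollars: float   # donations received from donors with a stake in this bill

def corruption_score(votes: list[Vote]) -> float:
    """Dollar-weighted fraction of votes that aligned with donor interests.
    0.0 = never voted the money; 1.0 = always voted the money."""
    total = sum(v.donor_dollars for v in votes)
    if total == 0:
        return 0.0
    aligned = sum(v.donor_dollars for v in votes if v.benefits_donor)
    return aligned / total
```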
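And for the Match Maker, one minimal shape of the problem is: embed every self-description, then rank pairs by similarity. The `embed` function below is a random placeholder so the sketch runs; a real system would call an actual embedding model, or let an LLM judge compatibility dimension by dimension:

```python
import itertools
import numpy as np

def embed(profile: str) -> np.ndarray:
    """Placeholder for a real embedding-model call; derives a unit
    vector (randomly) from the text so the sketch is runnable."""
    rng = np.random.default_rng(abs(hash(profile)) % 2**32)
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def best_pairs(profiles: dict[str, str], top_n: int = 3):
    """Rank every possible pairing by cosine similarity of the
    self-description embeddings (vectors are already unit-length)."""
    vecs = {name: embed(text) for name, text in profiles.items()}
    scored = [
        (float(vecs[a] @ vecs[b]), a, b)
        for a, b in itertools.combinations(profiles, 2)
    ]
    return sorted(scored, reverse=True)[:top_n]
```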
- The Security Program Builder: Like we talked about above, the problem with doing security in any complex environment is that you can’t 1) see, and 2) prioritize everything all at once. There is too much to hold in a human brain. Vendors. Software installs. Vulnerabilities. Requirements from stakeholders. Compliance and regulation. Attackers and their goals and techniques. It’s too much. So what we do is flail around with OKRs and Jira tickets, trying to do the best we can. That all goes away with SPQA-based transparency, because now we don’t try to hold all of that in our brains anymore. We let language models hold it in their heads, and all we do is ask questions. We take everything we have (our mission, our goals, our problems, our systems, our assets, our teams, our people, our Slack messages, our meeting transcripts, etc.) and tell it our desires. We describe the type of program we want, who we want to do business with, what we consider good and bad, and we write all of that in natural language. Then we ask it questions (Q), or give it commands for action (A). Using this structure it’ll be able to write our strategy docs, create QSRs, find active attackers, prioritize remediation, patch systems, approve or deny vendors, approve or deny hires, etc. All by doing two things: 1) asking questions, and 2) using context. (A rough sketch of this shape follows below.)
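As a minimal sketch of that SPQA shape: State (everything we have) and Policy (what we want) go in as natural-language context, and Questions or Actions are just prompts against that context. The function names and the `llm` callable are hypothetical placeholders, not the actual architecture:

```python
def build_context(state_docs: list[str], policy: str) -> str:
    """S + P: combine State and Policy into one natural-language context."""
    state = "\n---\n".join(state_docs)
    return f"STATE:\n{state}\n\nPOLICY:\n{policy}"

def ask(question: str, context: str, llm) -> str:
    """Q (or A): any question or command posed against the full context."""
    return llm(f"{context}\n\nQUESTION: {question}")

context = build_context(
    state_docs=[
        "asset inventory ...",
        "open vulnerabilities ...",
        "meeting transcripts ...",
        "vendor and stakeholder list ...",
    ],
    policy="We're a fintech. Customer data protection outranks uptime. ...",
)

# With a real model behind `llm`, the program runs on questions alone:
# strategy  = ask("Write our quarterly security strategy doc.", context, llm)
# top_risks = ask("Rank our open vulnerabilities by business risk.", context, llm)
```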
Summary
These are just a few examples of what transparency can give us in this post-AI world of software. Before, we had to force everything: force the data into a rigid schema, then force queries against that database. It’s rigid. It’s fragile. And it’s so very limited.

In this model we don’t force anything. We simply feed context to something that understands things, and we ask questions. Who voted most with their donors? Who was most right in their predictions? Who’s my best match for a life partner? What’s the best investment for our business given my preferences? Which risk poses the most danger to our business given everything you know about our company?

Nobody should blindly act on such answers, of course, but rather use them to properly focus their decisions.
Extraordinary things happen when you can hold the entire picture in your brain at once while making a decision. LLMs can do that. We can’t.
AI is about to move human problem-solving from alchemy to chemistry.
Notes
Unfortunately, the Me Too Search Engine will also be paired with Me Too Extortion Monetization. Businesses will pop up that find everything bad you’ve ever said, turn it into tweets, emails, and letters addressed to your boss and your loved ones, and then send that content to you first, saying, “Here’s what I’m about to send. If you don’t want it to go out, send X amount of money to this address.” I wasn’t going to write about this because it gives people ideas, but the bad guys will see the potential as soon as the tech makes it possible.
Thanks to someone in the UL community for coming up with the redemption arc idea after I explained the Me Too Search Engine. Great idea.
I’ll be adding more use cases to the end of the list as they come up, with timestamps.