SPQA: The AI-based Architecture That’ll Replace Most Existing Software
How most legacy software will soon be replaced by LLM-based systems
March 10, 2023
AI is going to do a lot of interesting things in the coming months and years, thanks to the detonations following GPTs. But one of the most important changes will be the replacement of our existing software.
We used to adapt our businesses to the limitations of the software. In the new model, the software will adapt to how we do business.
AI-based applications will be completely different than those we have today. The new architecture will be a far more elegant, four-component structure based around GPTs: State, Policy, Questions, and Action.
Fundamentally it’s a transition from something like a Circuit-based architecture to an Understanding-based architecture.
Our current software is Circuit-based, meaning the applications have explicit and rigid structures like the etchings in a circuit board. Inputs and outputs must be explicitly created, routed, and maintained. Any deviation from that structure results in errors, and adding new functionality requires linear effort on the part of the organization’s developers.
Circuit isn’t the perfect metaphor, but it’s descriptive enough.
New software will be Understanding-based. These applications will have nearly unlimited input because they’re based on natural language sent to a system that actually understands what you’re asking. Adding new functionality will be as simple as asking different questions and/or giving different commands.
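To make the four components a bit more concrete, here’s a minimal sketch of STATE, POLICY, QUESTIONS, and ACTION as plain data structures. Everything here is a placeholder of my own invention, not any product’s actual API.

```python
# A minimal sketch of the four SPQA components as plain data structures.
# All names and fields are illustrative placeholders, not a real product's shape.
from dataclasses import dataclass, field

@dataclass
class State:
    """Everything the system knows about you: logs, docs, finances, chats, transcripts."""
    sources: list[str] = field(default_factory=list)  # e.g. ["splunk", "slack", "o365"]

@dataclass
class Policy:
    """Human guidance that steers output: mission, goals, anti-goals, risk tolerance."""
    goals: list[str] = field(default_factory=list)
    anti_goals: list[str] = field(default_factory=list)

@dataclass
class Question:
    """A natural-language request from leadership."""
    text: str  # e.g. "Give me a list of our most critical applications"

@dataclass
class Action:
    """An artifact the system produces: a strategy doc, an OKR tree, a board deck outline."""
    name: str
    content: str
```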
The SPQA Architecture
(record screech sound)
“Nope! GPTs don’t have true understanding…”
There are many who don’t see what GPTs are doing as actual “understanding”, but I think I can convince most holdouts in less than 60 seconds. Ready? Go.
First, let’s define terms.
Intelligence is something like, “The collection, analysis, and sharing of information on the enemy to enable superior decision-making.”
Data: Raw facts or figures without context
Information: Data that has been processed, organized, or structured
Knowledge: Information that has been combined to become useful
Understanding: The ability to apply knowledge in new situations and contexts
Or in compacted form:
Data is raw
Information is organized
Knowledge is useful
Understanding is applied
So the definition we’re using is:
The ability to apply knowledge in new situations and contexts.
So, can GPTs do this? Let’s answer with an example: do they understand concepts like Star Wars, Forbidden Love, and Shakespeare well enough to write the following poem?
Write a love poem from Luke to Leia.
I asked it to do that, and here’s what it wrote…
O Leia, fairest of all maidens fair,
Thy beauty doth my very soul ensnare.
Thy shining eyes, like stars that light the night,
Doth fill my heart with love’s sweet, shining light.
Thy gentle touch, like feathers on my skin,
Doth make my heart to dance and leap within.
Thy voice, like music to my eager ears,
Doth soothe my soul and calm my deepest fears.
And though our love may be forbidden, true,
I cannot help but give my heart to you.
For thou art all I ever could desire,
My Leia, thou hast set my heart on fire.
So let us steal away into the night,
And let our love take wing and take to flight.
For in thy arms, I know I shall find peace,
And all the world’s troubles shall surely cease.
Try it yourself and add new contexts/situations to the mix.
That’s a Shakespearean poem, about Forbidden Love, in the Star Wars universe, between two Siblings. And it’s applied to a completely new situation/context that I just made up.
Awkward
Notice that I didn’t even mention Star Wars or Forbidden Love in the prompt! It understood the meanings of “Luke and Leia”, and “Love”, and it inferred that it was forbidden because it knows siblings aren’t supposed to feel that way about each other.
A lot of the confusion about GPTs and whether they “understand” things comes from confusing understanding with experiencing.
Understanding fell out of the system unexpectedly.
Do GPTs understand things? Yes. The magic of the tech is that GPTs basically have to accidentally learn concepts in a deep way in order to properly predict the next letter in a sequence. They can then apply those concepts in new situations and contexts.
But does a GPT know what it feels like to love? Or to contemplate the universe? Or human mortality? No. They haven’t a clue. They don’t have feelings. They’re not conscious. They don’t experience things one little bit.
If you argue that you must feel to understand, then you’re saying understanding requires consciousness, and that’s a bigger chasm than Luke jumped with Leia.
But remember—we’re not asking GPTs to experience things. We’re not asking if they feel things. The question is whether they can generalize from concepts using new information, i.e., apply knowledge to new situations and contexts.
That’s understanding. And yes, they do it astonishingly well.
Software that understands
It’s difficult to grok the scope of the difference between our legacy software and software that understands.
Both the State and Policy will be Model-based
Rather than try to fumble an explanation, let’s take an example and think about how it’d be done today vs. in the very near future with something like an SPQA architecture. I say “something like” because the exact winning implementations will be market-based and unpredictable.
A security program today
So let’s say we have a biotech company called Splice based out of San Bruno, CA. They have 12,500 employees and they’re getting a brand new CISO. She’s asking for the team to immediately start building the following:
Give me a list of our most critical applications from a business and risk standpoint
Create a prioritized list of our top threats to them, and correlate that with what our security team is spending its time and money on
Make recommendations for how to adjust our budget, headcount, OKRs, and project list to properly align to our actual threats
Let’s write up an adjusted security strategy using this new approach
Define the top 5 KPIs we’ll track to show progress towards our goals
Build out the nested OKR structure that flows from that strategy given our organizational structure
Create an updated presentation for the board describing the new approach
Create a list of ways we’re lacking from a compliance standpoint given the regulations we fall under
Then create a full implementation plan broken out by the next four quarters
Finally, write our first Quarterly Security Report, and keep that document updated
How many people will be needed to put this together? What seniority of people? And how long will it take?
If you have worked in security for any amount of time you’ll know this is easily months of work, just for the first version. And it takes hundreds of hours to meet about, discuss, and maintain all of this as well.
Hell, there are many security organizations that have spent years working on these things and still don’t have satisfactory versions of them.
So—months of work to create it, and then hundreds of hours to maintain it using dozens of the best people in the security org who are spending a lot of their time on it.
A security program using SPQA
Let’s see what it looks like in the new model.
It could be that POLICY becomes part of STATE in actual implementations, but smaller models will be needed to allow for more frequent changes.
Choose the base model — You start with the latest and greatest overall GPT model from OpenAI, Google, Meta, McKinsey, or whoever. Lots of companies will have one. Let’s call it OpenAI’s GPT-6. It already knows so incredibly much about security, biotech, project management, scheduling, meetings, budgets, incident response, and audit preparedness that you might be able to survive with it alone. But you need more personalized context.
Train your custom model — Then you train your custom model which is based on your own data, which will stack on top of GPT-6. This is all the stuff in the STATE section above. It’s your company’s telemetry and context. Logs. Docs. Finances. Chats. Emails. Meeting transcripts. Everything. It’s a small company and there are compression algorithms as part of the Custom Model Generation (CMG) product we use, so it’s a total of 312TB of data. You train your custom model on that.
Train your policy model — Now you train another model that’s all about your company’s desires. The mission, the goals, your anti-goals, your challenges, your strategies. This is the guidance that comes from humans that we’re using to steer the ACTION part of the architecture. When we ask it to make stuff for us, and build out our plans, it’ll do so using the guardrails captured here in the POLICY.
Tell the system to take the following actions — Now the models are combined. We have GPT-6, stacked with our STATE model, also stacked with our POLICY model, and together they know us better than we know ourselves.
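Here’s a rough sketch of that stacking step in code. The model name, the CustomModel class, and the data paths are all hypothetical; nobody ships this exact API today, and the real mechanism might end up being fine-tuning, adapters, or retrieval.

```python
# A hedged sketch of combining the base model with STATE and POLICY.
# "gpt-6", CustomModel, and build_prima() are hypothetical placeholders.

class CustomModel:
    """Stands in for the output of a Custom Model Generation (CMG) product."""
    def __init__(self, base: str, corpus: str, label: str):
        self.base = base      # the foundation model we stack on
        self.corpus = corpus  # where the exported training data lives
        self.label = label    # "STATE" or "POLICY"

def build_prima() -> dict:
    base = "gpt-6"  # hypothetical future foundation model
    state = CustomModel(base, corpus="s3://splice-exports/312tb/", label="STATE")
    policy = CustomModel(base, corpus="s3://splice-guidance/", label="POLICY")
    # In practice "stacking" could be fine-tuning, adapters, or retrieval over the
    # STATE corpus with POLICY injected as system guidance; the winner is unknown.
    return {"base": base, "layers": [state, policy]}

prima = build_prima()
```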
So now we give it the same exact list of work we got from the CISO.
Give me a list of our most critical applications from a business and risk standpoint
Create a prioritized list of our top threats to them, and correlate that with what our security team is spending its time and money on
Make recommendations for how to adjust our budget, headcount, OKRs, and project list to properly align to our actual threats
Let’s write up an adjusted security strategy using this new approach
Define the top 5 KPIs we’ll track to show progress towards our goals
Build out the nested OKR structure that flows from that strategy given our organizational structure
Create an updated presentation for the board describing the new approach
Create a list of ways we’re lacking from a compliance standpoint given the regulations we fall under
Then create a full implementation plan broken out by the next four quarters
Finally, write our first Quarterly Security Report, and keep that document updated
We’ll still have to double-check models’ output for the foreseeable future, as hallucination is a real thing this early in the game.
Let’s say our new combined SPQA system is called Prima. Ask yourself two questions.
How long will it take Prima to create the first versions of all of these, given everything it knows about the company?
How much time will it take to create updated versions every week, month, quarter, or year?
The answer is minutes. Not just for the initial creation, but for all updates going forward as well.
The only things it needs are 1) up-to-date models using the latest data, and 2) the right questions coming from the human leaders in the organization. In this case, we already have those questions in the list above.
Remember, Prima won’t just come up with the direction, it’ll also create all the artifacts. Every document. Every OKR. The QSR itself. The strategy document. The outline for the board presentation. The auditor preparation documents. Even the emails to stakeholders. That’s hundreds of additional hours of work that would have been done by more junior team members throughout the organization.
So—we’re talking about going from thousands of hours of work per quarter—spread across dozens of people—to maybe like 1% to 5% of that. In the new model the work will move to ensuring the POLICY is up to date, and that the QUESTIONS we’re asking are the right ones.
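To show what “the work moves to the questions” looks like in practice, here’s a tiny sketch of the QUESTIONS-to-ACTION loop: feed the CISO’s list to the combined model and write each answer out as an artifact. The ask_prima function and file layout are stand-ins for whatever interface an SPQA product would actually expose.

```python
# A sketch of the QUESTIONS -> ACTION loop. ask_prima() is a placeholder for the
# SPQA product's query interface; the output layout is just for illustration.
from pathlib import Path

QUESTIONS = [
    "Give me a list of our most critical applications from a business and risk standpoint",
    "Create a prioritized list of our top threats to them",
    "Write our first Quarterly Security Report",
    # ...the rest of the CISO's list from above
]

def ask_prima(question: str) -> str:
    # Placeholder: imagine this hits GPT-6 + STATE + POLICY and returns a document.
    return f"(artifact generated for: {question})"

def run_refresh(outdir: str = "artifacts") -> None:
    Path(outdir).mkdir(exist_ok=True)
    for i, q in enumerate(QUESTIONS, start=1):
        Path(outdir, f"{i:02d}.md").write_text(ask_prima(q))  # minutes, not months

run_refresh()
```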
Transforming software verticals
Sticking with security, since that’s what I know best, imagine what SPQA will do to entire product spaces. How about Static Analysis?
Static Analysis in SPQA
In Static Analysis you’re essentially taking input and asking two things:
What’s wrong?
How do we fix it?
SPQA will crush all existing software that does that because it’s understanding-based. So once it sufficiently groks the problem via your STATE, and it understands what you’re trying to do via your POLICY, it’ll be able to do a lot more than just find code problems and fixes. It’ll be able to do things like:
Find the problem
Show how to fix it in any language (coding or human)
Write an on-the-fly tutorial on avoiding these bugs
Write a rule in your tool’s technology that would detect it
Give you the fixed code
Confirm that the code would work
Plus you’ll be able to do far more insane things, like create multiple versions of code to see how they would all respond to the most common attacks, and then make recommendations based on those results.
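As a rough illustration of what understanding-based analysis looks like even today, here’s a sketch that uses the current OpenAI Python SDK as a stand-in for the GPT-6-class model in this essay. The model name, prompts, and policy text are my assumptions, not a finished product.

```python
# A sketch of understanding-based static analysis: hand the model some code plus
# POLICY-style guidance and ask for the finding, the fix, a detection rule, and a
# mini-tutorial in one shot. Uses the OpenAI Python SDK (openai>=1.0) purely as a
# stand-in; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = "os.system('tar xzf ' + user_supplied_filename)"

POLICY_GUIDANCE = (
    "We ship Python services. Prefer standard-library fixes, explain them so a "
    "junior developer can follow, and include a rule our scanner could use."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for the future GPT-6-class model
    messages=[
        {"role": "system", "content": POLICY_GUIDANCE},
        {"role": "user", "content": (
            f"For this code:\n{SNIPPET}\n"
            "1) What's wrong? 2) Show the fixed code. "
            "3) Write a Semgrep-style rule that would detect it. "
            "4) Add a short tutorial on avoiding this bug class."
        )},
    ],
)
print(response.choices[0].message.content)
```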
Security software in general
Now let’s zoom out to security software in general and do some quick hits on some of the most popular products.
Detection and response
Who are the real attackers here?
Who is dug in waiting for activation?
Find the latest TTPs in our organization
Write rules in our detection software that would find them
Share those rules with our peers
Pull their rules in and check against those as well
Create a false parallel infrastructure that looks exactly like ours but is designed to catch attackers using the following criteria
Automatically disable accounts, send notifications, reset tokens, etc. when you see successful attacks
Watch for suspicious linked events, such as unknown phone calls followed by remote sessions followed by documentation review.
Basically, most of what you had to build by hand when standing up a D&R function will be done for you because you have SPQA in place.
It natively understands what’s suspicious. No more explicitly coding rules. Now you just add guidance to your `POLICY` model.
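To make that shift concrete, here’s a tiny before-and-after sketch: a hand-maintained correlation rule versus the kind of plain-language guidance you’d append to the POLICY model instead. Both blobs are made up for illustration.

```python
# Old world: an explicit, brittle correlation rule you write and maintain yourself.
LEGACY_RULE = {
    "name": "remote-session-after-unknown-call",
    "condition": "event.type == 'remote_session' AND prior(15m).type == 'unknown_call'",
    "action": "alert",
}

# New world: guidance appended to POLICY. The system already understands what
# "suspicious" means; you describe intent and it handles the mechanics.
POLICY_GUIDANCE = (
    "Treat unknown phone calls followed by remote sessions and then documentation "
    "review as linked suspicious activity. On confirmed successful attacks, disable "
    "the accounts involved, reset tokens, and notify the incident response channel."
)
```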
Attack surface management and bounty
Pull all data about a company
Find all its mergers and affiliations
Find all documentation related to those things
Make a list of all domains
Run tools continuously to find all subdomains
Open ports
Applications on ports
Constantly browse those sites using automation
Send data to the SPQA model to find the most vulnerable spots
Run automation against those spots
Auto-submit high-quality reports that include POC code to bounty programs
(if you’re froggy) Submit the same reports to security@ to see if they’ll pay you anyway
Constantly discover our new surface
Constantly monitor/scan and dump into a data lake (S3 bucket or equivalent)
Constantly re-run STATE model
Connect to alerting system and report-creation tooling
Have the system optimize itself
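Stitched together, that list is basically a loop. Here’s a hedged sketch of what the continuous attack surface pipeline might look like; every function body is a stub and the bucket name is made up.

```python
# A sketch of the continuous ASM loop: discover, scan, dump to a data lake, refresh
# STATE, and ask the model where to look next. All functions are stubs.
import time

def discover_surface(company: str) -> list[str]:
    return []  # domains, subdomains, mergers, affiliations

def scan(targets: list[str]) -> dict:
    return {}  # open ports, apps on ports, results of automated browsing

def dump_to_data_lake(results: dict, bucket: str = "s3://asm-findings") -> None:
    pass  # S3 bucket or equivalent

def refresh_state_model() -> None:
    pass  # re-run the STATE model on the latest data

def ask_for_weak_spots() -> list[str]:
    return []  # the SPQA system's prioritized list of vulnerable spots

def asm_loop(company: str, interval_hours: int = 24) -> None:
    while True:
        targets = discover_surface(company)
        results = scan(targets)
        dump_to_data_lake(results)
        refresh_state_model()
        weak_spots = ask_for_weak_spots()
        # downstream: run automation against weak_spots, auto-draft bounty reports
        time.sleep(interval_hours * 3600)
```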
Corporate security
Monitor all activity for suspicious actions and actors
Automatically detect and block and notify on those actions
Ensure SaaS security is fully synched with corporate security policies (see POLICY)
Vendor and supply chain security
Vendor and supply chain security is going to see one of the most drastic and powerful disruptions from SPQA, just because of how impossibly hard the problem currently is.
Make a list of all the vendors we have
Consume every questionnaire we receive
Find every place that the vendor’s software touches in our infrastructure
Find vulnerable components in those locations
Make a prioritized list of the highest risks to various aspects of our company
Recommend mitigations to lower the risk, starting with the most severe
Create a list of alternative vendors who have similar capabilities but that wouldn’t have these risks
Create a migration plan to your top 3 selections
Today, in any significant-sized organization, the above is nearly impossible. An SPQA-based application will spit this out in minutes. The entire thing. And the same every time the model(s) update.
We’re talking about going from completely impossible…to minutes.
What’s coming
Keep in mind this entire thing popped like 4 months ago, so this is still Day 0.
Those are just a few examples from cybersecurity. But this is coming to all software, starting basically a month ago. The main limitations right now are:
1. The size limitations and software needed to create large custom models
2. The speed and cost limitations of running updates for large organizations with tons of data
The first one is already being solved using tools like LangChain, but we’ll soon have super-slick implementations for this. You’ll basically have export options within all your software to send out, or stream out, all of that tool’s content. That’s Splunk, Slack, GApps, O365, Salesforce, all your security software, all your HR software. Everything.
They’ll all have near-realtime connectors sending out to your chosen SPQA product’s STATE model.
We’re likely to see `STATE` and `POLICY` broken into multiple sub-models that have the most essential and time-sensitive data in them so they can be updated as fast and inexpensively as possible.
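Here’s a small sketch of that connector-and-sub-model idea: each tool exports or streams into the STATE store, with time-sensitive sources routed to a fast-refresh sub-model. The routing, function names, and split are assumptions; the mechanism is deliberately left open above.

```python
# A sketch of routing tool exports into STATE sub-models. The source names come
# from the essay; the routing, function names, and sub-model split are assumptions.

FAST_REFRESH = {"splunk", "slack"}                    # time-sensitive telemetry
SLOW_REFRESH = {"gapps", "o365", "salesforce", "hr"}  # slower-moving context

def route(source: str) -> str:
    return "state-hot" if source in FAST_REFRESH else "state-cold"

def ingest(source: str, records: list[dict]) -> None:
    submodel = route(source)
    # In practice this might be a LangChain-style loader feeding a vector store,
    # or a queue feeding periodic fine-tunes; either way, the goal is near-realtime
    # updates for the data that changes fastest.
    print(f"queueing {len(records)} records from {source} for {submodel}")

ingest("splunk", [{"event": "login_failure", "count": 42}])
```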
For #2, that’s just going to take time. OpenAI has already done some true magic on lowering the prices of this tech, but training custom models on hundreds of terabytes of data will still be expensive and time-consuming. How much and how fast that drops is unknown.
How to get ready
Here’s what I recommend for anyone who creates software today.
Start thinking about your business’s first principles. Ask yourself very seriously what you provide, how it’s different than competitor offerings, and what your company will look like when it becomes a set of APIs that aren’t accessed by customers directly. Is it your interface that makes you special? Your data? Your insights? How do these change when all your competitors have equally powerful AI?
Start thinking about your business’s moat. When all this hits fully, in the next 1-5 years, ask yourself what the difference is between you doing this, using your own custom models stacked on top of the massive LLMs, vs. someone like McKinsey walking in with The Solution™. It’s 2026 and they’re telling your customers that they can simply implement your business in 3-12 months by consuming your STATE and POLICY. Only they have some secret McKinsey sauce to add because they’ve seen so many customers. Does everyone end up running one of like three universal SPQA frameworks?
Mind the Innovator’s Dilemma. Just because this is inevitable doesn’t mean you can drop everything and pivot. The question is—based on your current business, vertical, maturity, financial situation, etc.—how are you going to transition? Are you going to do so slowly, in place? Or do you stand up a separate division that starts fresh but takes resources from your legacy operation? Or perhaps some kind of hybrid. This is about to become a very important decision for every company out there.
Focus on the questions. When it becomes easy to give great answers, the most important thing will be the ability to ask the right questions. This new architecture will be unbelievably powerful, but you still need to define what a company is trying to do. Why do we even exist? What are our goals? Even more than your STATE, the content of your POLICY will become the most unique and identifying part of your business. It’s what you’re about, what you won’t tolerate, and your definition of success.
My current mode is Analytical Optimism. I’m excited about what’s about to happen, but can’t help but be concerned by how fast it’s moving.
See you out there.
NOTES
Thank you to Saul Varish, Clint Gibler, Jason Haddix, and Saša Zdjelar for reading early versions of this essay and providing wonderful feedback.