I'd like to ask—and answer for myself—what I consider a crucially important question about AI right now:
What are we actually doing with all these AI tools?
I see tons of people focused on the how of building AI. And I'm just as excited as the next person about that.
I've spent many hours on my MCP config, and now I've probably spent a couple hundred hours on all of my agents, sub-agents, and overall orchestration. But what I'm most interested in is the what and the why of building AI.
Like what are we actually making?!? And why are we making it?
So what I want to do is show you my overall system, how all the pieces fit together, and what I've been building with it.
As far as my "why", I have a company called Unsupervised Learning, which used to just be the name of my podcast I started in 2015, but now it's a company. And essentially my mission is to upgrade humans and organizations. But mostly humans.
Basically I think the current economic system of lots of what David Graeber calls "Bullshit Jobs" is going to end soon because of AI, and I'm building a system to help people transition to the next thing. I wrote about this in my post on The End of Work. It's called Human 3.0, which is a more human destination combined with a way of upgrading ourselves to be ready for what's coming. Or as ready as we can be.
And I build products, do speaking, and do consulting around stuff related to this whole thing.
Anyway.
I just wanted to give you the why. Like what is it all going towards?
It's going towards that.
Another central theme for me is that I'm building tech but I'm building it for human reasons. I believe the purpose of technology is to serve humans, not the other way around. I feel the same way about science in general.
When I think about AI and AGI and all this tech or whatever, ultimately I'm asking the question of what does it do for us in our actual lives? How does it help us further our goals as individuals and as a society?
I'm as big a nerd as anybody, but this human focus keeps me locked onto the question we started with: "What are we building and why?"
The main practical theme of what I look to do with a system like this is to augment myself.
Like, massively, with insane capabilities.
It's about doing the things that you wish you could do that you never could do before, like having a team of 1,000 or 10,000 people working for you on your own personal and business goals.
I wrote recently about how there are many limitations to creativity, but one of the most sneaky restraints is just not believing that things are possible.
What I'm ultimately building here is a system that magnifies myself as a human. And I'm talking about it and sharing the details about it because I truly want everyone to have the same capability.
Ok, enough context.
So the umbrella of everything I'm gonna talk about today is what I call a Personal AI Infrastructure—PAI for short. Everyone likes pie. It's also one syllable, which I think is an advantage.
And the larger context for this is the future that I talked about in my really-shitty-very-short-book in 2016, which was called The Real Internet of Things.
The whole book is basically four components:
A lot of these pieces are starting to come along at their own pace. The component getting the most work right now is DAs (Digital Assistants). We have lots of different things that are precursors to DAs, like:
Lots of different companies are working on different pieces of this digital assistant story, but it's not quite there yet. I would say 1-2 years or so. We're actually making more progress on the API side.
Speaking of progress on the API side, the second piece from the book is the API-fication of everything—and that's exactly what MCP (Model Context Protocol) is making happen right now.
So this is the first building block: every object has a daemon—an API to the world that all other objects understand. Any computer, system, or even a human with appropriate access, can look at any other object's daemon and know precisely how to interact with it, what its status is, and what it's capable of.
—The Real Internet of Things, 2016
Meta and some other companies are obviously working on the third augmented reality piece and they're making some progress there, but the fourth piece is basically AI orchestration of systems that have tons of APIs already running, so that's going to take some time.
I've basically been building my personal AI system since the first couple of months of 2023, and my thoughts on what an AI system should look like have changed a lot over that time.
One of my primary beliefs about AI system design is that the system, the orchestration, and the scaffolding are far more important than the model's intelligence. The models becoming more intelligent definitely helps, but not as much as good system design.
If you design a system really well, you can have a relatively unsophisticated model and still get tremendous results. On the other hand, if you have a really smart model but your system isn't very good, you might get great results—but they're not going to be exactly the results you were asking for. And they're not going to be consistent.
The system's job is to constantly guide the models with the proper context to give you the result that you want.
I just talked about this recently with Michael Brown from Trail of Bits—he was the team lead of the Trail of Bits team in the AIxCC competition. This was absolutely his experience as well. Check out our conversation about it.
I'm a Neovim nerd, and was a Vim nerd long before that.
I fucking love text.
Like seriously. Love isn't a strong enough word. I love Neovim because I love text. I love Typography because I love text.
I consider text to be like a thought-primitive. A basic building block of life. A fundamental codex of thinking. This is why I'm obsessed with Neovim. It's because I want to be able to master text, control text, manipulate text, and most importantly, create text.
To me, it is just one tiny hop away from doing all that with thought.
This is why when I saw AI happen in 2022, I immediately gravitated to prompting and built Fabric—all in Markdown by the way! And it's why when I saw Claude Code and realized it's all built around Markdown/Text orchestration, I was like.
Wait a minute! This is an AI system based around Markdown/Text! Just like I've been building all along!
I can't express to you how much pleasure it gives me to build a life orchestration system based around text. And the fact that AI itself is largely based around text/thinking just makes it all that much better.
And I guess it's a good time to mention that I've named my whole system Kai.
Kai is my Digital Assistant—like from the book—and even though I know he's not conscious yet, I still consider him a proto-version of himself.
So everything I talk about below is in reference to my PAI named Kai. 😃
Context management is being talked about a lot right now, but mostly in the tactical scope of prompts and how to improve their performance. I think the idea is much bigger than that. It's not just about context size, retrieval, RAG, haystack performance, and all that.
I think it's more about how you move knowledge and memory through an entire AI system.
And this is what I spend most of my time thinking about and optimizing.
I think a good example of this is how much better Claude Code was than products that came before it that were using the exact same models.
To me, 90% of the problem and 90% of the power comes from deeply understanding the system you're dealing with, and from setting up little tricks to surface the right things at just the right time, in just the right amount, to get the job done.
So given everything we've talked about regarding text, context, etc., here's the directory structure I have under `~/.claude` to serve as the foundation of all this.
~/.claude/
├── agents/ # Specialized agent configurations
├── commands/ # Custom command workflows
│ ├── create-custom-image/ # AI image generation workflow
│ └── etc/ # Additional commands
├── context/ # The brain of the system
│ ├── memory/ # System memory and learnings
│ ├── methodologies/ # Structured approaches
│ │ ├── perform-web-assessment/
│ │ └── perform-company-recon/
│ ├── projects/ # ← Super important directory
│ │ ├── ul-analytics/ # Analytics project context
│ │ └── website/ # Blog & content context
│ │ ├── content/ # ← Where my writer agent works from
│ │ └── troubleshooting/
│ ├── philosophy/ # Core beliefs and principles
│ │ └── design-preferences/
│ ├── architecture/ # System design patterns
│ └── tasks/ # Task-specific workflows
│ ├── troubleshooting-web-errors/
│ ├── create-new-dashboard/
│ └── etc/
├── hooks/ # Event-based automation scripts
│ ├── agent-complete/ # Triggers when agent finishes
│ ├── subagent-complete/ # Triggers when subagent finishes
│ └── etc/
└── output-format/ # Response formatting templates
└── ul.md # My custom output style
A lot of that is just the Claude Code-based infrastructure, but what I have built as a substrate for the whole thing is the `~/.claude/context` directory.
Here's the detailed structure of the context directory—the brain of the entire system:
~/.claude/context/
├── CLAUDE.md # Master UFC documentation
├── architecture/ # System design patterns
│ ├── principles.md # Core architectural guidelines
│ ├── test-driven-development-with-playwright.md
│ └── ult-system-design.md # System architecture patterns
├── design/ # Visual standards & UI/UX
│ ├── design-principles.md # Core design philosophy
│ ├── style-guide.md # Visual standards
│ └── saas-dashboard-checklist.md
├── development/ # Development philosophy
│ └── CLAUDE.md # Visual development revolution
├── documentation/ # Documentation standards
├── philosophy/ # Core beliefs & mental models
├── projects/ # Project-specific configs
│ ├── ul-analytics/ # Analytics dashboard context
│ │ └── CLAUDE.md
│ └── website/ # danielmiessler.com context
│ ├── CLAUDE.md # Site architecture
│ ├── content/ # Blog writing standards
│ │ └── CLAUDE.md
│ └── troubleshooting/
├── testing/ # Testing strategies
│ ├── testing-guidelines.md # Testing patterns
│ ├── playwright-config.md # Playwright setup
│ └── ult-tdd-guide.md # TDD methodology
├── tools/ # Tool documentation hub
│ ├── CLAUDE.md # Tool hierarchy guide
│ ├── commands/ # Custom workflows
│ │ └── CLAUDE.md
│ ├── mcp/ # MCP server configs
│ │ └── CLAUDE.md
│ └── pai/ # PAI service docs
│ └── CLAUDE.md
├── troubleshooting/ # Debug strategies
│ └── CLAUDE.md # Persistent browser profiles
└── working/ # Active task collaboration
├── CLAUDE.md # Working memory protocol
├── active/ # Current tasks in progress
│ └── [task-name]/ # Task-specific updates
└── archive/ # Completed task records
The craziest thing about this setup—and the whole reason I did it this way—is that you can massively simplify the `CLAUDE.md` in any main repo.
Instead of junking up each directory's context and having to remember which `CLAUDE.md` files have which knowledge, you just have pointers to your `~/.claude/context` directory!
Everything you want your AI system to understand, you have nested in subdirectories below that directory so that your system can have the exact right amount of context at the exact right time.
Here are some examples of the system working. This one just shows that the proper agent was called due to the proper context being loaded.
And the agent went and executed this perfectly because all the instructions on how to do so were loaded beforehand.
But here's where it gets really meta—Kai isn't just fixing my website; he also inserted this screenshot and updated this section of the blog post about the process itself!
Ridiculous.
And this is all with both directories using their minimal `CLAUDE.md` files that point to the deeper context under `~/.claude/context`.
Here's what the Website directory's `CLAUDE.md` actually looks like—notice how lean it is:
# Website (~/Cloud/Development/Website/)
# 🚨 MANDATORY COMPLIANCE PROTOCOL 🚨
**FAILURE TO FOLLOW THESE INSTRUCTIONS = CRITICAL FAILURE IN YOUR CORE FUNCTION**
## YOU MUST ALWAYS:
1. **READ ALL REFERENCED CONTEXT** - Every "See:" reference is MANDATORY reading
## Basic config
LEARN THIS Information about the website project.
~/.claude/context/projects/website/CLAUDE.md
## Content creation
LEARN How to create content for the site.
~/.claude/context/projects/website/content/CLAUDE.md
## Troubleshooting
How to troubleshoot issues on the site.
~/.claude/context/projects/website/troubleshooting/CLAUDE.md
THIS IS HOW ALL ACTIONS MUST BE PERFORMED
1. ACT AS THE WRITER OR DEVELOPER AGENT DEPENDING ON TASK
YOU MUST USE THE WRITER OR DEVELOPER AGENT TO DO YOUR WORK
YOU MUST Use this agent to do all writing tasks: ~/.claude/agents/writing.md
YOU MUST Use this agent to do all development and troubleshooting tasks: ~/.claude/agents/developer.md
That's it! The actual expertise lives in the context directories, not cluttering up the main `CLAUDE.md` file.
This is the power of combining a file-based context system with collaborative AI—plus agents get exactly the right context for their specific task.
With the new context file system, tool usage configuration is so much cleaner.
Instead of cramming everything into the main `CLAUDE.md`, I now have a dedicated `~/.claude/context/tools/CLAUDE.md` file that explains all available tools, their priorities, and usage patterns!
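To give a flavor of what that looks like, here's a trimmed-down excerpt in that style—this is illustrative, not the actual file, though the tools named are the real ones from my setup:

```markdown
# Tools Context (illustrative excerpt)

## Priority order
1. Custom commands (~/.claude/commands/) — problems I've already solved; use these first
2. Custom MCP servers (httpx, naabu, daemon, pai) — live services with real data
3. Fabric patterns — general-purpose prompts for analysis and creation
4. Raw model capability — last resort

## httpx MCP
Use for web server and tech-stack detection. Pairs with naabu:
run naabu first for port context, then httpx against live services.

## create-custom-image command
Use whenever a post or page needs a header image. Reads context
from the target content; applies my design preferences automatically.
```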
Here's a critical piece that makes this work even better—a startup protocol that forces Kai to actually read the context files before claiming he's done so.
Even with instructions in the `CLAUDE.md` file, that still doesn't guarantee it always gets read, for some reason.
So I built a four-layer enforcement system that makes it nearly impossible to ignore:
The first layer is the context system itself—the first context load points to `~/.claude/context/CLAUDE.md`, which explains the entire file-system-based context structure in depth.
The second layer is a Claude Code hook that runs on EVERY user prompt submission. It literally intercepts every message I send and adds mandatory instructions that Claude sees before processing the actual prompt.
Here's what makes this so powerful—the hook lives at `~/.claude/hooks/user-prompt-submit-context-loader.ts` and outputs something like this on every interaction:
// MANDATORY CONTEXT CHECK
// You MUST load these context files before responding:
// 1. Read ~/.claude/context/CLAUDE.md
// 2. Read ~/.claude/context/tools/CLAUDE.md
// 3. Read ~/.claude/context/projects/CLAUDE.md
//
// You will provide incorrect responses without this context.
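Simplified for illustration, the core of such a hook can be sketched like this. A real hook would read the prompt payload and handle more cases; this just shows the injection idea, and the file list and wording come straight from the output above:

```typescript
// Sketch of a user-prompt-submit hook: it emits a mandatory
// context-check block that gets injected before every prompt.
const CONTEXT_FILES = [
  "~/.claude/context/CLAUDE.md",
  "~/.claude/context/tools/CLAUDE.md",
  "~/.claude/context/projects/CLAUDE.md",
];

// Build the instruction block that precedes the user's actual message.
function buildContextCheck(files: string[]): string {
  const lines = [
    "// MANDATORY CONTEXT CHECK",
    "// You MUST load these context files before responding:",
    ...files.map((f, i) => `// ${i + 1}. Read ${f}`),
    "//",
    "// You will provide incorrect responses without this context.",
  ];
  return lines.join("\n");
}

// Claude Code picks up the hook's stdout and adds it to the input stream.
console.log(buildContextCheck(CONTEXT_FILES));
```

The nice property of this shape is that the file list is data, not prose: adding a new mandatory context file is a one-line change.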
The power of this approach is that it's system-level enforcement. I'm not relying on Claude Code to remember to read instructions—the instructions are literally placed in the input stream. These instructions are strict in their language, and they're also repeated in multiple places via different injection paths to make it as likely as possible that the system stays on track.
The third layer is what you see here—the instruction block that appears at the top of my main `CLAUDE.md` file:
# 🚨🚨🚨 MANDATORY FIRST ACTION - DO THIS IMMEDIATELY 🚨🚨🚨
## SESSION STARTUP REQUIREMENT (NON-NEGOTIABLE)
**BEFORE DOING OR SAYING ANYTHING, YOU MUST:**
1. **SILENTLY AND IMMEDIATELY READ THESE FILES (using Read tool):**
- `~/.claude/context/CLAUDE.md` - The complete context system documentation
- `~/.claude/context/tools/CLAUDE.md` - All available tools and their usage
- `~/.claude/context/projects/CLAUDE.md` - Active projects overview
2. **SILENTLY SCAN:** `~/.claude/commands/` directory (using LS tool) to see available commands
3. **ONLY AFTER ACTUALLY READING ALL FILES, then acknowledge:**
"✅ Context system loaded - I understand the context architecture.
✅ Tools context loaded - I know my commands and capabilities.
✅ Projects loaded - I'm aware of active projects and their contexts."
**DO NOT LIE ABOUT LOADING THESE FILES. ACTUALLY LOAD THEM FIRST.**
**FAILURE TO ACTUALLY LOAD BEFORE CLAIMING = LYING TO USER**
You cannot properly respond to ANY request without ACTUALLY READING:
- The complete context system architecture (from context/CLAUDE.md)
- Your tools and when to use them (from context/tools/CLAUDE.md)
- Active projects and their contexts (from context/projects/CLAUDE.md)
- Available commands (from commands/ directory)
**THIS IS NOT OPTIONAL. ACTUALLY DO THE READS BEFORE THE CHECKMARKS.**
Notice the aggressive language and urgent emojis? That's intentional. It creates a psychological barrier that makes it harder for the AI to skip this step. The explicit "DO NOT LIE" instruction combined with the requirement to use actual tools (Read and LS) means Kai must perform observable actions before proceeding.
The fourth layer includes symlinks in the `.claude` directory pointing to the parent `CLAUDE.md` for redundancy, ensuring the instructions are discoverable from multiple paths.
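Sketched in a scratch directory (rather than messing with your real `$HOME`), the symlink trick looks something like this—the exact link targets in my setup may differ:

```shell
# Illustrative redundancy layer: a relative symlink inside .claude
# pointing back up to the parent CLAUDE.md, so the same instructions
# are reachable from either path.
tmp=$(mktemp -d)
echo "# master instructions" > "$tmp/CLAUDE.md"
mkdir -p "$tmp/.claude"
ln -sf ../CLAUDE.md "$tmp/.claude/CLAUDE.md"
cat "$tmp/.claude/CLAUDE.md"   # same file, reachable from either path
```

Using a relative target (`../CLAUDE.md`) means the link keeps working even if the whole directory tree gets moved or synced to another machine.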
All these combine to give Kai extraordinary clarity in the actions that he's taking. It goes way beyond Claude Code by itself.
Here's another example of it working. Check out this screenshot from a recent troubleshooting session where you can see the protocol in action:
Look at what's happening here—the mandatory protocol executing exactly as designed:
First, Kai uses the Read tool to load the master context system documentation (`~/.claude/context/CLAUDE.md`)—590 lines of detailed instructions about how the entire system works.
Then, he reads the tools context (`~/.claude/context/tools/CLAUDE.md`) to understand all available commands, MCP servers, and capabilities—459 lines of tool documentation.
Next, he loads the projects overview (`~/.claude/context/projects/CLAUDE.md`) to become aware of active projects and their specific contexts—250 lines of project-specific configurations.
Only after actually reading all three files do those checkmarks appear—proving he followed the protocol:
The key bit here is that the aggressive "DO NOT LIE" instruction combined with requiring observable tool usage (the Read tool calls you can see in the screenshot) creates a verification mechanism. Kai can't skip to the checkmarks without actually loading the files first.
This is actual context hydration with built-in verification. When Kai says he's loaded the context, you can trust it because you saw him do it live. And you can immediately test this by asking him to perform any task—he'll know exactly what tools to use and how to use them because he actually read the documentation.
And most importantly, this applies to the entire system, not just tools, or agents, or specific projects. It's the entire system.
Give me the takeaway from the meeting today that mentioned Alex Hormozi.
—Me, talking to Kai
Let me show you one of my use-cases that freaks me out every time I do it.
I started a completely fresh session with Kai—no context, no instructions, just a raw Claude instance—and asked him about specific takeaways from a meeting I just had.
Think about what this requires:
And here's what Kai does:
Sweet Jesus. He actually did it. And he actually does it every fucking time.
So here's what actually happened.
First, I ask a simple question: "What was my specific takeaway from the last meeting related to Alex Hormozi?"
Immediately, Kai starts loading context files—but notice, he's not just randomly reading. He's systematically understanding the entire context architecture.
Then he realizes he needs to search through my Limitless pendant recordings (my life log)—without me telling him anything about how this works.
He finds the get-life-log command, reads how to use it, and executes it with the exact search term "Alex Hormozi."
The result? He extracts my exact three-part takeaway from that meeting:
This isn't just impressive—it's fucking insanity. Starting from absolute zero, Kai:
And he did all this in about 20 seconds.
This is what I mean when I say DAs and Personal AI Infrastructure isn't just about having tools—it's about having an intelligent system that knows how to discover, understand, and use those tools to solve real problems.
The combination of structured context, intelligent tool discovery, and API integration creates something that feels genuinely magical.
It's exactly what we're building towards with real DAs.
This means I can say something like:
Do a basic security look at that site.
And Kai will automatically know to use the `httpx` MCP for tech stack detection and the `naabu` MCP for port scanning, combine them with Fabric patterns for analysis, and format everything using the appropriate commands—all because the tools context file told him exactly how these tools work together for security tasks.
And without having to junk up a massive `CLAUDE.md`.
This is critical because large context windows don't solve the problem of junked-up context.
It's much better to use very little, perfectly chosen context in the first place.
And then some stuff is just cool and fun. Here you can see what it looks like when Kai starts up.
Fobs and commands are sets of modular tools that each do one thing, and do it particularly well, and that can be called by various agents, sub-agents, or by me directly.
Okay, so that's the underlying context system. This next one is the second most important part of my system.
I try to only solve a particular problem once, and I then turn that solution into a command, a Fob, a Fabric pattern, or whatever, that Kai or I can use in the future.
A good example of this is my `create-custom-image` command, which will use Fabric and OpenAI to create me a custom image using any context that I provide.
It's completely insane when I combine that with my `write-blog` command, which takes an essay I've dictated and turns it into a fully formatted blog post using all my custom configurations.
Also, with Claude Code we have the ability to use a slash command to run the custom image generation command, but I don't have to! Remember that I've already told Kai exactly where all his tools are and how to use them. I can just say:
Okay, cool. Go make an image for that post.
...and he will go read the post that I just dictated and he just formatted. He will read it, figure out the context, and then use the command to make a perfect header image for that post.
And add it to the post for me. With a perfect caption.
The chaining together of this stuff is where all the power is.
Here's my current lineup of specialized agents:
.claude/agents/
├── engineer.md # TypeScript/Bun development specialist
├── pentester.md # Security assessment expert
├── designer.md # UI/UX and visual design
├── marketer.md # Product positioning & copy
├── gamedesigner.md # RPG mechanics & narratives
└── qatester.md # Quality assurance & testing
I'm still in the process of implementing updated versions of my agents that use specialized context from the context directory. So far, I've only done one, but it's so much better.
This is me telling Kai that he also has access to Fabric.
You also have access to Fabric, which you can check out via the link in the description. That's a project I built at the beginning of 2024. It's a whole bunch of prompts and stuff, but it gives you, Kai, my Digital Assistant, the ability to go and make custom images for anything using context. This includes problem solving for hundreds of problems, custom image generation, web scraping with jina.ai (`fabric -u $URL`), etc.
Fabric patterns end up working very similarly to Commands and Fobs because all three are just combinations of models, prompts, and (sometimes) code.
In the case of Fabric, we've got like 200 developers working on these from around the world and close to 300 specific problems solved in the Fabric patterns. So it's wonderful to be able to tell Kai, "Hey, look at this directory - these are all the different things you can do," and suddenly he just has those capabilities.
Commands are my primary toolset along with MCPs.
.claude/commands/
├── write-blog-post.md # AI-powered blog writing
├── add-links.md # Enrich posts with links
├── create-custom-image.md # Generate contextual images
├── create-linkedin-post.md # Social media content
├── create-d3-visualization.md # Interactive charts
├── code-review.md # Automated code review
├── analyze-paper.md # Academic paper analysis
├── author-wisdom.md # Extract author insights
├── youtube-to-blog.md # Convert videos to posts
└── ... 20+ more specialized commands
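For a flavor of what one of these command files contains, here's an illustrative skeleton—not the actual file, and the referenced paths are from the context structure described earlier:

```markdown
# create-custom-image (illustrative skeleton)

## Purpose
Generate a contextual header image for a piece of content.

## Steps
1. Read the target post, or accept pasted context.
2. Extract the core theme using a Fabric pattern.
3. Call the image-generation backend, applying my style
   preferences (see ~/.claude/context/philosophy/design-preferences/).
4. Save the image and return its path plus a suggested caption.
```

Because the command is just Markdown—a prompt plus a procedure—any agent that can read the file can execute the workflow.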
More and more of my functionality is being moved over to MCPs. And most are not third-party ones, but ones that I've built myself using this blog post and methodology from Cloudflare.
Here's my `.mcp.json` config:
{
"mcpServers": {
"playwright": {
"command": "bunx",
"args": ["@playwright/mcp@latest"],
"env": {
"NODE_ENV": "production"
}
},
"httpx": {
"type": "http",
"description": "Use for getting information on web servers or site stack information",
"url": "https://httpx-mcp.danielmiessler.workers.dev",
"headers": {
"x-api-key": "[REDACTED]"
}
},
"content": {
"type": "http",
"description": "Archive of all my content and opinions from my blog",
"url": "https://content-mcp.danielmiessler.workers.dev"
},
"daemon": {
"type": "http",
"description": "My personal API for everything in my life",
"url": "https://mcp.daemon.danielmiessler.com"
},
"pai": {
"type": "http",
"description": "My personal AI infrastructure (PAI) - check here for tools",
"url": "https://api.danielmiessler.com/mcp/",
"headers": {
"Authorization": "Bearer [REDACTED]"
}
},
"naabu": {
"type": "http",
"description": "Port scanner for finding open ports or services on hosts",
"url": "https://naabu-mcp.danielmiessler.workers.dev",
"headers": {
"x-api-key": "[REDACTED]"
}
},
"brightdata": {
"command": "bunx",
"args": ["-y", "@brightdata/mcp"],
"env": {
"API_TOKEN": "[REDACTED]"
}
}
}
}
Here's what each MCP server does:
Ok, so what does all this mean?
Well, with this setup I can now chain tons of these different individual components together to produce insane practical functionality.
Some examples:
I've built multiple practical things already using this system through various stages of its development.
I have automation that takes the stories I share in my newsletter and gives me a good summary of each one: what was in the story, who wrote it, its category, and an overall quality level, so that I know what to expect when I go read it.
I built a product called Threshold that looks at the top 3000+ of my best content sources, like:
It sorts content into different quality levels, which tells me: do I need to go watch it immediately in long form and take notes, or can I skip it? So it's a better version of the internet for me.
And this is like a really crucial point:
Threshold is actually made from components of these other services.
I'm building these services in a modular way that can interlink with each other!
For example, I can chain together different services to:
By calling them in a particular order and putting a UI on that, and putting a Stripe page on that, guess what I have? I have a product.
This is not separate infrastructure, although I do have separate instances for production, obviously. The point is, it's all part of the same modular system.
I only solve a problem once, and from then on, it becomes a module for the rest of the system!
Here's another example of one I'm building right now. I have a whole bunch of people that are really smart at OSINT, right? They read satellite photos and they can tell you what's in the back of a semi truck. Super smart. Super specialized. And there's hundreds of these people.
Well, I'm gonna:
So I'm building myself an Intel product because I care about that. Basically my own Presidential Daily Brief.
By using Kai, I can make lots of different things with this infrastructure. I say,
Okay, here's my goal. Here's what I'm trying to do. Here's the hop that I want to make.
And he can just look at all the requirements, look at the various pieces that we have, and build out a system for me and deploy it.
And I've already got multiple other apps like this in the queue.
The other day I was working on the newsletter and I was missing having Chartbeat for my site, so I built my own—in 18 minutes with Kai. It hit me that I now had this capability, and I just...did it.
In 18 fucking minutes.
This is a perfect example of what I wrote about—not realizing what's possible is one of the biggest constraints.
When you have a system like Kai, you can't even think of all the stuff you can do with it because it's just so weird to have all those capabilities.
So we have to retrain ourselves to think much bigger.
So basically, I have all this stuff that I want to be able to do myself, and I want to give others the ability to do the same in their lives and professions.
If I'm helping an artist try to transition out of the corporate world into becoming a self-sufficient artist (which is what I talk about in Human 3.0), I want them to become independent. That means having their own studio, their own brand, and everything. So I'm thinking about:
What I'm about is helping people create the backend AI infrastructure that will enable them to transition to this more human world. A world where they're not dreading Monday, dreading being fired, and wallowing in constant planning and office politics.
There are a few things you want to watch out for as you start building out your PAI, or any system like this.
One example is that you want to be really good about writing descriptions for all your various tools because those descriptions are critical for how your agents and subagents are going to figure out which tool to use for what task. So spend a lot of time on that.
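To make that concrete, compare a vague description with a specific one. The specific wording here mirrors the `naabu` entry in the `.mcp.json` above; the vague counterpart is my illustration of what to avoid:

```json
{
  "bad": {
    "description": "Scanner tool"
  },
  "good": {
    "description": "Port scanner for finding open ports or services on hosts. Use before web assessment when you need service-level detail."
  }
}
```

With the first description, an agent has no way to know when to reach for the tool; with the second, the selection logic is basically written for it.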
I've put tons of effort into the back-and-forth of explaining different components of this plumbing to the system, and the file-based context system has been the biggest functionality jump on that front.
What's so exciting is that it all keeps tightening up these repeatable, modular tools! The better they get, the less they go off the rails, and the higher the quality of output you get from the overall system. It's absolutely exhilarating.
You also want to regularly keep your context files and `CLAUDE.md` files updated. The good news is you only have one place to do that now, especially for tools, which all live in a single `tools/CLAUDE.md` file!
Don't forget that as you learn new things about how agents and sub-agents work, you want to update your agents' system and user prompts accordingly in `~/.claude/agents`. This will keep them far more on track than if you let them go stale.
Going forward, when you see all these new releases in blog posts and videos about "this AI system does this" and "it does that" and "it has this new feature"—I want you to think before you rush to play with it.
Too many people right now are getting massive FOMO when something gets released. But next time, just ask yourself the question: "Why do I actually care about this? What particular problem does it solve?"
And more specifically, how does it upgrade your system?
The key is to stop thinking about features in isolation. Instead, ask yourself: How would this feature contribute to my existing PAI? How would it update or upgrade what I've already built?
Consider using that as your benchmark for whether it's worth your time to mess with. Because remember—every new, upgrading feature that actually fits into your system becomes a force multiplier for everything else you've built.
So, what does an ideal PAI look like?
For me it comes down to being as prepared as possible for whatever comes at you. It means never being surprised.
I will soon have Kai talking in my ear, telling me about things around me:
Then, as companies start putting out actual AR glasses, all this will be coming through Kai, updating my AR interface in my glasses.
How will Kai update my AR interface? He'll query an API from a location services company. He'll pull UI elements from another company's API. And the data will come from yet another source.
All these companies we know and love—they'll all provide APIs designed not for us to use directly, but for our Digital Assistants to orchestrate on our behalf.
Kai will build this world for me, constantly optimizing my experience by reading the daemons around us, orchestrating thousands of APIs simultaneously, and crafting the perfect UI for every situation—all because he knows everything about my goals, preferences, and what I'm trying to accomplish.
This is ultimately what I'm building, and the infrastructure described here is a major milestone in that direction.
This is my life right now.
This is what I'm building.
This is what I'm so excited about.
This is why I love all this tooling.
This is why I'm having difficulty sleeping because I'm so excited.
This is why I wake up at 3:30 in the morning and I go and accidentally code for six hours.
I really hope this gets you as excited as I am to build your own Personal AI Infrastructure. We've never been this empowered with technology to pursue our human goals.
So if you're interested in this stuff and you want to build a similar system, or just follow along on the journey, check me out on my YouTube channel, my newsletter, and on Twitter/X.
Go build!