How AI Defenders Will Protect Us From Manipulation

We'll use AI Assistants to defend ourselves against marketing, propaganda, and personal cons

One of the AI topics that I’ve been talking about for the last few months is Context. The basic argument is that everything we’re about to do with AI will sit on top of a deep, nuanced understanding of the principal—which could be an individual, a business, or whatever.

Context examples

Here are some examples of where AI is much more powerful when it knows about the subject it’s helping.

  • 🗣️Therapy — You ask an AI assistant why you’re feeling sad, or what you can do to feel better. It can do a much better job if it knows your background and history, has access to your journal, and understands your personal goals, your life challenges, your work and financial situation, etc.

  • 🧳Work — You ask an AI assistant to help you solve a problem at work. It can do a much better job if it knows the company’s capabilities, the details around the challenge, the resources you have available to solve it, etc.

  • ✍️Writing — You ask an AI assistant to help you write a story or a screenplay. It can do a much better job if it knows the types of stories that interest you, based on seeing your ratings of other films, or having access to your favorite books or movies.

These are just a few examples, and the list is effectively infinite. It’s really any situation where problem solving is improved by more deeply understanding the problem, and that’s almost always the case.

Continuous context

You could of course try to jam a bunch of context into each request, so when you go to ask for a story you feed it a bunch of stuff you like. But that’s annoying. You won’t remember everything in the moment, and it’ll be too much to gather and add each time.

The better, obvious, and inevitable solution is that your AI assistant will simply maintain continuous context about you, across multiple dimensions, and keep it updated with incoming data: your workouts, mood ratings, diet, journal entries, etc. People will be hesitant to share for the first few years, but soon our AI assistants will basically have everything.
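To make “continuous context” a little more concrete, here’s a minimal sketch of what such a profile could look like as a data structure. Every name here (the classes, fields, and example sources) is a hypothetical illustration, not any real assistant’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextEvent:
    """One piece of incoming data about the principal (all names hypothetical)."""
    source: str   # e.g. "journal", "fitness_app", "calendar"
    kind: str     # e.g. "mood_rating", "workout", "goal_update"
    payload: dict
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class PrincipalContext:
    """Continuously updated context about one principal (a person, a business, etc.)."""
    principal_id: str
    dimensions: dict = field(default_factory=dict)  # kind -> list of ContextEvents

    def ingest(self, event: ContextEvent) -> None:
        """Fold a new event into its dimension, keeping the full history."""
        self.dimensions.setdefault(event.kind, []).append(event)

# The assistant ingests whatever the principal shares, as it happens.
ctx = PrincipalContext(principal_id="user-123")
ctx.ingest(ContextEvent("journal", "mood_rating", {"score": 4}))
ctx.ingest(ContextEvent("fitness_app", "workout", {"type": "run", "minutes": 30}))
```

The point of the sketch is just that the profile accumulates continuously across dimensions, rather than being re-entered with every request.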

And while that will be utterly awesome functionality-wise, it’ll also present an unprecedented attack surface.

Anatomy of a near-future Context attack

We’re not talking about physical attacks here; we’re talking about people being tricked, manipulated, duped, and otherwise convinced or coerced into doing something they wouldn’t want to do.

Context is a tool, and many tools can be used as weapons. Context is especially dangerous as a weapon when an attacker is using it against the principal. And let’s open our minds to not just 1-on-1 attacks, but groups and organizations going after groups or individuals.

Context attack types

  • ✍️AI Assistant Data Brokers — We already have Data Brokers who collect and sell way more data than the Dark Web could hope to have. They do that for marketing purposes, but once everyone has these rich AI profiles, those profiles will become targets not just for “legit” Data Brokers, but for underground markets that collect that data on high-value targets.

    Imagine a service where you can find crypto holders, people bragging about how much money they make, or people posting pictures of their opulent vacations. Now gather all their Context via a hacked Personal AI assistant profile or some OSINT/Recon. Now that information is for sale.

  • 🗣️Propaganda Attacks — Both special interests (corporate, activist, or otherwise) and governments can also use this type of information to target people or groups with specific campaigns. They might not strictly need the extra context, but the more targeted they go, the more they can tailor the messaging to that particular mark. Think: changing opinions on political events, destroying the reputations of their enemies, etc.

  • 💰Marketing — Marketers will happily purchase this data, or collect it themselves however they can, to use the same propaganda techniques to make people aware of their space, their product/service, or whatever. They’ll be able to slowly and effectively drive behavior in a way that benefits them.

  • 🕵️Feelings Hacks — Perhaps scariest is what will be possible when social engineers get access to this information, since they’re already experts at pressing buttons. But now, instead of reading cues in the moment and working from purely public information, they’ll be able to tailor their attacks to a target’s background, history, trauma, and other highly revealing information.

ATTACKER: Hey I’d love to keep talking but I need to go take care of my mother who’s going through a hard time.

TARGET: Oh, really? My mom just passed away from _______.

ATTACKER: Well, I’d love to catch up on that because that’s what my mother has, and I don’t think she has long to live. I’m just devastated.

This is the type of wedge that cuts into people’s inner circles, and it will all be AI-powered as well. E.g.,

Given this Target Context, construct the ultimate entry script for our new recruits going after this target.

  • 🖤AI-powered Pig Butchering — One place this will do extraordinary damage is with Pig Butchering attacks, where attackers use companionship and/or romance to get lonely (often elderly) people to part with their money. These scams often play out over months as the attacker gains the target’s trust. Then, at the end, they take whatever the target has and disappear.

    This type of attack will be a lot more effective, and even automatable, using the combination of Context and AI Agents.

AI Defenders: AI defense against AI attacks

And now we arrive at the point of the article.

All this was buildup to say that Context will soon be wielded against us to:

  • Get us to believe things

  • Get us to think things

  • Get us to buy things

  • Get us to feel things

And ultimately, to control us. The scariest part of it is that because these are hidden buttons being pressed, and AI will be doing a lot of the campaign creation, the target often won’t even know it’s happening.

Your AI Shield

But you know who will know? Your AI Defender. It has the most Context of all.

I know. I saw your face crinkle up. You’re thinking:

Wait, so AI and Context are the problem? And the solution is more AI and Context?

Yeah. Unfortunately. This isn’t what I’m prescribing; it’s what I think is coming, and there’s nothing anyone can do about it.

Let’s talk through it.

Continuously monitoring for attacks

The way this AI Defender will work is actually pretty simple. It’s just cat and mouse, and mouse and cat, round and round.

Basically your AI Defender (just another personality of your AI Assistant) will be in charge of defending you. And it will know you better than anyone, including you.

So when you meet someone cute who starts flirting, looking at your clothes, complimenting you, and maybe mentioning a shared piece of background, it’ll start engaging to defend you.

DEFENDER (In Your Ear) — He has complimented you twice and has mentioned 3/7 background markers in the last 38 minutes. He also mentioned a canary marker. Current malicious actor probability is 91%.

Same for buying products.

DEFENDER (In Your Ear) — You might be getting influenced to buy that face cream. You’ve heard 8 people talk about it and it’s been on YouTube 14 times. Current marketing exposure rating is 84%.

Same for political opinions.

DEFENDER (In Your Ear) — The narrative in this YouTube video is currently circling the internet, and it appears to be funded by the Carlyle Group, who is known for sponsoring propaganda campaigns. Would you like me to load a counter-argument video? Current propaganda exposure likelihood is 88%.
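Mechanically, warnings like these amount to a scoring loop over observed signals, weighed against your Context. Here’s a minimal sketch of that idea; every signal type, weight, and threshold below is invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """Something the Defender observed in a conversation or feed (hypothetical)."""
    kind: str     # e.g. "compliment", "background_marker", "canary_marker"
    detail: str

# Invented weights for how much each signal kind raises suspicion.
WEIGHTS = {"compliment": 0.10, "background_marker": 0.15, "canary_marker": 0.30}

def malicious_actor_probability(signals):
    """Crude score in [0, 1]; a real defender would use far richer models."""
    score = sum(WEIGHTS.get(s.kind, 0.0) for s in signals)
    return min(score, 1.0)

observed = [
    Signal("compliment", "your jacket"),
    Signal("compliment", "your taste in music"),
    Signal("background_marker", "same hometown"),
    Signal("background_marker", "same obscure hobby"),
    Signal("background_marker", "knows your old employer"),
    Signal("canary_marker", "a detail that only exists in your journal"),
]

p = malicious_actor_probability(observed)
if p > 0.75:
    print(f"DEFENDER: possible manipulation in progress ({p:.0%} suspicion)")
```

The real version would obviously be far more sophisticated, but the shape is the same: watch the signals, compare them to what only your Context would know, and raise a flag when the pattern looks engineered.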

Monitoring the exposure-to-behavior loop

Basically our AI Assistant will know what pushes our buttons, because it’s the world’s expert on those buttons.

It will also watch our behavior, and it will be able to tell whether that behavior is tracking with the goals of the propaganda/manipulation we’re being exposed to.

And it can warn us, prompt us, and otherwise pull us out of the tunnel that the manipulator is trying to take us down.
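At its simplest, that loop is just correlating what you’ve been shown with what you subsequently do. A toy sketch, with made-up event logs and a made-up threshold:

```python
# Hypothetical logs the Defender already has access to.
exposures = ["facecream_ad", "facecream_influencer_clip", "facecream_ad", "news_clip"]
behaviors = ["searched:facecream", "added_to_cart:facecream"]

def tracking_with_exposure(exposures, behaviors, topic, threshold=3):
    """Flag when repeated exposure to a topic is followed by behavior on that topic."""
    exposure_count = sum(1 for e in exposures if topic in e)
    acted = any(topic in b for b in behaviors)
    return exposure_count >= threshold and acted

if tracking_with_exposure(exposures, behaviors, "facecream"):
    print("DEFENDER: your recent actions are tracking with a marketing push you've been exposed to")
```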

Next level? Filtering the input.

This will be an upcoming post, but for now, realize that your AI Assistant/Defender will also have edit capabilities (see the sketch after this list). What happens when it can:

  • Remove the label from products

  • Remove manipulative language from writing

  • Overwrite or edit incoming audio that would press buttons
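Here’s a minimal sketch of what such a filtering layer might look like for text. Every pattern and replacement below is invented for illustration; a real filter would be far more nuanced than regex matching.

```python
import re

# Invented examples of manipulative phrasing the Defender might strip or rewrite.
MANIPULATIVE_PATTERNS = {
    r"only \d+ left in stock": "[scarcity pressure removed]",
    r"everyone (?:is|'s) already using this": "[social-proof pressure removed]",
    r"act now or miss out": "[urgency pressure removed]",
}

def filter_incoming_text(text: str) -> str:
    """Rewrite incoming content before the principal ever sees it."""
    for pattern, replacement in MANIPULATIVE_PATTERNS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

raw = "Only 3 left in stock! Everyone's already using this. Act now or miss out."
print(filter_incoming_text(raw))
# -> "[scarcity pressure removed]! [social-proof pressure removed]. [urgency pressure removed]."
```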

Cool, right? Totally.

Terrifying as hell? Absolutely.

Imagine attackers or governments getting access to that interface. Even worse, they won’t have to hack it; they’ll pay people to use their filters.

Summary

  1. Manipulators work by pushing buttons

  2. Deep Context will make AI assistants infinitely more powerful, but attackers will use that same context as a set of intimate buttons to press

  3. People will constantly be under attack by AI-powered systems abusing their Context

  4. Paradoxically, our AI Defenders will monitor that 24/7 and let us know when it’s happening

  5. The next step after that is prophylactic controls, i.e., filtering attacks before they even reach us, which will also be used against us