If you’re like me you’ve had a number of ideas throughout your life. Most of them were mediocre, derivative, or just plain bad. But some of them were at least worth capturing. You didn’t capture them, though, and now you’re not sure if they were any good because you can’t remember most of them.

This project is my response to that. It’s a capture of some subset of the ideas I have that I think either are, or might be, worth exploring.

In this section of the site I’ll be keeping a list of the various thoughts I’ve had over the years that I think are (or were) worth noting. I’m going to try to limit it to only my best ideas.

This is an essay/idea in which I detail how the ubiquity and minimization of video technology will eliminate privacy and fundamentally change how people interact with each other.

[ Post Date: May, 2008 ]

I wrote in 2005 about how the trend in IT will be towards complete outsourcing of IT resources to large providers like Accenture, HP, and IBM. A few elite IT people will remain with the business, but they will be fluent in the business and will serve as go-betweens for the business’s requirements and the providers’ implementations.

Basically, the business will have a need, they’ll go to their IT consultant (a permanent hire at the company), and that consultant will then drag and drop the various pieces of the solution together using Amazon, Accenture, HP, and whoever else. A PM will be assigned, and the project will be implemented. The idea of building all those pieces in-house will come to seem obviously inferior.

[ Post Date: May, 2008 ]


[ Original Post: Aug and Naug ]

[ Post Date: May, 2008 ]

Proxicus is a risk-based browser proxy system that rates websites according to a number of parameters and then assigns each a level of risk. This risk score is then used to determine the type of content the client will be handed by the proxy. In other words, a proxy is being used to determine whether or not images, javascript, java, active x, or flash will be given to a protected client.

So, if risk calculations determine that a site is completely benign, all the content of the site will be delivered (flash, javascript, etc.). But if a site comes back suspect in some way the proxy may choose (according to policy) to only send text and images (and none of the other more powerful functionality).
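The policy decision described above could be sketched as follows. This is a minimal illustration; the thresholds, content types, and function names are my own assumptions, not part of the original Proxicus design.

```python
# Hypothetical sketch of the Proxicus policy decision. Thresholds and
# content-type names are illustrative, not from the original design.

RISK_POLICY = [
    (0.2, {"text", "images", "javascript", "flash", "java", "activex"}),  # benign
    (0.6, {"text", "images", "javascript"}),                              # mildly suspect
    (0.9, {"text", "images"}),                                            # suspect: static only
    (1.0, {"text"}),                                                      # high risk
]

def allowed_content(risk_score: float) -> set:
    """Return the content types the proxy will hand to the client."""
    for threshold, allowed in RISK_POLICY:
        if risk_score <= threshold:
            return allowed
    return {"text"}

def filter_response(content: dict, risk_score: float) -> dict:
    """Strip any content the policy disallows at this risk level."""
    allowed = allowed_content(risk_score)
    return {ctype: body for ctype, body in content.items() if ctype in allowed}
```

So a page fetched through the proxy at a risk score of 0.7 would arrive with its flash and javascript stripped, while the same page at 0.1 would come through untouched.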

I’ve alluded to a major project a few times in recent months. Well, I’m now ready to talk about what it is. I apologize for the disjointed presentation; I’m a bit excited and will clean up as needed later.

One of the most annoying problems that faces computer users is contact management. Most don’t have a truly organized digital address book, and even those that do suffer from contact-rot. This is where each passing day means one more mailing address has changed, someone got a new mobile number, and another person got married and has a new last name. In other words, time deteriorates the quality of your information about other people.

Many services have come and gone that tried (or are trying) to solve this problem. Most notable of these is Plaxo. Plaxo, like most of the other services of its kind, has essentially been a place where you kept your updated information. The idea being that when you changed your info, Plaxo could notify the people in your address book that you had done so. At that point they could take some steps to update their information. The problem is that it required too much involvement with the third-party service. Plaxo is, after all, a for-profit company, so it makes sense that they would want you to interact with them.

Identity Management + Semantic Web

My idea is simple: provide a free and open infrastructure upon which people can build identity-based services ranging from contact management to social interaction functionality. Focus on transparency and open standards, meaning that the exchange of information should be as simple as possible and should allow infinite potential for securely sharing and manipulating the data.

Here are the two primary components:

  1. A central, server-side representation of people using XML. I’m currently working on RDF for the main definition.

  2. An open, RSS-based client. The client piece, while completely open to various implementations, will have two components: 1) subscriptions to contacts via RSS, and 2) translation of the server’s XML to the local address book format.


  • Maintain constantly updated contact information by subscribing to your friends’ information on a central server. You stay updated because your information is not static. The information you see when you open your address book is what was last pulled from your contact’s RSS feed.

  • Your contact list is constantly maintained in a neatly defined, XML-based format on the server (OPML?). To get your contacts onto any new system (including mobile devices), install any client (there will be many) that speaks both the server-side XML protocol and the local address book format.

  • Link the elements within a given definition to other namespaces that carry weight within the semantic world. In other words, allow favorite bands, favorite foods, and a multitude of other attributes to be defined in such a way that associated information can be referenced (and mashed) semantically.

The server resides at (currently living in a VMware machine in San Francisco that’s powered off) and hosts the various identity files (RDF, etc.). As an example, we’ll say we have two accounts — myself (Daniel Miessler), and my friend (Seth Kline).

We respectively live at and . Within whatever client we’re using for the system (again, this will be any one of many available), I’ll subscribe to Seth’s address from the client installed on my local system. The client works by maintaining two types of information: who you are, and who your subscriptions (your contacts) are.

More On Client Functionality

The most basic client monitors the local address book for changes to my own contact information, and upon sensing changes translates the changed result into the server’s XML format and uploads it. This updates my information on the server and updates the associated RSS feed that represents me as a person.

Since people who have me in their “contact list” are actually just subscribed to my RSS feed, their respective clients (web clients, desktop clients, mobile clients) will be notified the next time they check in that I have updated my information. Their client will then update my information in their contact list (server-side) and make the associated change to the local address book on the system they are using (mobile phone, work computer, etc.).
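The push-up/pull-down flow described above can be sketched in a few lines. This is an in-memory toy, not the real protocol: the class names, field names, and the idea of a dictionary feed are all stand-ins for the server-side XML and RSS pieces.

```python
# A minimal, in-memory sketch of the sync loop. The server, feed format,
# and field names are illustrative assumptions, not the real protocol.

class Server:
    """Stands in for the central server: one 'feed' per person."""
    def __init__(self):
        self.feeds = {}           # person_id -> latest contact record

    def publish(self, person_id, record):
        self.feeds[person_id] = dict(record)

    def pull(self, person_id):
        return dict(self.feeds.get(person_id, {}))

class Client:
    """Runs on each device: pushes your record up, pulls subscriptions down."""
    def __init__(self, me, server):
        self.me = me
        self.server = server
        self.address_book = {}    # local address book: person_id -> record

    def update_self(self, record):
        # Local address-book change detected: translate and upload.
        self.server.publish(self.me, record)

    def sync(self, subscriptions):
        # Pull each subscribed contact's feed and refresh the local book.
        for person in subscriptions:
            self.address_book[person] = self.server.pull(person)
```

In use: Seth updates his mobile number in his own address book, his client publishes it, and my next sync quietly rewrites his entry in mine.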

So what we end up with is an infrastructure in which I can update my information using my own local address book, and that information will transparently be propagated (via RSS pull) to anyone who is subscribed to me using the system.

Once I have a client installed it disappears into the background. From that point on I interact only with my regular contact management application, and changes I make are propagated to my subscribers, and their changes are propagated to me.

The end result is that when I open my address book entry for Seth two years from now and dial his mobile number, I could very well be dialing a number that I never entered. He’ll still answer the phone on the other end, however, because at some point he updated HIS local address book, which updated the server, which updated MY local address book.

No extra steps. No extra hassle.

Security is handled on the server by managing who can and cannot access your information. Obviously we don’t want just anyone to be able to pull your entire personal definition (essentially what’s now a vcard) by simply visiting a given URI. I also intend for the various elements/fields in the definition to be granularly controllable, e.g. work associates can see only your work number, while friends can see everything, etc.

Clients are the key; without them we don’t have the transparency that’s required to make the infrastructure useful. Specifically, we need the client to be able to translate between the server’s XML format and the local address book format. In later client iterations, however, I anticipate moving towards address book integration, i.e. being able to add kitmee subscriptions right into the native address book.

So that’s the project. I’m currently working with one other developer on the server side, and have not even started considering the client piece. Our development environment currently consists of a fairly stout Gentoo Linux server running in VMware. The application platform is RoR, and we’re using Subversion for version control.

[ Idea Date: 2007 ]

There will be a universal gaming engine that gives location and physics information, and game designers will design themes in the various areas of current games, e.g.: FPS, fashion, sports, cars, nature, etc.

Those who like these particular genres will buy the subscriptions for those mods (from whatever company), and when they move throughout the world they’ll seamlessly transition from a generic engine experience to seeing the engine’s world through the perspective of the mod.

The way this will manifest is that a person will be walking down the street and see a car. If they are into cars they will have the mod and they’ll see the exact type of car, and it will make the exact sounds of that make and model. When this person looks at a woman (if he/she has the appropriate clothing mod) the brand of clothing will be apparent.

Upon entering a nearby bar, it will be possible to get into a fight with someone. Upon doing so, you can either have it end with little flair (natural game engine), or you can instantly pivot into a fight game like UFC if you like this type of thing and have paid for the mod. If someone draws a gun you enter FPS. If you take a girl home you have the option to actually use certain positions and such, much like in the UFC game.

In short, it’s a universal game engine with a series of mods available for interfacing with it — all based on your preferences and what you’re willing and able to pay for. And the mods are almost endless. Flight simulators. Underwater exploration. Hiking. Birdwatching. Etc., etc..

[ Post Date: ~2003 ]

I’ve had an idea for over a decade about personal servers. The idea is that as you walk around, you’re constantly presenting a number of daemons about yourself that then interact with the daemons of others. Through personal AI (or a reasonable representation thereof), people are then alerted to interesting matches or other data about those around them.

Daemon types will include a public daemon, an adult daemon, etc. And there will be explicit and general rules for how these daemons will interact, for example not allowing young people to access others’ adult daemons.

Use cases for these daemons are obvious and powerful, and include things like going into a bookstore and being alerted that someone within a few feet is a massive fan of both Tolkien and SCUBA, or that a woman in the store is both single and a massive fan of Doctor Who.

The key is the combination of the data and location.
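That combination of data and location could be sketched as a simple matching rule. Everything here is an assumption for illustration: the field names, the distance cutoff, and the "two shared interests" threshold are all invented.

```python
# Illustrative sketch of daemon matching. All fields, the distance cutoff,
# and the scoring rule are assumptions for demonstration only.

def nearby_matches(me, others, max_distance=25.0, min_shared=2):
    """Alert on people within range who share enough interests with me."""
    alerts = []
    for other in others:
        if other["distance_m"] > max_distance:
            continue  # location matters: ignore daemons out of range
        shared = set(me["interests"]) & set(other["interests"])
        if len(shared) >= min_shared:
            alerts.append((other["name"], sorted(shared)))
    return alerts
```

The real version would run continuously inside the personal AI, surfacing only the matches worth interrupting you for.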

My initial concept for this idea way back in 2004 was to have the broadcasts be local, i.e. physical, via something like wireless or bluetooth. The obvious answer at this point seems to be to use the cloud.

At this point (2014) the solution could pull from Facebook data as an initial seed, and then prompt for more data from the user.

[ Idea Date: ~2004 ]

The distinct separation between two types of free will—the kind we do not have (absolute), and the kind we do have (practical). Practical free will is the kind we experience every day, and absolute free will is the freedom for someone to have done something differently if the universe were rolled back to the exact same moment in the past. Read more below:

[ Post Date: 2006 ]

A website and REST API that accepts string input and returns its meaning if it recognizes it. Works on hashes, encoding, and encryption. Content is constantly being added.

The concept is that security testers often come across strings that they think may have meaning, but don’t know what it may be. Tokenscope takes millions of inputs and translates them into many, many forms, and then compares its inputs to the now known outputs.
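The precompute-and-compare approach might look something like this. It is a toy: the transform set is a small sample, and the function names are mine, not Tokenscope’s.

```python
# A toy version of the Tokenscope approach: precompute many transforms of
# known plaintexts, then look unknown strings up against the outputs.
# The transform set here is a small illustrative sample.

import base64
import hashlib

def transforms(s: str):
    data = s.encode()
    yield "md5", hashlib.md5(data).hexdigest()
    yield "sha1", hashlib.sha1(data).hexdigest()
    yield "sha256", hashlib.sha256(data).hexdigest()
    yield "base64", base64.b64encode(data).decode()

def build_index(known_inputs):
    """Map every transformed output back to (plaintext, transform name)."""
    index = {}
    for plain in known_inputs:
        for name, out in transforms(plain):
            index[out] = (plain, name)
    return index

def identify(index, token):
    """Return (plaintext, transform) if the token is a known output, else None."""
    return index.get(token)
```

At the scale described (millions of inputs, many more transforms), the index would live in a database rather than a dictionary, but the lookup logic is the same.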

[ Post Date: December 2013 ]

The idea that, within information security, we are approaching the maximum amount that we can prevent breaches from occurring.

Related to this, we observe that risk is ultimately made up of two main components—probability and impact. Prevention, which sustains most of the security industry, deals only with the first component: probability. Therefore, the near future of infosec will largely be based around lowering impact rather than reducing probability.

This isn’t because it’s better than prevention, but because when prevention isn’t possible, or when you’ve already done as much as you can with it, the only way to make progress is to work on the other side of the equation.
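The argument above can be stated as arithmetic. Treating risk as the product of probability and impact (the numbers below are made up purely for illustration), once probability hits its practical floor, impact is the only lever left that still moves the result.

```python
# Toy illustration of the risk equation; all numbers are invented.

def risk(probability: float, impact: float) -> float:
    return probability * impact

# Suppose prevention has pushed breach probability to a floor of ~0.3,
# so further prevention spending buys little. Halving impact still
# halves the overall risk.
baseline = risk(0.3, 1_000_000)
after_impact_work = risk(0.3, 500_000)
```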

I argue here that 2014 will be the year that companies realize that being hacked isn’t the catastrophe that they once thought it was.

There are a number of things that lead to this: the public becoming desensitized to companies being hacked (because everyone gets hacked), and the corresponding lack of concern among business leaders. The focus will move from “were you hacked?” to “how did you handle the compromise?”.

[ Post Date: October 2013 ]

The idea of creating an epic story masterpiece, executed via music and art, over the period of one or more decades.

[ Original Post: The Grand Music Project ]

[ Post Date: January 2014 ]

A theory for how/why evolution may have selected free will.

[ Post Date: January 2014 ]

Imagine a short school that parents can send their kids to learn topics that aren’t taught well—if at all—in today’s learning institutions. Examples would include: finances, relationships, the dangers of superstition, the dangers of xenophobia, why racism makes zero sense, the importance of the environment, rhetoric and dialectic, how to build an argument, the science of persuasion, effective writing, etc.

These would be short modules—perhaps 1-3 days apiece—and would be both instructional and practical. The idea is that a young student could learn important life lessons early in life rather than having to gain the same knowledge through painful experience.

[ Post Date: January 2010 ]

The Internet of Things is a term that is gaining momentum in everyday use. The basic concept is that there are many types of objects that we are familiar with, e.g., cars, scales, TVs, ovens, refrigerators, etc., that have always been, well…just things. But with the Internet of Things (IoT), they are now becoming network aware.

I think the IoT conversation is good, but it’s a bit shortsighted. I’ve been thinking about a few related concepts for years now, including Social Daemons (see above), Lifecasting (see above), and some others buried in the site.

Social Daemons have to do with providing daemons to humans, and allowing them to interact with each other (AI to AI) and provide input to the human when interesting matches occur. Lifecasting has to do with everything broadcasting video all the time, and the various implications that will come with that for privacy, surveillance, etc.

Universal Daemonization is a potentially far more interesting combination of these concepts. The idea is simple: everything has a daemon associated with it, made up of a few components:

  • A Sensor Daemon (video, audio, temperature, pressure, etc.)

  • A Broadcast Daemon (attributes such as object type, owner, capabilities, color, size, weight, current location, input types accepted, etc.)

  • An Input Daemon (receives requests to perform actions, e.g., turn on, turn off, cross the street, jump up and down, reformat, sound an alarm, give someone a kiss, etc.)

  • An Output Daemon (enables you to automatically send stimuli to other objects, such as texts, noises, flashes, emails, whatever)

[ NOTE: These are really kind of one daemon with different components, but it’s helpful to visualize them this way. ]

The key, then, is that these will be enabled through a combination of existing and new, to-be-developed protocols. Specifically, these will likely ride on top of TCP/IP, HTTP, and REST-based web services.
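The broadcast/input/output split described above could look something like this. It’s a plain class rather than a live HTTP service, and every attribute and action name is an invented example; the comments note which REST-style endpoint each method would correspond to.

```python
# A hypothetical object daemon, sketched as a plain class rather than a live
# HTTP service. Attribute names and actions are illustrative only.

class ObjectDaemon:
    def __init__(self, attributes, actions):
        self.attributes = attributes      # broadcast component
        self.actions = actions            # input component: name -> handler
        self.subscribers = []             # output component: callbacks

    def broadcast(self):
        """What anyone querying this object sees (think GET /attributes)."""
        return dict(self.attributes)

    def request(self, action, **kwargs):
        """Ask the object to do something (think POST /actions/<name>)."""
        handler = self.actions.get(action)
        if handler is None:
            return {"status": "unsupported"}
        result = handler(**kwargs)
        self.notify(action, result)       # output daemon fires on activity
        return {"status": "ok", "result": result}

    def notify(self, event, data):
        """Push stimuli to subscribed objects."""
        for callback in self.subscribers:
            callback(event, data)
```

A city streetlight, a toaster, and a person would all expose this same shape; only the attributes and the action table differ.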

So any device that you might think about being part of the “Internet of Things” really just becomes an “active” object. Or, in future terms, a “non-legacy” object.

Here’s where it gets interesting: Humans are just objects. We have attributes, we have desires, we have preferences, we have ways you can make requests of us, and we have ways of notifying other objects when things take place. This goes into the social daemon concept, but it also blends seamlessly into Internet of Things and Universal Daemonization.

Basically, the name says everything: Universal Daemonization. Humans get daemons. Toasters get daemons. Stoves get daemons. Beds get daemons. Baby clothes get daemons. Dog collars get daemons. Cars have daemons. Restaurants. Bars. Screwdrivers. Screws. Shopping Malls. Books. Watches. Doors. Locks. Packaging. TVs. Etc. Everything gets a daemon.

Then it simply becomes a matter of how these daemons interact with each other, over what protocols, and what rules guide that interaction. That’s where the fun begins. Cities will have thousands of “city” objects. Parking meters, all vehicles, the camera on every cop, the cops themselves, every building, every restaurant, etc. And because it’s in the city, the city will have certain rights to those objects.

All those rights will be brokered and handled through a robust system of Federated Identity. What you can do with the thousands of devices around you will depend on who you are.

Perhaps you’ll be able to see through those cameras, or listen through the microphones. Perhaps they’ll make those cameras public during parades, so people can get a 10,000 eyes perspective of the parade (better than being in person).

And maybe states have multiple cities. And maybe countries have multiple states. Maybe there is an API key that the Federal Government can use to query these Internet-facing daemons, to ask them to do things. Maybe it’s voluntary. So you get a tax break if you allow the government to use your personal set of public cameras (and associated face recognition technology) to look for known fugitives. Or maybe it’s involuntary.

What we’ll have is a world where people and machines move through a given space interacting in real-time with every other object (and daemon) around it. Consuming its daemon information, interacting with its APIs, providing information to them, etc.

For humans, these interactions will be brokered by personal assistants operating according to business rules. Siri will know not to bother you unless x, y, or z occurs. But she will be parsing and collecting millions of data points every day so that when you do ask a question she’ll have the best answer.

The point is that this stuff will unify. It will become standardized via a set of solid protocols, and they’re not likely to be new ones. They’re likely to be very similar, or at least ride on top of, the protocols we already use today.

Objects will simply get enrolled into your personal ecosystem. You will bring your new TV home and it’ll do a meet and greet with your other devices. Sharing settings and preferences and sensor data, etc. They’ll find ways they can serve you better by interacting with each other.

That’s Universal Daemonization. That’s the future of the Internet of Things. Everything is an object. Everything has a daemon with a set of APIs. And everything speaks the same language.

People wonder about the size and impact of the Internet of Things. They wonder what scale it’s on. Is it on the scale of the personal computer? Of mobile computing? But it’s not either of those.

Universal Daemonization is beyond even the scale of the Internet. With the Internet one at least has to participate. With Universal Daemonization, the previously normal things in the world will be tied into, and largely available for interaction with, anything else on the planet—whether human or machine.

Universal Daemonization is the future of the Internet of Things, and while it uses the Internet, it’s actually much bigger than the Internet. It will categorically redefine how we interact with the world around us.

[ Post Date: March 2014 ]


Human behavior is driven by simple yet often subconscious calculations on the most efficient way to satisfy essential needs. These needs include things like mating success, money, power, respect, etc.

In work environments, incentives are present—either explicitly or implicitly—that do the work of modifying employee behavior. There is much discussion and mystery around why certain efforts are effective in changing behavior, and others are not.

The problem with these systems is that the various incentives—both innate and artificial—often conflict with one another. Initiatives like security awareness or secure coding in information security often pull in opposing directions, resulting in mysterious, unpredictable, or undesired outcomes.

My idea, which I’ll call Incentive and Control Analysis (ICA), says that any attempt to change human behavior needs to be initiated from a position of understanding the existing incentive ecosystem, and that attempts to change behavior without this information are likely to be inefficient at best, and negligently wasteful at worst.

In brief, one must understand the various ways a subject is being pulled by Incentive Vectors (IVs), which have both direction and magnitude. This will allow management to adjust those incentives before proceeding, as that may be all that is needed. But if any additional behavior change is desired, the controls put into place need to be mapped into the existing incentives ecosystem to ensure that the control will have the desired effect on outcomes.

The system uses a visual representation of human behavior tiers mapped onto a diagram of concentric circles. At the center of the circle you have the desires that are most core to the group or individual (status/money/etc.). As you move out to the second ring, you arrive at secondary desires, such as the willingness to work with team members, make them successful, etc. Further out you have a third ring that deals with more abstract and distant goals, like protecting against unseen threats, or helping people one doesn’t ever see.

The system works by evaluating where given controls (attempts to change behavior) fall on this diagram, and by mapping the strength of each incentive relative to the others in that ecosystem.
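The ring-and-vector model might be captured like this. The weighting rule (magnitude divided by ring number) is purely my guess at one way to express "closer to the core pulls harder"; nothing here is from a formal ICA specification.

```python
# A toy rendering of the ICA diagram: each incentive has a ring (1 = core
# desires, 3 = abstract/distant goals) and a magnitude. The weighting rule
# below is an illustrative assumption, not a defined part of ICA.

def effective_pull(ring: int, magnitude: float) -> float:
    """Incentives closer to the core pull harder."""
    return magnitude / ring

def dominant_incentive(incentives):
    """Return the incentive a subject is most likely to follow."""
    return max(incentives,
               key=lambda i: effective_pull(i["ring"], i["magnitude"]))
```

The point of even a crude model like this is that it makes conflicts visible before a control is deployed: a weak ring-2 incentive simply cannot beat a strong ring-1 one.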

Taking computer security as an example, and a developer ecosystem specifically, we can place status as a top developer, and high pay, at the center of the circle. Then we map what behaviors the developer can exhibit to receive the maximum amount of those things. In many cases that behavior is releasing the most code. The most features. Etc.

Along comes a security organization that wishes to make this group take security seriously. They implement a training program that requires developers to sit through CBT classes. Not doing so will result in a negative email being sent to their boss.

So here we have two competing incentives, clearly displayed in a visual way. We have at the center of the bullseye a desire to make money, and right next to it the company’s policy of paying people more if they put out more product. The incentive vector arrow is long and thick in the direction of this center goal, indicating clearly its direction and strength.

Then, on the outside of the second ring, we have another desire: the desire to create more secure code. A thin, weak arrow points to this distant target.

Shown visually in this way, it is clear what developers will choose. We have conflicting incentives defined by the same organization, and one (security) is scripted to lose to the other (features) because the other’s behavior is more directly mapped to desirable outcomes in the center ring (money/power/etc.).

A manufacturing plant, however, might have a completely different set of concentric circles. Perhaps they put safety above all else—even above production output. So at the center of this factory floor worker’s concentric circles you have the same goals (money/power/etc.). But next to it is an incentive/control that says she receives a 20% bonus if she has zero safety incidents within the last year.

Suddenly, safety incidents fall. What this company has done is take its key desired outcome and map it to the key desires of employees.

As said earlier, failing to do this will produce a litany of impotent controls. And the most efficient way to see which controls will be effective and which will be ignored is to understand at a fundamental level what incentives exist (both visible and invisible) in your organization. From there, one can map the desired outcomes, and then proceed to the task of alignment.


Here is a brief ICA Methodology outline:

  1. Discuss and list your top three desired outcomes for this group of employees

  2. Capture the existing incentive ecosystem by understanding how rewards, punishments, and compensation are handled within that organization.

  3. Map the existing goals and incentives onto the concentric circle diagram.

  4. Find the conflicting vector forces.

  5. See if corrections to these forces would be sufficient, or if new incentives (in the direction of your desired outcomes) must be added.

An analysis and recommendation framework is forthcoming.

[ Post Date: May 2014 ]

The Unity Machine is an idea for how to increase empathy in humans. A detailed outline exists here.

[ Post Date: September 2005 ]

Lupus Liberalism is a metaphorical reference to Lupus—a medical disorder that causes the body’s immune system to become hyperactive and attack one’s own body. Similarly, Lupus Liberalism turns well-meaning but confused liberals upon their fellow humanists in the defense of beliefs that run directly counter to what liberalism represents.

[ Post Date: October 2015 ]

The future of security architecture is having thousands, or millions, of tiny Control Points scattered throughout the environment. These control points inspect all content moving through them, from layer 2 to layer 7, and they report back to a central system that provides rules and updates for the CPs.

The head of security for the organization, who by that time will sit within the business, will give natural language instructions as to what the security policy should be. Things like:

Nobody exports data of type “sensitive” outside the company except me and Judy. Nobody can copy data to any device unless they’re in the group “data exporters”.

They say the word, and BAM–that policy is pushed down to the millions of control points in the organization. These are in the switches, the firewalls, the mobile devices, the applications, the databases, and every other piece of IT within the environment.

And the Control Points filter based on their perspective. If the CP is in a switch, it filters network and application traffic. If it’s on an endpoint it’s looking at memory and processes and data being stored and accessed, etc.

So, a single policy defined at the top, which all CPs know how to parse, which is then deployed to millions of CPs simultaneously so they can enforce their portion of the policy based on their perspective in the stack.
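That single-policy, many-perspectives model could be sketched as follows. The policy fields, layer names, and exemption lists are invented for illustration; the real system would compile the natural-language policy into something like this structured form first.

```python
# Hypothetical sketch: one central policy, filtered per Control Point by the
# layers it can see. Field and layer names are invented for illustration.

POLICY = [
    {"rule": "block-sensitive-export", "layers": {"network", "application"},
     "exempt": {"head_of_security", "judy"}},
    {"rule": "block-device-copy", "layers": {"endpoint"},
     "exempt_group": "data exporters"},
]

class ControlPoint:
    def __init__(self, name, layers):
        self.name = name
        self.layers = set(layers)
        self.rules = []

    def receive_policy(self, policy):
        # Each CP keeps only the rules it can enforce from its perspective.
        self.rules = [r for r in policy if r["layers"] & self.layers]

def push_policy(policy, control_points):
    """Central push: every CP gets the same policy, keeps its slice."""
    for cp in control_points:
        cp.receive_policy(policy)
```

The switch ends up enforcing the export rule, the laptop the device-copy rule, and neither carries rules it cannot see traffic for.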

That’s the future of enterprise security.

[ Date: January 2013 ]

KARMA is based on the idea that relatively few attributes tell you your readiness to avoid a particular undesirable outcome.

In health we try to avoid premature death. In insurance we try to avoid payouts. And in both cases we have key attributes that we gather about people to see if they are likely to have one of these undesirable outcomes in the near future.

KARMA attempts to do this for information security by evaluating key attributes of a security program’s components (Asset Management, Security Awareness, Application Security, Network Security, Data Loss Prevention, etc.) and assigning them a rating based on how likely they are to be compromised.

This is done by looking at the key attributes associated with successful compromise for each of these components, based on significant experience and research in the security testing space.

The result is a look at one’s security program from a perspective of real-world risk.

A key aspect of the KARMA system is that each issue found that increases risk is rated by how much risk it presents, and thus by how much security can be improved by mitigating it.

The resulting output is a continuously updated list of recommendations for what exactly to do next within your security program. This is something that no product or service offers today, and is absolutely essential if you want to be able to properly prioritize new issues as you hear about them.
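The continuously updated recommendation list might reduce to something like this. The component names, ratings, and the likelihood-times-impact scoring are all my own assumptions about how such a rating could work.

```python
# A toy KARMA-style prioritizer. Component ratings are invented, and the
# scoring (likelihood x impact) is an assumption about how this might work.

def prioritize(components):
    """Return components sorted by the risk reduction available, highest first."""
    return sorted(components,
                  key=lambda c: c["likelihood"] * c["impact"],
                  reverse=True)

def next_action(components):
    """The single highest-value thing to work on next."""
    return prioritize(components)[0]["name"]
```

As new issues come in, they simply get rated and dropped into the same list, which keeps the "what next?" answer current.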

TL;DR: Know the exact ratings of the risk factors in your environment, which yields a prioritized list of recommendations for what to work on next and lets you properly slot in new issues as you learn of them. In short, a framework for understanding real-world risk that always tells you your current highest remediation priorities.

Desired Outcome Management (DOM)

Here is the post about DOM.

[ Date: February 2016 ]

Vulnerability Hierarchies

[ Date: November 2015 ]

Free Will and the Absurdist Chasm

[ Date: January 2016 ]

Use ML to Find Hidden Brilliant Content

[ Date: January 2017 ]
