I’ve been doing some work for a client recently in the realm of vulnerability management. It’s an interesting area of information security because it draws on so many disciplines. The single biggest thing I’ve learned about this problem is the criticality of asset management.
Quite simply, you can’t hope to “manage” what you don’t know about. What I’d specifically like to see is a move toward security scanners that leverage rich data about an organization’s assets. I know of one product doing this (largely unsuccessfully), but I’d like to see it become common in the space.
Here are a few things that asset management offers us:
- Show me all Vista systems that are vulnerable to MS08-001 that are in my building.
- Find all Solaris boxes in our Indiana offices that have SSH enabled, as of yesterday.
- Make me a report of all systems running Telnet that Bob Smith manages.
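Queries like these map naturally onto a structured asset inventory. Here’s a minimal sketch in Python; the `Asset` fields, hostnames, and sample data are all invented for illustration, since a real inventory would live in an asset-management database:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    hostname: str
    os: str
    location: str
    owner: str
    services: list = field(default_factory=list)  # open services seen on the host

# Hypothetical inventory; in practice this would be queried from the database.
inventory = [
    Asset("web01", "Solaris 10", "Indiana", "Bob Smith", ["ssh", "http"]),
    Asset("legacy7", "Windows Vista", "HQ", "Bob Smith", ["telnet"]),
    Asset("db02", "Solaris 10", "Ohio", "Alice Jones", ["ssh"]),
]

# "Make me a report of all systems running Telnet that Bob Smith manages."
report = [a.hostname for a in inventory
          if "telnet" in a.services and a.owner == "Bob Smith"]
print(report)  # ['legacy7']
```

The other queries in the list are just different predicates over the same records: swap in `a.os.startswith("Solaris")`, `a.location == "Indiana"`, and so on.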
And if we add other rich, user-supplied security data to the database, such as “importance”, “exposure”, or “risk”, we could say:
- Display all high-risk systems in North America that run Windows Vista or XP, but don’t have HIPS installed.
- List all webservers running Apache 1.3.x in our Wyoming offices that are exposed to the Internet but aren’t running SELinux.
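The user-added attributes extend the same pattern: they’re just more columns to filter on. A sketch of the first query above, with hypothetical `risk` and `hips_installed` fields that someone in the organization would have to populate:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    os: str
    region: str
    risk: str            # hypothetical user-assigned rating: "low", "medium", "high"
    hips_installed: bool

# Invented sample records for illustration.
inventory = [
    Asset("kiosk3", "Windows XP", "North America", "high", False),
    Asset("hr-laptop", "Windows Vista", "North America", "high", True),
    Asset("dev9", "Windows Vista", "Europe", "high", False),
]

# "All high-risk systems in North America that run Windows Vista or XP,
#  but don't have HIPS installed."
hits = [a.hostname for a in inventory
        if a.risk == "high"
        and a.region == "North America"
        and a.os in ("Windows Vista", "Windows XP")
        and not a.hips_installed]
print(hits)  # ['kiosk3']
```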
Then add to that the ability to run scans directly off those queries: an information loop from the asset-management database to the security scanner, and then (potentially) back into the asset database. This is how I think we should be moving forward — gather as much information as possible about what you are protecting, and use that information to improve the quality of your security testing.
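That loop could be sketched in a few lines. Everything here is hypothetical — `scan` is a stand-in for whatever scanner an organization actually runs, and the record fields are invented:

```python
def query_assets(inventory, predicate):
    """Select scan targets from the asset database."""
    return [a for a in inventory if predicate(a)]

def scan(targets):
    """Stand-in for a real security scanner; returns findings per host."""
    return {host: ["finding-placeholder"] for host in targets}

def update_inventory(inventory, findings):
    """Close the loop: feed scan results back into the asset records."""
    for a in inventory:
        a["last_findings"] = findings.get(a["hostname"], [])
    return inventory

# Invented inventory records.
inventory = [{"hostname": "web01", "services": ["ssh"]},
             {"hostname": "legacy7", "services": ["telnet"]}]

# Asset query drives the scan; scan results enrich the asset data.
targets = [a["hostname"] for a in
           query_assets(inventory, lambda a: "telnet" in a["services"])]
results = scan(targets)
inventory = update_inventory(inventory, results)
```

The point of the sketch is the direction of data flow: the inventory decides what gets scanned, and the scan enriches the inventory for the next query.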