Two of my favorite bloggers engaged in a trackback love-fest lately on the topic of building security into applications; specifically, enabling applications as a service delivery function to be able to innately detect, respond to and report attacks.
Richard Bejtlich wrote a piece called Security Application Instrumentation and Gunnar Peterson chimed in with Building Coordinated Response In - Learning from the Anasazis. As usual, these are two extremely well-written pieces that arrive at a well-constructed conclusion: we need a standard methodology and protocol for this reporting. I think that this exquisitely important point will be missed by most of the security industry -- specifically vendors.
While security vendors' hearts are in the right place (stop laughing), the "security is the center of the universe" approach to telemetry and instrumentation will continue to fall on deaf ears because there are no widely-adopted standard ways of reporting across platforms, operating systems and applications that truly integrate into a balanced scorecard/dashboard that demonstrates security's contribution to service availability across the enterprise. I know what you're thinking..."Oh God, he's going to talk about metrics! Ack!" No. That's Andy's job and he does it much better than I do.
This mess is exactly why the SIEM market emerged: to clean up the cesspool of log dumps that spew forth from devices that are, by all appearances, utterly unaware of the rest of the ecosystem in which they participate. Take all these crappy log dumps via Syslog and SNMP (which can still be proprietary), normalize where possible, correlate "stuff" and communicate that something "bad" or "abnormal" has occurred.
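To make that normalize-and-correlate step concrete, here's a minimal sketch. The log formats, field names and "two hosts reporting the same source IP" rule are all hypothetical stand-ins for what a real SIEM does at much larger scale:

```python
import re

# Two vendors describe related events in incompatible syslog formats.
RAW_LOGS = [
    "<34>Jun 12 10:01:02 fw01 deny: src=10.0.0.5 dst=10.0.0.9 reason=policy",
    "<86>Jun 12 10:01:03 appsrv sshd[991]: Failed password for root from 10.0.0.5",
]

FIREWALL_RE = re.compile(
    r"<\d+>\w+ +\d+ [\d:]+ (?P<host>\S+) deny: src=(?P<src>\S+) dst=\S+ reason=\S+")
SSH_RE = re.compile(
    r"<\d+>\w+ +\d+ [\d:]+ (?P<host>\S+) sshd\[\d+\]: Failed password for \S+ from (?P<src>\S+)")

def normalize(line):
    """Map a raw vendor log line onto one common event schema, or None."""
    m = FIREWALL_RE.match(line)
    if m:
        return {"host": m.group("host"), "src": m.group("src"), "event": "deny"}
    m = SSH_RE.match(line)
    if m:
        return {"host": m.group("host"), "src": m.group("src"), "event": "auth_failure"}
    return None  # unrecognized format -- the usual SIEM headache

def correlate(events):
    """Flag source IPs seen by more than one device -- the 'stuff' correlation."""
    seen = {}
    for e in events:
        seen.setdefault(e["src"], set()).add(e["host"])
    return {src for src, hosts in seen.items() if len(hosts) > 1}

events = [e for e in (normalize(line) for line in RAW_LOGS) if e]
suspicious = correlate(events)
```

Note that even after all that work, the output is still "IP 10.0.0.5 looks bad" -- a device-level fact, not a business-level one.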
How does that communicate what this really means to the business, its ability to function, deliver service and ultimately the impact on risk posture? It doesn't, because security reporting is the little kid wearing a dunce hat, standing in the corner since it doesn't play well with others.
Gunnar stated this well:
Coordinated detection and response is the logical conclusion to defense in depth security architecture. I think the reason that we have standards for authentication, authorization, and encryption is because these are the things that people typically focus on at design time. Monitoring and auditing are seen as runtime operational activities, but if there were standards based ways to communicate security information and events, then there would be an opportunity for the tooling and processes to improve, which is ultimately what we need.
So, is the call for "security application instrumentation" doomed to fail because we in the security industry will try to reinvent the wheel with proprietary solutions, insisting that the current toolsets and frameworks -- available as part of a much larger enterprise management and reporting strategy -- are not enough?
Bejtlich remarked that mechanisms reporting application state must be built into the application itself and must report more than just performance:
Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.
I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).
I would agree, but I get the feeling that unless we integrate this telemetry and fold its output metrics into response systems whose primary role is to report on delivery and service levels -- of which "security" is a huge factor -- the relevance of this data within the visible single pane of glass of enterprise management will be lost.
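Bejtlich's idea of an application that logs security events and dials back its own exposure under attack can be sketched in a few lines. Everything here -- the class, the sliding window, the lockdown threshold -- is a hypothetical illustration, not any real product's API:

```python
from collections import deque

class SecurityInstrumentedApp:
    """Toy sketch of a self-instrumenting, self-defending application:
    it emits security telemetry alongside its normal logging and reduces
    exposure when auth failures pile up inside a sliding time window."""

    def __init__(self, window_seconds=60, lockdown_threshold=5):
        self.window = window_seconds
        self.threshold = lockdown_threshold
        self.failures = deque()   # timestamps of recent auth failures
        self.events = []          # stand-in for a standards-based telemetry channel

    def emit(self, severity, message):
        # A real implementation would ship this to enterprise management
        # tooling in a common format, not hold it in a list.
        self.events.append((severity, message))

    def record_auth_failure(self, source, now):
        self.failures.append(now)
        # Drop failures that have aged out of the window.
        while self.failures and now - self.failures[0] > self.window:
            self.failures.popleft()
        self.emit("WARN", f"auth failure from {source}")
        if self.in_lockdown():
            self.emit("CRIT", "possible brute force; reducing exposure")

    def in_lockdown(self):
        """True when the app should offer less vulnerability exposure."""
        return len(self.failures) >= self.threshold
```

The point of the sketch is the shape of the output: the application itself, not a downstream log scraper, is the one saying "I am under attack."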
So, rather than reinvent the wheel and incrementally "innovate," why don't we take something like the Open Group's Application Response Measurement (ARM) standard, subscribe to a telemetry/instrumentation format that speaks to the real issues, enable these systems to massage our output into the language of business (risk?), and work to extend what is already a well-defined and accepted enterprise response management toolset to include security?
The Application Response Measurement (ARM) standard describes a common method for integrating enterprise applications as manageable entities. The ARM standard allows users to extend their enterprise management tools directly to applications creating a comprehensive end-to-end management capability that includes measuring application availability, application performance, application usage, and end-to-end transaction response time.
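ARM's core model is a start/stop transaction measurement reported to management tooling. Below is a loose Python analogue of that model (the actual standard defines C and Java bindings, not this class); the "security_status" field is my hypothetical extension, showing how security telemetry could ride the same channel as response-time data:

```python
import time

class ArmLikeTransaction:
    """Loose analogue of ARM's measured-transaction model, extended with
    a hypothetical security_status field alongside the usual outcome."""

    GOOD, ABORTED, FAILED = 0, 1, 2

    def __init__(self, app, name):
        self.app, self.name = app, name
        self.records = []  # stand-in for delivery to a management agent

    def measure(self, work, security_status=GOOD):
        """Run a unit of work, timing it and recording its outcome."""
        start = time.perf_counter()
        status = self.GOOD
        result = None
        try:
            result = work()
        except Exception:
            status = self.FAILED
        elapsed = time.perf_counter() - start
        self.records.append({
            "app": self.app, "txn": self.name,
            "elapsed_s": elapsed, "status": status,
            "security_status": security_status,  # the hypothetical extension
        })
        return result
```

Because availability and performance records already flow into enterprise dashboards this way, piggybacking a security dimension on the same record is a much shorter road than inventing a security-only reporting stack.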
Or how about something like EMC's Smarts:
Maximize availability and performance of mission-critical IT resources—and the business services they support. EMC Smarts software provides powerful solutions for managing complex infrastructures end-to-end, across technologies, and from the network to the business level. With EMC Smarts innovative technology you can:
- Model components and their relationships across networks, applications, and storage to understand effect on services.
- Analyze data from multiple sources to pinpoint root cause problems—automatically, and in real time.
- Automate discovery, modeling, analysis, workflow, and updates for dramatically lower cost of ownership.
...add security into these and you've got a winner.
There are already industry standards (or at least huge market momentum) around intelligent automated IT infrastructure, resource management and service level reporting. We should get behind a standard that elevates the perspective of how security contributes to service delivery (and dare I say risk management) instead of trying to reinvent the wheel...unless you happen to like the Hamster Wheel of Pain...