October 6

Risk Management – Making Effective Decisions (Part 1 of 2)

by David Stern, CISSP
In the last session we discussed the taxonomy and terminology of security vulnerability. Now that the language is not foreign, some of the FUD (fear, uncertainty, and doubt) should be gone. However, the daunting challenge of determining an appropriate response to a vulnerability alert or discovery still looms. Evaluating the real impact of a published vulnerability is not an exact science. There are just too many factors, many of which simply come from raw experience.

In this short series, we will discuss a basic process framework that can be used to lead a decision maker down the right path. By answering the following questions, one should be able to arrive at a proper course of action (it's too bad I don't have a catchy acronym):

•    What is the vulnerability?
•    What does it affect?
•    How is it delivered and how hard is it to deliver?
•    Does my organization use any of the affected systems?
•    Do I have any controls in place that would slow down or eliminate the effects of this vulnerability?

With this information collected, you should have enough to make a rough evaluation of the risks at hand.
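To make the idea of a "rough evaluation" concrete, here is a minimal sketch of the five questions reduced to a checklist score. Everything here is illustrative: the field names, the weights, and the high/medium/low thresholds are my own hypothetical choices, not part of any scoring standard.

```python
# A hypothetical two-minute triage checklist. Weights and thresholds are
# illustrative only; tune them to your own environment.
from dataclasses import dataclass

@dataclass
class TriageAnswers:
    severity: int            # 1 (low) to 3 (high): what is the vulnerability?
    affected_in_use: bool    # does my organization use any affected systems?
    easy_to_deliver: bool    # how hard is the exploit to deliver?
    controls_in_place: bool  # do existing controls slow or stop it?

def rough_risk(a: TriageAnswers) -> str:
    """Collapse the triage answers into a coarse risk rating."""
    score = a.severity
    if a.affected_in_use:
        score += 2          # exposure matters more than raw severity
    if a.easy_to_deliver:
        score += 1
    if a.controls_in_place:
        score -= 1
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A severe, easily delivered vulnerability in a product we run,
# with no mitigating controls, rates high.
print(rough_risk(TriageAnswers(severity=3, affected_in_use=True,
                               easy_to_deliver=True, controls_in_place=False)))
```

The point of the sketch is not the arithmetic; it is that every question gets answered before a rating is assigned, which is exactly what the anecdote below shows going wrong.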

At first glance, the questions might seem out of order. “Wouldn’t it be more efficient to determine if my organization even uses the affected product? If they don’t, I can simply ignore this alert.” Well, not really.

On July 19, 2001, the Code Red worm began its massive global propagation campaign. At the time, I was working in corporate risk management for one of the largest banks in the world. Keep in mind that web-based applications were just starting to come into popularity. When the alert came in, the organization did its risk analysis and triaged the alert to low status since "they really didn't use IIS much. Maybe a handful of installations. We are a WebSphere shop." Right.

Within hours, the network was being hammered. When it was all over, thousands of man-hours equating to millions of dollars had been spent cleaning up. IIS was installed automatically with Windows 2000, and it was being tested by dozens of internal development groups. In fact, there were hundreds of instances of IIS running on PCs and on servers under desks. The lesson here is simple: do the two-minute triage on every alert that comes with an elevated rating.

In our next posting, we will go through some real world examples to gain a better understanding of the process.

