February 6

It’s Time to Pay the Piper

By Michael Starks


Why do companies keep losing our personal information? That, of course, is the billion-dollar question. Theories abound, and while we all theorize about the causes, data is still being compromised at an alarming rate.

Allow me to add to the theorizing, fully aware that this is going to sound a bit unconventional. What follows is not so much a concrete theory and solution, but an offering for creative thought. Here’s my take on one of the main reasons breaches happen, followed by a crazy idea about what we can do about it.

Breaches happen because companies are only looking out for number one.

Sorry, you’re not number one. They are.  You are but a meaningless number in a pool of data. They have no attachment to you as an individual and only view your risk as a function of their own. If your risk doesn’t factor into their own, it is casually disregarded. In the event of a breach of your personal information, they will act in their own self-interest. They are unlikely to compensate you for your time, stress, loss of work or anything else directly related to that breach. You get the short end of the stick.

That’s the bad news.  The good news is that it doesn’t have to be this way.  We can change things.

Payment is Past Due: The Action Plan

When our personal risk becomes a real economic factor in the risk of someone holding our information, the balance of the scales will have tipped. Since it is unlikely that companies will find incentives to factor in personal risk, they need to be persuaded through personal privacy and data security legislation.

It might work something like this.  From the multitude of breach statistics collected, we develop a profile of the harm done to a typical person after a breach of a certain type. One would expect, for example, that a lost social security number be more personally harmful than a lost credit card number. That breach profile is then used to assign relative security requirements to companies that wish to deal with that aspect of your data self. The more personal, static and valuable the information, the more stringent the requirement.
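To make the idea concrete, here is a minimal sketch of how a breach profile might map data types to a security requirement tier. Every harm score, data type name, and tier threshold below is a made-up illustration, not a value drawn from any real statistics or regulation:

```python
# Hypothetical harm scores per data type (higher = more harm to the
# individual if breached). A social security number is static and
# permanent, so it scores higher than a replaceable credit card number.
HARM_PROFILE = {
    "email": 2,
    "credit_card": 4,
    "medical_record": 8,
    "ssn": 9,
}

def required_tier(data_types):
    """Assign a security requirement tier based on the most harmful
    type of data the company holds. Thresholds are illustrative."""
    worst = max(HARM_PROFILE[d] for d in data_types)
    if worst >= 8:
        return "stringent"   # e.g. frequent independent pen tests
    if worst >= 4:
        return "elevated"
    return "baseline"

print(required_tier(["email", "credit_card"]))  # elevated
print(required_tier(["email", "ssn"]))          # stringent
```

The key design point is simply that the requirement scales with the worst-case data held, so a company can lower its regulatory burden by holding less sensitive data in the first place.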

To validate that the data is sufficiently protected, the company will be required to undergo independent penetration tests. Audits, while sometimes helpful, are insufficient in that they primarily measure compliance and not the ability to withstand attack. We need to know how safe the data really is.

Here’s where the rubber meets the road. For every failed test, the company will be required to pay premiums to those whose information they are not adequately protecting, proportionate to the amount of risk the test reveals. In traditional insurance models, the insurance company holds risk. You pay them to assume that risk. With this model, the company is putting you in a similar position of risk. Doesn’t it follow that you should be similarly compensated?
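One way to picture the premium calculation is as a function of the findings from a failed test and the number of records exposed to that risk. The severity scale, per-point rate, and finding names below are all invented for illustration:

```python
# Hypothetical sketch of the compensation model: after a failed
# penetration test, the company owes a premium proportional to the
# severity of the findings and the number of records at risk.
# The rate and severity values are illustrative assumptions.

def breach_risk_premium(findings, records_at_risk, rate_per_point=0.10):
    """Total premium owed: severity points x records x per-point rate.

    findings: list of (description, severity) pairs, severity in 1..10.
    """
    severity_points = sum(sev for _, sev in findings)
    return severity_points * records_at_risk * rate_per_point

findings = [
    ("SQL injection on customer portal", 8),
    ("Default password on database admin account", 9),
]
total = breach_risk_premium(findings, records_at_risk=50_000)
print(f"${total:,.2f}")  # (8 + 9) severity points x 50,000 records x $0.10
```

However the numbers are tuned, the point is that the payment is triggered by demonstrated weakness, not by an actual breach, which is what changes the company’s ongoing incentives.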

In this paradigm, the company doesn’t get to wait until the information is actually breached. They lose the ability to roll the dice and hope everything is going to be OK while you remain at risk. They face actual consequences, not just for breaches, but for creating circumstances predisposed to a breach. And with ongoing consequences for doing a poor job of protecting information, it then becomes in their best economic interest to get and remain secure.

By now you are undoubtedly thinking thoughts such as, “This won’t work because…” or “But what about…?” Good. The idea wasn’t so much to offer a single solution to a complex problem; rather, it was to spark the realization that we can change the rules of the game. No longer do we have to be victims. What are the problems with my proposal? How can it be re-worked? What ideas do you have to win back your identity? Throw me a comment or let’s chat in the forums.



  1. Michael,

    What type of pentesting do you think is reasonable? Given that the CIA and FBI, which hold more sensitive national security data than most private companies, have had numerous breaches of highly classified material, what level do you expect most companies to attain?

  2. Andy,

    Penetration testing as an industry has a ways to go. Standardization and skill levels vary wildly among testers, so you can’t even be reasonably sure that a single firm’s test is an accurate measure of resilience to attack.

    That being said, I think pen testing is a more accurate way of measuring resilience to attack than audits are. I have seen so many ways to cook the books with audits that I have little faith in them as a true measurement of security.

    I also don’t think it’s reasonable to expect a 100 person company to pay the same for a pen test as a 10,000 person company.

    I think the point is that we need true, verifiable security, not security theater. And that poor security has to have consequences before a breach happens. It has to be an ongoing and working function of business–particularly when it’s your data they hold.

  3. I can understand your frustration, Michael, and I agree financial penalties are the only way to make companies improve their overall security. When people say that the loss of reputation is enough to motivate companies to a higher security posture, they are unfortunately wrong. Heartland will probably barely suffer a loss of business in the next few months, and certainly will notice no fiscal loss two years from now. The public does not understand the true impact these breaches have.

    What needs to happen is that credit card companies must enforce stricter punishments for PCI DSS infringements; a company needs to actually lose its credit card processing rights before the standard will truly be taken seriously. Unfortunately, that means a loss of business for the credit card companies, so they are not going to be in a real rush.

    The most effective method will be a class action lawsuit by the consumers. A big enough settlement, and legal precedent for compensating consumers for negligence by companies controlling private data, will be the most effective motivator. Good post, Michael; it is linked on our front page.

  4. Michael,

    What you’re suggesting sounds an awful lot like what’s already out there, through other means…essentially, financial penalties for non-compliance to some standard. However, a good deal of what’s out there now goes beyond a simple pen test in order to ensure compliance.

    From GovernmentSecurity: “The most effective method will be a class action lawsuit by the consumers.”

    I’m not sure that’s the case. One of the issues we’re seeing now is that companies see what has happened to TJX, Hannaford, and Heartland, and from their perspective, the real issue isn’t one of security controls (or a lack thereof); it’s public notification, as that’s from whence all other ills follow. The thought is, hey, if I notify and have to tell someone, that’s when all the bad stuff happens.

    We’re all reasonably aware that there’s an economy behind the breaches. Someone gets in to steal data because there’s some financial gain. From a digital perspective, however, a great many organizations are doing very little if anything with respect to digital loss prevention. Some of the legislative (state notification laws) or regulatory (PCI, HIPAA, FISMA, NCUA, etc.) requirements are just beginning to give them what they didn’t have before…visibility into their own infrastructures. I’ve been on engagements for breaches that were discovered by the assessor.

    Once these organizations begin to get some visibility into what’s happening within their infrastructures, that’s when they have to start notifying, because they start actually “seeing” what’s happening. Unfortunately, these same organizations equate massive dollar values to everything, even where it simply isn’t appropriate…I’ve seen organizations refuse to put a password on a database ‘sa’ account because it would cost too much to do so; based on the data I saw, it was that lack of a password that allowed the bad guy to siphon off the contents of the database. A year ago. And go completely unnoticed.

    I foresee a shift in the dynamic. Right now, there’s a recognized economy with respect to data breaches…bad guys are getting in to steal data, in order to use it for financial gain for themselves…either by using it or selling it to someone else. As the overall security posture of some organizations is not improving, getting the data still isn’t the real challenge. However, since organizations are starting to show their “pain” (ie, public notification), another revenue stream based on the threat of public notification may develop. We’ve already seen how threats work with offshore gambling sites around major sporting events. Some of us are also aware of other breaches that haven’t hit the papers, some of which involved some sort of threat of disclosure by the bad guy.

    It’s unfortunate, but based on initial reactions to breach notification requirements/laws, the progression is going in the wrong direction. I’m sure that the original intent was, “oh, hey…if I have to notify, I’d better tighten up my ship so that I don’t suffer a breach,” but that’s not what’s happening. Organizations are realizing that they were better off when they had no visibility…in the words of the great philosopher Homer (Simpson), “If I didn’t see it, it didn’t happen.” So we need to make a shift, and target the bad guys, rather than making the victims suffer. I’ve worked with many a customer who, when asked what their desired final outcome was, said “prosecution.” Well, it used to be that the fear of pursuing a legal remedy was that the breach became part of the public record; talking with Ovie Carroll, it sounds like some of this fear can be, and has been, obviated.

    So, now we’re left with one thing, and it’s that we’re back at square one. Part of the reason why there aren’t more convictions as a result of data breaches is that the security posture of the victim organization is so poor as to provide very little opportunity for law enforcement or other “experts” to collect enough data to figure out what happened, when it happened, and who did it. Organizations that have been breached could have done a great deal more with respect to breach prevention and detection, as well as initial incident response…some are going to say, “hindsight is 20/20,” but that’s not valid in this case, because it’s what information security professionals have been saying for years.

    H. Carvey
    Author, “Windows Forensics and Incident Recovery”
    Author, “Windows Forensic Analysis”
    http://windowsir.blogspot.com

  5. Michael,

    My point wasn’t to suggest that I’m against penetration testing as some reasonable measure of security. But what we need are ways to turn that into reality that are workable for the different types of businesses you’d want to apply it to. If not, it becomes just as unworkable as an audit. What does the pentester try?

    1. Social engineering
    2. Planting a rogue employee
    3. Bribery of the company employees
    4. Deep cover rogue employee
    5. Physical break-ins
    6. Physical break-in with real professionals and circumvention of burglar alarms, etc?

    At what point do you stop pentesting? What are considered reasonable threats that must be mitigated, and what aren’t? Does it change by company type, amount/type of data they are holding?

    It isn’t enough to just say things should be different, what we need are some concrete proposals for how you could possibly craft such a regulatory scheme that would be meaningful. Without that it is hard to take this proposal seriously.

  6. Under your general theory, stewards of personal information would automatically become insurers of that information. This would necessitate a radical shift from current law and deference to consumer contracts, even if those consumer contracts are contracts of adhesion. That would constitute a major shift in public policy.
    That said, I think that the theory is sound, because personal information, unlike other forms of property, is inherently unalienable. If personal information is not alienable/sellable, then the duty to protect the data should not be waivable by contract. Hopefully Tort law will show some progress in this area in coming years.

  7. Andy,

    You’re right that the Devil is in the details. These are all things that would have to be fleshed out, and even then not everyone is going to agree. If I could summarize the main points of my thinking, they would be:

    1. Audits are insufficient measures of data security
    2. Companies treat our data as their own, with little regard to our personal risk. There is a prevalent viewpoint that the data is theirs and it’s not. We need to re-educate them to that fact.
    3. We’re assuming most of the risk for their negligence, when they are the ones who need to factor in our risk.
    4. The economics needs to change such that getting and staying secure is more profitable than gambling against a breach.
    5. They won’t do any of this voluntarily, so we need to force it through legislation.

    You’re absolutely right that this needs to turn into a workable plan. For that to happen, we need:

    1. A clear message to send to our legislators. In my opinion, this message is that companies must prove ongoing resilience to attack, and when found to be lacking they need to pay the risk holders (us).
    2. A set of penetration testing standards that are fair. As to your points above as to what should be included, yes, yes and yes. Basically, if an attacker would use it then it’s fair game in a pen test.
    3. A way to compensate those whose information is at risk. Another idea is that, rather than paying people directly, the money goes into a fund of some sort that is then distributed in the event of a breach. But the payments absolutely must be made for ongoing poor security and not just for breaches. We need to change the economics such that staying secure every day is preferable.

    You asked, “What are considered reasonable threats that must be mitigated, and what aren’t?” I think that depends on the type of data handled.

    Perhaps baby steps are in order. I will commit to at least proposing the idea to my elected representatives. I welcome anyone’s help in molding this into a more refined proposal rather than just a crazy idea.
