It’s Time to Pay the Piper

By Michael Starks


Why do companies keep losing our personal information? That, of course, is the billion-dollar question. Theories abound, but while we debate the causes, data is still being compromised at an alarming rate.

Allow me to add to the theorizing, fully aware that this is going to sound a bit unconventional. What follows is not so much a concrete theory and solution, but an offering for creative thought. Here’s my take on one of the main reasons breaches happen, followed by a crazy idea about what we can do about it.

Breaches happen because companies are only looking out for number one.

Sorry, you’re not number one. They are.  You are but a meaningless number in a pool of data. They have no attachment to you as an individual and only view your risk as a function of their own. If your risk doesn’t factor into their own, it is casually disregarded. In the event of a breach of your personal information, they will act in their own self-interest. They are unlikely to compensate you for your time, stress, loss of work or anything else directly related to that breach. You get the short end of the stick.

That’s the bad news.  The good news is that it doesn’t have to be this way.  We can change things.

Payment is Past Due: The Action Plan

When our personal risk becomes a real economic factor in the risk of someone holding our information, the scales will have tipped. Since companies are unlikely to find incentives on their own to factor in our personal risk, they need to be persuaded through personal privacy and data security legislation.

It might work something like this.  From the multitude of breach statistics collected, we develop a profile of the harm done to a typical person after a breach of a certain type. One would expect, for example, that a lost Social Security number would be more personally harmful than a lost credit card number. That breach profile is then used to assign relative security requirements to companies that wish to deal with that aspect of your data self. The more personal, static and valuable the information, the more stringent the requirement.

To validate that the data is sufficiently protected, the company will be required to undergo independent penetration tests. Audits, while sometimes helpful, are insufficient in that they primarily measure compliance and not the ability to withstand attack. We need to know how safe the data really is.

Here’s where the rubber meets the road. For every failed test, the company will be required to pay premiums to those whose information they are not adequately protecting, proportionate to the amount of risk the test reveals. In traditional insurance models, the insurance company holds risk. You pay them to assume that risk. With this model, the company is putting you in a similar position of risk. Doesn’t it follow that you should be similarly compensated?
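To make the proposal concrete, here is a minimal sketch of how such a premium might be computed. Everything in it is invented for illustration: the harm weights, the base rate and the severity scale are hypothetical placeholders that a real scheme would calibrate from actual breach-harm statistics.

# A minimal, hypothetical sketch of the premium model proposed above.
# All weights, rates and categories are invented for illustration,
# not drawn from any real breach-harm data.

# Relative harm weights: more personal, static and valuable data
# carries a higher weight, and so a more stringent requirement.
HARM_WEIGHTS = {
    "credit_card_number": 1.0,       # replaceable, limited liability
    "email_address": 0.5,
    "medical_record": 4.0,
    "social_security_number": 5.0,   # static and hard to replace
}

def breach_premium(data_type, records_at_risk, test_failure_severity):
    """Premium owed after a failed penetration test, proportional to
    the risk the test revealed (severity in [0, 1]) and to the harm
    profile of the data type involved."""
    base_rate = 0.10  # dollars per record per unit of harm (made up)
    return base_rate * HARM_WEIGHTS[data_type] * records_at_risk * test_failure_severity

# Example: a failed test exposing 100,000 SSN records at severity 0.6
print(breach_premium("social_security_number", 100_000, 0.6))  # 30000.0

The key property is the one argued above: the premium is owed when the test fails, not when a breach finally occurs, so the company pays for creating risk rather than only for realized harm.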

In this paradigm, the company doesn’t get to wait until the information is actually breached. They lose the ability to roll the dice and hope everything is going to be OK while you remain at risk. They face actual consequences, not just for breaches, but for creating circumstances predisposed to a breach. And with ongoing consequences for doing a poor job of protecting information, it becomes in their best economic interest to get secure and stay secure.

By now you are undoubtedly thinking thoughts such as, “this won’t work because…” or “but what about…?” Good. The idea wasn’t so much to offer a single solution to a complex problem; rather, it was to spark the realization that we can change the rules of the game. No longer do we have to be victims. What are the problems with my proposal? How can it be re-worked? What ideas do you have to win back your identity? Throw me a comment or let’s chat in the forums.

On Reports (a perspective)…

By Adam Dodge

Lately, there has been a flurry of activity in the land of security breach reports, with organizations such as Debix, Verizon, the Identity Theft Resource Center and the Department of Justice all releasing reports looking at security breaches, breach notification laws and the state of information security in general. As someone who has been in the world of tracking and monitoring breaches for two years now through Educational Security Incidents, I am excited about the increased attention and information that is coming forth and the lessons that can be learned from these breaches. However, it is important to remember that there are inherent limitations on the applicability of breach statistics, and therefore we must all be cautious about reading too deeply and arriving at conclusions that the information in these reports does not support.

Before we go any further: yes, I do develop a similar report each year, and yes, my report is subject to the same limitations as all of these other reports. My point here is not that all other reports are wrong while the ESI YiR is the shining beacon of truth. The point is that the information delivered in these reports is simply that, information. It is up to the reader to interpret this information in a meaningful way. The problem, then, stems from misinterpretation, and that is what I want to address here.

What do I mean by “misinterpretation”? Well, a common problem with the statistics provided in these reports (remember, I’m including my own report as well) is that the numbers are based on a sample set, and the ability to apply these numbers depends a great deal upon the size of the sample and how randomly the sample was chosen from the total population. Alright, that might not be a good enough answer, so allow me to explain further.

The Verizon report has made a big splash in the security world and for good reason. Verizon did an amazing job with this report. If you haven’t read it, go do so now. Seriously, stop reading this and go read the report. It is that good.

However, the report is based on more than 500 forensic investigations performed by Verizon’s Business RISK team between 2004 and 2007. The 500+ breaches that Verizon analyzed for this report were not randomly chosen from all breaches that occurred. Instead, the information was mined from investigations stemming from breaches that were serious enough for a company to reach out and contract with Verizon for assistance. This is a potential source of bias for the survey.

Most companies are not going to spend money on investigations for small breaches or those that are easily explainable. Therefore, it is very likely that breaches such as information left in public, information accidentally placed on a public web site, etc. are underrepresented in the sample Verizon used. It is also likely that smaller companies and non-profit organizations are underrepresented, since these entities lack the funding that larger, for-profit organizations have at their disposal.
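A toy simulation makes the effect easy to see. The “true” breach mix and the hiring probabilities below are invented purely for illustration, not taken from any report, but they show how a sample built only from paid investigations can nearly invert the real picture.

# Toy simulation of sample bias in breach reports.
# All numbers are made up for illustration.
import random

random.seed(42)

# Hypothetical true population: 60% employee mistakes, 40% external attacks.
population = ["mistake"] * 6000 + ["external"] * 4000

# Companies mostly pay for forensic investigations of serious external
# attacks, so each cause has a different chance of entering the sample.
p_investigated = {"mistake": 0.05, "external": 0.50}

sample = [b for b in population if random.random() < p_investigated[b]]

pct_external = 100 * sample.count("external") / len(sample)
print("external attacks in population: 40%")
print("external attacks in sample:     %.0f%%" % pct_external)
# Prints roughly 87%: a minority cause in reality looks dominant in
# the biased sample, even though no one analyzed anything incorrectly.

Nothing in the analysis step is wrong; the distortion comes entirely from which breaches make it into the sample in the first place.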

What does this sample bias mean for the validity of the Verizon report? Nothing. Nothing at all. There is no special problem with the sample bias of the Verizon report. The simple fact is that all security breach reports (again, including the ESI YiR) suffer from the same problem. Unfortunately, there is no good way around this problem yet. Everyone I talk to who is involved with tracking breaches has the same complaint: there is no centralized reporting of breaches in the United States, and the states that do require breach reporting to a central authority differ in their reporting requirements, litmus tests and public access to breach information.

So am I suggesting that everyone stop reading these reports? Absolutely not. It is not just self-preservation that makes me say this, however much I enjoy my work with ESI. These reports are an excellent way for information security practitioners to track the movement of threats and discover what types of security threats similar organizations are facing. The point of all of this is that each and every one of us (including the media) needs to make sure we are interpreting the data in these reports properly before we remove our firewall because the 2007 ESI YiR said that employee mistakes outnumber hackers as the cause of a breach 2:1, or before we discontinue our security awareness and training programs because the Verizon report says that 73% of all breaches came from external sources.

How can these reports be so different and yet both be correct? Simple: look to the samples used to compile them.

You are now Liable for Unintentional Medical Data Breach In NY State

by Patrick Romero

Health care employers be warned – an unintentional data breach could now cost you much more than you imagined. A New York State Appellate Court has recently upheld a $365,000 jury award against a health care center that mistakenly disclosed a patient’s medical information.

A young, unmarried woman who lived with her strict Roman Catholic parents decided to terminate her pregnancy at Long Island Surgi-Center. She gave instructions to Surgi-Center never to call her at home despite providing them with her home telephone number on questionnaire forms. A day after the procedure, a nurse called the number provided to inquire about her condition and to confirm that she had no subsequent medical complications. Unfortunately, the nurse spoke with the woman’s mother and revealed sufficient information to allow the mother to conclude that her daughter had an abortion.

In a 3-2 decision, the Court held that the plaintiff could be awarded punitive damages for an unintentional breach of confidential medical information even though there was no malice or malicious behavior by the defendant. As a result, the Second Department of New York has expanded the scope of punitive damages to include unintentional medical disclosure, regardless of whether the act was done in good faith.

The case is significant due to its implications for organizations handling medical information. Even though the medical center’s actions were not malicious, intentional or done in bad faith, the court found that disclosing the plaintiff’s medical information was grossly negligent and wanton behavior. Based on this interpretation, it appears that it will now be more difficult for healthcare workers to excuse disclosure of medical information as a mistake or mere negligence.

The Court also appears to have affirmed the jury’s award of punitive damages in order to send a message about the importance of protecting medical information. Punitive damages are seen as a way for the judiciary to espouse a particular public policy and to deter future violations. The Court here is clearly concerned with instances of wrongful medical disclosure and shows itself to be in sync with state and federal legislative efforts to protect confidential information. The opinion does not discuss violations of federal privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA). However, it does mention New York legislation pertaining to the rights of patients in medical facilities like the one visited by the plaintiff.

More and more states are enacting laws regulating the disclosure of private and confidential information. Court cases like this highlight the need for companies to enact strong compliance rules that clearly describe the conditions under which data can be disclosed. These rules need to be properly followed and understood by all employees of an organization. The decision in New York should highlight the fact that even inadvertent medical disclosure can now lead to serious liability issues.

Do Data-Breach Laws Give You The Power to Hold Corporations Liable?

By Michael Santarcangelo and Patrick Romero

Roughly 40 states have enacted, or are considering, some sort of “data-breach” law that forces a company to notify its consumers of a security breach (or suspected breach). These laws were designed to force companies to disclose the possibility that individuals’ personal information was compromised and that they could potentially become victims of identity theft.

Over the coming months, we’ll spend some time exploring how the different states are handling these statutes. When you peel back the layers a bit and consider them from different angles, some interesting lessons emerge – useful to us from both individual and organizational perspectives.

Even with these new laws in effect, it seems that there is little a person can do to hold a company liable for a data breach caused by its weak security standards. Recently, state governments have begun to change this by imposing liability on retail businesses and others, thereby opening the door for consumers to sue companies that do not adequately protect the personal information they collect.

This is a serious issue that has implications for everyone involved – and ultimately requires clear definitions, mutual understanding and will take years to sort through. In the meantime, we’re going to ignite our series of articles exploring these laws and developments by analyzing some recent events.

Minnesota PCI Legislation
Effective August 1, 2007, Minnesota became the first state to require that all companies handling credit and debit card data comply with the Payment Card Industry (PCI) data security standard (in a future article or podcast, we’ll explore and debate the value of tying the PCI standard to the legislation – Michael).

The state’s new Plastic Card Security Act prohibits a company from retaining a credit card’s security code data, the PIN verification code number, or the full contents of any track of magnetic stripe data. The legislation is intended to target retailers who continue to store data in violation of PCI standards. The bill also makes it a violation for retailers to retain a cardholder’s PIN for longer than 48 hours after authorization of the transaction. Similar bills are pending in Texas, Illinois, Connecticut, and Massachusetts.
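For illustration, here is a rough sketch of what a retention check against these prohibitions might look like. The field names and record structure are hypothetical, and the sketch is one reading of the rule, not legal or PCI compliance advice.

# Hypothetical sketch of a retention check inspired by the Act's
# prohibitions. Field names and the record layout are invented.
from datetime import datetime, timedelta

PROHIBITED_FIELDS = ("security_code", "pin_verification_value", "full_track_data")
PIN_RETENTION_LIMIT = timedelta(hours=48)

def retention_violations(record, authorized_at, now):
    """Return a list of retention problems for one stored card record."""
    problems = ["must not retain " + f for f in PROHIBITED_FIELDS if record.get(f)]
    # PINs may not be kept more than 48 hours after authorization.
    if record.get("pin") and now - authorized_at > PIN_RETENTION_LIMIT:
        problems.append("PIN retained beyond 48 hours after authorization")
    return problems

record = {"pan_last4": "1234", "pin": "****", "security_code": "123"}
print(retention_violations(record,
                           authorized_at=datetime(2007, 8, 1, 9, 0),
                           now=datetime(2007, 8, 4, 9, 0)))
# ['must not retain security_code', 'PIN retained beyond 48 hours after authorization']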

The significance of this legislation is apparent in light of recent rulings by courts that have dismissed class action suits against companies following data breaches. On August 23, 2007, the US Court of Appeals for the 7th Circuit held that identity-theft monitoring costs paid for by the plaintiffs were not compensable damages under Indiana’s security breach notification statute. In Pisciotta v. Old Nat’l Bancorp, the court held that there was no state statute supporting the compensation of incurred costs because “had the Indiana legislature intended that a cause of action should be available against a database owner for failing to protect adequately personal information, we believe it would have made some more definite statement of that intent.” So for the time being, unless you can make an actual showing of harm as a victim of identity theft, potential harm will not suffice.

Consequences for the Courts
As more states begin to enact legislation that requires companies to comply with PCI, courts may begin to allow litigants to be compensated as a result of a security breach. The argument that courts have made in cases like Pisciotta will clearly be much weaker as state legislatures conspicuously demonstrate their intent to punish companies by enacting specific statutes targeting the security of personal information.

Federal and state courts will feel much more comfortable expanding their legal theories of liability when supported by statutes that explicitly create private rights of action for security breaches. In this context, it is much more likely that courts will continue to follow the ruling in Pisciotta until states pass legislation similar to Minnesota’s. In addition, plaintiffs might also receive some relief if a recent bipartisan bill in the U.S. Senate gets passed. The bill, known as the Identity Theft Enforcement and Restitution Act of 2007, was introduced on October 16, 2007 and would give victims the ability to seek restitution for the loss of time and money resulting from identity theft. Such federal legislation could prove to be effective in jurisdictions with no state identity-theft laws.

Consequences for Businesses
Meanwhile, the retail lobby continues to fight laws that would hold retailers liable, arguing that these laws would be too costly and burdensome, especially for small businesses. This apparently was the argument that convinced Governor Schwarzenegger to veto a California law that would have mandated that the retail industry comply with PCI requirements. While this may be true, the Minnesota legislation limits this burden by exempting businesses with fewer than 20,000 transactions from the statute. Clearly, there is a way for the legislature of any state to write a statute that can pressure companies to improve their data security standards without crippling small business owners.

While the retail industry will continue to resist such legislation, there is strong support from banks and credit unions, since in the eyes of consumers they are often blamed for such breaches. TJX is currently being sued by several banks seeking compensation for having to re-issue credit cards and provide credit monitoring to thousands of their customers as a result of a massive security breach earlier this year. Depending on how the case turns out, the burdens and costs of breaches may shift away from consumers, banks, and credit unions and perhaps be shared by the retailers and others (of course, the consumer pays in the end).

Preparing for the change
As a consequence of new state and federal legislation, the landscape of data security will continue to evolve, sometimes in seemingly dramatic fashion. Individuals and businesses will most likely be able to get their day in court for damages incurred as a result of security breaches by a third party. Industries that have so far been able to get away with minimal security standards will begin to take notice of their potential liability and, hopefully, will improve the way they guard information. While the process is slow, it appears to be inevitable.

This isn’t doom and gloom.

Many of us have already begun to prepare for these changes by writing and improving security policies that make sense and can be understood, improving the process of protecting information, and working to involve users in the solution through training and awareness. Focus on the fundamentals of information protection and you’ll be less likely to be the test case.

Success is sometimes measured in how you handle mistakes

My good friend Andy Willingham today celebrated one year of blogging. Andy, thanks for a year of sharing ideas, insights and your passions! If you’re not currently reading Andy’s Blog – you’re absolutely missing out. To mark the year, he pointed out that FaceTime recently experienced an unpleasant situation where customer information was disclosed. I think many of us realize that no one, and therefore no company, is perfect. FaceTime has proven that – and I think Andy presented a balanced view of the situation.

I think in life, the measure of a person is how they address and handle mistakes. I think in business, the measure of a company is not whether a mistake/breach happens, but how the company handles an incident when it happens. We can split hairs over whether this constituted a breach or not. Regardless, customer information was at risk; customer information was disclosed. It’s not clear to me why that information would have been stored on the webserver, but I’m also not familiar with their architecture. Without question, on the scale of public outcry, this is and should be almost a non-issue. Almost.

While I suppose this isn’t exactly the type of event you want to feature on the front page of your website, the only public response I could find was in the Computerworld article. From what I read there, FaceTime acted quickly and even notified the people impacted. Yet I was bothered by this response:

However, Capri said no sensitive personal data such as credit card numbers, Social Security numbers or dates of birth was exposed because that information is not collected on the FaceTime Web site.

It’s a fair and valid statement to make. I suppose I would advise a client to make a similar statement, with one exception: I’d leave out the part that ties personal information to a limited set of data. I’m troubled by the notion that if it wasn’t a Social Security number, a credit card number or something of the same sort, then no personal information was disclosed. Information of any kind has value – and while this was probably a mistake, I would expect a security company to take a different attitude.

It’s time to reboot the security industry

It seems that this year has been dominated by negativity: we have focused on months of bugs, slammed colleagues and users (note: this term needs to end) and even tried to prove through science (!) that people do not understand risk. In fact, many in our industry seem quick to point out that everything is wrong, nothing works and we cannot win (whatever that means).

As I have traveled around the country, hosted some informal gatherings and met with friends and clients, I’ve been struck by how people, in general, look and act. Most of the people I have met in security seem down, rushed, angry and lacking hope.

Are we doomed to an industry filled with negativity?

Open Culture recently ran a story about the (in)famous Stanford Prison Experiment. Reading it reminded me of the first day of my new job after college. My first boss sat me down and told me, “Don’t F*** up, because if you do, the whole world will crush you. If you do a good job, no one will notice, and that’s okay.” In my experience, those words have sometimes been accurate (more than I care to admit). His words stay with me, often in the context of watching how many people in technology are treated, and how they choose to treat others.

Practicing Security Today is like the Famous Stanford Prison Experiment

The Stanford prison experiment was a psychological study of the human response to captivity, in particular to the real world circumstances of prison life and the effects of imposed social roles on behaviour. It was conducted in 1971 by a team of researchers led by Philip Zimbardo of Stanford University. Undergraduate volunteers played the roles of guards and prisoners living in a mock prison that was constructed in the basement of the Stanford psychology building.
— Wikipedia entry (http://en.wikipedia.org/wiki/Stanford_prison_experiment)

In the experiment, the behaviors of both the guards and the prisoners escalated quickly as each took on characteristics of their role — to the point where the experiment was ended early.

You can learn more here:

Wikipedia: http://en.wikipedia.org/wiki/Stanford_prison_experiment
The Official Website: http://www.prisonexp.org/
An interesting overview: http://www.holah.karoo.net/zimbardostudy.htm

So, are we the prisoners, or the guards? Short answer: yes.

As “protecting information” has grown in importance, many in the field of security suddenly find themselves in a new and somewhat awkward situation: the shifting demands of the role create the need to influence, and sometimes to enforce. After years of receiving “abuse”, they find themselves in positions of relative power, sometimes without guidance. With memories of prior treatment, we take a reactive and negative approach to those around us. Perhaps some of our colleagues “assume the position” too much and get a bit carried away?

Some act like the guards. Some act like prisoners. And some started as prisoners only to become guards.

Regardless, this is a situation we cannot accept. Period.

Now, let me be clear – with all the problems in the world today, I’m not suggesting that we, collectively, take our practice of security to the extremes of the prison experiment. I’m not suggesting a direct comparison. I just happened to review an article on the topic a few weeks back, and it has stuck with me that our practice of security might be allowing people to embellish their roles.

Reboot the Security Industry

The single most common (and mocked) approach to fix a non-responsive computer is to reboot.

The security industry needs a reboot. We have to flush the bad blood and old experiences from memory and get started with a clean(er) slate. We need a fresh start (or at least a fresh approach).

We tried negative, restrictive approaches that divided people and explained why things wouldn’t work. We said no. No, no, no!

It’s time to stop alienating people. We cannot do this alone. We need help. And while it’s nice to opine about a growing workforce, the real opportunity lies in changing the approach. Security is important, but it need not be punitive. It’s not only possible, but necessary to change the way we practice security to connect with people, demonstrate business value and rely on the art and science of effective communication.

On a recent flight home, I watched the in-flight presentation of Night at the Museum. It was an entertaining way to pass the time, and I was drawn to the story – especially the ending. The main character realized success only after abandoning a process of restriction and segregation to focus on a path of inclusion. It would be easy to dismiss as another “Hollywood ending,” but the movie’s box-office success is due to the underlying strength of the story. People want to be included, and they celebrate inclusion.

Stories are natural. They help us make sense of the world around us. We use them to learn, to teach, to reach understanding.

Perfect ending or not, we need more stories.

After the reboot: a new direction

After the reboot, it’s time to cast a new vision. I see a way to practice security that is mindful of the past but focused on the present: a practice built on the basics, on connecting with people, on demonstrating business value, and on effective communication used to share and to learn. Technology has a role, but it’s time to build dialogues and foster inclusion.

We have to foster a sense of trust among each other and those we serve. We have to reintroduce the concept of accountability and create a culture that embraces and expects personal responsibility.

We each play a part.

I’m going to keep focusing on better ways to engage, empower and enable people; to make it easier to realize and demonstrate value; to help others liberate their stories through a practice of effective communication.

What about you?