Tuesday, July 3, 2007

Data Roulette

One problem information security officers often have is correctly evaluating the risk posed by poor data privacy practices. Often a “gaming” approach is taken, in which a manager keeps rolling the dice (a poor security strategy) in the belief that a winning streak will continue indefinitely. Nothing has gone wrong yet, therefore the corporate strategy must be sound, so -- “keep rolling”.

The flaw in the gaming approach comes from the false perception that a consistent security outcome represents a controlled security environment. Unfortunately, there is no such thing as a controlled security environment. The ability to keep data secure ultimately rests on the competence and good intentions of the people who come in contact with the data. While a particular security product or technology may be static, the human employees who constantly rotate through an organization represent a roulette wheel of ever-changing personalities and behaviors. Background checks may identify some potential hires with criminal records, but checks and references are less reliable at identifying employees who are careless, indifferent or prone to vindictive behavior. Recognizing how little control an organization has over the behavior of its employee pool should help tilt an ISO's attention toward what can be controlled: the strength of the organization's technologies and procedures.

Wednesday, May 30, 2007

The Actuarial Approach to Data Privacy

Not long ago I was conversing with a person in charge of information security at a Fortune 500 company. We were discussing his hosted data center. I knew from work that I had done for this company that the data center service provider was taking no special precautions to protect highly sensitive data. The primary reason was that the service provider didn’t know which data were sensitive: the data center managers had received no information from the company identifying the sensitive data or where it was located. When I pressed the security officer on why this hadn’t been done so that the sensitive data could be encrypted, his answer was, “This is an outsourced service. We trust their employees.”

My next question was the obvious one: “Why are this service provider’s employees any more trustworthy than the employees found at any other company?” His answer was unsatisfactory: “As an outsourcing company, they are liable to screen their employees appropriately.”

While his answer may be correct if the data security game is ultimately about winning the finger-pointing competition, it is clearly a violation of what I call the “Actuarial Rule of Data Privacy”. Actuarial science is all about the probability, based on empirical data, of what will happen within a certain population of people. Typically, of course, actuarial science is applied in the insurance industry and by providers of pension benefits.

The application of actuarial science is way overdue in the realm of data security practices. The answer to a simple question like “how many data centers with 50 or more employees have experienced a compromise of sensitive data by one of those employees in the past three years?” would be truly useful information. With this data, information security officers could apply actuarial reasoning to their cost / risk / benefit calculations, giving them a meaningful, defensible basis for their data security decisions.

But the most important contribution of an actuarial approach to data protection would be a shift from the typical conversation of “what if there were a data breach?” to the empirically-based “what are the chances that a breach will occur in the next X years?”. The key difference is that the actuarial approach always reveals that the probability is greater than zero. And therefore, so is the expected cost of doing nothing to protect your data against even the most thoroughly vetted employees.
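To make the arithmetic concrete, here is a minimal sketch (in Python) of the kind of calculation an actuarial approach would enable. Every figure in it is a hypothetical placeholder, not empirical data; the point is the shape of the reasoning: given an empirically-derived annual breach probability p, the chance of at least one breach over N years is 1 - (1 - p)^N, and a floor on the expected cost follows directly.

# A minimal sketch of actuarial breach-risk arithmetic.
# All probability and cost figures are hypothetical placeholders.

def breach_probability(annual_p, years):
    """Chance of at least one breach over the horizon,
    assuming independent years: 1 - (1 - p)^N."""
    return 1 - (1 - annual_p) ** years

def expected_loss_floor(annual_p, years, cost_per_breach):
    """A conservative lower bound on the expected cost of doing
    nothing (it ignores the possibility of multiple breaches)."""
    return breach_probability(annual_p, years) * cost_per_breach

p = 0.04          # hypothetical: 4% chance per year of an insider breach
horizon = 5       # years
cost = 2_000_000  # hypothetical cost of a single breach, in dollars

print(f"P(breach within {horizon} years) = {breach_probability(p, horizon):.1%}")
print(f"Expected loss of doing nothing >= ${expected_loss_floor(p, horizon, cost):,.0f}")

Swap real survey data in for the placeholders, and “what are the chances?” stops being a rhetorical question.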

Saturday, April 28, 2007

A Test for Artificial Intelligence

Following on my previous entry regarding computer consciousness, I’d like to propose what I believe is an obvious test for artificial intelligence. If someone creates a machine someday that she claims is thinking on its own and is not merely exhibiting an acute case of “Hiya” (see below), I think the proof to support that claim should be easy: show me the magic. In other words, show me that this machine has done something that is beyond my ability to comprehend, and beyond the ability of any other human to comprehend, and I’ll agree you’ve made your case. If you want to show me that you have truly created artificial intelligence, you need to make a machine that can do something that no human being can grasp. For example, the fact that I can design and produce a digital camera is proof that my intelligence is not merely an extension of parakeet intelligence. Why? Because the concepts underlying a digital camera are entirely beyond the grasp of a parakeet. Therefore, my intelligence can’t merely be a scaled-up expression of the bird’s intellect; it operates on a plane beyond the bird’s reach. Similarly, a machine capable of artificial intelligence should be able to design and produce something beyond my or any other human’s conceptual grasp. Do that and you’ve got my attention. Do that and I’ll agree that you’re the first pioneer to cross over into the uncharted (and unchartable) regions of “Aieee!”

Monday, January 8, 2007

Computer Consciousness

I've long doubted the idea that computers can be "intelligent" in the way that human beings are intelligent. Perhaps a better way to say this is that I've long been a skeptic of the potential of Artificial Intelligence. For one, I've yet to observe anything remotely like artificial intelligence occurring inside a processor or wrapped up in an algorithm or acted out by a machine. What I have observed is something that I think is more accurately described as "human intelligence applied artificially", or HIAA (let's pronounce it "Hiya!"). In fact, personally, I haven't seen anything but HIAA where others see AI (let's pronounce that "Aieee!"). If you know of any "Aieee!" that isn't better described as "Hiya!", then please let me know.

The problem, as I see it, is what I call the "Wizard of Oz syndrome" that lies behind people's hopes for AI. Whenever a new breakthrough in the AI field is celebrated, we are expected to be dazzled by the new model, all the while pretending there isn't a modeler standing behind it. In fact, suspending the connection between the model and the modeler is required if you want to get really excited about the long-range prospects of AI.

Let me explain with a thought experiment on a very inflammatory subject: evolution. Some years ago a scientist at a top university (MIT, I think) demonstrated with a computer and something like a motorized Lego set that the process of evolution could be modeled by computer. The computer started by creating the simplest Lego-type objects (life-forms) and those objects increased (or at least changed) in complexity and capability as the computer processed environmental feedback. The results were touted around the world as "near-proof" that evolution theory can, in fact, explain life as it exists today. I had to laugh, because this was instead "near-proof" of the opposite: that given a sufficiently intelligent modeler, a model can be created that makes things look like they're happening randomly. The scientist forgot to include himself as an element of the experiment. His conclusion was that, given an ever-improved model, an increasingly-improved replica of evolution could be demonstrated. However, if the scientist had taken himself into account, a very different conclusion would have been required: as his own intelligence approached perfection, the outcome of his experiment would appear to be a perfect demonstration of evolution without an intelligent cause. The "without an intelligent cause" part is ironic, no? In reality, that scientist, by being an inseparable part of the experiment, did a better job of demonstrating a theory of God than he did a theory of evolution. This, my friends, is what I mean when I say that AI true-believers suffer from Wizard of Oz syndrome. They just don't want to look behind the curtain.

Tuesday, November 14, 2006

Backfilling The Seven Laws of Identity Management

Folks who work in the Internet security industry know that the principles underlying various up-and-coming identity technologies such as Microsoft's CardSpace have arisen from something called the "Seven Laws of Identity". If you're not familiar with the seven laws, you can find them on Kim Cameron's blog: http://www.identityblog.com.

An excellent reflection on the first of these laws can be found at Bob Blakley's blog. Bob is the former chief scientist for security and privacy at IBM and currently works as an analyst for the Burton Group. His thoughtful remonstration, titled "The Meta-Identity System", is partway down the page at this link: http://notabob.blogspot.com.

As Blakley points out, Cameron's seven laws aren't really laws; they're better described as seven requirements for identity technologies to work securely and be accepted by consumers. Laws, by contrast, represent the way things happen because they can't happen any other way. Viewed in light of Blakley's deconstruction, Cameron's identity laws might better be described as forming more of an "identity etiquette", or "identiquette", than a framework of immutable identity truths.

So, to help remedy the absence of a compilation of such truths, Deep Think Diving would like to nominate the following Ten Intractables of Identity Management as a starter kit. Some of the principles found in the kit might well be considered laws (or better yet 'flaws') but won't be rolled up as such in deference to Cameron's prior claim to the namespace.

So, offered here for your perusal ...

The Ten Intractables of Identity Management

Intractable #1: The Law of Low Assurance. “High assurance” Internet technology, placed in the hands of the average consumer, performs with low assurance. Think seat belts. Seat belts are an easily grasped and easily mastered safety technology, yet nearly one-third of drivers bypass the technology even though the risk factor is no less than death itself. Imagine, then, how great the percentage of consumers must be who “opt out” of learning, understanding and properly employing digital security methods. As an industry, it’s important that we grasp the back pressure of this recalcitrance, as it represents the limiting factor to the success of our cyber security efforts.

Intractable #2: The Law of Innocence. Like freshly-hatched turtles hurrying for the safety of waves, a significant portion of web users at any given time are newly-minted (soft-shell) webphibians destined by mere innocence to end up lodged in the beaks of cyber-gulls. Any comprehensive solution to Internet identity security should give special focus to helping these maiden voyagers succeed in their initial dash across the sand.

Intractable #3: The Law of Over-Confidence. The more a technology increases user confidence in the source of an email or website, the more convincing (and therefore destructive) the inevitable impersonation of that email or website will be. I sometimes think we have it backwards; we should make all emails and websites look patently fake and threatening so that consumer vigilance never flags. Websites could replace their Verisign and Trust-e logos with logos that say, simply, "Thug-4-life". Sure, e-commerce would slow to a crawl, but no one would be caught napping.

Intractable #4: The Law of Inattention. The more a technology requires a consumer to remain vigilant to security cues, the greater the security gap that will result from an indifferent consumer’s inattention to those cues. Most cyber criminals don’t break cryptographic algorithms; they rely instead on the indifference, inattention and confusion (and, of course, greed) of the consumer. Any technology that can be undermined by these consumer traits will be undermined. (Interesting how the Law of Over-Confidence and the Law of Inattention seem nearly opposites, yet have the same consequence.)

Intractable #5: The Law of Latency. Phishers, Pharmers and those Nairobi bank managers fretting over unclaimed millions all benefit from an ability to bring their (sin)novations to market more rapidly than the industry’s defensive forces can be mustered. A lone “UniPharmer” therefore will always be more nimble, prolific and effective than any engineering team or standards committee could hope to be. The solution here is to help cyber-scammers organize into fraternal organizations so that their efforts can be bogged down and confused by development schedules and design reviews.

Intractable #6: The Law of Inverse ROIs. E-commerce establishments that adopt identity security solutions must make large investments up front while only guessing at the actual return, which may prove to be quite small. Cyber-scammers, on the other hand, face only a small initial investment, the return on which can be expected to be quite large. Phishing and pharming, then, employ better business models than legitimate businesses do, and will therefore always command an inordinate market share.
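To see the inverse ROIs in figures, here is a back-of-the-envelope sketch. Every number in it is invented purely to illustrate the asymmetry between the two business models:

# Hypothetical figures only; the point is the asymmetry, not the numbers.

def roi(returned, invested):
    """Simple return on investment: (return - investment) / investment."""
    return (returned - invested) / invested

# The merchant spends heavily up front on identity security and may
# avoid only a modest amount of fraud loss in return.
merchant_roi = roi(returned=100_000, invested=500_000)

# The scammer buys a cheap phishing kit and harvests a large haul.
scammer_roi = roi(returned=250_000, invested=5_000)

print(f"Merchant ROI: {merchant_roi:+.0%}")   # -80%
print(f"Scammer ROI:  {scammer_roi:+.0%}")    # +4900%

The exact numbers are fiction, but the signs rarely flip: the defender's ROI hovers near break-even at best, while the attacker's is a multiple of his stake.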

Intractable #7: The Law of Misguided Validation. I wonder sometimes why we focus our efforts so heavily on improving methods for validating bona fide emails and websites. Someone should point out (okay, thank you, I will) that validating the good stuff adds no direct value to Internet security. Why? Because bona fide emails and websites pose no threat to the institutions and consumers that use them. Identity and authentication technologies should instead be measured by how they improve a consumer’s ability to directly invalidate fraudulent emails and fraudulent websites. This rule suggests that a technology might best be targeted toward automating, rather than teaching, an invalidation protocol. Currently, the leading technology for invalidating fraudulent emails and websites is the consumer himself, and the results of that technology have been less than salutary.
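As a sketch of what "automating, rather than teaching, an invalidation protocol" might look like, consider a check that flags any message claiming to come from a known institution but whose links point outside that institution's domains. The domain list and helper names below are hypothetical illustrations, not a description of any real service:

# A minimal sketch of automated invalidation: flag a message whose
# embedded links resolve outside the claimed sender's known domains.
# The whitelist below is a hypothetical illustration.

import re
from urllib.parse import urlparse

KNOWN_DOMAINS = {
    "examplebank.com": {"examplebank.com", "secure.examplebank.com"},
}

def extract_links(body):
    """Pull bare URLs out of a message body."""
    return re.findall(r"https?://[^\s\"'<>]+", body)

def looks_fraudulent(claimed_domain, body):
    """True if the message claims a known institution but links elsewhere."""
    trusted = KNOWN_DOMAINS.get(claimed_domain, set())
    for link in extract_links(body):
        host = urlparse(link).hostname or ""
        if host not in trusted and not host.endswith("." + claimed_domain):
            return True  # at least one link points outside the trusted set
    return False

msg = "Dear customer, verify your account at http://examplebank.account-verify.biz/login"
print(looks_fraudulent("examplebank.com", msg))  # True: the link is off-domain

A real service would need far more than a static whitelist, of course, but the principle stands: let the machine, not the consumer, perform the invalidation. Who might run such a service is the subject of the next Intractable.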

Intractable #8: The Law of Unreasonability. Don't believe for a second that what is unacceptable in the real world can remain acceptable in the virtual world for very long. Imagine how you would feel if some sizeable portion of your snail mail were in fact an attempt to ensnare you in an illegal scam. Imagine, too, that your only defense is to hire your own private security force to help the postman sort through your mail and escort it to your mailbox. Now imagine that, even with your private security force, a goodly portion of the illegal post still gets through. In cyberspace, this is how we do things -- the private security forces being Symantec, Verisign, Messagelabs and similar providers. Eventually, this militia-packed cyber-Somalia will need to be pacified into a strong, centralized public service.

Intractable #9: The Law of the Missing Goodwill Agency. Notice that the concept of a “goodwill agency”, so familiar in the real world, seems absent from the virtual world. In the real world when we need help or protection we enjoy the support of many goodwill providers. If we need a stray cat rescued, we call the Humane Society; if we have a flat, we call AAA; if we need food and clothing, we visit the Salvation Army. E-consumers need goodwill agencies populating the Internet that will respond to them and help protect them individually from identity fraud. If a consumer receives an email that she questions, there should be a well-known URL where she can immediately go to have that email validated or invalidated. This site could be a pro bono validation service organized in the public interest (perhaps funded by financial institutions?). So far, such goodwill agencies have failed to sprout up on the Internet. It's worth noting that in the real world, too, humanitarian forces fear to tread where private security forces thrive.

Intractable #10: The Law of Non-Deterrence. Net neutrality is a passionate rallying cry, but on its dark underside it's also a cyber-scammer’s best friend. Can you think of a time when any kind of criminality was stopped dead in its tracks by the long arm of the forces of neutrality? What the Internet needs is a true movement toward software in the public interest. No, not the ethically-insipid movement toward open APIs, but an ethically-imbued movement toward software development on behalf of the public weal. I'm talking geeks saving damsels from oncoming trains; geeks rushing into flaming buildings to rescue kittens; geeks getting the woman at the end of the show (or the man, or free MSDN Premium, or whatever). Yes, I'm talking the stuff of prime-time network television here. Can't you see it: "Law and Cyber Order".

Got popcorn?