Tuesday, May 24, 2016

Should Ransomware Incidents Trigger Data Breach Notification?

As of January 2016, here in the US, there are only three states that do NOT have mandatory data breach notification laws (and you know who you are...) In practice, unless you run a very self-contained company with little or no out-of-state commerce, there's a really good chance you are required to report even if you are based in South Dakota (OK - there, I named one. Now I have to worry about the SD legislature feeding my frozen body into a wood chipper. Come to think of it, I was already worried about that...)

The point is, if you do business in the US and you hold Personally Identifiable Information (PII) that meets any of the disclosure standards, this post may apply to you. Given the prevalence of ransomware plaguing the Internet, I suspect there are quite a lot of notification-triggering breaches that are NOT resulting in notification.

The original standard for defining when a notification trigger happens (and the one most states copied to some degree) is the California law, which says:
California law requires a business or state agency to notify any California resident whose unencrypted personal information, as defined, was acquired, or reasonably believed to have been acquired, by an unauthorized person. 
Pretty simple language, considering how many lawyers were likely involved. If we break this down a bit, we can address the applicability of these criteria one element at a time.

  1. Unauthorized person: Can we all agree that cyber criminals, operating by encrypting your data and then ransoming it back to you for bitcoins, are unauthorized persons? OK, that was easy.
  2. Unencrypted Personal Information: Well, it may be encrypted now, but since the criminals are the only ones with the keys - I don't think that meets the criteria for exception.
  3. Acquired, or reasonably believed to have been acquired by #1 above: This is the meat of my argument here, so more on it in a bit. The short version is, software cannot encrypt data without reading it.
Since in order to encrypt data you have to first read it, we have to admit that any PII that's been encrypted by ransomware has been read (and then re-written, but we will get back to that) by software under the control of unauthorized persons. If we assume that the ransomware is also operating at the behest of, or in coordination with, a command and control server (at the very least, sharing the encryption key with the criminals), it's not hard to make the case that PII which has been read and then written by software under a cyber criminal's control meets the criteria that should trigger mandatory breach notification laws.
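If the "you can't encrypt what you haven't read" point seems abstract, here's a minimal Python sketch of the read/encrypt/write cycle that any file-encrypting software has to perform. To be clear, this is my illustration, not code from any actual malware sample; the file name is hypothetical, and the pyca/cryptography Fernet API is just standing in for whatever cipher a given strain uses:

```python
# Minimal sketch of the read -> encrypt -> write cycle that ANY
# file-encrypting software (ransomware included) must perform.
# Requires the pyca/cryptography package; the file path is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # real ransomware shares this key (or a key
fernet = Fernet(key)          # wrapping it) with the criminals

path = "customer_records.csv"  # hypothetical file containing PII

with open(path, "rb") as f:
    plaintext = f.read()  # the PII is now fully resident in memory,
                          # under the control of whoever runs this code

ciphertext = fernet.encrypt(plaintext)

# Nothing forces this write to target the original file; the software
# could just as easily send `plaintext` to a command and control server.
with open(path, "wb") as f:
    f.write(ciphertext)
```

The middle step is the whole argument: between the read and the write, the plaintext PII sits in memory owned by code the criminals control, and only the author's intent decides where it gets written.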

A lawyer, or other argumentative person, might take exception to the term "acquired" as used in the California law. After all, ransomware just encrypts data and leaves it on your disk, right? How could that be considered "acquisition"? If the software could be shown to only read data, but never write, I think you could make this argument. In fact, the ransomware reads the PII, encrypts it, and then writes it TO A PLACE OF ITS CHOOSING. Generally that means your disk, but at that point the malware is in complete control of that protected PII, and can write it to the local disk, send it up to the command and control server, or just send it out in an email if that's what the malware author chose to do. Since the activities are by nature clandestine, and the command and control traffic uses encrypted connections (everyone allows TLS/SSL outbound, right?), it's impossible to prove that the PII is not in the hands of the criminals.

So, what should our "reasonable belief" be, considering the circumstances? Do YOU have the resources to capture a sample of the malware, behaviorally deconstruct it, and then know whether it only encrypted the data? Would it really surprise anyone reading this if some malware writer tossed in a regex looking for SSNs or other account-like data and shipped those all back to the main (evil) office?

Is it more reasonable to believe that the criminals will NOT ship juicy bits back to their office for further exploitation, or that they will be nice and just encrypt it and hold it for ransom? 

Addendum: I found this HHS paper a few months after writing this post; it makes essentially the same argument (see section 6).


Tuesday, September 25, 2012

Whose mess is this anyway?

If your organization has a security incident of some kind, who's responsible? During the incident and most of the response, assigning responsibility likely isn't useful, but in the aftermath, as much as we would all like to hold hands and sing, there are a number of reasons to identify who's at fault. I realize this isn't a popular viewpoint (especially for folks who feel that they might be the person blamed), but I would like to propose that:

  • It's not always obvious who is at fault
  • In some cases "blame" resides with the organization (or lack thereof)
  • The cause for blame isn't always as simple as you might think

It's pretty easy to point the finger at the IT person responsible for information security. Obviously, if they had been doing their job, the incident would not have happened, right? Wrong, or at least mostly wrong. No doubt there are a lot of times when the CISO, CSO, Director of IT Security, etc. falls asleep at the switch. I did it myself back in 1985, and it wasn't a pleasant day for me or anyone working on our systems. I had followed the instructions for setting up an anonymous FTP server on HP-UX, but failed to clean out the stub password file required in the chrooted directory. The passwords were encrypted, but we had no password quality measures in place (see my earlier post on the subject), and a few of the passwords were easily cracked. My fault - quickly remedied, and no serious harm done.

A few years later, in 1988, the Morris worm hit, and its (fairly benign) payload infected our sendmail servers. In that case, I won't as readily accept the blame. It was the FIRST worm of its type to propagate using the Internet, the original "zero day" worm, and there were no good defenses, natural or otherwise. As far as I'm concerned, this one falls under the category "Defenses Fail." I suppose I could have been better prepared for it, and today I would be.

Not many folks have experience in root cause analysis, and the 5 whys gets turned into a reflexive 6th ("Why do this?") nearly every time I suggest it. Folks shy away from getting to the bottom of what happened for a lot of reasons, and I'll try to list a few here:


  1. If it was MY screw-up, I'd rather not be blamed. Understandable, but not helpful from an organizational standpoint. I'd rather a team member owned up to the issue so I could help them avoid the next one. Sure, some places may have a zero tolerance policy for this sort of thing, but in those cases you should be using the company printers to collate and bind your resume anyway.
  2. If I know I'M not to blame, I still don't want to throw Bob under the bus. Same principle applies here as #1. What we don't identify, we can't fix. Bob's worth helping, and organizationally we're money ahead if we can fix Bob in-place rather than eventually looking for his replacement.
  3. Blame doesn't help anyone. Here again, from an organizational point of view, understanding why an adverse event happened, so that we can address the underlying issue, clearly helps almost everyone. We'd all like to get on with our lives after something like this happens, but we'd also prefer not to have to deal with the same problem tomorrow.
  4. We don't have time to figure it out. My favorite. The event just cost you 45 hours of team resources, and since we don't know exactly why, there's a good chance it could happen again (and again). We can't spend 5 hours trying to prevent a stream of 45-hour events?
I'm reminded of an incident from a few years ago where a client of mine was being relentlessly attacked and DoSed. That kind of thing happens all the time, and there's often no easily determined proximate cause, but in this case we eventually discovered that an employee had (while sitting at his desk, using the company network, and more importantly, the company's external IP address) REALLY annoyed someone online. That someone decided to shut him up and see if he could get him fired in the process. In this case, there was no existing policy to prevent what had happened (an organizational failing), but we also had a little conversation about common sense with the employee while adding a new section to the existing policy documents.

In a well run organization, root cause analysis and remediation work shouldn't make people fear for their jobs, or be prohibitively expensive. It should just be an expected part of any reaction to an adverse event.

When is a phish not a phish?

The correct answer, apparently, is when it's a legitimate business email, sent from an associate or affiliate, that just tastes, smells, and rots in your inbox like a dead phish. Let's take a little quiz:


  1. You receive an email, purporting to be from your home alarm company, asking you to call them to "verify account information." Should you:
    1. Call that number immediately, just in case they shut down your alarm system
    2. Run in circles
    3. Laugh at the silly phisher and his or her laughably poor attempt at fooling you. 
  2. You look at the email headers for the incoming request and note that the IP address and the domain have no apparent link to your alarm company at all. Should you:
    1. Assume that it's just something odd going on with the Internet, and resume calling the number provided?
    2. Call the FBI, Secret Service and Homeland Security to alert them to this clever ruse?
    3. Laugh even harder at the silly phisher and his or her inept handling of email headers and cloaking techniques.
  3. You look up every known and available phone number for your alarm company, listed on their web site and elsewhere, and see that the phone number in the email does not match anything listed for them. Should you:
    1. Call the number immediately and tell them they need to update their web site?
    2. Assume they just typed it in wrong, but they still need to talk to you?
    3. Add up the information gleaned from #s 1 and 2 above and delete the email, smug in the knowledge that you've outwitted yet another silly phish?
I did none of the above, suspecting that there was something stupider to blame, and it turns out I was correct. I looked up a legitimate number for my alarm company and called them (resisting the urge to tell them I was "alarmed") to see what might be up. I figured that if their customers were getting phished, they would want to know about it. As it turns out, it was a legitimate email, originating from their marketing department. So, even though they claim (and somehow still insist) that they never share my information with any third party, an email to me, originating from a third party, somehow wound up in my mailbox, asking me to call. The phone number (which seems to be used for other campaigns as well, according to the Internet hits it receives) does in fact eventually reach my alarm company.

The horrifying thing is, it's not the first time this has happened to me. I'll admit I may scrutinize the odd stuff that lands in my mailbox a bit more than someone in a different profession, but that's actually also the point. If the corporate default is to not-so-seamlessly use third parties to (for goodness sake) ask for ACCOUNT VERIFICATION, how is the average person supposed to see the difference between phishing emails and swimming, scaly, finny emails that breathe with gills but are somehow really legitimate?


Really, I've been seeing this kind of thing for years, dating back to the first time someone asked me if it would be OK to give a third party a server SSL cert for the company, to "make the customer experience more seamless." My response at the time was something along the lines of "Yes, you've seamlessly fooled your customers into believing that they are still dealing with you, and not some nameless vendor of yours. When the lawsuits come, just imagine me saying I told you so, but refrain from calling me..." Honestly, I thought explaining to my customer that handing someone else your server cert is equivalent to lying to your own customers would have a more interesting and corrective effect. I'm older now and realize that PR and marketing folks see lying to customers as the only proper interaction.

Wednesday, June 9, 2010

Two dimensional space and other imaginary constructs

Remember in geometry class when you were asked to imagine a single point in space with zero width? That's a "point": a zero-dimensional object. Then they asked you to imagine two such impossible constructs, plus the set of zero-dimensional objects lying between the two (how can something be "between" two objects which have no width?), extended without curving into infinity on either side. This is, geometrically speaking, a line: a mythical one-dimensional object having only length (typically infinite length) but no width.

Why rant about impossible constructs that we use to define the world around us, build bridges, shoot laser beams at highly reflective Russian moon landers (http://en.wikipedia.org/wiki/Lunokhod_1), and generally differentiate ourselves from really smart chickens? Because I want to make a distinction between useful imaginary constructs, like geometric lines, and really damaging ones, like "secure um, anything."

The word "secure" is by its nature an absolute term. To back away from that, you need to add modifiers like "somewhat" or (and I've heard and/or used this one) suffixes like "-ish." Like an actual one-dimensional object, "secure" doesn't actually exist except in theory. Unlike lines, though, there is really no useful context in which we can describe things as simply secure or not.

Why even have this discussion? Because I encounter people every day who are supposedly savvy in the ways of protecting electronic assets but use language like "secure infrastructure" and "secure database" when they should be talking about levels of protection. In truth, there's a sliding scale of how protected (how secure - see, you can use it with qualifiers...) things are: from my fairly well protected Timex Sinclair in the corner of my office running my lava lamp (no network connection, and I don't type on it, but an EMP could probably corrupt data and ruin the current rendering of the Experience Music Project in lava) to the City of Bellevue 911 system (http://www.schneier.com/essay-002.html) at the other end of the spectrum. All discussions of risk regarding systems and networks should really be done in this context, or lay people (and geometry teachers) will assume that you mean what you say when you use the unqualified word "secure."

While we're at it, there's also no absolute standard for what is "secure enough." Our ultimate goal in most commercial applications of information assurance should be to make things as secure as the organization needs them to be. In other words, to match the level of accepted (or residual) risk to the level of risk that management deems appropriate. I can hear the howling already: "What the heck does MANAGEMENT know about appropriate levels of risk? Those guys are complaining about how antivirus is slowing down their $2000 ultra-thin laptops!" To which I say: understanding and accepting all kinds of organizational risk is a large part of their job. In fact, it's what really defines the role of the highest levels of management, beyond being just a high-level administrative wonk. But if your management doesn't understand cyber risk, you haven't done your job.

Mind you, I'm not saying that it was ever really even possible to DO your job; I'm just saying that a HUGE part of the job is explaining risk to the point where the highest levels of management truly understand it and can make good decisions, and apparently that didn't happen. It may be that your management, for whatever reason, is incapable of understanding cyber risk (I'll follow with a rant about what is or is not being taught in business schools some other time). In that case, it may be completely impossible to convey risk and be understood, but it still means you haven't done that part of your job. I liken this to sending a carpenter to a job site without any hammers. At the end of what would be a very frustrating day trying to build a wall hammering with rocks and other found materials, there's no new dining room wall. Sure, there was no way to build one without a hammer, but in the end the job still isn't done.

So, let's leave the theoretical ideals safely back in geometry class (unless you want to talk about separable, completely metrizable topological spaces, which are totally useful in understanding some kinds of data) and talk about the universe we actually interact with, where lines have width (on paper), "secure" doesn't exist, risk is acknowledged to be a relative measure, and we don't eliminate parts of our job because they are really hard, or even impossible.

Friday, April 16, 2010

Stop thinking that... whatever it is you are thinking

I'm pretty sure that we don't have the faintest idea what's going through the heads of IT and even IA folks when they make risk decisions. From the fairly common results, I can say that much of the time it's wrong, but I don't know HOW it's wrong, so I'm not sure what needs correcting in the intricate jumble of education, social reinforcement, and organizational baggage in the head of everyone (including myself) who makes risk decisions that affect the safety of your personal information.

The way we treat security incidents is pretty dysfunctional in many ways, but for today's ramble I'm really just thinking about the fact that when we do root cause analysis (you're doing root cause analysis, right?), we seem to ignore a step in the chain that could have prevented the whole problem in the first place: a risk management decision that correctly interpreted both the risk and the likelihood. If that had happened, in most cases steps would have been taken to prevent the mess you find yourself in. Oh, you say that the risk analysis was flawless and folks just failed to act? I've seen that happen too, in which case we need to try to understand how management interpreted:

1. Anvil falling from the sky
2. We are standing where it will land 10 seconds from now (assuming 9.8 m/s² acceleration)
3. It looks like a heavy anvil and will hurt when it hits us.

...and came up with the strategy "Stay where we are, we've never been hit by an anvil in the past." In all scenarios, someone isn't thinking the right thing, and we need to figure out what that is to even begin to understand how to correct it.

An example of a field where folks have recognized that almost NOBODY instinctively thinks correctly about the topic is probability and statistics. They've thought about it, and come up with a way of figuring out exactly what wrong thinking is going on in some test cases. Here's an example:

Most people get this wrong (from a paper by Linda S. Hirsch and Angela M. O'Donnell in the Journal of Statistics Education, Volume 9, Number 2 (2001)):

If a fair coin is tossed six times, which of the following ordered sequences of heads (H) and tails (T), if any, is LEAST LIKELY to occur?

  1. H T H T H T
  2. T T H H T H
  3. H H H H T T
  4. H T H T H H
  5. All sequences are equally likely.


It's a semi-standardized question designed to reveal what misconceptions a person has about calculating probability. #5 is correct: every specific ordered sequence of six tosses has probability (1/2)^6 = 1/64. But a lot of folks will pick #3, because it "doesn't seem sufficiently random." The diagnostically useful follow-up questions, probing WHY they answered as they did, go something like:

Which of the following best describes the reason for your answer to the preceding question?

  1. Since tossing a coin is random, you should not get a long string of heads or tails.
  2. Every sequence of six tosses has exactly the same probability of occurring.
  3. There ought to be roughly the same number of tails as heads.
  4. Since tossing a coin is random, the coin should not alternate between heads and tails.
  5. Other _____________________________________

If we wanted to understand how the average person (or even the average IT person) thinks about risk, it would be helpful to come up with similar tests.
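For what it's worth, the coin-toss answer is easy to demonstrate to a skeptic with a few lines of Python. This is just a back-of-the-blog simulation I'm adding here, not anything from the Hirsch and O'Donnell paper:

```python
# Check that every exact ordered sequence of six fair coin tosses is
# equally likely: each should show up about (1/2)**6 = 1/64 (~1.56%)
# of the time.
import random
from collections import Counter

targets = ["HTHTHT", "TTHHTH", "HHHHTT", "HTHTHH"]  # the quiz sequences

trials = 1_000_000
counts = Counter()
for _ in range(trials):
    seq = "".join(random.choice("HT") for _ in range(6))
    if seq in targets:
        counts[seq] += 1

for seq in targets:
    print(f"{seq}: {counts[seq] / trials:.4%}")  # all hover near 1.56%
```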



Tuesday, March 16, 2010

Password Quality

So, do you administer a system with general users accessing it? Your company's AD, online banking, launch codes? Do you require that passwords be "high quality"? In other words, do you require that they be of at least a certain length, and further require characters beyond plain letters - numbers, braces, and the like?

In that case, I expect that most of your users are using 7 or 8 character passwords with the number 1 or 2 in them, typically at the beginning (for 1) or in the middle (for 2); if they are REALLY tricksy, they might be doing l33t letter-for-number substitution. If I guessed the length wrong, I might be able to figure out the minimum by getting an account and then choosing bad passwords until you tell me the rules. Once I know the rules, human nature helps me narrow the field of passwords I should try in brute-forcing your accounts from millions down to the hundred thousand or so that fit your more restrictive scheme.

What? You say your scheme isn't restrictive? It only insists on certain quality measures to ensure that folks AREN'T using "password" or "qwerty", and they could be using 21 character passwords just as easily as the minimum 7? OK, I'm not interested in the folks using passwords like "afttr2U*sdfvS!&Ennadcczxza)0". If they can remember that, their head is too full of passwords for their account to have anything in it of interest. I'm interested in cracking the vast majority of accounts, whose owners will do the minimum required to pass your quality tests and end up with "g00d2g01", and then next rotation will choose "g00d3g01". (OK, maybe you are doing comparisons against the last 6 passwords; that would help. You are keeping at least 6 revs, right?)

My point is that the best password quality testing would actually just test quality, and not announce the rules so much. Sure, a pure brute force will crack a 5 character password reasonably quickly, but who does pure brute force cracking against a live login anymore? Do you lock accounts out after some reasonable number of attempts? How long would it take to crack a 5 character password if you lock me out after every 5 tries and I have to either a) wait for a timer to expire or b) wait for a manual reset? Somewhere in the vicinity of a thousand years if the timeout is more than an hour or two, and I would hope that some time in the first decade of trying, you would notice that the account I'm attacking keeps getting locked out.
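If you want to check my "vicinity of a thousand years" hand-waving, the arithmetic is simple enough to script. The assumptions here (the charsets, 5 guesses per lockout window, a 2-hour timer, one attacker hammering one account continuously) are mine, purely for illustration:

```python
# Back-of-the-envelope estimate of online brute forcing under lockout.
# Assumptions (mine, for illustration): 5 guesses per lockout window,
# a 2-hour lockout timer, attacker hammers one account continuously.
guesses_per_window = 5
window_hours = 2

for label, charset in [("lowercase only", 26), ("upper+lower+digits", 62)]:
    keyspace = charset ** 5          # all 5-character candidates
    hours = (keyspace / guesses_per_window) * window_hours
    years = hours / (24 * 365)
    print(f"{label:>20}: {keyspace:>12,} candidates, "
          f"~{years:,.0f} years worst case")

# lowercase only:       11,881,376 candidates, ~542 years worst case
# upper+lower+digits:  916,132,832 candidates, ~41,833 years worst case
```

Lowercase-only lands in the centuries; mix in cases and digits and you're into the tens of millennia. Either way, it's the lockout, not the character-class rule, doing the heavy lifting.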

There are reasonably good safeguards to keep attackers from logging in via brute force, which just leaves stupid password guessing like "xyzzy" and "Passw0rd", which ANY good password quality test would reject. The way we're testing for quality right now is worse than annoying; it's counterproductive, in that it gives attackers more information about the password space they'd be cracking than we should give away, and it encourages predictable passwords.
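To make "just test quality without announcing the rules" concrete, here's a rough sketch of what such a check might look like. The wordlist, the substitution map, and the 50-bit threshold are placeholders I made up for illustration, not a vetted policy:

```python
# Sketch of a password check that scores actual guessability instead of
# enforcing (and thereby advertising) a rigid composition rule. The
# wordlist, substitution map, and threshold are illustrative only.
import math

COMMON = {"password", "qwerty", "letmein", "xyzzy"}
LEET = str.maketrans("013457$@", "oleastsa")  # undo l33t substitutions

def acceptable(password: str) -> bool:
    # Reject known-bad passwords, including l33t-substituted variants.
    if password.lower().translate(LEET) in COMMON:
        return False
    # Estimate entropy from the character classes actually used.
    pool = 0
    if any(c.islower() for c in password): pool += 26
    if any(c.isupper() for c in password): pool += 26
    if any(c.isdigit() for c in password): pool += 10
    if any(not c.isalnum() for c in password): pool += 33
    bits = len(password) * math.log2(pool) if pool else 0
    return bits >= 50  # placeholder threshold, tune to your risk appetite

print(acceptable("Passw0rd"))                      # False: it's "password"
print(acceptable("correct horse battery staple"))  # True: length wins
```

Note what it never does: it never tells the user "must contain a digit," so it never hands an attacker a template to optimize against.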

What's with not solving electronic identity problems?

There are a lot of interesting technical problems involved in figuring out a way to reliably identify people online, but none of the ones I'm aware of are insurmountable. At this point, it's ridiculous that we don't have a good way of saying (online, electronically):

  1. I trust identities from entity A, B and Monkey. Anyone they vouch for as being who they say they are, I'll (mostly) believe.
  2. If you want me to buy into who you tell me you are, you need to go through the hoops of A, B or Monkey. I don't care if you are already identity registered with C, D and Unicorn. I only trust the list I gave you (via a simple registration, not by answering any of your silly questions...)
  3. I get to choose what identity authority I trust for what purposes and which items of my identity they get to publish.
For example, I would like the State of Washington to issue me a driver's license / Washington resident identity. They get to use that for traffic stops and maybe border crossings, plus I can, at my discretion, use that identity to authenticate myself to other places that trust the State of WA.

I also would like the Postal Service to issue me an identity which I would keep for a lifetime (or until I believe it's been compromised). I could use portions of that identity code to tell people how to physically mail stuff to me without telling them where I live. There's no reason someone mailing me a rebate coupon needs to know where I physically reside, and if the USPS maintained a PKI which internally linked codes to physical addresses, I could give out a code that only the USPS could use to find me. It also means I could change that physical location without much trouble, and only for myself. In the best of all worlds, it also means I could produce one-time-use physical mailing addresses and get a LOT less junk mail. Abusive spouses could send child support checks directly to the recipient without danger to the abused...
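As a toy model of the one-time mailing code idea (emphatically NOT a description of anything the USPS actually runs; Fernet is standing in for whatever internal PKI they would really use, and the names and address are hypothetical):

```python
# Toy model of one-time mailing codes: the carrier issues opaque codes
# that only it can resolve back to a street address, and each code can
# be redeemed exactly once. Fernet stands in for a real internal PKI.
import os
from cryptography.fernet import Fernet

postal_key = Fernet(Fernet.generate_key())  # held ONLY by the carrier
redeemed = set()                            # server-side replay tracking

def issue_code(physical_address: str) -> bytes:
    # A random nonce makes every code unique, hence one-time-usable.
    nonce = os.urandom(16).hex()
    return postal_key.encrypt(f"{nonce}|{physical_address}".encode())

def resolve_code(code: bytes) -> str:
    if code in redeemed:
        raise ValueError("code already used")
    redeemed.add(code)
    _, address = postal_key.decrypt(code).decode().split("|", 1)
    return address  # only the carrier ever sees the street address

code = issue_code("123 Anywhere Ln, Olympia, WA")  # I hand out `code`
print(resolve_code(code))                          # carrier routes the mail
```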

We're currently defaulting to trusting lots and lots of identity managers (Facebook, Twitter, MySpace, etc.) who:

  • Don't think of themselves as identity managers (with the exception of Google)
  • Aren't checking that you really are even a human, much less the human you claim to be
  • Aren't interested in providing this service
If you don't think we are trusting these places to identify people, just look at some of the friends lists and tell me that every person individually qualifies each request by some other means. Law enforcement and others (like the press) are exploiting this lack of authentication to gather information as they prepare stories or indictments. I'm all for supporting good law enforcement, but these are examples of where the current state of affairs is biting people in the hiney.

It would be great to see:

  1. A good cryptographic API that addresses key exchange with multiple PKIs, is open source, and is non-proprietary
  2. PKI which conforms to #1 (the more the better) and allows for multiple levels of trust and multi-key encryption, providing for field-level access to identity information based on the key issued, as well as quorum-based decryption (three of these 7 keys are required for actions A, D and F) - see the sketch at the end of this post
  3. Open, free access to multiple services (public and private) which use such a system
  4. Integration into online services which use identity in some way
Note that I'm not calling for the death of anonymity on the Internet here, just for the application of some level of trust to identities that we care about knowing. For those of you who might respond with "Hey, OAuth or OpenID provides these things": you are partially correct. What's being looked at now is the management of authentication and authorization, rather than identity. It's an important step in the right direction, but it essentially does not address the problem that authentication is not identity, and identity is a complex object with dozens, possibly hundreds, of attributes.
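To make item #2's "field level access to identity information based on the key issued" a little more concrete, here's a minimal sketch. Per-field Fernet keys are standing in for a real multi-key PKI, and the quorum piece (think Shamir-style secret sharing) is left out to keep it short:

```python
# Minimal sketch of field-level access: each identity attribute is
# encrypted under its own key, and a verifier is issued only the keys
# for the fields it is entitled to read. Names/values are hypothetical.
from cryptography.fernet import Fernet

identity = {"name": "Alex Example", "dob": "1970-01-01",
            "address": "123 Anywhere Ln", "license_class": "C"}

# One key per field, held by the identity authority.
field_keys = {field: Fernet.generate_key() for field in identity}
sealed = {field: Fernet(field_keys[field]).encrypt(value.encode())
          for field, value in identity.items()}

# A traffic stop might warrant name and license class, but not address.
issued_to_officer = {f: field_keys[f] for f in ("name", "license_class")}

def read_fields(sealed_record, keys):
    return {f: Fernet(k).decrypt(sealed_record[f]).decode()
            for f, k in keys.items()}

print(read_fields(sealed, issued_to_officer))
# {'name': 'Alex Example', 'license_class': 'C'}  (address stays sealed)
```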