Surprise — the underlying technology matters less to an attack’s success than basic human determination
When a potentially major security flaw is announced, à la Sandworm, Shellshock, and Heartbleed, those of us in the computer security field can’t immediately tell whether it’s a “big one” that will compromise a majority of the computers in the world or in your enterprise. Whether the technical methods are familiar or novel, most newly discovered attack methods don’t go big.
We’ve had lots of “big ones” in the past. The Robert Morris worm of 1988 infected around 6,000 computers. That doesn’t sound like a lot today, but back then it represented about 10 percent of the computers hooked to the Internet. Since then, far bigger and faster-spreading worms have appeared, most notably Michelangelo, Code Red, Melissa, SQL Slammer, ILoveYou, and Blaster.
In those heady days, a single infection could turn into a global outbreak in a day or less. The record belongs to SQL Slammer, which infected nearly every unpatched SQL Server instance on the Internet, along with clients running SQL components, in about 10 minutes.
Luckily, we haven’t seen a worm go global at such a pace in a while. Thankfully behind us are the days when we had to shut down the mail server, get everyone off their computers so we could clean them up, and call everyone who had received one of our infected emails. Then again, maybe I shouldn’t be so confident. Now we have to worry about advanced human attackers who steal intellectual property and money. I’d love to be able to fret about a simple, noncriminal malware program again.
We are very bad at predicting which vulnerability will go global. It’s like real-life wars: no one can predict which conflict will turn into a world war until we’re in it. And just as in the digital world, real-life experts are constantly predicting that the latest conflict will lead to nuclear Armageddon. But it hasn’t happened.
In the digital world, for an infestation to go global quickly, it must be “wormable,” meaning a hacker can exploit the vulnerability using self-propagating malicious code that hops from computer to computer, instead of having to manually probe each machine. If it can’t be wormed, it probably won’t go international.
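The difference between wormable and manual exploitation is ultimately arithmetic: automated propagation compounds, while hands-on probing grows linearly. The toy simulation below makes that concrete (a minimal sketch in Python; the network size, vulnerable fraction, and probe rates are invented illustrative numbers, not measurements of SQL Slammer or any real outbreak).

```python
import random

# Back-of-the-envelope worm-spread simulation.
# All parameters are made-up illustrative numbers, not data from any real worm.
NETWORK_SIZE = 100_000      # total addressable hosts
VULNERABLE_FRACTION = 0.05  # share of hosts running the flawed, unpatched service
SCANS_PER_TICK = 20         # random probes each infected host fires per time step

def simulate_worm(ticks: int = 20) -> None:
    """Every infected host probes random addresses each tick, so the
    infection count compounds -- the signature of a wormable flaw."""
    vulnerable = set(random.sample(range(NETWORK_SIZE),
                                   int(NETWORK_SIZE * VULNERABLE_FRACTION)))
    infected = {next(iter(vulnerable))}  # patient zero
    for tick in range(1, ticks + 1):
        # Probe count is fixed at the start of each tick (synchronous rounds).
        for _ in range(len(infected) * SCANS_PER_TICK):
            target = random.randrange(NETWORK_SIZE)
            if target in vulnerable:
                infected.add(target)
        print(f"tick {tick:2d}: {len(infected):5d} infected")
        if len(infected) == len(vulnerable):
            print("every vulnerable host compromised")
            break

def simulate_manual(ticks: int = 20, probes_per_tick: int = 10) -> None:
    """A human attacker testing one address at a time grows only linearly."""
    expected = ticks * probes_per_tick * VULNERABLE_FRACTION
    print(f"manual attacker after {ticks} ticks: ~{expected:.0f} hosts")

if __name__ == "__main__":
    simulate_worm()
    simulate_manual()
```

With these toy numbers, the worm typically saturates all 5,000 vulnerable hosts within about 15 to 20 ticks, while the manual attacker nets roughly 10. Compress each tick to a few seconds and you get the flavor of SQL Slammer’s 10-minute saturation.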
That’s the conventional thinking today. Perhaps in the future a malicious coder will mess with a big cloud service and create a new malware propagation method. Viruses, which are malicious pieces of code that infect other code or documents in order to spread, can go global quickly, too. But they aren’t as popular with attackers as worms anymore.
However, most worms and viruses don’t go big. Why? Because there is a huge gap between ability and action, between capability and causation.
I don’t know why some malware programs go big and others don’t, but I have noticed a few ways to categorize those that went global:
- Vulnerabilities that we knew about, that we worried about, and that still went big, such as Blaster and Michelangelo (these are uncommon)
- Techniques that come out of the blue and surprise us all, such as SQL Slammer and Code Red
- Techniques that we knew about for a while but that, for unknown reasons, take off long after they were discovered, such as Melissa and ILoveYou
- Long-known public techniques used continuously by multiple attackers over long periods of time, such as spearphishing and pass-the-hash attacks (both popular today)
Those broad classes don’t help identify what might be a “big one.” What causes an attack to be a “big one” remains a mystery to the computer defense industry. But I believe three nontechnical factors are often involved:
- Motivation and intent: A criminal agency, a spy organization, or another entity wants to use a method to achieve one or more goals.
- Loss of control: The malware coder didn’t seem to realize how quickly his creation would spread, such as with the Robert Morris worm, SQL Slammer, and Melissa.
- Placement and timing: The malware happened to resonate with people. For example, I’ve always believed Melissa went global because its creator promised free porn in a day when free porn wasn’t the norm.
If I had to pick one reason a worm went global, I’d have to go with motivation and intent. Many of the hacks we worried about didn’t happen until a bad guy finally tried them, such as Kerberos ticket manipulation.
I’m sure there are other factors I’m not thinking about right now. But I know that capability and potential are still poorly correlated with actual damage. If we could better predict what will go big, our job would be a lot easier.
You can think of cyber threats the same way the military thinks about weapons of mass destruction: Many nations (even individuals) know how to build them. The major entities won’t use them unless absolutely necessary, if ever. But now more entities have access to them than can possibly be controlled over the long term.
Someday a weapon of mass destruction will be used against a major (unsuspecting) population. That day is coming, and we can’t possibly predict when. The capability and potential have been there for a long time; it’s a question of timing.
Even more unsettling, whether we make the big, hugely disruptive attacks harder or easier to carry off doesn’t seem to matter; determination decides. Plus, we are so poor at computer security in general that we give attackers dozens to hundreds of avenues to try once they get motivated.