Stuxnet

What is Stuxnet?

Stuxnet was one of the first examples of cyber warfare. It was a cyber weapon, attributed to a collaboration between US and Israeli forces, aimed at disrupting the Iranian nuclear program. Due to the sensitivity of the target, it isn't known exactly how much damage was done, but estimates suggest Iran's nuclear program was delayed by one to five years and that nearly a fifth of Iran's centrifuges were destroyed (centrifuges enrich uranium to the purity needed for nuclear weapons). Prior to Stuxnet, it wasn't widely known how much damage cyber attacks could do in the physical world. This attack made it clear that infrastructure across the world was incredibly vulnerable to cyber attacks.

How did the virus spread?

Stuxnet was initially introduced to the environment through a USB device, and proceeded to spread to every Windows device on the network by exploiting several Microsoft zero days (zero days are vulnerabilities which are unknown to those who might want to mitigate them). The malware also used stolen digital certificates (certificates demonstrate that software is legitimate, so stolen certs allowed the malware to pose as legitimate software), which helped it escape notice for longer.

Stuxnet spread via USB drives as well as by self-replicating across networks, so that it could reach 'air-gapped' machines. Air-gapped machines are those which are physically separated from other networks (like the internet or an unsecured local network). Air-gapping is a common security measure (since so much malware spreads via the internet), intended to prevent sensitive machines from being hacked. It is generally employed at secure locations such as nuclear facilities, secure government sites, and some chemical or infrastructure plants.

In order to get around these restrictions, the developers of Stuxnet targeted contractors known (through government intelligence sources) to work with the Iranian nuclear program. It's believed that the developers infected computers belonging to five companies known to work with the Iranian government, likely via spear phishing emails and potentially via physical access to their networks. The malware self-replicated throughout those organizations, infecting USB drives and other devices on the network. The attackers' hope was that some of those USB drives would be used to transfer information to computers in the Iranian nuclear program - and that is how they gained access to the air-gapped networks. Once the program was installed on computers used by the Iranian nuclear program, Stuxnet went to work on the centrifuges.

Alternate theories of infection include human spies using USBs to plant Stuxnet on specific servers. This theory more likely applies to an earlier version of Stuxnet, which spread only via USB infection or by physical access to a victim machine. The later version, which is what is commonly referred to as Stuxnet, self-replicated within networks as well as spreading via USB. The change in distribution methods suggests that the creators initially had physical access to their targets, but later lost that access.

What did Stuxnet do?

Stuxnet was programmed to change the speed at which centrifuges spun. A centrifuge is a large cylindrical tube; many of them are connected in a configuration called a 'cascade'. They spin at very high speeds to separate isotopes in uranium gas, enriching the gas for use in nuclear weapons (and power plants). The process is incredibly difficult to perform correctly, and even small changes in centrifuge speed can cause them to break and become unusable. Some level of failure is expected in any program which uses centrifuges - especially new programs where the technology is unfamiliar (as was the case for Iran's program). Sabotaging the centrifuges was a very subtle way to undermine the program without anyone initially realizing sabotage was involved (reports after the fact indicate that the initial assumption was an error in the software or hardware the program was using).

After entering a network via USB and spreading throughout it, Stuxnet checked whether the machine had the specific set of configurations it was looking for (one of the machines Iran was using to control its centrifuges). If it didn't, Stuxnet did nothing. If it did, Stuxnet would compromise the programmable logic controllers (a controller is essentially a small computer system directly connected to physical equipment). The malware would first watch the system's normal operation and record it. It would then speed up the centrifuges so they broke at seemingly random intervals, while playing back the recording of normal performance, so the engineers observing the process would not notice any unusual behavior.
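
To make the record-and-replay trick concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the real attack lived inside compromised Siemens controller code, not Python, and the class, function names, and check below are hypothetical stand-ins. The frequency values are the ones reported in Symantec's analysis (roughly 1064 Hz in normal operation, pushed to about 1410 Hz during sabotage).

```python
import random

class FakeController:
    """Toy stand-in for a centrifuge frequency controller (illustrative only)."""
    model = "S7-315"

    def __init__(self):
        self.speed_hz = 1064.0              # roughly the reported normal frequency

    def read_speed(self):
        return self.speed_hz

    def set_speed(self, hz):
        self.speed_hz = hz

def is_target(controller):
    # Stuxnet armed itself only on one very specific hardware configuration;
    # this check is a purely illustrative stand-in for that fingerprinting.
    return controller.model == "S7-315"

def sabotage(controller, cycles=5):
    if not is_target(controller):
        return                              # wrong machine: do nothing, stay hidden

    # Phase 1: record sensor readings while the plant is running normally.
    recording = [controller.read_speed() for _ in range(100)]

    # Phase 2: replay "normal" data to the operators while overspeeding the rotor.
    for _ in range(cycles):
        shown_to_operators = random.choice(recording)   # looks fine to engineers
        controller.set_speed(1410.0)                    # actual destructive speed
        print(f"operator sees {shown_to_operators:.0f} Hz, "
              f"rotor actually at {controller.read_speed():.0f} Hz")

sabotage(FakeController())
```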

How was the virus exposed?

As this was a covert military action, the developers did not want the virus to spread beyond their intended targets - the more machines the virus infected, the greater their chances of getting caught. The virus was carefully crafted to release its payload (the part of the virus which attacked computers) only when it detected the highly specific set of configurations indicating that the machine was part of the Iranian nuclear program, so it wouldn't attack machines unless they were the intended target. While this may have been done partly to protect other facilities and machines, the primary reason was probably to help the malware avoid detection as long as possible.

The difficulty with this plan was that spreading a virus to only very specific targets is very difficult. It's similar to releasing a biological virus: once it is released, there is little the creator can do to control the spread. Because the malware had infected civilian contractors and was self-replicating, it spread not just throughout the contractors' networks, but through their customers' networks as well, quickly travelling far beyond its intended target - which is how a small anti-virus firm (VirusBlokAda) found a copy on one of the systems it managed. From there, top analysts at a variety of security research firms worked to 'reverse-engineer' the virus (essentially, they took apart the code step by step to figure out what each piece was doing). Eventually, the analysts understood enough of the code to conclude that it was likely targeting Iranian nuclear facilities, and from there they could determine the likeliest creators of the virus. The US and Israel's involvement was eventually confirmed by top officials in the White House.

Future Considerations

Infrastructure attacks will undoubtedly escalate in the coming years as the cyber arms race continues. There's no fast and easy technical solution to these types of attacks - the internet was not created to be secure (nor was it ever intended to be used in as many ways as it is now), and many security measures are essentially a band-aid on top of insecure protocols. Further, the increasing automation of critical infrastructure presents a tempting target for nation states. Effective security requires layers of controls to protect a network, and a security team to constantly monitor and fine-tune those controls to stay up to date with new attacks.

Some industries classed as critical infrastructure (like financial services and defense contractors) typically employ large security teams to address this, while others (like manufacturing and utility services) often lack the resources to do so. Additionally, these industries rely heavily on industrial control systems (ICS), a subset of operational technology (OT) - computing systems which manage industrial operations. These systems pose unique security challenges: many IT security professionals are unfamiliar with OT, while OT professionals are often unfamiliar with traditional IT security (OT networks, and ICS networks specifically, use different technology vendors, different hardware, different software, and even different network protocols than IT). For many years, the best practice was to air gap industrial networks to protect them. However, attacks like Stuxnet demonstrated the flaw in that security strategy.

Many ICS devices, like industrial controllers (such as PLCs and RTUs), do not require authentication or support encrypted communication, meaning that anyone who can access the network can make unfettered changes to the controller. These devices are also rarely patched or replaced, because stability is usually the top priority and even a small chance of a patch breaking a controller is considered too much risk. In addition, they typically have limited CPU power, so they are easily overwhelmed by too many requests; they expect real-time communication, so a traditional IT vulnerability scan can overburden the system and cause communication delays; they run custom operating systems and software, which don't speak traditional IT protocols and are less widely used, tested, and attacked; and they face different design considerations - while they are built to be extremely resilient to power disruptions, dirt, and debris, they are often not designed to withstand heavy network traffic, meaning they can easily be DDoSed. These differences present a significant challenge to securing ICS networks.
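
To make the authentication gap concrete, the sketch below hand-builds a Modbus/TCP 'write single register' frame in Python. Modbus is one of the most common ICS protocols, and the protocol itself carries no credentials at all: a syntactically valid frame is the only requirement. The IP address, register number, and value here are placeholder assumptions for illustration only.

```python
import socket
import struct

# Modbus/TCP has no authentication or encryption: a well-formed frame is all
# that's needed to write to a controller. Every value below is a placeholder.
PLC_IP, PLC_PORT = "192.0.2.10", 502        # 192.0.2.x is a documentation-only range

def write_single_register(ip, port, register, value, unit_id=1):
    """Send a Modbus 'write single register' (function code 0x06) request."""
    transaction_id, protocol_id = 1, 0      # protocol 0 = Modbus
    length = 6                              # bytes that follow the length field
    mbap = struct.pack(">HHHB", transaction_id, protocol_id, length, unit_id)
    pdu = struct.pack(">BHH", 0x06, register, value)
    with socket.create_connection((ip, port), timeout=2) as sock:
        sock.sendall(mbap + pdu)
        return sock.recv(256)               # device echoes the request on success

# Anyone who can reach TCP port 502 can issue this write; no credential is
# requested at any point. (Left commented out: never point this at real equipment.)
# write_single_register(PLC_IP, PLC_PORT, register=100, value=1410)
```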

Finally, threat actors have realized this vulnerability - we've already seen a number of attacks on critical infrastructure across the world (to name just a few: the attacks targeting Estonia's internet infrastructure, the Shamoon attack on Saudi Aramco, the targeting of US election infrastructure, attacks on several banks, and the series of attacks on infrastructure in Ukraine), and we're likely to see more as additional organizations (like criminal gangs and nation states) gain the ability to launch sophisticated cyber attacks. In fact, infrastructure attacks date back even further: in March 2007, researchers in the United States carried out a laboratory test (termed the Aurora experiment) in which they hacked into a $300,000 generator and used a few lines of code to destroy it.

Great security at a top financial institution or tech giant is fine - and might even prevent intrusion - but if your employees lose power at home or your remote workers have no wifi, the security of an organization's network becomes less important. While such a scenario is unlikely, it's far from impossible. It's not a matter of technical capability, but of political will to accept the consequences of such actions. If countries feel they have little left to lose, they may not hold back from targeting critical infrastructure.

Ideally, the world would adopt a new Geneva Convention - one which addresses cyber war. Unfortunately, no country wants to give up its own offensive abilities (the US intelligence agencies in particular have been quite vocal about this issue in past negotiations), and none seem to view the issue with enough urgency (possibly because the most damaging attacks have hit countries far from home).
