The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.
Bandwidth Cascades: Diffusion Effects of Dyadic Cyber Conflicts
Maj James D. Fielder, USAF
Department of Political Science
The University of Iowa
ABSTRACT: Given that cyber attacks traverse international borders and can inflict collateral electronic damage on neutral parties, how do state-on-state cyber conflicts affect not only the conflicting parties but also the international system? To answer this question, I apply existing conventional war diffusion theory to generate a new theory of positive cyber-spatial diffusion and its associated component mechanisms. I derive my theory through a diverse comparative case study of two cyber attacks: the 2008 alleged Russian distributed denial of service (DDoS) attack against Georgia, and the 2010 alleged U.S./Israeli malicious software (malware) attack against Iran. The case analysis indicates that while conventional and cyber conflict diffusion share similar mechanisms, cyber conflict diffusion diverges on two points: third party intervention (escalation) and/or collateral damage effects (pathogen). My findings also raise additional questions regarding collateral damage effects and state neutrality, the influence of non-state actors such as private corporations, and attribution, or the difficulty of identifying attackers. Finally, I reiterate the importance of network security measures to defend against cyber conflict's collateral effects, and advocate studying information communication technology (ICT) infrastructure density and using simulations to test the proposed theory and mechanisms in future research.
War diffusion is, to be sure, a relatively rare event, but rare does not mean "unimportant." Many statistically rare events are of considerable interest to scientists, particularly when their consequences are either highly lethal or very costly. - Siverson and Starr 1990, 54
Given that cyber attacks traverse international borders and can inflict collateral damage on neutral parties, how do state-on-state cyber conflicts affect not only the conflicting states but also the international system? Conventional conflict diffusion, or the spread of a conflict from two warring states to neighboring states, is extremely rare compared to dyadic militarized disputes: for example, only 94 instances of conflict diffusion occurred out of 3,746 interstate disputes between 1816 and 1965 (Siverson and Starr 1994). Although rare, war diffusion is extraordinarily costly in both lives and materiel: this seemingly insignificant number of diffusion cases represents 39 percent of the major international wars during the same timeframe (ibid). In turn, cases of cyber conflict diffusion are even rarer: indeed, the two cases I analyze in this paper may represent the first of their respective types.
The first case is the alleged Russian cyber attack against Georgia in 2008, which coincided with a conventional ground attack, making the engagement the first instance of a cyber attack in conjunction with armed conflict (Ashmore 2009). In this case, the conflict diffused as non-state actors stepped in to aid Georgia, including a U.S. business that acted without the consent of the U.S. Government (Goodman 2010). The case also suggests that during a cyber conflict, unregulated actions of third party actors have the potential of unintentionally affecting U.S. cyber security policy, including cyber neutrality.
The second case is the alleged Israeli malware attack against Iran's Natanz nuclear facility in 2010. The Stuxnet virus is the first-known virus specifically designed to target real-world infrastructure, such as power stations. Unlike the Russian attack that was launched over global information and communication technology (ICT) infrastructure, the Stuxnet virus was introduced into a closed network, meaning a human agent had to physically inject the virus into the facility. Despite this, the virus still spread beyond Natanz and ultimately infected over 60,000 computers worldwide (Farwell and Rohozinski 2011). Thus, rather than diffusing to motivated supporters, the Stuxnet virus literally diffused in similar fashion to a disease pandemic, catching states well outside the initial conflict area unawares.
However, conventional conflict diffusion literature does not fully encompass the emerging realm of cyber conflict. Cyber conflict involves not only state-on-state confrontation but also non-state actors such as criminal organizations and corporations. Additionally, cyber conflict can cause potentially catastrophic damage well outside the immediate conflict area due to global information and communication technology (ICT) interconnectivity. I attempt to address this nascent issue by generating rather than testing a theory of positive cyber-spatial diffusion: that cyber conflict between actors A and B will spread to actors C through third party intervention (escalation) and/or collateral damage effects (pathogen). While my proposed theory cannot be tested solely through two case studies, I provide theoretical component mechanisms with which researchers can re-evaluate the theory as new data become available.
I employ the diverse comparative case study method, comparing two or more cases with the same outcome but different causal factors. The diverse method best fits this analysis for three reasons. First, the cases resulted in different diffusion processes: the Russia-Georgia case attracted outside support from non-state actors, while the Israel-Iran case inflicted unintentional damage on third parties. Second, the diffusion mechanisms differ: the Russia-Georgia case is an example of an open loop attack (launched over public ICT infrastructure), while the Israel-Iran case is an example of a closed loop, or air gap, attack (requiring deliberate infection using a USB memory stick). Third, the Russia-Georgia case involved contiguous enduring rivals engaged in a conventional conflict, while the Israel-Iran case involved non-contiguous states that have never engaged in direct military conflict. From this, the analysis generates a theory and component variables that future researchers can test through alternate case studies or quantitative methods as new data become available.
The comparative method is also particularly suitable for analyzing rare phenomena and offers a number of strengths, including comparison of variables difficult to measure quantitatively, generating new theories and hypotheses, exploring causal mechanisms, and converting specific phenomena into generalizable variables (Przeworski and Teune 1971). Ideally, a diverse case study will generate the maximum range of possible causal factors and outcomes for use in future research; for my purposes, factors and outcomes of cyber conflict diffusion. In reality, two case studies are not enough to uncover every possible combination of causes and outcomes; however, two cases are suitable for objectively documenting events, generating theories, and identifying and defining variables for future research.
Admittedly, I cannot falsify positive cyber-spatial diffusion due to purposeful selection bias, or only selecting cases where the expected outcome occurs: both cases resulted in cyber conflict diffusion (Geddes 1990). Ideally, one should pick cases where causal factors do not result in the expected outcome; or, cyber conflicts that do not diffuse beyond the conflicting actors. However, selecting on the dependent (outcome) variable is appropriate for exceptionally rare cases (Dion 1998). By doing so, the researcher can trace processes that resulted in the given outcome and also generate theories that can later be tested with broader data. Finally, rare and unique cases may never reoccur, meaning selecting on the outcome is the only reasonable method for logically describing and assessing the event.
Since this is a descriptive study, I meticulously describe each case in order to logically uncover similarities and/or differences between the cases (Collier 1975). I analyzed open source news and computer science technical reporting to build the case descriptions. Additionally, the U.S. Army and U.S. Air Force War Colleges have generated a wealth of relevant literature--while potentially biased towards U.S.-centric policy studies, the services are still at the forefront of cyber warfare theory. In the next section, I discuss the literature under three subheadings: introductory information on cyber conflict; similarities and differences between conventional and cyber conflict diffusion; and discussion of two cyber attack methods relevant to the two cases.
Cyber Conflict in Brief
Cyber warfare is defined as the penetration of information and communication technology (ICT) networks for the purpose of disruption or dismantling to make them inoperable (Valeriano and Maness 2010, 3). Although fought over wires, cyber attacks can result in severe economic and physical damage. Cyberspace also does not recognize international borders, and cyber attacks against the infrastructure of other states can have severe, cascading effects within and across states (Beidleman 2009). For example, in 2003 the "Slammer" computer worm infected thousands of ICT systems throughout the world; in particular, the worm shut down an Ohio nuclear power plant’s safety monitoring systems for five hours (ibid). Information congestion created by the worm's scanning and replication routine crashed the plant's computerized display panel used to monitor crucial indicators such as coolant systems, core temperature, and external radiation sensors. Thankfully, the breach did not pose a safety hazard, since the plant was offline for repairs. However, Slammer also damaged an Ohio electric utility's critical supervisory control and data acquisition (SCADA) network (Poulsen 2003).
The most calamitous scenario is an "electronic Pearl Harbor," in which critical infrastructures (banks, subways, power, etc.) are disrupted to the point that the functioning of government and society are severely degraded (Eriksson 2006; Apps 2010; Gjelten 2010). For instance, in 2007 scientists from the Idaho National Laboratory launched an experimental cyber attack against a power plant to test whether or not they could overload its system purely through electronic attack. In short order, the infection caused several turbines to fail, which further damaged additional equipment throughout the facility (Ashmore 2009). Soon after the experiment, actual hackers succeeded in infiltrating electric companies in undisclosed locations outside the U.S. and, in at least one instance, shut off power to multiple cities. Some economists estimate that a shutdown of electric power to any sizable region for more than ten days would stop over 70 percent of all economic activity in that region (Bruno 2008).
In the physical world, governments have a near monopoly on large scale use of force. But resources and combat mobility are costly; thus attacks from the inexpensive informational realm can be launched against expensive physical resources with great effect (Beidleman 2009). While a few states like the United States, Russia, Britain, France, and China are reputed to have greater capacity than others, it makes little sense to speak of cyber dominance in terms of conventional power: indeed, dependence on complex ICT systems creates vulnerabilities in large states that can be exploited by less powerful actors (Nye 2010). Furthermore, while military analysts can use algorithms to predict physical attacks and effects, it is not easy to identify data flows, determine exactly where they are coming from, or understand the data sender’s intentions.
The low cost, anonymity, and network security asymmetries also imply that smaller, less powerful actors--both state and non-state--have more opportunity in cyberspace than in the physical world. Still, cyber power depends on resources: controlling infrastructure, building networks, coding software, and deploying human talent. A teenage hacker and a large government can both do considerable damage over the Internet, but that does not make them equally powerful in the cyber domain: cyber-weapon proliferation into the hands of non-state actors does not displace governments as the most powerful actors (Hearn 2010; Nye 2010). Notably, the only cyber attack ever claimed by a government dates to 1982, when Reagan administration officials admitted to deploying malware that caused a massive gas pipeline explosion in Soviet Siberia (Neild 2010). Despite the scarcity of documented state-on-state attacks, however, the resources required suggest that well-organized, coordinated, and sustained cyber conflicts will likely be initiated by states.
Conventional versus Cyber Conflict Diffusion
Conventional conflict diffusion occurs when conflict between two actors (usually measured as states) affects the probability of a similar event occurring in neighboring actors (Most and Starr 1990). Scholars have found that war at one point in time may affect the likelihood of subsequent war participation, and also results in longer and more destructive conflicts, with World Wars I and II being premier examples (Siverson and Starr 1991; Colaresi, Rasler and Thompson 2008). Conflict diffusion is also explained through spatial diffusion, which occurs when new conflict participation initiated by nation A increases (positive spatial diffusion) or decreases (negative spatial diffusion) the likelihood that other nations B will participate in subsequent conflicts: or, the transfer of one state's war behaviors to other states (Most and Starr 1980). Siverson and Starr (1991) further refine the definition through infection, which occurs when outside actors intervene in a conflict: indeed, observers have compared conflict diffusion to epidemics, with the steadily-enlarging conflict infecting other states like a disease (pp. 8; also, Most and Starr 1980). The likelihood of conventional conflict diffusion increases if the conflicting states are contiguous, if the originating attacker and defender are enduring rivals (i.e. have a history of repeated conflict), and as the duration of conflict increases. Moreover, conventional wars tend to remain contained since initiators target states that will not likely receive third party support: when targets receive third party assistance, the chances of the initiator succeeding fall considerably (Gartner and Siverson 1996). Likewise, my theory of positive cyber-spatial diffusion also contends that the probability of conflict diffusion increases based on rivalry, contiguity, conflict duration, and third party intervention.
Proximity and interdependence are also useful in framing cyber diffusion through the lens of conventional diffusion. First, proximity in conventional conflict reduces the distance decay associated with projecting combat power (Bueno de Mesquita 1981; Vasquez 1995; Gleditsch 2002). Previous research has found that nations that share a border with a warring state are nearly five times more likely to become involved in war themselves than countries that border peaceful states (Most and Starr 1980). Proximity, though, is theoretically irrelevant in cyber conflict since cyber diffusion transcends geographically contiguous borders. However, assuming that only states have the resources necessary to launch and sustain major cyber attacks, proximity to a major cyber power may carry similar localized diffusion risks. Second, interdependence suggests that actors are locked into forms of collective dependence that structure their interaction (Lake and Morgan 1997; Gleditsch 2002). Through interdependence, outcomes can occur that are unintended yet still shared by all the interdependent actors. Furthermore, physical proximity of interdependent actors makes certain events more locally relevant, as proximity increases opportunities for conflict or cooperation (Most and Starr 1989; Gleditsch 2002). Moreover, governments have increasingly recognized that disputing borders through force is costly, and that control of networks of finance, information, and transportation is much more important than control over physical territory (Simmons 2005). Physical interdependence also influences cyber conflict; however, states vary in susceptibility to cyber attack, as damage may depend on the robustness of the state's ICT infrastructure.
Yet, while cross-border ICT infrastructure may influence attacks, border contiguity is not a necessary condition for cyber conflict diffusion. Aggressors and defenders can launch cyber attacks to and from any system, and collateral damage can theoretically affect any portion of the global network. Additionally, cyber conflicts can escalate and propagate rapidly; unlike conventional conflicts, cyber conflicts require stopwatches rather than calendars for duration measurement. Finally, third party interlopers in cyber conflict will not necessarily be states, as cyber tools and methods are available to non-state actors down to the individual level. At the same time, cyber conflict can also inflict damage on non-state actors, either through retaliation or collateral effects. To that end, I use the comparative method to generate the proposed theory and component mechanisms.
Based on understanding of conventional conflict diffusion, I assess that cyber conflict diffuses through escalation (based on conventional infection) or pathogen (based on conventional spatial diffusion). Escalation is purposeful third party intervention, and pathogen is the spread of conflict across shared ICT infrastructure. Similar to conventional diffusion, proximity and interdependence influence both the escalation and pathogen mechanisms of cyber conflict diffusion. Borders likely influence rivalries, and the decision to launch a cyber attack is, in turn, likely a function of rivalry. Yet, cyber conflict also transcends local proximity and interdependence, since ICT information flows are not limited by geography. This transcendence is further illustrated through the two different types of cyber attack depicted in the case studies.
Diffusion through Two Methods of Cyber Attack
While state and non-state actors can use numerous methods to launch cyber attacks, the two cases involve distributed denial of service (DDoS) attacks and malicious software infection, or malware. The first technique, DDoS, is the easiest and least sophisticated cyber attack, and can be used by the most novice hackers with malicious intent. DDoS attacks flood particular Internet sites, servers, or routers with more requests for data than the site can respond to or process. Such an attack effectively shuts down the site, preventing access or use. Sites important to the functioning of governance or commerce are therefore disrupted until the flooding is stopped or the attackers disperse (Valeriano and Maness 2010). Such attacks are coordinated through botnets: networks of computers that have been hijacked through software infection by a remote user and then coordinated to launch simultaneous DDoS attacks (Clark and Knacke 2010).
Botnet networks are also massive in scale: a 2007 cyber attack against Estonia was launched through over a million unsuspecting botnet computers (Caulkins 2009). The relation between DDoS and diffusion is two-fold. First, an aggressor--state or non-state--can launch a DDoS attack to and from any location, regardless of border contiguity; in fact, launching through neutral locations is one means by which the aggressor can disguise the attack origin. Second, DDoS attacks create network congestion that radiates outward from targeted systems: although ICT data pipelines are generally decentralized and thus allow information to flow along paths of least resistance, DDoS congestion can still decrease regional data transmission speeds, which degrades general data access (Gu, Liu and Chu 2007).
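The flooding mechanism described above can be sketched numerically: once total arriving traffic exceeds a server's processing capacity, legitimate requests are crowded out. The figures and the proportional-service assumption in this sketch are hypothetical, chosen only for illustration.

```python
# Illustrative sketch (not a model of any real attack): when demand
# exceeds a server's capacity, assume requests are served in proportion
# to their share of total arriving traffic.

def served_legitimate(capacity, legit, flood):
    """Return how many legitimate requests per second get served."""
    total = legit + flood
    if total <= capacity:
        return legit
    return capacity * legit / total

# Hypothetical server: 10,000 requests/sec capacity, 1,000 legitimate.
print(served_legitimate(10_000, 1_000, 0))          # no attack: all 1,000 served
print(served_legitimate(10_000, 1_000, 1_000_000))  # botnet flood: ~10 served
```

Under these assumptions, a million-request flood of the kind attributed to the Estonia botnet reduces legitimate service to roughly one percent, which is why a DDoS "effectively shuts down" a site without ever breaching it.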
The second technique, malware infection, is the most potent form of cyber attack, and the damage inflicted can be widespread and potentially lethal. Examples of malware include logic bombs, viruses, and worms: the first, logic bombs, are programs that cause a system or network to shut down and/or erase all data within that system or network (Clark and Knacke 2010). Next, viruses are programs which attach themselves to existing programs in a network and replicate themselves with the intention of corrupting or modifying files. Finally, worms are essentially the same as viruses, except they do not need to attach themselves to existing programs. All of the above methods have the potential of doing real physical damage to state infrastructure: for example, the ILOVEYOU virus of 2000 cost over $1 billion in lost data and computer damage (Deibert and Stein 2002; Valeriano and Maness 2010). Malware can also spread very rapidly, as highlighted in Figure 1 below: conventional conflict diffusion occurs comparatively at a snail’s pace, often requiring months or years to expand beyond the original conflicting parties.
FIGURE 1: Example of Malware Diffusion--the 2004 “Witty” Worm
The 2004 “Witty” worm infected approximately 12,000 computer systems across the globe in less than 30 minutes, averaging 11 million network probes per second. The worm did irreparable physical damage to an unknown number of systems, and at the time of its spread there was no antivirus antidote (Shannon 2007, 20).
Despite the severity, though, malware can only attack if there is a network vulnerability, or security hole through which malware can enter a system undetected (Goodman 2010). Even if network entry points are secure, an individual with physical access to a system can still "air gap," or deliberately install, malware onto a system. The relationship of malware to cyber conflict diffusion is its pathogenic nature. Malware literally diffuses in similar fashion to a biological infection, and any "non-inoculated" system can catch the disease. As demonstrated with the Stuxnet example, malware can spread even to systems not connected to open ICT networks, since users can easily transfer malware (purposely or accidentally) on removable media such as Universal Serial Bus (USB) drives.
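The pathogenic character of malware diffusion can be illustrated with a minimal contact-network simulation, in which each infected system probabilistically infects the systems it is linked to at each time step. The network topology, infection probability, and step count below are all hypothetical; the point is only that, like a disease, infection reaches any "non-inoculated" system reachable through shared links.

```python
import random

def spread(neighbors, seed, p_infect, steps, rng):
    """Simple epidemic-style diffusion over a contact network:
    each step, every infected node tries to infect each neighbor."""
    infected = {seed}
    for _ in range(steps):
        newly = set()
        for node in infected:
            for nb in neighbors[node]:
                if nb not in infected and rng.random() < p_infect:
                    newly.add(nb)
        infected |= newly
    return infected

# Hypothetical network: 100 systems, each linked to 4 random others
# (links could represent network shares or exchanged USB drives).
rng = random.Random(42)
net = {i: rng.sample(range(100), 4) for i in range(100)}
result = spread(net, seed=0, p_infect=0.5, steps=10, rng=rng)
print(f"{len(result)} of 100 systems infected after 10 steps")
```

Even with sparse, arbitrary links and a one-in-two infection chance per contact, the infection typically saturates most of the network within a handful of steps, mirroring the Witty worm's sub-30-minute global spread.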
Considering the literature holistically, cyber conflict diffusion theoretically carries incredible risks and potentially catastrophic outcomes. But cyber conflict diffusion is a nascent phenomenon, unlike conventional conflict diffusion literature's rich (albeit destructive) empirical history. Conventional diffusion theory is certainly useful and relevant for exploring cyber conflict diffusion. Yet cyber conflict diffusion features component mechanisms that diverge from conventional conflict, and thus requires a new theoretical framework. Moreover, state-sponsored cyber conflict is so new that there is little evidence with which to test cyber conflict diffusion. To that end, I examine two existing cases in order to propose a new theory of cyber conflict diffusion, along with its assessed component mechanisms.
Each case is organized and compared through four narratives: a summary of the attack, identification of the perpetrator, the effects of the attack, and its broader ramifications. These narratives were chosen in order to standardize criteria for gathering, organizing, and evaluating available evidence. By using the same format for each case, I could logically elucidate similarities and differences between the cases. The theory of positive cyber-spatial diffusion and its component mechanisms were generated after comparing the cases.
Case 1: The Alleged Russian DDoS Attack against Georgia, 2008, Attack Summary
On August 8, 2008 unknown attackers launched a coordinated DDoS attack against Georgian government websites at the same time that Russian forces were engaged in combat with Georgian forces. The attack blocked banking, media and government websites, disrupting the flow of information throughout Georgia and to the outside world (Ashmore 2009). These botnet-driven DDoS attacks were accompanied by a cyber blockade that rerouted all Georgian Internet traffic through Russia and also blocked electronic traffic in and out of Georgia (Beidleman 2009). Computers belonging to U.S., Russian, Ukrainian, and Latvian civilians with no connections to the Russian government carried out the attacks, and private citizens were also invited to join in the fight: Russian language websites distributed instructions on how to flood Georgian government websites, and some sites also indicated which target sites were still active and which had collapsed under attacks (Economist 2008; Boyd 2008; Goodman 2010). Based on subsequent network activities, analysts speculate intruders may have implanted malware “time-bombs” to launch more strikes at will in the future (Goodman 2010). The attack may have also compromised back end databases, such as stored bank account and transaction information.
The attacks also drew in third party defenders. The Ministry of Defense and the President relocated their respective websites to U.S.-based Tulip Systems servers, and the Ministry of Foreign Affairs moved their website to an Estonian server. Soon after, Tulip servers came under DDoS attack, meaning that a private corporate entity compromised United States neutrality. Google also provided assistance to Georgia’s private business websites, and Computer Emergency Response Teams from Poland and France helped collect Internet log files and analyze Internet Protocol (IP) data from the attacks (Tikk, Kaska, Rünnimeri, Kert, Talihärm, and Vihul 2008; Goodman 2010).
Identifying the Perpetrator
Researchers at Shadowserver, a volunteer group that tracks malicious network activity, assessed that the command and control server directing the attack was based in the United States and had actually been online several weeks before the assault (Markoff 2008). Additional analysis indicated another attack domain was located in the United Kingdom, under the ownership of a user with a Russian (.ru) email address and an Irkutsk, Siberia contact telephone number (Rios, Tenreiro de Magalhães, Santos, and Jahankhani 2009). Russia has never claimed responsibility; yet Russia had a vested interest in the attack, and the attack’s sophistication and timing suggested state involvement. This aligns with the previous discussion of cyber conflict attribution: although non-state actors could have planned and executed the attack, the resources required make this unlikely.
ICT statistics indicate that Georgia had 7 Internet users per 100 people in 2008, a figure that has since risen to 32 per 100. In contrast, Estonia, which experienced a similar attack in 2007, had 57 Internet users per 100 people at the time (I4CD 2011). The relatively low number of Georgian Internet users in 2008 reflects the nation’s low infrastructural capacity and limited dependence on IT-based infrastructure. Cyber attacks should have less impact on low density ICT infrastructures than on high density ICT infrastructures, where vital services like transportation, power, and banking depend on Internet access (Markoff 2008). However, Georgia has few cross-border landline Internet connectivity options, namely Turkey, Armenia, Azerbaijan, and Russia. Georgia's Internet infrastructure is particularly dependent on Russian information pipelines: nearly half of Georgia’s thirteen connections to the Internet passed through Russia as of 2008 (Tikk, et al. 2008).
Thus, Georgia's data dispersion options were limited, which made it a good target for coordinated cyber assault and isolation. Georgia’s loss of crucial government websites severed Internet communication in the early days of the Georgian-Russian conflict, when the government had a vital interest in keeping information flowing to citizens and the international public (ibid). The cyber incidents also affected the provision of public services: as a consequence of the attacks, the National Bank of Georgia ordered all banks to stop offering electronic services, an outage that lasted 10 days (Beidleman 2009).
This was the first documented instance of a cyber attack conducted in conjunction with a conventional attack, and was similar to conventional use of field artillery prior to a ground force attack (Caulkins 2009; Ashmore 2010). The Russian operation against Georgia also highlights two cyber deterrence issues: scalability and temporality (Goodman 2010). First, scalability refers to the wide variety of effects that a single capability can achieve in cyberspace. In the physical world, capabilities have a limited set of purposes: a tank, a nuclear weapon, and a plain rock all have generally predictable effects. In cyberspace, a single tool can achieve a wide array of effects, making it much harder to predict the scale, let alone how the effects may diffuse.
Next, temporality refers to the instantaneous nature of cyber attacks. The physical world, hampered by friction, gives defenders the benefit of early warning of opponent mobilization efforts. In contrast, uncovering cyber indicators such as botnet arrays, packet sniffers, and network reconnaissance intrusions may indicate some kind of future malice, but not when, how, against whom, and for what purposes an attack will occur. Short- and long-term effects of cyber attacks must also be considered: unlike kinetic force, cyber attacks can be designed to cause only temporary effects during a particular timeframe (Tikk, et al. 2008). While the attacks did not permanently damage Georgian Internet infrastructure (insertion of potential time bomb infections aside), the damage they caused was most acute at the time when Georgia most needed system access.
Finally, private industry operates the majority of the global Internet system. Thus, even if a state is not a belligerent in a cyber conflict, unregulated actions of third party actors can unintentionally affect the state, including its cyber neutrality, as illustrated by U.S. companies assisting Georgia without the knowledge or approval of the U.S. government (Korns and Kastenberg 2009). Outside retaliation against companies such as Tulip Systems also raises sovereignty questions: how should a state respond to attacks against private actors?
Case 2: The Alleged U.S./Israeli Malware Attack against Iran, 2010, Attack Summary
Global network security firms first identified the W32.Stuxnet worm in June 2010. Stuxnet was primarily written to target an industrial control system (ICS) or set of similar systems, such as those used at Iran's Natanz nuclear facility. Specifically, the worm was designed to reprogram code on programmable logic controllers (PLCs) while at the same time hiding changes from equipment operators. To increase their odds of success, the worm authors scripted a vast array of software components designed to overcome malware countermeasures (Falliere, Murchu, and Chien 2010). This suggests the authors had detailed knowledge of Siemens’s industrial-production processes and control systems and access to the Natanz facility’s blueprints. In short, Stuxnet was the work neither of amateur hackers nor of cybercriminals, but of a well-financed team (Economist 2010). For security reasons, SCADA systems are not usually connected to the Internet; thus, Stuxnet was designed to spread via infected memory sticks plugged into a computer’s USB port. It can also copy itself onto other removable devices and spread across local networks via shared network folders and print spoolers, and can even be embedded in an Adobe .pdf file and sent over email (Economist 2010; Byers, Ginter and Langill 2011). The worm is also written in multiple programming languages, meaning it can infect language-specific systems (Chen 2010).
Stuxnet was designed to spread aggressively once inside a network. Within a few hours of infection, the worm would likely spread to systems connected directly or indirectly to compromised computers. The worm was also designed to contact command and control servers over the Internet for new instructions, communicating in plain text to circumvent intrusion monitoring systems searching for program language codes, and then using local peer-to-peer communication to update itself on non-networked systems, again through removable media (Byers, et al. 2011). If Stuxnet found the correct PLC model, it started one of three sequences to inject different code "payloads" into the PLC. The first two were designed to send Iran's nuclear centrifuges spinning out of control (Albright, Brennan, and Walrond 2010). The third recorded what normal operations at the nuclear plant looked like, then played those readings back to plant operators so that everything would appear to be operating normally while the centrifuges were actually tearing themselves apart. Plant personnel may have heard centrifuges breaking, but the attack happened so quickly that operators were unlikely to have time to enact countermeasures (Albright, et al. 2010; Broad, Markoff and Sanger 2011; Byers, et al. 2011).
Identifying the Perpetrator
But what potentially ties the worm to Israel? Security analysts found the number string 19790509 in a specific Windows registry key. The value may be a random string representing nothing; but if read in date format, it may be May 9, 1979. On that date, Iranian Jew Habib Elghanian was executed by firing squad in Tehran--the first Jew and one of the first civilians to be executed by the new Islamic government, prompting the mass exodus of Jews from Iran (Falliere, et al. 2011). Analysts also found the following project path inside the worm's driver file: b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb. Guavas are plants in the myrtle, or myrtus, family. The string could also have no significant meaning; however, Myrtus could be read as "MyRTUs." RTU stands for remote terminal unit, a synonym for PLC. In addition, myrtle is Hadassah in Hebrew, and Hadassah was another name for Esther. In the Torah, Esther learned of a Persian plot to assassinate the king, and with this foreknowledge the Jews led a preemptive strike against the Persians to prevent the assassination (ibid). Symantec Corporation, however, cautions against drawing attribution conclusions, given that attackers would have a natural desire to implicate other parties.
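The date reading described above is a trivial transformation, shown here as a minimal sketch; the variable names are mine, and as the narrative stresses, the interpretation itself remains speculative.

```python
from datetime import datetime

# Number string reported by analysts in a Stuxnet registry key
# (Falliere, et al. 2011); it may equally be meaningless noise.
registry_value = "19790509"

# Read the digits as a YYYYMMDD date -- one possible interpretation.
as_date = datetime.strptime(registry_value, "%Y%m%d").date()
print(as_date)  # 1979-05-09
```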
Additionally, some versions of the worm struck their targets within 12 hours of being written (based on time stamps within the worm's code), indicating that the coders had infiltrated the targeted organizations. To do so, the attackers needed to gather detailed intelligence, as each PLC is uniquely configured. Configuration documents may have been stolen by an insider or even retrieved by an early version of Stuxnet or another malware infection. Once the attackers had the design documents and potential knowledge of the facility's computing environment, they would develop a new version of Stuxnet (ibid). The attackers would also need to set up an experimental environment with the necessary hardware (such as PLCs, modules, and peripherals) in order to test their code. The full cycle may have taken six months and five to ten core developers, as well as numerous other support personnel (ibid). This indicates that the worm required significant resources to write, execute and control once launched, to say nothing of physical access to the facility.
Although the worm did not halt Iran's uranium enrichment program, it did slow the program considerably: analysis suggests that the worm destroyed over 1,000 of Iran's 8,528 centrifuges, or approximately 12 percent of Iran’s total (Albright, et al. 2010; Director General Report 2010). Moreover, while the broad array of self-replication methods may have been necessary to ensure the worm would find targets, Stuxnet also caused noticeable collateral damage by infecting machines well outside the target area. As of September 2010, data from Microsoft and Symantec identified approximately 100,000 infected systems worldwide, with 60 percent of the infected machines in Iran, 18 percent in Indonesia and 8 percent in India, with additional infections detailed in Figure 2 below (Economist 2010; Falliere, et al. 2011). Unfortunately, Stuxnet is now available for global analysis and offers a premier template for modification and attack against other industrial targets: countries may feel justified in launching their own attacks against neighboring states, perhaps even using a modified Stuxnet code (Chen 2010; Broad, et al. 2011; Farwell and Rohozinski 2011).
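The figures above can be restated with simple arithmetic; this sketch only reproduces the reported estimates (Albright, et al. 2010; Falliere, et al. 2011), which are themselves approximate.

```python
# Reported centrifuge losses at Natanz (estimates, not exact counts).
destroyed, total = 1_000, 8_528
print(f"{destroyed / total:.1%} of Iran's centrifuges")  # 11.7%, roughly 12 percent

# Reported geographic shares of the ~100,000 infected systems.
infections = 100_000
for country, share in {"Iran": 0.60, "Indonesia": 0.18, "India": 0.08}.items():
    print(f"{country}: ~{int(infections * share):,} systems")
```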
FIGURE 2: Geographic Distribution of Stuxnet Infections
Source: Falliere, Murchu and Chien 2011, 6
Stuxnet is considered one of the most complex and well-engineered worms ever captured--unparalleled by any previous malware, according to Internet security company Kaspersky Lab--and is the first known malware specifically designed to target real-world infrastructure (Chen 2010; Byers, et al. 2011; Fildes 2011). From this, numerous cyber experts assess that Stuxnet was a state-level attack: even if it was planned and executed by private hackers, these experts argue that only states have the resources to hire such professionals and overwhelm other states (Neild 2010; Weber 2010). For many years, the United States and United Nations have pursued various methods to disrupt Iran's ability to illicitly supply its nuclear programs. In contrast to overt military strikes, there is an appeal to cyber attacks aimed at a centrifuge plant built with illegally obtained foreign equipment and operating in defiance of United Nations Security Council resolutions (Albright, et al. 2010). Even if U.S. and Israeli hands are completely clean, the worm's effects played in their favor at minimal material cost. In regard to diffusion, the highly specific Stuxnet worm's rapid global spread suggests that even a cyber weapon with extraordinary targeting capability cannot be easily controlled once unleashed.
THEORY OF POSITIVE CYBER-SPATIAL DIFFUSION
Based on analysis of the case narratives, I propose a theory of positive cyber-spatial diffusion, which posits that cyber conflict between states A and B will spread to other states C. Both case studies demonstrated diffusion to states C, but through different mechanisms. In the Russia-Georgia case, the conflict first diffused to other states C through the use of ICT systems inside those states (mostly the U.S. and Europe) to launch the attack, and second by drawing state and non-state support for both sides. Publication and distribution of the attack tools permitted individual users to join the attack on Georgia, while state governments and private corporations stepped in to support the Georgian government. This raises a facet of diffusion not found in the conventional conflict literature: diffusion not only to states, but also to non-state actors.
In the U.S./Israel-Iran case, the conflict literally diffused as the worm spread to other states C. But these states C were not purposeful actors: they were not drawn into the conflict, nor were systems outside of Iran damaged (although removing the worm from systems incurs resource and time costs). However, the spread of the worm may reveal Iran's nuclear relationships with other states, as tracing the worm's spread identifies network connections (Montgomery 2005). The Natanz facility is not connected to the Internet, meaning that the worm not only required physical introduction into the system, but also needed someone to move it to an Internet-connected system. In benign form, an individual may have simply connected an infected USB drive to his home computer, allowing the worm to spread without conscious user intent. However, the worm could have also travelled on materials traded between plant workers and outside parties familiar with Iran's nuclear program (ibid).
Generated Component Mechanisms
In addition to the theory, the cases also identify distinct component mechanisms. First, Russia and Georgia are considered enduring rivals: the two states have engaged in more than 18 militarized interstate disputes (MIDs) since the 1991 collapse of the Soviet Union, exceeding Diehl and Goertz's (2000) threshold of six MIDs for an enduring rivalry. Russia also attacked Georgia over open infrastructure that crossed their mutual borders, and launched attacks from international locations as well. The attack was not designed to outright destroy ICT infrastructure; rather, it was used to temporarily disable Georgian ICT systems during the initial ground offensive, when the Georgian government needed them most. Finally, framing cyber conflict diffusion through expansion, third parties then stepped in to support Georgia, resulting in escalation attacks against their own systems outside of the conflict area.
In the Iran case, the attack occurred between verbal rather than enduring rivals, and was designed to destroy physical infrastructure. The attack first had to overcome Natanz's closed-loop infrastructure; however, once installed, the worm spread rapidly, resulting in neutral-state infections far outside the target area, referred to in this paper as pathogen. Diffusion in this case is the spread of the conflict beyond the target, against parties that are unwitting and unknowing participants. Additionally, the Iran case suggests that target contiguity is not necessary as long as the attacker can identify exploitable vulnerabilities. These two cases, then, generate the following theoretical component mechanisms. Future researchers can use these proposed mechanisms as variables for testing my theory, or as foundational descriptions for building new theories. These component mechanisms are not all-encompassing, and future research will ideally generate additional mechanisms as new data become available.
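The simulation-based testing advocated in this paper could begin with a toy model of the pathogen mechanism. The sketch below is purely illustrative: the number of states, the link probability, and the transmission probability are my assumptions for demonstration, not estimates drawn from either case.

```python
import random

def simulate_pathogen_diffusion(n_states=20, p_link=0.3, p_transmit=0.5,
                                steps=5, seed=42):
    """Toy sketch of the pathogen mechanism: a worm released against one
    target state B spreads along ICT links to neutral states C.
    All parameters are illustrative assumptions, not empirical estimates."""
    rng = random.Random(seed)
    # Random directed ICT connectivity among states 0..n-1
    # (state 0 = attacker A, state 1 = target B).
    links = {i: {j for j in range(n_states)
                 if j != i and rng.random() < p_link}
             for i in range(n_states)}
    infected = {1}  # the attack begins at target state B
    for _ in range(steps):
        newly = {j for i in infected for j in links[i]
                 if j not in infected and rng.random() < p_transmit}
        infected |= newly
    # States C: everyone infected besides the original dyad.
    return infected - {0, 1}

collateral = simulate_pathogen_diffusion()
print(f"{len(collateral)} neutral states infected")
```

Even this crude model captures the case's central point: once transmission probability is nonzero, collateral infection of states outside the A-B dyad is the expected outcome, not an anomaly.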