Law in the Internet Society

World War III

-- By CharlotteSkerten - 05 Nov 2017

In the beginning...

The Internet began in the US almost 50 years ago, in response to growing concerns that the Soviet Union could wipe out the US telephone system with a nuclear strike. The US Department of Defense’s Advanced Research Projects Agency developed a computer network using packet switching that would enable government leaders to communicate through interconnected computers on a single network. This decentralized system would allow the US to transmit orders and control its armed forces even if Washington DC were destroyed in a nuclear attack. The network was extended overseas, ultimately evolving into the internet we know today.

In the 25 years since the end of the Cold War, the horrors of Hiroshima, Nagasaki and the Cuban Missile Crisis have been largely forgotten. Most people have become comfortable with (or happily oblivious to) the fact that there are still roughly 15,000 nuclear weapons in the world, many on hair-trigger alert. Despite 122 states voting for a UN nuclear weapon prohibition treaty earlier this year, we seem as unlikely to revert to a nuclear-free world as to one without the internet. We have, in short, “learned to stop worrying and love the bomb”.

But the development of North Korea’s nuclear program, and the US response, have reignited fears of another world war – and this time the relationship between the internet, society and weapons of mass destruction will be dramatically different.

The original threat: nuclear weapons

Many nuclear arsenals are being ‘modernized’, including through increased connectivity to the rest of the war-fighting system. This introduces new vulnerabilities and dangers, including the possibility that nuclear weapons could be sabotaged by state-sponsored or private hackers. In 2010, 50 nuclear-armed Minuteman missiles in Wyoming suddenly went offline and disappeared from their monitors. Communication was re-established remotely an hour later, and the cause was eventually found to be an incorrectly installed circuit card. Losing contact with the silos did not itself put the missiles at risk of remote launch, but the incident exposed how dependent nuclear command and control is on hardware and connections that can fail – and that a sufficiently capable intruder might seek to exploit. The missiles are designed to fire as soon as they receive a short stream of computer code, and they are indifferent to the code’s source.

Nuclear weapons control systems are often ‘air-gapped’ from the open internet. But the Stuxnet attack in Iran nevertheless demonstrates the impact that a sophisticated adversary with detailed knowledge of process control systems can have on critical infrastructure. The Stuxnet malware infected Siemens computers that controlled and monitored the speed of centrifuges at Natanz, periodically driving the rotors outside their safe operating speeds until the machines damaged themselves, while making everything appear normal on the operators’ monitors. Although the computers were air-gapped, the malware spread via infected USB flash drives. The attackers (widely believed to be American- and Israeli-sponsored) first infected computers belonging to companies that did contract work for Natanz, which provided a gateway for infection when their computers became interoperable with those at Natanz.

Because North Korea has far fewer nuclear weapons than the US, and its infrastructure remains largely disconnected from the internet, hacking its nuclear weapons systems would be more difficult. Reflecting this asymmetry in reliance on the internet, North Korea has been ranked first, and the US dead last, in cyber-conflict preparedness. But like all complex technological systems, those designed to govern the use of nuclear weapons are inherently flawed. They are designed, built, installed, maintained and operated by human beings. We lack adequate control over the supply chain for critical nuclear components – hardware and software are often off-the-shelf. And today’s systems must contend with all the other modern tools of cyber warfare, including spyware, malware, worms, bugs, viruses, corrupted firmware, logic bombs and Trojan horses. The possibility of insiders facilitating illicit access to critical computer systems greatly increases these risks.

The new threat: killer robots

Perhaps even more terrifying is the new arms race in lethal autonomous weapons systems (a.k.a. ‘killer robots’), designed to select and attack targets without intervention by a human operator. Killer robots combine statistical analysis of large data sets with algorithms that act on that data – for example, finding data patterns that identify a target and then moving the robot toward it. Because each person active on the internet has become a dense cluster of data points linked to other people’s clusters of data points, the physiology of the net creates the perfect breeding ground for developing killer robot technologies.
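The pattern-finding described above can be made concrete with a toy sketch. This is purely illustrative – the feature names, data and functions are invented here, and no real targeting system works from data this simple – but it shows the basic mechanism: a classifier that assigns a new cluster of data points to whichever labelled cluster it most resembles.

```python
# Toy nearest-centroid classifier: illustrative only, with invented data.
# Each "observation" is a feature vector; each label owns a cluster of them.

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, labeled_clusters):
    """Return the label whose cluster centroid lies closest to `sample`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_clusters,
               key=lambda lbl: dist2(sample, centroid(labeled_clusters[lbl])))

# Hypothetical training data: (speed, heat signature) observations.
clusters = {
    "vehicle": [(30.0, 0.9), (28.0, 0.8)],
    "pedestrian": [(4.0, 0.3), (5.0, 0.35)],
}

print(classify((29.0, 0.85), clusters))  # matches the "vehicle" cluster
```

The danger the essay identifies lies precisely here: once "identify a target" is reduced to a statistical match like this, the decision to attack can be delegated to the same code.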

While robotized soldiers may (or may not) still be some way off, other lethal autonomous weapons are already in use. Samsung’s SGR-A1 sentry gun, which is reportedly capable of firing autonomously, is deployed along the Korean Demilitarized Zone. It is the first of its kind: an autonomous system capable of performing surveillance, voice recognition, tracking and firing with a mounted machine gun or grenade launcher. Prototypes are now available for land, air and sea combat.

The threat of killer robots has been the subject of much disagreement, including between Elon Musk and Mark Zuckerberg. But these weapons undoubtedly violate Asimov’s first law of robotics. Like nuclear weapons, they have real potential to harm innocent people and to destabilize the world. Killer robots are also likely to violate international humanitarian law, especially the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the principle of proportionality, which requires that harm to civilians be proportionate to the military objective. And if decisions are delegated to an autonomous hardware or software system engaged in battle, can anyone be held responsible for resulting injury or death?


We have been told that the next world war will be fought over the internet. But it may still involve Cold War tools like nuclear weapons, with vastly increased risks because of the internet society in which we now live, as well as novel tools such as killer robots that exercise artificial intelligence. The ability to control weapons of mass destruction no longer lies only in the hands of governments: nuclear systems can be compromised, and killer robots created, by civilians. And the purpose of these weapons is the efficient ending of human life.

Fact-checking could have been much tighter. Loss of contact with Minuteman silos does not mean that someone could have remotely fired the missiles. Stuxnet didn't slow down centrifuges, it sped them up until they were self-damaging. Fuel-cycle equipment isn't in itself weaponry.

Nor is it clear why, for example, you say that autonomous weapons will only be used against military targets. I first wrote about this subject a dozen years ago, and even then the political as opposed to military consequences of robot infantry were quite clear.

But I think the primary route to improvement here is to clarify the central idea motivating the essay. Your "conclusion" isn't really a conclusion because the idea from which you began isn't sufficiently explicit. The premises are ones that, by and large, your essay succeeds in making clear: Tools of warfare, large and small, are gaining intelligence as most "things" are gaining intelligence. The risks of accidental warfare, including perhaps nuclear war, have risen (although at the moment most human beings would probably say that it is old-fashioned human frailty, rather than technological change, that is moving the hands of the clock closer to midnight). Autonomous weaponry eliminates important political limits on the use of state violence. On those premises, what is your additional idea that you want to communicate to the reader? Put it at the top of the draft, in a sentence or two, so the reader knows what you want her to understand. Then you can develop the idea in the context of the valuable material you have in this draft, tightened somewhat. That would enable a real conclusion, that is, a re-presentation of your original idea in a form that the reader can take further for herself, under her own intellectual steam.




r3 - 03 Dec 2017 - 19:21:41 - EbenMoglen
All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.