The politics of international chemical weapon justice: The case of Syria, 2011–2017

There has been near-universal condemnation of the use of chemical weapons in the Syrian conflict. The international community has nevertheless struggled to make progress on holding the perpetrators to account. This article reviews developments at the international level in terms of Syrian chemical weapon justice between 2011 and 2017. It argues that there have been substantive disagreements between states on the rationale and means of justice in the Syrian case. It also argues that international initiatives have been tightly intertwined with developments in chemical disarmament and conflict resolution processes as well as the broader war. The article describes progress and challenges to chemical weapon justice in a number of distinct formal international mechanisms during the period studied. The analysis concludes by contextualizing international responses—including the U.S. Tomahawk strikes against a Syrian airbase—to the Khan Shaykhun chemical attack of April 2017.
KEYWORDS: Civil war, peacebuilding, arms control and disarmament, United Nations, international regimes

Contemporary Security Policy
Contemporary Security Policy


Automatic Dread

AI war

In this piece I review the rather directly titled monograph ‘Why the United States Must Adopt Lethal Autonomous Weapon Systems’, written by Major John W. Brock II. The book is timely, vivid, concise, and powerfully written, which makes it a useful work to critique. In my response, I take issue with some of the pessimistic working assumptions in the work. Specifically, I suggest that it would be naïve to completely discount the prospect of effective suppression, prohibition, and stigma norms emerging around Lethal Autonomous Weapon Systems (LAWS) at the global level. I first present a summary of the book’s central argument, before outlining my own thoughts.

Argument Summary (as presented in the book)

1) Artificial Intelligence and Robots are game changers

  • Lethal Autonomous Weapon Systems (LAWS) will bring about the third revolution in military affairs and radically transform broader society
  • If the US does not develop LAWS, it will be at a strategic disadvantage

2) East and West have differing views on Artificial Intelligence (AI) technology

  • Eastern countries see AI as an economic saviour which can improve their society
  • Western culture treats AI with ‘paranoia, anxiety and scepticism’
  • This is causing the West to lose the global robotic arms race and lose economic dominance
  • Many countries, including Russia and China, are now developing LAWS
  • However, differing cultural perspectives are creating divergence between those willing and unwilling to adopt the new technology
  • Currently, the US has a self-imposed ban on such systems
  • However, the US will eventually engage with such systems regardless of current moral and ethical objections
  • If the US falls too far behind its adversaries it may never be able to catch up

3) The US is losing its military edge, due primarily to its over-reliance on, and investment in, traditional military capabilities

  • The US military is trapped in its human-centric way of thinking, which is currently preventing the US from maintaining its technological edge and from defeating future adversaries
  • While the US is still investing heavily in military personnel, Russia, China, and other Eastern states have sought to replace costly human soldiers

4) The US must adopt a new Lethal Autonomous Weapon System-focussed approach

  • The US must lift its own self-imposed ban on LAWS
  • The US must train soldiers to improve the effectiveness of AI technology, rather than investing in augmenting human capacities; this will help the US keep its edge

Initial thoughts…

The central thrust of this book is that of a warning. It highlights the risk of technological surprise to the US from determined adversaries. The book emphasises the idea that a large-scale sub-nuclear war between the US and China is inevitable, and argues that LAWS will be essential to winning it. A few years ago, we saw similar arguments accompany US Defence investment in human augmentation technologies, a policy which this book criticises as being too conservative. In this sense, it is worth remembering that the work comes at a time in which the Third Offset Strategy (which informed priorities in the current model of US defence technology investment) is being re-evaluated. It is clear, then, that this work will be of value to those seeking military funding in areas relevant to LAWS.

The book comes at a very specific moment in the history of LAWS. On the one hand, the march of technological progress (specifically in the military sector) appears inevitable, in at least some forms. However, boundary work on the legality and ethics of AI technology is still evolving; it is not yet clear in what respects LAWS will come to be ostracised or normalised as part of warfare. What is clear, however, is that such boundary work will not be centred on the type of long-standing categorical prohibitions which exist around chemical and biological weapons, but will instead emerge with reference to broader humanitarian principles. It will involve the playing out of a number of processes domestically in key states, as well as at the international level.

As Richard Price observed in his study of the chemical weapon norm, revolutionary military technologies always face initial resistance. Sometimes long-standing stigmas and controls emerge, and sometimes they do not; this is not predetermined, but is a product of the interaction of political struggle, happenstance, and broader historical factors. The normative landscape around these weapons, as with other weapon systems, will continue to evolve. This will involve pitched battles around different forms of system and different forms of use. As it currently stands, it is clear that campaigns in the West have been successful in putting the burden of evidence on Western governments, which must demonstrate the efficacy of certain types of weapon before they are developed and employed.

It is apparent that, while analogies might be drawn with the historic emergence of ethical and legal frameworks directed at earlier emergent military technologies, LAWS will have a distinct history. One thing which is clear, however, is that campaigns have worked in the past, and that such campaigns do not need to secure a comprehensive and universal prohibition to affect state behaviour.

Thinking beyond the history of disarmament, the argument in Major Brock’s book can also be contextualised within broader norms surrounding the societal assessment of technology in Western states. Central to this area of policy is the idea that advanced societies are dependent on the development and exploitation of new technologies to provide ever-increasing productivity and security, developments which will often, at the same time, raise challenges to social order and security.

In contemporary understandings, technological progress is seen as inevitable, but publics are often framed as irrational, fickle, and squeamish. For this reason, new technologies are conceived of as vulnerable and prone to public backlash. In this context, the oft-stated cautionary tale is that societies might miss out on the potential of a technology due to short-sighted prohibition. Just behind this, though often initially obscured, sits the idea that specific areas of technology often compete with other technological fields for resources. Emerging technologies also often face resistance within militaries, and from other technologists who have invested resources in more established technology.

This all means that the vulnerability of early technology, and the potential that it is irretrievably and unwisely condemned to the dustbin of history, stems not only from the more dramatic threat of draconian regulation, but from flighty and fickle investors, be they the state or actors in the private sphere. This, then, allows the author to sensibly talk about the vulnerability of LAWS in the US. However, the importance of this vulnerability depends on whether we agree with the diagnosis of the international system provided by the author.

Finally, this work can be contextualised within the broader global security environment, with the US cast centre stage as the declining hegemon. Intrinsic to this perspective is the idea that attempts to conserve US global dominance are not only desirable but inevitable, and, further, that the relative power gains made by other states, which are likely (if not guaranteed) to vigorously exploit this area of technology for their own ends, will likely undermine the security of the US.

The essay is written, then, to create and integrate a specific vision of this technology into the US policy-maker and public imagination, at a time in which support is being sought for LAWS investment in the US. I am not arguing that the claims made in the work about either the technology or the broader international system are not shrewd, or that the purpose of the work is misguided or ethically questionable. However, it is clear to me that this type of polemic tends to push some important ideas and considerations out of our field of vision, in moral, analytical, and more practical terms.

Automatic Dread…

Even if the projections for this technology in this work prove to be only partially true, automation will continue to transform a wide range of sectors in society. It will also reflect (if not drive) the contemporary shift in global power relations which is already well under way. However, it is clear to me that the working assumptions about the disruptive potential of technology should be opened up to critical appraisal.

In essence, I agree that these arguments reflect accessible, and perhaps even seductive, understandings and characterisations. However, they do not speak to the more complex socio-economic and political factors in play (both international and domestic) which shape the design and implementation of innovation policy, as well as long-term military planning, investment, and adoption. Some key criticisms are now discussed.

First, it is true that early debates about the prohibition of technologies, as well as line-drawing and labelling in prohibition campaigns, tend to frame bans in binary terms (i.e. prohibited or normalised). In reality, however, prohibitions and norms against use tend to be partial (in terms of scope and observance), relying on logics of denial, dissuasion, and suppression. Furthermore, it is not possible to confidently claim that a specific area of weapon development will (or will not) be insulated from broader moral evaluation and societal control in the longer term, or to predict how this would feed back into internationally embedded norms over time. Indeed, a key thrust of the case for the adoption of LAWS in the US is the idea that other states are not holding back, and so will continue to develop and employ the technology in the long term, something which would make any attempt at control, including self-denial and moral leadership, ineffective. However, ethical relativism on a specific technology, or even on innovation more broadly, does not preclude the possibility of practical collaboration on arms control and disarmament, especially if powers see benefit in keeping technology out of the hands of others. The author argues that China has no interest in this, as reflected in its current investment, security interests, and broader cultural acceptance of autonomous technology. However, even if all of these are accepted as working assumptions, none of these factors would preclude the emergence of international control agreements as a result of overlapping interests, or of trade-offs with other issues at the international level.

Another theme in the work is that AI is a complete game changer in terms of military strategy and the character of war. However, while technology does have significant inertia, it never fully transcends established political-military cultures, even in more centralised authoritarian states. Nor will AI ‘end history’ and transcend the constraining aspects of the broader international system, including but not limited to: threat-response dynamics between dominant and emergent powers, emergent competitor technologies, counter-AI strategies which might undermine its utility (including the ‘surprise of the old’), and evolving economic and social models. This of course needs substantial unpacking, and the strength of this argument is not only contingent, but dependent on the timescales we are thinking in. The key point is that we must be careful about placing specific technological trajectories too centrally in our visions of the future, in strategic or more cultural terms.

Finally, it is worth remembering that it is the role of military planners to consider worst-case scenarios. However, a worry is that such scenario thinking inadvertently creates a reality which could have been avoided. This is not a wide-eyed call to give peace a chance; but it is a call to open our eyes a little wider.

Brett Edwards

Cover image is a screenshot from the Black Mirror TV series, currently available on Netflix.

ICRC – Humanitarian perspectives on the changing face of war

Technology, Terrorism, and Armed Conflict in the 21st Century

This week, we are looking at something a bit different from our usual fare. We are considering an e-briefing from the International Committee of the Red Cross (ICRC), via their journal, the International Review of the Red Cross. The ICRC is the most important NGO in the field of warfare. It is hugely influential in terms of the law of armed conflict, and of state policy towards the humanitarian aspects of war. Here we discuss their piece ‘Humanitarian perspectives on the changing face of war‘ (available here). It considers how war, and its relationship with the victims of war, has changed over the past few centuries. It is an interesting read, and I encourage you to check it out.


Here’s what we thought: 


This briefing paper from the International Review of the Red Cross gives a short summary of the changing nature of warfare. Though the article is brief, and…


UK Signals New Autonomous Weapons Doctrine – But What Has Become of the LAWS Verification Debate?

tom hobson | the dreary mantle

In September 2017, the UK media, and a great deal of the international technology and defence press, announced that the UK had laid out a new doctrine with regard to autonomous weaponry. The Guardian reported, on 10th September, that Britain’s military will “commit to ensuring that drones and other remote weaponry are always under human control, as part of a new doctrine designed to calm concerns about the development of killer robots.”[1]

The UK’s announced position on autonomous weaponry was released as part of the August 2017 “Joint Doctrine Publication 0-30.2 – Unmanned Aircraft Systems.”[2] As well as catching the attention of the relatively mainstream press, the document unsurprisingly provoked renewed discussion of the role for autonomous robots in the military, and the feasibility of a potential, pre-emptive ban.

The press framed the Ministry of Defence’s new doctrine as an almost direct response to the August 2017…


Techno-Utopianism, Coded Ethics and Some Confusion on the Three Laws

tom hobson | the dreary mantle

A recent instalment of The Inquiry asked “Can we teach robots ethics?” The discussion in the episode provides some really interesting food for thought. Anyway, it reminded me that, for a while, I’ve wanted to write about something that troubles me in discussions of AI, and particularly military uses of AI: the idea that coding ethics is either possible or desirable.

This is a fairly quick stab at the topic, and is mainly framed around Arkin’s Governing Lethal Behavior from 2008. It’s in no way intended to be an exhaustive treatment of the topic – or indeed, of Arkin’s work… But hopefully, this kind of criticism can generate useful further discussion.

This is Part One. Part Two to follow.

Ronald Arkin’s “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture” (Arkin, 2008) is widely regarded as a keystone text in the field of ethics and autonomous military (non)lethal technology…


“It was the war, the whole bloody war”

tom hobson | the dreary mantle

I’ve written on this subject before (at considerably more length) but I wanted to offer a couple of reflections on responsibility and the “ethical moment” in war…

I recently rewatched the rather brilliant (and long – oh, so very long) The Cruel Sea. In one of the film’s most iconic scenes, we witness Captain Lockhart racked with guilt, having, he believes, caused the unnecessary deaths of a number of sailors. Finding Lockhart in this state, Ericson tells him:

“No one murdered them. It’s the war, the whole bloody war. We’ve got to do these things and say our prayers at the end.”

The scene is generally seen as depicting a typical mix of tragic stoicism and, perhaps unusually, a consciously anti-war sentiment.

There is something else though – a pathology of absolving responsibility for violences done in wartime. This idea has been explored throughout the history…
