A recent installment of The Inquiry asked “Can we teach robots ethics?” The discussion in the episode provides some really interesting food for thought. Anyway, it reminded me that, for a while, I’ve wanted to write about something that troubles me in discussions of AI, and particularly military uses of AI: the idea that coding ethics is either possible or desirable.
This is a fairly quick stab at the topic, and is mainly framed around Arkin’s Governing Lethal Behavior from 2008. It’s in no way intended to be an exhaustive treatment of the topic – or indeed, of Arkin’s work… But hopefully, this kind of criticism can generate useful further discussion.
This is Part One. Part Two to follow.
Ronald Arkin’s “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture” (Arkin, 2008) is widely regarded as a keystone text in the field of ethics and autonomous military (non)lethal technology…