
Machine Ethics is the Future

March 11, 2011

Posted by Sean Welsh in ethics.

I have been tinkering with the idea of resuscitating my old Master’s Thesis on Machine Ethics. I started it in 2006 but dropped out before my topic confirmation hearing for various reasons: politics, mania and relationship collapse to name but three. Alas, the laptop on which I downloaded all the research papers when I was still working at a certain University has failed (dead hard drive as far as I can tell). I was not too worried about my drafts because the main reason I dropped the topic was that my drafts were not really making any sense. So I was hitting Google Scholar looking for papers and found a pile of them at a blog called Common Sense Atheism with links to pdfs. (Thanks Luke!)

Most of the authors of the articles are people I met at the AAAI Symposium on Machine Ethics that was held in Washington DC in the Fall of 2005.

The heart of the Machine Ethics problem centres on a series of questions:

  • What makes a decision moral?
  • What data is involved in such a decision?
  • What processes are followed?
  • Can these processes be automated?
  • Can they be embedded in an autonomous system (e.g a robot)?
  • Should they be?

I have been pondering these questions on and off for the past five years and now feel they are coherent enough to be put out into the blogosphere.

What I am looking to do is model moral decisions in software. Forcing ethics into software requires you to be very clear about what you are doing, as the sketch below tries to show.
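
To make that concrete, here is a minimal sketch (in Python, with names entirely of my own invention) of what "a moral decision in software" might look like at the most abstract level: a choice among candidate actions, scored by some pluggable ethical evaluation function. This is the shape of the problem, not a real implementation.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    # Hypothetical sketch: a moral decision is a choice among candidate
    # actions, scored by whatever ethical theory is supplied as a function.

    @dataclass
    class Action:
        name: str
        # Features an ethical theory might care about (pain caused,
        # promises broken, lives saved...). Purely illustrative.
        consequences: Dict[str, float] = field(default_factory=dict)

    def decide(actions: List[Action],
               evaluate: Callable[[Action], float]) -> Action:
        """Return whichever action the supplied theory scores highest."""
        return max(actions, key=evaluate)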

My MacBook Pro has no common sense, no altruism, no empathy, no education, no consciousness even; its perception is feeble (mouse clicks and keyboard strokes), and yet I am expecting it to host the Ultimate Ethical Algorithm that Will Save the Planet, Deliver World Peace, Foster Universal Siblinghood, End Oppression of All Species et cetera, et cetera…

What I am saying is that a computer really requires you to spell out (in software code) exactly what you are doing. You cannot gloss over vague bits like you can with human beings. You cannot appeal to “intuition” – whatever that is. The code forces clarity upon your philosophical thinking.

This, I believe, is a Good Thing.

However, the Quest for the Ultimate Ethical Algorithm that Will Save the Planet, Deliver World Peace, Foster Universal Siblinghood, End Oppression of All Species et cetera, et cetera… has a few hurdles to get over.

Hurdle 1. There is no widespread agreement, even amongst moral philosophers (or rather, especially not amongst moral philosophers), as to what constitutes “the correct ethical theory”.

Hurdle 2. If, through some stroke of genius, you do come up with “the correct ethical theory”, it may or may not be programmable. Some ethical theories are obviously more computable than others. Utilitarianism, for example, is obviously computable and has been since 1789, when Jeremy Bentham published An Introduction to the Principles of Morals and Legislation, which lays out the variables required (the Felicific Calculus) ready to be plugged into a program, well in advance of the invention of computing hardware and software.
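
For illustration, here is one naive way the Felicific Calculus might be coded up. Bentham names the variables (intensity, duration, certainty, propinquity, fecundity, purity, extent) but prescribes no exact arithmetic, so the combining formula below is my own assumption, not his.

    from dataclasses import dataclass

    # Bentham's seven circumstances of a pleasure. The combining formula
    # in hedonic_value is an illustrative assumption; Bentham lists the
    # variables but gives no formula.

    @dataclass
    class Pleasure:
        intensity: float    # how strong
        duration: float     # how long it lasts
        certainty: float    # probability it occurs (0..1)
        propinquity: float  # nearness in time (0..1, 1 = immediate)
        fecundity: float    # chance of further pleasures following (0..1)
        purity: float       # chance it is NOT followed by pain (0..1)
        extent: int         # number of people affected

    def hedonic_value(p: Pleasure) -> float:
        base = p.intensity * p.duration * p.certainty * p.propinquity
        return base * (1.0 + p.fecundity) * p.purity * p.extent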

Virtue Theory, by contrast, is not obviously computable. A utilitarian moral calculation is pretty tractable with standard decision theory. Virtue theory is more nebulous; you would need consciousness installed on your MacBook to get it to run. This, I suspect, is not on Apple’s short-to-medium-term agenda. (Note to Self: register the iThink trademark…)
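
The "standard decision theory" reading of the utilitarian calculation is just expected-utility maximisation: treat each act as a lottery over outcomes and pick the act with the highest expected utility. A toy example, with numbers made up for illustration:

    # Each act is a lottery: a list of (probability, utility) pairs.
    def expected_utility(lottery):
        return sum(p * u for p, u in lottery)

    acts = {
        "tell_truth": [(0.9, 10.0), (0.1, -50.0)],   # EU = 4.0
        "stay_silent": [(1.0, 2.0)],                 # EU = 2.0
    }
    print(max(acts, key=lambda a: expected_utility(acts[a])))  # tell_truth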

But even if “the correct ethical theory” turns out to be computable, you then have to deal with …

Hurdle 3. The big scary one…

The Future doesn’t need us… otherwise known as the Terminator Problem. In a nutshell, this hurdle says that anyone building thinking machines is betraying humanity, because these things will inevitably get smarter than us and terminate us. So people often argue that such machines should not be built at all: robots need to be pretty dumb and obviously under human control.

Personally, I think this is paranoid. Most people who dream up these plotlines have vivid imaginations and would not know a web service from a Java class…

Anyone who can program knows that you build machines to be pretty darn predictable and to fail gracefully, so I do not regard the Terminator Problem as a particularly scary problem. It is little more than a sign of human insecurity and fear of the unknown.
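
What "fail gracefully" means in code is something like the following: wrap the decision routine so that any unhandled fault degrades to a known-safe default rather than arbitrary behaviour. The names here are mine and purely illustrative.

    # Fail closed: if the decision routine throws, or produces no
    # decision at all, fall back to a known-safe default action.

    SAFE_ACTION = "stop_and_wait_for_human"

    def safe_decide(decide, situation):
        try:
            action = decide(situation)
        except Exception:
            return SAFE_ACTION  # a fault should never mean arbitrary behaviour
        return action if action is not None else SAFE_ACTION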

When it comes to programming, I would say bluntly that about 99% of the human population has no clue.

It is far more likely that when the tech gets to that level humans will be queuing up to migrate their fragile consciousnesses and perishable identities into these robust, resilient and relentless triumphs of engineering.

Put another way, I am not scared of the Terminator. I am going to be first in line to migrate into the Terminators. By the time these things get released, I suspect they might have more sensible names. You know, like cars or Apple products or something.

iTerminate (tm and patent pending) 🙂
