Comments on Prolegomena to any future artificial moral agent – Allen, Varner and Zinser (2000). March 12, 2011. Posted by Sean Welsh in ethics.
Journal of Experimental and Theoretical Artificial Intelligence, 12, 2000, pp 251-261.
As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself and from computational limits to the implementation of such theories. In this paper the ethical disputes are surveyed, the possibility of a “Moral Turing Test” is considered and the computational difficulties accompanying the different types of approach are assessed. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata.
Comments on the paper
The authors begin by briefly stating the case for machine ethics.
Allen et al write:
Robots possessing autonomous capabilities to do things that are useful to humans will also have the capability to do things that are harmful to humans and other sentient beings. How to curb these capabilities for harm is a topic that is beginning to move from the realm of science fiction to the realm of real-world engineering problems. As Picard (1997) puts it: ‘The greater the freedom of the machine, the more it will need moral standards’.
Clearly we need machine ethics. Machine ethics can be defined as the ethics that can be implemented in machines.
Attempts to build an artificial moral agent (AMA) are stymied by two areas of deep disagreement in ethical theory. One is at the level of moral principle: ethicists disagree deeply about what standards moral agents ought to follow.
They are not kidding. Here is a brief selection of ethical theories you might like to think about implementing to control the behaviour of a robot.
- Divine command theory
- Moral relativism
- Natural law theory
- Other forms of consequentialism
- Other forms of deontology
- Moral pluralism
- Virtue theory
- Moral particularism
- Asimov’s Three Laws
It gets worse. There are many variations of the Divine command theory. There are various factions of the Jewish schools, various factions of the Christian schools, various factions of the Muslim schools. There are numerous variations of utilitarianism, Marxism and Buddhism and indeed all the above.
No ethical theory has the epistemological status of, say, Newton’s laws. We can all agree on the scope of application of Newton’s laws and their degree of accuracy for projects on this planet. This cannot be said about even the most popular of the moral theories in the list above.
Allen et al outline a second problem. This is more conceptual. Actually it is ontological.
Apart from the question of what standards a moral agent ought to follow, what does it mean to be a moral agent?
Speaking bluntly, I think the question of ‘being’ should be de-scoped from early implementations of machine ethics. I don’t think it is seriously worth pursuing yet.
I take the view that the focus of machine ethics should be on the discovery and development of ethical decision procedures for machines that can realistically be built in the short to medium term.
The idea of digital being is interesting indeed fascinating but in the short term we will be more concerned with acceptable and efficient behaviour of robots built for specific purposes (e.g. mining, transport, eldercare).
Put this way, machine ethics must learn to crawl before it aspires to govern (or save) the planet. But that is not to deny that there will be a common thread between the ethics involved in crawling and the ethics involved in planetary government.
Allen et al point out that the requirements of your moral theory will impact your technical implementation. (I think a detailed description of the data required and the decision procedure involved in some of the above theories would be a very worthwhile undertaking.)
In Kant and Mill, for example, as Allen et al point out, there are very different moral principles tied to very different conceptions of what a good moral agent is.
In Mill to be good the robot merely has to act so that it tends to promote happiness.
For Kant to be good the robot must go through certain specific cognitive processes. (Only a good will is good.)
I think the challenge for virtue theory is at first glance as high as the Kantian challenge. Exactly how do I go about programming beneficence? What does it mean to assert that good action follows from good character?
Then again, if you have a particular concrete task in mind for your robot, this question is a lot easier to answer in a specific situation than in general.
A taxi robot is good if it takes the shortest practicable route, does not crash, does not get stuck in traffic and uses the minimum amount of energy. A mining robot is good if it extracts ore from the ground and conveys it to a ship or processing plant.
What more do we need to worry about?
So long as the robots stay ‘on mission’ I see little serious difficulty in their programming from an ethical perspective. Though obviously there will be situations that crop up in such domains that will require some ethical decision-making especially as the capacity of the machines for autonomous action increases.
Allen et al go on to discuss some high-level problems with consequentialist and deontological approaches to machine ethics.
There are various problems with the consequentialist approach. There is the computational ‘black hole’ problem. When do you stop evaluating possible consequences?
I am not too concerned about this. In practical terms the decisions of a taxi robot or mining robot will be relatively simple to compute in terms of future consequences. Thinking about abstract generalities will lead you to processing black holes but having a concrete task in mind will mitigate these processing issues.
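The idea of bounding the consequentialist ‘black hole’ with a concrete task can be sketched as a depth-limited lookahead. This is a toy illustration only; the states, actions and utility function are hypothetical placeholders, not anyone’s proposed implementation:

```python
# Toy sketch of bounding the consequentialist "black hole": evaluate
# consequences only to a fixed depth within a concrete task domain.
# All states, actions and utilities here are hypothetical placeholders.

def best_action(state, actions, simulate, utility, depth=2):
    """Return the action with the best outcome, looking ahead at most `depth` steps."""
    def value(s, d):
        if d == 0:
            return utility(s)  # the hard cut-off is what avoids the black hole
        options = [value(simulate(s, a), d - 1) for a in actions(s)]
        return max(options) if options else utility(s)
    return max(actions(state), key=lambda a: value(simulate(state, a), depth - 1))
```

For a taxi or mining robot the action set is small and the relevant horizon short, so even this naive search stays tractable; it is abstract generality that blows the computation up.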
Similar problems exist for deontology. The brute force required to implement the Categorical Imperative appears daunting. Allen et al write:
To determine whether or not a particular action satisfies the categorical imperative, it is necessary for the AMA to recognize the goal of its own action, as well as assess the effects of all other (including human) moral agents’ trying to achieve the same goal by acting on the same maxim. This would require an AMA to be programmed with a robust conception of its own and others’ psychology in order to be able to formulate their reasons for actions, and the capacity for modelling the population-level effects of acting on its maxim, a task that is likely to be several orders of magnitude more complex than weather forecasting (although it is quite possibly a task for which we humans have been equipped by evolution).
Likewise the Golden Rule has its issues.
Similarly, to implement the golden rule, an AMA must have the ability to characterize its own preferences under various hypothetical scenarios involving the effects of others’ actions upon itself. Further, even if it does not require the ability to empathize with others, it must at least have the ability to compute the affective consequences of its actions on others in order to determine whether or not its action is something that it would choose to have others do to itself. And it must do all this while taking into account differences in individual psychology which result in different preferences for different kinds of treatment.
Virtue approaches at first glance have their problems too.
The basic idea underlying virtue ethics is that character is primary over deeds because good character produces good deeds.
But honestly, how could an open source robot lie? Such a machine could be built (and in my view should be built) to be ethically transparent. A robot would be literally shameless. Its internal states could be precisely logged. Its algorithms would be clear.
I am not overly worried about the idea of a moral Turing Test for robots. I would rather that the algorithms controlling their behaviour were open source and that all their decisions were logged in real time to enable review of their function.
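The logging idea can be sketched in a few lines. This is a minimal illustration of what ‘ethically transparent’ might mean in code, assuming a simple rule-based agent; the rule names and record fields are my own invention, not any real robot API:

```python
import time

# A minimal sketch of "ethically transparent" decision-making: every
# decision is recorded with its inputs and the rule that fired, so the
# agent's behaviour can be reviewed after the fact. Hypothetical design.

class TransparentAgent:
    def __init__(self):
        self.log = []  # an auditable trail of every decision taken

    def decide(self, situation, rules):
        """Apply the first matching (name, condition, action) rule and log it."""
        for name, condition, action in rules:
            if condition(situation):
                self.log.append({"time": time.time(), "situation": situation,
                                 "rule": name, "action": action})
                return action
        # No rule matched: log that too, so inaction is also reviewable.
        self.log.append({"time": time.time(), "situation": situation,
                         "rule": None, "action": "no-op"})
        return "no-op"
```

A robot built this way could not conceal why it acted: the log is the shameless, precisely inspectable internal state described above.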
I am wary about bottom-up approaches to machine ethics. Most of the ‘nightmare scenarios’ involving imminent human extinction at the hands of our digital creations are based on the premise of machine learning. The machine becomes superior to us and decides to enslave us (as in The Matrix) or destroy us (as in Terminator) or to implement martial law for our own good (as in the film version of I, Robot).
The idea that a future superintelligence might just want to play a game (as in WarGames) I find far more persuasive.
But at this stage, notions such as machines becoming self-aware and developing malevolence are as remote as building consciousness.
However, projects such as Cog are moving in that direction. Shame is part of moral education in humans. Emotions provide moral knowledge: shame lets you know an action is morally wrong. Yet Deep Blue’s lack of passion makes it a more reliable chess player.
Allen et al conclude as follows:
This is an exciting area where much work, both theoretical and computational, remains to be done. Top-down theoretical approaches are ‘safer’ in that they promise to provide an idealistic standard to govern the actions of AMAs. But there is no consensus about the right moral theory, and the computational complexity involved in implementing any one of these standards may make the approach infeasible. Virtue theory, a top-down modelling approach, suffers from the same kind of problems. Bottom-up modelling approaches initially seem computationally more tractable. But this kind of modelling inherently produces agents that are liable to make mistakes. Also as more sophisticated moral behaviour is required, it may not be possible to avoid the need for more explicit representations of moral theory. It is possible that a hybrid approach might be able to combine the best of each, but the problem of how to mesh different approaches requires further analysis.
I agree it’s exciting. The obvious approach is de-scoping. Let’s define the ethical requirements for specific machines in specific domains, not machines ‘in general’. These will be well documented in human-readable manuals about procedures, safety, best practice and so on. Mining robots should follow the same procedures (morally) as human miners. Taxi robots should obey the same rules as human cabbies.
A hybrid approach can compensate for the lack of agreement in ethical theory. For example, software could take a set of inputs and see if a candidate action both passes the test of the Categorical Imperative and produces happiness. If we can tick Kant’s boxes and Mill’s boxes in a few milliseconds, this might be more productive than worrying about which box is the right box to tick. And then we can see if the machine’s own learnings agree as well.
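The box-ticking idea above can be sketched as a conjunction of tests. The two tests here are crude stand-ins for Kant and Mill, of course, not serious formalizations of either theory, and the `world` object is a hypothetical model of consequences:

```python
# Sketch of the hybrid idea: run a candidate action past several
# ethical "box ticks" and approve it only if all of them pass.
# Both tests are deliberately crude stand-ins, not real formalizations.

def universalizable(action, world):
    """Toy Kantian box: does the maxim survive everyone acting on it?"""
    return world.consistent_if_universalized(action)

def promotes_happiness(action, world):
    """Toy utilitarian box: is net expected happiness non-negative?"""
    return world.expected_happiness(action) >= 0

def hybrid_approval(action, world, tests=(universalizable, promotes_happiness)):
    """Approve an action only when every ethical test ticks its box."""
    return all(test(action, world) for test in tests)
```

The point is architectural: checking several cheap tests in milliseconds sidesteps, for practical purposes, the question of which single theory is correct.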
Allen et al state:
We think that the ultimate objective of building an AMA should be to build a morally praiseworthy agent.
This is fine as an ultimate goal but in the short term it is, I think, so difficult as to inhibit progress.
They sum up:
Systems that lack the capability for knowledge of the effects of their actions cannot be morally praised or blamed for effects of their actions (although they may be ‘blamed’ in the same sense that faulty toasters are blamed for the fires that they cause.) Deep Blue is blameless in this way, knowing nothing of the consequences of beating the world champion, and therefore not morally responsible if its play provokes a psychological crisis in its human opponent. Essential to building a morally praiseworthy agent is the task of giving it enough intelligence to assess the effects of its actions upon sentient beings, and to use those assessments to make appropriate choices. The capacity for making such assessments is critical if sophisticated autonomous agents are to be prevented from ignorantly harming people and other sentient beings. It may be the most important task faced by developers of artificially intelligent automata.
I see the design of a morally praiseworthy agent as more problematic. This is akin to building a conscious, knowing thing. This is far more difficult than simply defining the rules well-behaved robots should follow in their functional domains.
Thus I argue for limited scope in machine ethics. Or at least a clear understanding of the feature set being asked for.
As a postscript, I like the term Artificial Moral Agent (AMA). I think it is useful to contrast it with the term Human Moral Agent (HMA).
Machine Ethics is the Future. March 11, 2011. Posted by Sean Welsh in ethics.
I have been tinkering with the idea of resuscitating my old Master’s Thesis on Machine Ethics. I started it in 2006 but dropped out before my topic confirmation hearing for various reasons: politics, mania and relationship collapse to name but three. Alas, the laptop on which I downloaded all the research papers when I was still working at a certain University has failed (dead hard drive as far as I can tell). I was not too worried about my drafts because the main reason I dropped the topic was that my drafts were not really making any sense. So I was hitting Google Scholar looking for papers and found a pile of them at a blog called Common Sense Atheism with links to pdfs. (Thanks Luke!)
Most of the authors of the articles are people I met at the AAAI Symposium on Machine Ethics that was held in Washington DC in the Fall of 2005.
The heart of the Machine Ethics problem centres on a series of questions:
- What makes a decision moral?
- What data is involved in such a decision?
- What processes are followed?
- Can these processes be automated?
- Can they be embedded in an autonomous system (e.g. a robot)?
- Should they be?
I have been pondering these questions on and off for the past 5 years and am now feeling like they are coherent enough to be put out into the blogosphere.
What I am looking to do is model moral decisions in software. Forcing ethics into software requires you to be very clear about what you are doing.
My MacBook Pro has no common sense, no altruism, no empathy, no education, no consciousness even; its perception is feeble (mouse clicks and keyboard strokes), and yet I am expecting it to host the Ultimate Ethical Algorithm that Will Save the Planet, Deliver World Peace, Foster Universal Siblinghood, End Oppression of All Species et cetera, et cetera…
What I am saying is that a computer really requires you to spell out (in software code) exactly what you are doing. You cannot gloss over vague bits like you can with human beings. You cannot appeal to “intuition” – whatever that is. The code forces clarity upon your philosophical thinking.
This, I believe, is a Good Thing.
However the Quest to get the Ultimate Ethical Algorithm that Will Save the Planet, Deliver World Peace, Foster Universal Siblinghood, End Oppression of All Species et cetera, et cetera… has a few hurdles to get over.
Hurdle 1. There is no widespread agreement, even amongst moral philosophers, or rather, especially not amongst moral philosophers as to what constitutes “the correct ethical theory”.
Hurdle 2. If you do, through some stroke of genius, come up with “the correct ethical theory”, it may or may not be programmable. Some ethical theories are obviously more computable than others. Utilitarianism, for example, is obviously computable and has been since 1789, when Jeremy Bentham published The Principles of Morals and Legislation, which lays out the required variables, ready to be plugged into a program (the Felicific Calculus), well in advance of the invention of computing hardware and software.
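To make the point concrete, Bentham’s seven dimensions of a pleasure or pain (intensity, duration, certainty, propinquity, fecundity, purity, extent) really do plug into a program. The scoring formula below is my own crude simplification for illustration, not Bentham’s exact procedure:

```python
# A sketch of the Felicific Calculus: score each pleasure (positive) or
# pain (negative) episode over Bentham's seven dimensions, then pick the
# action whose episodes sum highest. The weighting is a crude
# simplification of my own, not Bentham's exact procedure.

def felicific_score(intensity, duration, certainty, propinquity,
                    fecundity, purity, extent):
    """Score one episode; use negative intensity for pains."""
    core = intensity * duration * certainty * propinquity
    tendency = core * (1 + fecundity) * purity  # fecundity breeds more of its kind
    return tendency * extent  # scale by the number of persons affected

def choose(actions):
    """Pick the action whose summed episode scores are highest."""
    return max(actions, key=lambda a: sum(felicific_score(*e) for e in actions[a]))
```

That a 1789 moral theory decomposes this cleanly into variables and arithmetic is exactly why utilitarianism looks computable where virtue theory does not.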
Virtue Theory, by contrast, is not obviously computable. A utilitarian moral calculation is pretty tractable with standard decision theory. Virtue theory is more nebulous; you would need consciousness installed on your MacBook to get it to run. This, I suspect, is not on Apple’s short to medium term agenda. (Note to Self: register the iThink trademark…)
But even if “the correct ethical theory” turns out to be computable, you then have to deal with …
Hurdle 3. The big scary one…
The Future doesn’t need us… otherwise known as the Terminator Problem. In a nutshell, this hurdle says that anyone building thinking machines is betraying humanity, because these things will inevitably get smarter than us and terminate us. So people often argue that such machines should not be built at all. Robots need to be pretty dumb and obviously under human control.
Personally, I think this is paranoid. Most people who dream up these plotlines have vivid imaginations and would not know a web service from a java class…
Anyone who can program knows that you build machines that are pretty darn predictable and that can fail gracefully so I don’t regard the Terminator Problem as a particularly scary problem. It is little more than a sign of human insecurity and fear of the unknown.
When it comes to programming, I would say bluntly that about 99% of the human population has no clue.
It is far more likely that when the tech gets to that level humans will be queuing up to migrate their fragile consciousnesses and perishable identities into these robust, resilient and relentless triumphs of engineering.
Put another way, I am not scared of the Terminator. I am going to be first in line to migrate into the Terminators. By the time these things get released, I suspect they might have more sensible names. You know, like cars or Apple products or something.
iTerminate (tm and patent pending) :-)
Waiting for the Three Wise Men… August 27, 2010. Posted by Sean Welsh in politics.
A week after the Federal Election the outcome remains uncertain. The fate of the nation lies in the hands of the Three Wise Men: Katter, Windsor and Oakeshott. There is also Wilkie, an independent independent who has won Denison, and Crook, the WA National who defeated Wilson Tuckey in O’Connor and who is being decidedly coy about his support for the Coalition. However, Wilkie and Crook are political neophytes (quite unwise men in their utterances to date) and are simply not playing the cool hand that the far more experienced Three Wise Men are. The Green likewise has been unwise, perhaps blowing his chance to be Speaker (and getting a Minister’s staffing resources) by siding with Gillard too soon.
Then again, the numbers being what they are, and stability and process reform being things the 3 Wise Men want as much as the Greens do, both indicate that the Green will probably become Speaker and only vote with the Government to break deadlocks.
As to which way they will go, it seems to me more likely that the 3 will back Abbott over Gillard. Katter was a minister in the Bjelke-Petersen government and wants a dam built in his electorate. Yes, a dam. Remember those? Windsor chose to back the Coalition over Labor when he was a State MP enabling Greiner to form a government. Oakeshott comes across as very idealistic and keen on reform of parliamentary process and culture.
So they have made 7 requests of the caretaker government, and after some argy-bargy Abbott has agreed to let the 3 Wise Men get briefings from public service officials.
The serious negotiations will start only when the postals are counted and the seats are declared. The 3 are all unassailable in their electorates. They are all experienced and capable men. It seems their ambitions (mostly to reform parliamentary process) are laudable. So I am content enough to wait patiently for them to decide who to support.
However, the pro-Labor vote in their electorates is 8, 13 and 20 per cent. Thus propping up Gillard seems to me relatively unlikely.
What will probably happen is that the Nationals will back their demands for more regional funding, the Coalition will get the nod, and the guys in the metropolitan areas will pay the bush a lot more attention for the next three years.
And why not?
Abbott will not be able to call an early election as the 3 Wise Men will tell tout le monde that they will support the formation of a Labor govt.
So I imagine the Green or perhaps Wilkie will become an independent Speaker accepting a mandate to reform parliamentary process (a cause much beloved of minors and independents) giving Oakeshott the buffer he wants to permit more stable government. One or more of the 3 might get a Ministry to seal the deal or they might choose to remain outside the government. It would be truly audacious to lure Kevin Rudd across with an offer of the Foreign Ministry but I think having a Green or Independent Speaker more likely – purely for stable majority buffer purposes.
The Mining tax will be scrapped and some “Royalties for Regions” type funding initiative will come to pass. The NBN might morph into an RBN (Regional Broadband Network).
Who knows? Bob Katter might even get the Hell’s Gate Dam built…
That would be a thing. Though it is a State matter, there is the considerable power of tied Commonwealth funding to encourage these things.
Abbott’s anti-Wild Rivers Private Member’s Bill will probably become a government bill. Or then again with a Green speaker it might be discreetly dropped. A necessary quid pro quo…
We live in interesting times…
Into the Wild. May 30, 2010. Posted by Sean Welsh in reviews.
Picked this up on DVD the other day. Fascinating film, apparently based on a true story. Though I found some of the choices the protagonist made frankly unforgivable. OK, he had issues with his parents, but not even sending his sister a note after bailing out of college, giving away his law school money and dropping out to live a hobo life on the road was, to me, beyond the pale. Still, the story carried me along, mainly because of the protagonist’s determination to suck the marrow out of life and his uncompromising philosophical stance. It is quite a tragedy, but eloquently told with great visual style. As it turned out, life sucked the marrow out of the protagonist, Alexander Supertramp (real name Chris McCandless), but the ending, while tragic, is poignant and beautiful. Overall, a fascinating tale. I’ll be buying the book.
Harry Brown. May 23, 2010. Posted by Sean Welsh in reviews.
A very gritty picture. Probably the grittiest thing Michael Caine has done since the 70s classic Get Carter. Very good and very realistic. The film pitilessly represents the drabness of life in a London housing estate. A lot of these places have severe social problems. I enjoyed the film. Caine’s character is played superbly well as you would expect. I can’t ever remember seeing Michael Caine in a dud film. Everything he does is class. Supporting cast are all good. There’s nothing quite like a good revenge flick.
Definitely well worth seeing.
The Foundations of Western Civilization. April 13, 2010. Posted by Sean Welsh in ethics.
According to the Herald the Institute of Public Affairs – a “conservative think tank” – recently held a dinner to launch The Foundations of Western Civilisation Program. http://www.smh.com.au/opinion/politics/how-the-west-was-lost-a-lack-of-faith-in-civilisation-20100411-s0ow.html.
Apparently John Howard spoke, as did Cardinal Pell. Cardinal Pell’s speech has been put online by his media people and it is a very interesting, well-researched speech – as you would expect from a Doctor of the Church. http://www.sydney.catholic.org.au/news/latest_news/2010/201041_857.shtml One might not agree with everything the Cardinal says, but you can be sure that it has a clear moral point, and is well argued and coherent even if you dislike the moral point and the argument!
He starts with a story about a Chinese academic and his colleagues’ search for the secret of Western dominance of the modern world.
In 2002 a group of tourists from the United States visited the Chinese Academy of Social Sciences in Beijing to hear a talk by a Chinese academic who prefers to remain anonymous. Speaking in the plural for unnamed fellow thinkers, he described their search for what accounted for the pre-eminence, the success of the West all over the world. Their studies ranged widely. Originally they thought the main reason was more powerful guns; then it was Western political systems, before considering the claims of the Western economic system.
Finally, and I quote: “in the past twenty years, we have realized that the heart of your culture is your religion: Christianity. . . . The Christian moral foundation of social and cultural life was what made possible the emergence of capitalism and then the transition to democratic politics. We don’t have any doubt about this.”
Interesting observation. Though personally, I am comfortable with the Guns, Germs and Steel explanation of Western dominance. (It is about better guns and more diverse and competitive political and economic systems.) And no doubt the Protestant Work Ethic and the Spirit of Capitalism has a lot to do with it.
Another quote from a Chinese writer in an article comparing Market Economies with Churches and Market Economies without Churches.
“These days Chinese people do not believe in anything. They don’t believe in God, they don’t believe in the devil, they don’t believe in providence, they don’t believe in the last judgement, to say nothing about heaven. A person who believes in nothing can only believe in himself. And self-belief implies that anything is possible – what do lies, cheating, harm and swindling matter?”
The lack of religious belief, it seems, is being blamed for the huge problem of institutional corruption in China. I am not sure I buy that. Indonesia has similar problems with corruption, and there are mosques all over the Archipelago. I daresay one could point to several Christian jurisdictions in Africa and Latin America with massive corruption issues as well.
Pell goes on to give atheists a bit of a kick. He thinks ‘plague’ is the correct collective noun for a group of atheists. A gaggle of geese, a plague of atheists. I like that… though I don’t agree with it :-) Quite a suitable barb from a Cardinal though.
Later in the article Pell calls for undumbed-down study of philosophy, history and English literature. Hear, hear!
Very interesting article. Worth a read.
Alas, there is at this time, no further information on the program available on the Institute’s web site which is somewhat disappointing. No doubt it will be made available in due course.
Solar. April 12, 2010. Posted by Sean Welsh in reviews.
Picked up Ian McEwan’s latest and read most of it while on a reef trip off Port Douglas of all places. It’s quite an interesting read. I saw the writer on TV discussing climate change, and the story does feature some snappy lines on that subject. The protagonist is quite amoral, or rather morally inconsistent. One would expect nothing less from Ian McEwan, though he lacks the concentrated evil of some of the characters in McEwan’s early books. Still, it was a realistic depiction of a fairly unappealing character. Overall, it was an entertaining story, though I felt it was not quite as good as On Chesil Beach or Atonement. Just not moving at the end. It kind of splutters out into an unresolved mess.
Even so, worth the read.
A Single Man. March 28, 2010. Posted by Sean Welsh in reviews.
A Single Man was a little surprising at first what with Colin Firth of all people playing a gay lead. But once I got over that and into the story I was captivated by its haunting beauty. In essence, the film is a protracted elegy of lost love told with restraint, delicacy, beauty and overwhelming poignancy. Quite the best love story I have seen in years. Julianne Moore is superb in her short part. The film is beautifully shot and the soundtrack is terrific.
Highly recommended. Bring a handkerchief.
Tales of Ordinary Madness. March 21, 2010. Posted by Sean Welsh in reviews.
I picked this up on DVD the other day. I was seriously impressed by the opening which was very cool in a beat poet way.
Style is the answer to everything.
A fresh way to approach a dull or dangerous thing
To do a dull thing with style is preferable to doing a dangerous thing without it
To do a dangerous thing with style is what I call art
Bullfighting can be an art
Boxing can be an art
Loving can be an art
Opening a can of sardines can be an art
Not many have style
Not many can keep style
I have seen dogs with more style than men,
although not many dogs have style.
Cats have it with abundance.
When Hemingway put his brains to the wall with a shotgun,
that was style.
Or sometimes people give you style
Joan of Arc had style
John the Baptist
I have met men in jail with style.
I have met more men in jail with style than men out of jail.
Style is the difference, a way of doing, a way of being done.
Six herons standing quietly in a pool of water,
or you, naked, walking out of the bathroom without seeing me.
The rest of the movie was a tale of a drunken poet being drunk and tarting around. Quite an unattractive person. But the poems, when you get to them, are very good. Overall, the film was an interesting portrait of an artist’s life, but the opening I thought was excellent. Very cool.
The Hurt Locker. March 16, 2010. Posted by Sean Welsh in reviews.
This was a very fine picture. Compelling tension from the start, sustained all the way through the film. Very graphic and very realistic, it was a worthy winner of Best Picture and Best Director. I am amazed the film was directed by a woman, as it was such an utterly male picture. Barely a hint of romance in the film. Instead it focuses on the addiction of war. Nice detail with the bomb disposal tech. The opening credits say ‘war is a drug’. I can see why. It’s kind of like politics, only more dangerous.