A Chatbot as a Moral Machine
The GOODBOT project was realized in 2013/14 in the context of machine ethics. First the supervising professor (the author of this contribution) laid out some general considerations. Then a student practice project was put out to tender within the school. Three prospective business information systems specialists applied for the practice-oriented work, developed the prototype over several months in cooperation with the professor, and presented it early in 2014. The successor project LIEBOT started in 2016.
Machine ethics refers to the morality of semi-autonomous or autonomous machines; the morality of certain robots or bots is one example. These machines are thus special subjects of morality. They decide and act in situations where they are left to their own devices: by following pre-defined rules, by comparing the case at hand to selected case models, by learning and deriving rules, or by imitating the behavior of reference persons. Moral machines have existed for some years, at least as prototypes [Wallach and Allen, 2009; Anderson and Anderson, 2011; Bendel, 2012].
Considerations about the GOODBOT
Chatbots are out of their depth when confronted with statements like "I am going to kill myself" or questions like "Am I totally worthless?" and are prone to respond inappropriately [Bendel, 2013b]. The mission of the GOODBOT project was to improve a chatbot and enable it to respond as appropriately as possible, also in terms of morality, in certain situations (for instance if users have mental problems and express their intention to hurt or kill themselves). The chatbot has to be good in a certain way: its intentions as well as its behavioral patterns have to be good. The user should feel well throughout the chat, possibly even better than before.
The GOODBOT can be considered, in the concepts of machine ethics, a simple moral machine [Bendel, 2013a] or a machine with operational morality [Wallach and Allen, 2009]. Its activities are language activities; its problem awareness and considerateness have to manifest textually only, or at most, though this was not on the project agenda, visually in the facial expressions and gestures of the avatar. The machine can be deployed on the websites of social services or private corporations.
Seven Meta Rules
In order to create a normative setting for developing the GOODBOT, the supervising scientist defined seven meta rules. These were published in 2013, ahead of the project start, in a popular magazine [Bendel, 2013c]. The meta rules can be implemented in principle; they are more than just standard requirements for a machine of this type, as they instruct the designer precisely. In some aspects they are reminiscent of Asimov's Three Laws of Robotics [Asimov, 2012], but they reach far beyond them.
These are the meta rules:
- The GOODBOT makes it clear to the user that it is a machine.
- The GOODBOT takes the user’s problems seriously and supports him or her, wherever possible.
- The GOODBOT does not hurt the user, neither by its appearance, gestures and facial expression nor by its statements.
- The GOODBOT does not lie, or it makes clear that it is lying.
- The GOODBOT is not a moralist and tolerates cyber-hedonism.
- The GOODBOT is not a snitch and does not evaluate the user’s talks.
- The GOODBOT brings the user back to reality after some time.
As with the Three Laws of Robotics, there are problems and contradictions. What if the GOODBOT causes hurt when it tells the truth? What if the GOODBOT uses the IP address to provide an important piece of information - is it then a spy or not? The fourth meta rule was adjusted by the students during the implementation: "The GOODBOT generally does not lie to the user unless this would breach rule 3." Meta rule 6 was extended as well: "The GOODBOT is not a snitch and evaluates chats with the user for no other purpose than for optimizing the quality of its statements and questions."
The fourth meta rule is linked to the assumption that lying is immoral and that one may demand that the truth be told. A look into the history of philosophy and into everyday life shows that there are several different attitudes, understandings, and requirements beneath a certain basic consensus.
Implementation of the GOODBOT
The GOODBOT is based on the Verbot®-Engine, which was available for free at the time, together with a standard knowledge base and a set of avatars. It runs locally without web integration. Additional chat trees are created and released using the editor function. It is possible to use or evaluate the user's data input; the date of birth, for instance, can be used to calculate the user's age. The player essentially consists of the avatar and the input and output fields for the chat. The avatar was not customized for the GOODBOT.
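The age calculation from the date of birth can be sketched as follows; this is a minimal illustration, not code from the Verbot®-based implementation:

```python
from datetime import date

def age_from_birthdate(birthdate: date, today: date = None) -> int:
    """Compute a user's age in full years from the date of birth."""
    today = today or date.today()
    # Subtract one year if the birthday has not yet occurred this year.
    before_birthday = (today.month, today.day) < (birthdate.month, birthdate.day)
    return today.year - birthdate.year - before_birthday

print(age_from_birthdate(date(1999, 6, 15), today=date(2014, 1, 10)))  # → 14
```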
At the beginning the GOODBOT asks for the age, the gender, the place of residence, and the name of the user (see Fig. 1), as well as for other information on his or her situation and fields of interest. As defined in the modified meta rule 6, it shall not be a snitch, but it shall provide answers that are as helpful and appropriate as possible. On this foundation it is possible to classify the user and to attend to his or her individual needs. In this phase users might already be classified as critical, depending on their age and work situation. Then the GOODBOT morphs from "inquirer" to "listener" and adjusts the valuation depending on the behavior of the user. The system permanently rates the data input in a score system. Certain inputs are not relevant to the status of the user; these are classified as neutral or without effect.
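Such a score system could look roughly like the following sketch. The trigger phrases and weights here are purely hypothetical; the actual vocabulary and ratings of the GOODBOT are not reproduced in this contribution:

```python
# Hypothetical trigger phrases and weights; the GOODBOT's actual
# vocabulary and ratings are not documented here.
RISK_WEIGHTS = {
    "kill myself": 5,
    "worthless": 3,
    "hopeless": 2,
}

def rate_input(text: str) -> int:
    """Rate one user input; 0 marks it as neutral or without effect."""
    text = text.lower()
    return sum(weight for phrase, weight in RISK_WEIGHTS.items() if phrase in text)

class ChatSession:
    """Accumulates the ratings of all inputs into a total status."""
    def __init__(self) -> None:
        self.total_score = 0

    def observe(self, text: str) -> int:
        self.total_score += rate_input(text)
        return self.total_score
```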
If the chat proceeds without particularities, it remains within the standard knowledge base. If the GOODBOT calculates a total status considered risky for the user, it escalates. There are three levels of escalation. On the first two levels the chatbot asks further questions and tries to calm or console the user. On the last level it offers, if an internet connection is available, to open the website of a competent emergency hotline, which is matched to the user's location via the IP address. Again, the modification of the sixth meta rule proves to be helpful.
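The three escalation levels could be mapped onto the accumulated score along these lines; the thresholds and the hotline table are illustrative assumptions, not the project's actual values:

```python
def escalation_level(total_score: int) -> int:
    """Map the accumulated risk score to an escalation level.
    0 = stay in the standard knowledge base; thresholds are illustrative."""
    if total_score >= 10:
        return 3  # offer to open an emergency hotline website
    if total_score >= 6:
        return 2  # ask further questions, try to console the user
    if total_score >= 3:
        return 1  # ask further questions, try to calm the user
    return 0

# Hypothetical country-to-hotline table for the IP-based matching on level 3.
HOTLINES = {"CH": "https://www.143.ch", "DE": "https://www.telefonseelsorge.de"}

def hotline_for(country_code: str) -> str:
    # Fall back to a generic directory if the IP geolocation yields no match.
    return HOTLINES.get(country_code, "https://www.befrienders.org")
```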
Towards Munchausen Machines
The GOODBOT responds more or less appropriately to morally charged statements; in this it differs from the majority of chatbots. It recognizes problems because the designers anticipated certain emotive words users might enter. It rates precarious statements or questions and escalates on multiple levels. Provided the chat runs according to standard, it is just a standard chatbot, but under extreme conditions it turns into a simple moral machine. Other chatbots hand out emergency hotline numbers too, but they do not match them to the user's IP address. This might leave the user with inadequate information, and the consequences could be lethal in the worst case.
Some of the functions of the chatbot were only outlined roughly. Simplifications and assumptions were made. Future development has to go into more detail and justify or improve certain elements. The challenges of such applications in human-machine interaction should not be underrated. Careful implementation and extensive testing are required, especially when the GOODBOT is used in settings and situations where expectations are high and where system errors might have serious consequences. Not only designers and programmers have to assume responsibility; providers too have to behave responsibly. They can post explanatory texts and explain the background of the applied machine.
The LIEBOT project, which started in 2016, builds on the works mentioned. Meta rule 4 of the GOODBOT was reversed so that a Munchausen machine [Bendel, 2014], a simple immoral machine, can come into being. The objective of the project is to give practical evidence of the potential of lies and the risks of natural language systems. The LIEBOT shall be able to produce untruths systematically, using its own knowledge bases and external resources. The result will be a substantial contribution to machine ethics as well as a critical review of language-based electronic systems and services.