Guide: How can we make robots more trustworthy? Can they be given moral cultivation? On this topic, the author interviewed Colin Allen, Distinguished Professor in the Department of History and Philosophy of Science at the University of Pittsburgh and Changjiang Scholar Chair Professor at Xi'an Jiaotong University.
On December 18, 2018, the EU High-Level Expert Group on Artificial Intelligence issued draft ethics guidelines for artificial intelligence. Amid widespread worry that artificial intelligence will replace humans and undermine ethics, the draft aims to guide people toward creating "trustworthy artificial intelligence."
Q: What is the "morality" of artificial intelligence?
Allen: The "morality" of artificial intelligence, or "moral machines" and "machine morality", can mean many different things. I group these meanings into three categories. In the first sense, the machine would have the same moral capacities as a human being. In the second sense, the machine does not have full human capacities, but it is sensitive to morally relevant facts and can make decisions on the basis of those facts. In the third sense, the machine's designers think through the ethical implications at the most basic level, but do not give the robot itself any capacity to attend to moral facts or make moral decisions.
For the time being, the machine envisioned in the first sense remains science fiction. I therefore set it aside in the book Moral Machines and was more interested in exploring machines that fall between the second and third senses. Right now, we want designers to take ethical factors into account when designing robots, because robots are taking on more and more tasks in the public domain without direct human supervision. This is the first time we have created machines that can operate unsupervised, and that is the most essential difference between the ethical issues raised by artificial intelligence and earlier ethical issues. In such "unsupervised" situations, we hope the machine can make more ethical decisions, and that machine design attends not only to safety but also to the values humans care about.
Q: How can artificial intelligence be made ethical?
Allen: The first thing to say is that humans themselves are not fully moral beings; developing a person into a moral person is no easy task. Human nature is to act out of self-interest, without considering the needs and interests of others, whereas a moral agent must learn to restrain its own desires for the benefit of others. The robots we are building now have no desires of their own, and no motives of their own, because they have no selfish interests. Training artificial intelligence in morality is therefore very different from training people. The problem with training machines is how to make them sensitive to what matters for human moral values. Beyond that, does a machine need to recognize that its behavior can cause human suffering? I think it does. We can consider programming machines to act accordingly, without needing to worry about how to make a robot put others' interests first; after all, today's machines have no self-interested instincts.
Q: What kind of model should be used to develop the ethics of artificial intelligence?
Allen: We discussed models of machine moral development in Moral Machines, and argued that a hybrid of the "top-down" and "bottom-up" patterns is the best answer. Let me first explain what "top-down" and "bottom-up" mean. We use these two terms in two different ways: one is the engineering perspective, that is, the perspective of technology and computer science, such as machine learning and artificial evolution; the other is the ethical perspective. Machine learning and artificial evolution do not start from any principles. They simply try to make the machine's behavior match a particular type of behavioral description: when given inputs lead the machine to act in ways that fit that type, its behavior conforms to it. This is "bottom-up." By contrast, the "top-down" approach means that explicit rules are given to the decision-making process, and one attempts to write rules to guide the machine's behavior. We can say that in engineering, "bottom-up" is learning from data, while "top-down" is pre-programming with certain rules.
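The engineering distinction Allen draws can be sketched in a few lines of code. This is an illustrative toy example, not anything from the interview or the book: the scenario, function names, and the 0.2 risk limit are all hypothetical, chosen only to contrast a pre-programmed rule with a constraint induced from labeled examples.

```python
# Hypothetical toy: should a robot act, given an estimated risk to humans?

# Top-down: the rule is written in advance and applied directly.
def top_down_decision(risk_to_humans: float) -> bool:
    """Pre-programmed rule: never act when estimated risk exceeds 0.2."""
    RISK_LIMIT = 0.2  # assumed value, fixed by the designer
    return risk_to_humans <= RISK_LIMIT

# Bottom-up: no rule is written; a limit is induced from labeled
# examples of past behavior, each a (risk, was_acceptable) pair.
def bottom_up_decision_factory(examples):
    """Learn the largest risk ever labeled acceptable in the data."""
    learned_limit = max(risk for risk, ok in examples if ok)
    return lambda risk: risk <= learned_limit

examples = [(0.05, True), (0.15, True), (0.30, False), (0.45, False)]
decide = bottom_up_decision_factory(examples)

print(top_down_decision(0.25))  # False: violates the written rule
print(decide(0.10))             # True: within the limit learned from data
```

The contrast is the point: the top-down function's behavior is fixed by an explicit rule, while the bottom-up function's behavior depends entirely on the examples it was given, so different training data would yield a different decision boundary.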