The Ethics of Artificial Intelligence
(2011)
Nick Bostrom
Eliezer Yudkowsky
Draft for Cambridge Handbook of Artificial Intelligence, eds. William Ramsey and Keith
Frankish (Cambridge University Press, 2011): forthcoming
The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other
morally relevant beings, and to the moral status of the machines themselves. The first
section discusses issues that may arise in the near future of AI. The second section
outlines challenges for ensuring that AI operates safely as it approaches humans in its
intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider
how AIs might differ from humans in certain basic respects relevant to our ethical
assessment of them. The final section addresses the issues of creating AIs more
intelligent than human, and ensuring that they use their advanced intelligence for
good rather than ill.
Ethics in Machine Learning and Other Domain-Specific AI Algorithms
Imagine, in the near future, a bank using a machine learning algorithm to recommend
mortgage applications for approval. A rejected applicant brings a lawsuit against the
bank, alleging that the algorithm is discriminating racially against mortgage
applicants. The bank replies that this is impossible, since the algorithm is deliberately blinded to the race of the applicants. Indeed, that was part of the bank’s rationale for
implementing the system. Even so, statistics show that the bank’s approval rate for
black applicants has been steadily dropping. Submitting ten apparently equally
qualified genuine applicants (as determined by a separate panel of human judges)
shows that the algorithm accepts white applicants and rejects black applicants. What could possibly be happening?
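An audit of the kind just described can be sketched in a few lines (the outcomes below are hypothetical, invented for illustration; pure Python): hold qualifications fixed across the matched applicants, as judged by the human panel, and compare approval rates between the two groups.

```python
# Sketch: a matched-applicant audit with hypothetical outcomes.
# Each list records decisions for equally qualified applicants
# (as determined by a separate panel of human judges).
white_outcomes = [1, 1, 1, 1, 1]  # 1 = approved, 0 = rejected
black_outcomes = [0, 0, 1, 0, 0]

def approval_rate(outcomes):
    return sum(outcomes) / len(outcomes)

disparity = approval_rate(white_outcomes) - approval_rate(black_outcomes)
print(f"approval-rate gap: {disparity:.0%}")  # prints "approval-rate gap: 80%"
```

A large gap among applicants the panel judged equally qualified is exactly the statistical signal that triggers the lawsuit in the scenario, even though race never appears as an input to the algorithm.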
Finding an answer may not be easy. If the machine learning algorithm is based on a
complicated neural network, or a genetic algorithm produced by directed evolution,
then it may prove nearly impossible to understand why, or even how, the algorithm is
judging applicants based on their race. On the other hand, a machine learner based on decision trees or Bayesian networks is much more transparent to programmer
inspection (Hastie et al. 2001), which may enable an auditor to discover that the AI
algorithm uses the address information of applicants who were born or previously
resided in predominantly poverty‐stricken areas.
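The contrast in transparency can be illustrated with a minimal sketch (hypothetical data and feature names, using scikit-learn's DecisionTreeClassifier and export_text): a trained decision tree's rules can be printed and read directly, so an auditor can see at a glance whether a proxy feature such as a neighborhood poverty rate, rather than a legitimate criterion like income, drives the decision.

```python
# Sketch: auditing a decision tree for reliance on a proxy feature.
# Data and feature names are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [income_in_thousands, neighborhood_poverty_rate].
# The second column can act as a proxy for race via residential segregation.
X = [
    [80, 0.05], [75, 0.10], [60, 0.08], [90, 0.04],  # approved
    [78, 0.40], [82, 0.45], [65, 0.50], [88, 0.42],  # rejected despite comparable income
]
y = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = approve, 0 = reject

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed rule set exposes that the split is on poverty_rate, not income:
print(export_text(tree, feature_names=["income", "poverty_rate"]))
```

No comparable one-line inspection exists for a large neural network or an evolved program, which is the asymmetry the passage above points to.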
AI algorithms play an increasingly large role in modern society, though usually not labeled “AI”. The scenario described above might be transpiring even as we write. It
will become increasingly important to develop AI algorithms that are not just powerful
and scalable, but also transparent to inspection—to name one of many socially important
properties.
Some challenges of machine ethics are much like many other challenges involved in
designing machines. Designing a robot arm to avoid crushing stray humans is no more morally fraught than designing a flame‐retardant sofa. It involves new
programming challenges, but no new ethical challenges. But when AI algorithms take
on cognitive work with social dimensions—cognitive tasks previously performed by
humans—the AI algorithm inherits the social requirements. It would surely be frustrating to find that no bank in the world will approve your seemingly excellent
loan application, and nobody knows why, and nobody can find out even in principle.
(Maybe you have a first name strongly associated with deadbeats? Who knows?)
Transparency is not the only desirable feature of AI. It is also important that AI
algorithms taking over social functions be predictable to those they govern. To ...