1 - Introduction
It is quite difficult to define precisely the word decision, but each of us generally agrees that he has already experienced the concept. Every human being, right or wrong, thinks that on many occasions he has made a choice between different alternatives. Whether he exercises his free will or bends to some kind of causal necessity is another (philosophical) question we do not enter into. The intuitive notion of human free will in choosing between various alternatives will suffice for our talk.

On the other hand, it is important to specify what we mean when using the expression "Artificial Intelligence" (AI). There are at least two different views about AI. The first one equates AI to the "Sciences of the Artificial" (Simon, 1969), or the science of designing and building computer-based artifacts performing various human tasks. Adopting this view has the advantage of throwing out most of the philosophical discussions about the nature of intelligence and the feasibility of the AI project. This view of AI has relatively few links with decision, to the extent that an artifact cannot properly be said to make a decision. The decision, if any, has of course previously been made by the designer of the system (at least if he is able to trace, for any input set of data, the instructions triggered). In other words, the concept of "decision" is antinomic to the idea of program. When a task is programmed, the decision no longer exists, since the actions are determined according to each possible situation that may occur (see Pomerol, 1992a, 1992b, and Lévine-Pomerol, 1995, for a discussion and the consequences for Decision Support Systems (DSS)). But even if an artifact does not make any decision, its designer has previously modelled a decision process embedded in the system. And this is a first question for us: how to model and program decision processes in the artifacts?
The most natural answer to this question is that "it suffices" to observe how people make the decision in the task at hand and to reproduce the process in the machine. So, even if we adopt a view of AI not referring to "human intelligence", we have to deal with human reasoning.
This paper is to be presented as a tutorial at EURO XIV, in Jerusalem, in July 1995.
We thus reach the second definition of AI, relating to its cognitive side. We know that AI is often regarded as the science of knowledge representation and reasoning (Newell and Simon, 1972). If we therefore think about AI as a science aimed at mimicking human beings, then it obviously has a non-empty intersection with decision. The difficulty is that each human being may have his/her own way of reasoning and deciding, at least at the preference level. In this case AI becomes a science of persons (subjects), a subjective science. This point of view has already been advocated by some authors (e.g. Dubois in Courbon et al., 1994). Following this idea, AI is the science of the design and development of systems mimicking not mankind (genericity), but a given human being (subjectivity). Let us note that, on the contrary, up to now, AI has considered that it makes sense to look for generic properties and representations rather than developing specific and subjective skills. It is only recently that AI has been sensitised to subjectivity, and consequently to decision, since it is generally acknowledged that decisions are personal. Since the introduction of utility functions by the economists (see e.g. von Neumann and Morgenstern, 1944), it has actually been accepted that two "rational" decision makers, confronting the same situation, may make two different decisions depending on their subjective probabilities. This debate about genericity vs. subjectivity has existed since the origins of decision theory. Some researchers defend the idea that everybody decides or should decide in the same (rational) manner; they represent...