MXPA01006431A - Model and method for using an interactive rational agent, multiagent server and system implementing same - Google Patents

Model and method for using an interactive rational agent, multiagent server and system implementing same

Info

Publication number
MXPA01006431A
MXPA01006431A MXPA/A/2001/006431A
Authority
MX
Mexico
Prior art keywords
agent
rational
formal
architecture
statement
Prior art date
Application number
MXPA/A/2001/006431A
Other languages
Spanish (es)
Inventor
David Sadek
Philippe Bretier
Franck Panaget
Original Assignee
France Telecom
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom filed Critical France Telecom
Publication of MXPA01006431A publication Critical patent/MXPA01006431A/en

Links

Abstract

The invention concerns a model and a method for using an interactive rational agent as the node of a dialogue system and/or as an element (agent) of a multiagent system, comprising the following steps: defining a conceptual architecture of an interactive rational agent; formal specification of the different components of said architecture and of their combination, making it possible to obtain a formal model; defining the software architecture implementing the formal architecture; defining mechanisms for implementing the formal specifications; the rational agent thereby being capable of communicating with another agent or with a user of the system through a particular communication medium.

Description

MODEL AND PROCESS FOR IMPLEMENTING A RATIONAL AGENT THAT DIALOGUES, AND MULTI-AGENT SERVER AND SYSTEM APPLYING THE SAME FIELD OF THE INVENTION The invention relates to a model and a process for implementing a rational agent that dialogues, as the nucleus of a dialogue system or of a multi-agent system. The invention applies to human-agent interaction systems (human-machine dialogue), but also to agent-agent interaction systems (inter-agent communication and cooperation). It applies in particular to information servers.
BACKGROUND OF THE INVENTION Although the design of man-machine dialogue systems has been studied seriously for more than thirty years, few systems that foreshadow real use are available today. Most of the demonstrators that have been developed show, at best, the ability of a system to conduct a few simple exchanges with a user, within a stereotyped structure (adapted to a particular task) and a constrained application framework.
These systems are generally limited to illustrating one or another characteristic of an evolved interaction such as, for example, the understanding by the machine of more or less complex statements (contextual statements in oral or written natural language, possibly combined with other means of communication) or, in certain very limited cases, the production of cooperative responses. These systems are still far from meeting all the conditions required for natural use as dialogue "partners", even in these very restricted application frameworks. The reasons for this situation are of two orders. On the one hand, the design of dialogue systems is a complex undertaking, since it accumulates the problems related to the design of intelligent artificial systems and those related to the modeling and formalization of natural communication. When one is interested in oral dialogue, the problems linked to automatic speech recognition add to this difficulty. On the other hand, much work has addressed dialogue as an isolated phenomenon, seeking to identify its external manifestations in order to reproduce them as such in an automatic system. This work has, deliberately or not, totally or partially left aside the link between the problem of dialogue and that of the intelligence of the system and, therefore, a deep formal study of the cognitive foundations of dialogue.
BRIEF DESCRIPTION OF THE INVENTION Let us briefly recall the classical dialogue methods that have been developed up to the present. First there are the structural methods, of computational or linguistic inspiration. They are interested in the a priori determination of an interaction structure that accounts for the regularities of the exchanges in a dialogue (the simplest being adjacency pairs such as question-answer or suggestion-acceptance). These methods make the hypothesis that this structure exists, that it is finitely representable, and that all dialogues, or at least a large part of them, can be circumscribed within it. The structural methods consider that the coherence of a dialogue is intrinsic to its structure and thus concentrate on the co-text (the accompanying text), thereby evading, more or less directly, the difficulty posed by the deeply contextual character of communication. These limitations undermine the interest of structural methods as a basis for models of intelligent interaction. There are also the classical differential methods, also called plan-oriented methods. These methods consider an intervention in a communication situation not only as a collection of signs (for example a sequence of words), but as the observable realization of communicative actions (also called, depending on the context, speech acts or dialogue acts), such as informing, requesting, confirming or committing. These methods have shown a considerable potential for the study of communication and in particular of cooperative dialogue. However, they suffer from approximations (which lead them to call upon empirical or structural complements that weaken them) and from uses of knowledge representation that, unfortunately, often lead to aberrations. The applicant has developed a new method that is based on rational interaction, that is, on a rational agent that dialogues.
In this new method, the applicant has sought, as a matter of principle, to maximize the user-friendliness of the interactions between users and automatic services. Reference can be made to the following publications on the subject:
Sadek 91a: Sadek M.D. Mental attitudes and rational interaction: towards a formal theory of communication. Doctoral thesis in Computer Science, University of Rennes I, France, 1991.
Sadek 91b: Sadek M.D. Dialogue acts are rational plans. Proceedings of the ESCA Tutorial and Research Workshop on the Structure of Multimodal Dialogue, Maratea, Italy, 1991.
Sadek 92: Sadek M.D. A study in the logic of intention. Proceedings of the 3rd Conference on Principles of Knowledge Representation and Reasoning (KR'92), pages 462-473, Cambridge, MA, 1992.
Sadek 93: Sadek M.D. Foundations of dialogue: rational interaction. Proceedings of the fourth summer school on Natural Language Processing, pages 229-255, Lannion, France, 1993.
Sadek 94a: Sadek M.D. Mental attitudes and the foundation of cooperative behavior. In Pavard, B., editor, Cooperative Systems: from modeling to design, Octares Eds., pages 93-117, 1994.
Sadek 94b: Sadek M.D. Theory of communication = principles of rationality + models of communicative acts. Proceedings of the AAAI'94 Workshop on Planning for Interagent Communication, Seattle, WA, 1994.
Sadek 94c: Sadek M.D. Towards a theory of belief reconstruction: application to communication. In (SPECOM 94): 251-263.
Sadek et al 94: Sadek M.D., Ferrieux A., & Cozannet A. Towards an artificial agent as the kernel of a spoken dialogue system: A progress report. Proceedings of the AAAI'94 Workshop on Integration of Natural Language and Speech Processing, Seattle, WA, 1994.
Sadek et al 95: Sadek M.D., Bretier P., Cadoret V., Cozannet A., Dupont P., Ferrieux A., & Panaget F. A cooperative spoken dialogue system based on a rational agent model: A first implementation on the AGS application. Proceedings of the ESCA Tutorial and Research Workshop on Spoken Dialogue Systems, Hanstholm, Denmark, 1995.
Sadek et al 96a: Sadek M.D., Ferrieux A., Cozannet A., Bretier P., Panaget F., & Simonin J. Cooperative human-computer dialogue: the AGS demonstrator. In (ISSD 96) (and also Proceedings of ICSLP '96, Philadelphia, 1996).
Sadek et al 97: Sadek M.D., Bretier P., & Panaget F. ARTIMIS: Natural dialogue meets rational agency. Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI'97), Nagoya, Japan, pp. 1030-1035, 1997.
Bretier 95: Bretier P. Cooperative oral communication: contribution to the logical modeling and to the implementation of a rational agent that dialogues. PhD thesis in Computer Science, University of Paris XIII, 1995.
Bretier et al 95: Bretier P., Panaget F., & Sadek D. Integrating linguistic capabilities into the formal model of a rational agent: Application to cooperative spoken dialogue. Proceedings of the AAAI'95 Fall Symposium on Rational Agency, Cambridge, MA, 1995.
Bretier & Sadek 95: Bretier P. & Sadek D. Designing and implementing a theory of rational interaction as the kernel of a cooperative spoken dialogue system. Proceedings of the AAAI'95 Fall Symposium on Rational Agency, Cambridge, MA, 1995.
The user-friendliness of the interaction is manifested, among other things, by the capacity of the system to negotiate with the user, by its ability to interpret requests taking the context into account, by its ability to determine the underlying intentions of the user, and to have with him a flexible interaction that does not follow a rigidly preconceived plan. Such a system must also be able to provide the user with solutions that he has not explicitly requested but which are nonetheless relevant. In fact, there is currently no intelligent dialogue system in service for a real application, because of the complexity of each of these tasks and the difficulty of bringing together all the characteristics by which the interaction can be described as user-friendly. The technology developed by the applicant rests on the following basic principle: for an automatic system to engage in intelligent dialogues, this system cannot be simulated by an automaton. More precisely, the user-friendliness of the dialogue cannot be grafted from the outside onto a pre-existing system: it must, on the contrary, emerge naturally from the intelligence of the system. The object of the present invention is the realization of a software agent that is rational by construction. The addition of appropriate principles makes it equally communicative and cooperative. Moreover, the technology developed by the applicant also allows the implementation of a rational agent that dialogues both as the nucleus of a dialogue system and as an agent of a multi-agent system. In this second application (multi-agent system), communication between such agents is no longer done using natural language but a formal (logical) language adapted to their interaction capabilities. The invention thus relates, in particular, to a model and a process for implementing a rational agent that dialogues as the nucleus of a dialogue system or of a multi-agent system. According to the invention, the process for implementing a rational agent that dialogues as the nucleus of a dialogue system and/or as an element (agent) of a multi-agent system comprises the following stages: definition of a conceptual architecture of a rational agent that dialogues; formal specification of the different components of this architecture and of their combination, which allows a formal model to be obtained; and it is mainly characterized in that it also includes the following stages: definition of a software architecture that implements the formal architecture; definition of the mechanisms for applying the formal specifications; the rational agent is thus able to dialogue with another agent or with a user of the system through any means of communication (vocal or written: screen, keyboard, mouse, etc.). The different components of the formal model lie within the same unified formal framework (a logical theory) and use the same formalism. The generic nature of the mechanisms and principles gives the model independence with respect to the application, the media and the language. The definition of the application mechanisms is done in such a way that a direct correspondence between these mechanisms and said model is obtained. The formal specification of the different components of the formal architecture and of their combination comprises an action and rationality level, a communication level and a cooperation level.
The definition of the software architecture that implements the formal architecture includes a rational unit comprising an implementation layer of the action and rationality level, an implementation layer of the communication level, and an implementation layer of the cooperation level, corresponding respectively to the axioms of the formal model. The definition of the software architecture that implements the formal architecture also includes a generation module and a comprehension module, which implement a natural language level layer. The rational unit, the generation module and the comprehension module implement the application mechanisms of the formal model. The generation module is able to transcribe a logical statement produced by the rational unit into natural language for the user of the system. The comprehension module is able to interpret a statement of the user into a logical statement understandable by the rational unit. The invention also has as its object a rational agent that dialogues, placed as the core of a dialogue system and/or as an element (agent) of a multi-agent system, comprising: a definition of a conceptual architecture; a formal specification of the different components of this architecture and of their combination, which allows a formal model to be obtained; mainly characterized in that it comprises: a definition of a software architecture that implements the formal architecture; a definition of the application mechanisms of the formal specifications, made by a rational unit that includes: data comprising predefined action schemas and axiom schemes that depend on the application; a knowledge base that depends on the application, comprising a semantic network and inter-concept distances; and an inference engine to apply the mechanisms of the formal specifications by means of the data and the knowledge base, in order to receive a logical statement, understand it and be able to provide a logical statement in response. According to another characteristic, the data include implementation data of the formal model comprising: an implementation layer of the rationality axioms, an implementation layer of the communication axioms, and an implementation layer of the cooperation axioms, corresponding respectively to the axioms of the formal model. According to other characteristics, the agent also includes: a module for generating a statement in natural language from a logical statement coming from the rational unit, and a comprehension module for providing a logical statement to the rational unit from a natural language utterance; these modules also implement a communication layer at the natural language level. Another object of the invention is an information server comprising means for applying a man-machine dialogue system whose core is based on the implementation of a rational agent that dialogues such as the one defined above. The invention also relates to a multi-agent system comprising communicating agents, each agent comprising means for applying an interaction, the system comprising at least one agent whose core is based on the implementation of a rational agent that dialogues such as the one described above.
DESCRIPTION OF THE FIGURES Other features and advantages of the invention will appear clearly on reading the following description, given by way of non-limiting example, and with reference to the Figures, in which: Figure 1 represents the software architecture of a rational agent that dialogues; Figure 2 represents the architecture of the rational unit and its knowledge base; Figure 3 represents in more detail the software architecture of an agent that dialogues, as the nucleus of a (particularly oral) dialogue system; Figure 4 represents an architecture showing a rational agent that dialogues as the nucleus of a multi-agent system. We recall that the method of the rational agent that dialogues, which has been developed by the applicant and has been the subject of publications, is guided by the principles of rationality, communication and cooperation formalized in a theory of rational interaction. Reference can be made for this purpose to the publications cited above that describe the "rational agent that dialogues" method. The definition of a conceptual architecture of a rational agent that dialogues is given in the annex of the description. This definition has been the subject of a publication in "Scientific Council of France Telecom, Technical Compendium No. 8: Intelligent Interfaces and Images", October 1996, pages 37-61. We can now refer to the scheme of Figure 1. According to the invention, the applicant has applied these principles by means of a rational unit 100, which constitutes the core of each agent and which determines its reactions to external events, whether they are requests (requests, answers, confirmations, etc.) from human users or requests from other software agents (this is the case when an agent is the nucleus of a multi-agent system). The rational unit 100 is driven by an inference engine that automates reasoning in accordance with the principles of rational interaction, which the agent programmer can adapt or enrich, in a declarative manner, depending on the task to be performed.
For this purpose, as will be explained below, these reasonings are guided by predefined axiom schemes (listed in the annex), introduced into the unit by the agent programmer in a declarative manner, depending on the task that said agent must fulfill. Figure 1 illustrates the scheme of a software architecture of an agent in the case where this architecture is applied to the constitution of a dialogue system with users. Figure 1 therefore represents the architecture of an agent in interaction with the user through, as will be seen, a comprehension module 150 and a generation module 160. This architecture corresponds to a first family of possible applications, namely (user-friendly) user-service interaction. In order to allow dialogue with users, the rational unit 100 is linked to an interface 140 to the outside. This interface therefore comprises the comprehension module 150, which receives the natural language statements and interprets these statements into a logical statement that is introduced into the rational unit 100.
The interface also comprises the generation module 160, which expresses the reaction of the rational unit 100 in a natural language statement intended for the user. In this framework, the rational unit 100 is the central entity of the service to be rendered, whether it is the provision of information (train schedules, stock market prices, weather forecasts), reservations or purchases, or even the search for information on the Internet. The principles of cooperation implemented in the rational unit and the natural language processing modules ensure a user-friendly interaction with the user. This interaction can be carried out directly by voice, by integrating speech recognition and synthesis modules (not represented in this Figure) into the dialogue system thus formed. However, the rational unit 100 can itself constitute the core of an autonomous software agent. In this framework, this unit interacts with other software agents by means of an inter-agent communication language, such as the Agent Communication Language (A.C.L., adopted as the standard by the FIPA consortium). The services that the agent can provide are, for example, transactions in electronic marketplaces, network administration tasks, or information dissemination. These two forms of interaction can be combined, so that following an interaction in natural language with a user, an agent delegates a given task, through interactions in the ACL language, to other software agents distributed over public or private networks. We will now detail the functionalities of the software architecture of the rational unit 100; this detailed architecture is illustrated by the scheme of Figure 2. First, the rational unit 100 implements principles based on the theory of rational interaction, whose objective is to formalize and automate the rational behavior of an agent in a situation of interaction with other agents or with service users. This theory is based on two major notions: the notion of modal logic, on the one hand, whose purpose is to allow the representation of the mental attitudes of autonomous agents, and the notion of speech acts (acts of language), on the other hand, whose purpose is to specify the effects of communication on the mental attitudes of the agents.
The contribution of the theory of rational interaction is to formalize these two domains and especially the interaction between them. The state of an agent at a given moment in a communicative exchange is thus characterized by a set of mental attitudes. The mental attitudes that can be represented are, for example, belief, usually noted by the operator K, and intention, noted by the operator I. These operators are indexed by the agent whose mental attitude is being represented. In a dialogue between the system s and the user u, Ks designates the belief operator for the system and Ku the same operator for the user. The acts of language that can be modeled are, among others, acts of informing and requests. The modeling consists of a logical statement in a logical language, for example:

Ks Iu Done(<s, InformIf(u, p)>)

This logical statement is read as follows: the system s knows (operator K) that the user u has the intention (operator I) that a certain communicative act be carried out, namely that s inform u whether a certain proposition p is true or false; more briefly: "s knows that u wants s to inform him of the truth of p". The logical language thus defined allows the expression of general behavioral principles that will determine the reactions of the rational unit. An agent will be cooperative if it adopts the intentions of the user u. This can be expressed as follows:

Ks Iu φ => Is φ

Such axiom schemes of very general scope are already predefined by the theory of rational interaction and form part of the rational unit of an agent. Nevertheless, the programmer of the rational unit can define new, more specialized schemes for a given application. The set of these schemes guides the reasoning of the rational unit 100 and therefore its reactions to requests from the environment. The calculation of these reactions is performed by the inference engine 101. The rational unit 100 therefore comprises a data set comprising the axioms of the formal model of the rational agent that dialogues. These data implement the rationality, communication and cooperation layers of the agent.
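By way of illustration, such logical statements and axiom schemes can be represented as Prolog terms, in the spirit of the Prolog implementation mentioned further on. The following sketch is only an assumed encoding (the functor names k, i, done, act and inform_if, and the predicate axiom_scheme, are illustrative and are not the actual data structures of the invention):

% Hypothetical term encoding of the example statement
%   Ks Iu Done(<s, InformIf(u, p)>)
% "the system s knows that the user u intends that s inform u whether p holds".
example_statement(k(s, i(u, done(act(s, inform_if(s, u, p)))))).

% A behavioural axiom scheme such as  Ks Iu phi => Is phi  (cooperation)
% can be stored as a pair (condition, consequence) of open terms.
axiom_scheme(cooperation, k(s, i(u, Phi)), i(s, Phi)).

% Applying a scheme to a statement is plain Prolog unification.
apply_scheme(Name, Statement, Consequence) :-
    axiom_scheme(Name, Statement, Consequence).

% ?- example_statement(S), apply_scheme(cooperation, S, C).
% C = i(s, done(act(s, inform_if(s, u, p)))).

With this assumed encoding, applying the cooperation scheme to the example statement yields, by unification alone, the consequence that the system itself intends the requested act to be done.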
The requests from the environment, for example the requests of users or of other software agents, are transmitted to the rational unit 100 in the form of an ACL logical statement of the theory of rational interaction. The inference engine 101 is able to calculate the consequences of this statement, in particular the answers or the requests for possible clarifications to be provided to the interlocutor (which may be a software agent or a human user), but also other, non-communicative actions. Specifically, for a given statement, the inference engine 101 examines whether it has a behavioral principle that can be applied to this statement in order to deduce its logical consequence(s). This procedure is then applied to these new consequences until the possibilities are exhausted. Among all these consequences, the inference engine 101 isolates the communication actions, or other actions, that it must perform and that then form the reaction of the rational agent. The first stage of the inference procedure is the putting into normal form of the statements processed, in order to ensure that each statement is presented in only a single given syntactic form, and in order to allow the classification and comparison of statements. This putting into normal form also makes it possible to carry out a first application of simple reasoning principles. The inference procedure then consists, for each statement processed, in verifying whether this statement corresponds to one of the axiom schemes 102 that codify the retained principles of rational behavior. The mechanism of this verification is mainly based on the unification operation of the Prolog language. The set of these axiom schemes can be modified by the programmer of the rational unit, who can remove or add axiom schemes or modify existing ones to fine-tune the behavior of the rational unit. These modifications can be made dynamically; in this case, the rational unit modifies its behavior as it goes. The whole inference procedure is controlled so that the rational unit does not enter into infinite reasoning; the termination of this procedure is thus assured.
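A minimal sketch of such an inference cycle is given below; it reuses the axiom_scheme facts of the previous sketch, the putting into normal form is reduced to a stub, and the fixed depth bound merely stands in for the actual control that guarantees termination:

:- use_module(library(lists)).

% Putting a statement into normal form; a real system canonicalises the
% syntax here so that statements can be classified and compared.
normalize(Statement, Statement).

% saturate(+Known, +Depth, -NewConsequences)
% repeatedly applies every applicable axiom scheme until no new
% consequence appears or the depth bound is reached.
saturate(_, 0, []).
saturate(Known, Depth, Consequences) :-
    Depth > 0,
    findall(C,
            ( member(S, Known),
              normalize(S, NS),
              axiom_scheme(_, NS, C),
              \+ member(C, Known) ),
            New),
    (   New = []
    ->  Consequences = []
    ;   D1 is Depth - 1,
        append(Known, New, Known1),
        saturate(Known1, D1, Rest),
        append(New, Rest, Consequences)
    ).

% The reaction of the agent is made of the consequences that are
% intentions of the system to perform an act: i(s, done(Act)).
reactions(Input, Acts) :-
    saturate([Input], 5, Consequences),
    findall(Act, member(i(s, done(Act)), Consequences), Acts).

% ?- example_statement(S), reactions(S, Acts).
% Acts = [act(s, inform_if(s, u, p))].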
The reasoning of the rational unit is based on a set of data that depends strongly on the application assigned to the rational agent. If it is desired that an agent provide train schedules, data about the stations and the connections between them, as well as temporal notions, must be available to it. The set of these data is structured in a knowledge base 120 in the form of a semantic network. The semantic network 120 makes it possible to express the notions of classes and subclasses, and of instances of each class. It also defines the notion of relations between classes, which apply to the different instances of the classes. For example, for an agenda-type application, the semantic network 120 will include at least the classes "persons" (whose instances will be the set of people known in the agenda) and "function" (whose instances will be the known functions). These two classes are linked by the relation "the-function-of". To indicate that the person Jean is a publicist, the semantic network includes the Prolog fact: the-function-of(Jean, Publicist). The semantic network 120 is accessed at any time during the inference procedure when the consequences of the inference depend on the nature of the data. In the agenda application, for example, if the user asks what Jean's job is, the response of the rational agent will depend on its interrogation of the semantic network 120. The semantic network 120 may also carry notions of semantic proximity that are particularly useful for producing the cooperative reactions of the rational agent. The aim is to provide relative distances between the different instances of the semantic network; these distances are determined according to the application, at the time of creation of the semantic network. The instances of the semantic network 120 are thus projected into a metric space whose dimensions are the different relations of the network. For example, the "publicist" function will probably be determined as semantically closer to the "marketing engineer" function than to the "garage mechanic" function. This construction allows two symmetrical operations called relaxation (or loosening) and restriction of constraints (a simplified sketch is given below). The relaxation of constraints aims to provide answers to requests close to the initial request when no answer to the latter exists. If, for example, the agenda is asked for the marketing engineers and there are none, the inference procedure can trigger a relaxation stage in order to provide the contact details of the publicists. The restriction, on the other hand, aims to narrow down a request that is too broad. If there are 500 publicists registered in the agenda, a restriction stage will yield the most discriminating dimension of this overly large set (for example the company or the workplace of the publicist), in order to be able to ask the user a pertinent question to refine the request. Figure 2 also illustrates that the rational unit 100 of a rational agent comprises a generic part independent of the application and a part dependent on the application. The inputs and outputs of the rational unit 100 can be considered to be statements in ACL. The putting into normal form of these statements and the inference procedure are independent of the application, as is the majority of the axiom schemes that guide the behavior of the system. However, some of them are adapted or created especially for the application, as is the semantic network that contains the application data.
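By way of illustration only, such a knowledge base and the relaxation operation could be sketched in Prolog as follows; the predicates instance_of, has_function and distance, and the distance values, are assumptions made for the agenda example and are not the actual structures of the semantic network 120:

:- use_module(library(lists)).

% Classes and instances of the agenda application.
instance_of(jean,  person).
instance_of(marie, person).
instance_of(publicist,          function).
instance_of(marketing_engineer, function).
instance_of(garage_mechanic,    function).

% The relation between the two classes, applied to their instances.
has_function(jean,  publicist).
has_function(marie, publicist).

% Illustrative semantic distances between instances of the class "function":
% publicist is assumed to be closer to marketing engineer than to mechanic.
distance(publicist, marketing_engineer, 1).
distance(publicist, garage_mechanic,    5).

sem_distance(X, Y, D) :- distance(X, Y, D).
sem_distance(X, Y, D) :- distance(Y, X, D).

% Relaxation of constraints: if nobody has exactly the requested function,
% answer with the people holding the semantically closest function.
who_has(Function, Person) :-
    has_function(Person, Function).
who_has(Function, Person) :-
    \+ has_function(_, Function),
    findall(D-F, sem_distance(Function, F, D), Pairs),
    msort(Pairs, [_-Nearest|_]),
    has_function(Person, Nearest).

% ?- who_has(marketing_engineer, P).
% P = jean ;
% P = marie.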
The network 120 must, most of the time, be able to respond to these requests for restriction and/or relaxation coming from the inference engine 101, as will be seen in more detail below. In this case, this network must carry notions of semantic distance between the instances, as has already been said. The scheme of Figure 3 illustrates in more detail the software architecture of an agent according to the invention. The natural language comprehension module 150 interprets a statement of the user into a logical statement understandable by the rational unit 100. The vocabulary handled by this module depends in part on the service that the rational agent must provide. This application-dependent part is mainly present in the semantic network 120 of the rational unit, which explains why the comprehension module 150 uses numerous data from the semantic network 120. The comprehension module 150 is able to take the user's statement into account as a series of small syntactic structures (most often, words), each of which activates one concept (or several, in the case of synonyms) of the semantic network 120. The link between the user's input vocabulary and the semantic network 120 is therefore made by means of a concept activation table 131 that indicates which semantic notions correspond to the words (or series of words) of the vocabulary. These activated notions depend in part on the desired application, but they also represent much more general concepts such as negation, the intentions and knowledge of the user, existence, cardinality, etc. The comprehension module therefore obtains a list of activated concepts (possibly several per word, in the case of synonyms). It is able to transform them into a logical statement by a process of semantic completion. This process is based on the hypothesis of semantic connectivity of the user's statement, that is, that the concepts he has evoked are related to each other. Module 150 is able to link them together, via the relations present in the semantic network, if necessary even creating new concepts. A simplified sketch of the activation of concepts and of the choice of an interpretation is given below.
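The following Prolog fragment is only an assumed illustration of the concept activation table and of the choice of the lowest-cost reading; the vocabulary, the activated concepts and the costs are invented for the agenda example, and the linking of the activated concepts through the relations of the semantic network (the completion proper) is not shown:

:- use_module(library(lists)).

% Concept activation table: a word (or short series of words) activates
% one or more concepts of the semantic network, each with a cost.
activates(jean,   jean,          1).
activates(number, phone_number,  1).
activates(number, street_number, 3).   % a more costly, less plausible reading
activates(job,    function,      1).

% One activated concept per word, with a total cost for the reading.
reading([], [], 0).
reading([Word|Words], [Concept|Concepts], Cost) :-
    activates(Word, Concept, C0),
    reading(Words, Concepts, C1),
    Cost is C0 + C1.

% The single reading retained by the comprehension module is the cheapest one.
best_reading(Words, Concepts) :-
    findall(Cost-Cs, reading(Words, Cs, Cost), Readings),
    msort(Readings, [_-Concepts|_]).

% ?- best_reading([jean, number], Cs).
% Cs = [jean, phone_number].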
The process thus determines the notions understood in the user's statement. It is possible to indicate that certain relations are incompatible with each other within a single user statement; this controls the search space of the completion process. Semantic completion invokes a weighting function 132, which makes it possible to assign a numerical weight to each relation of the semantic network, representing the plausibility of evoking this relation. In this way, the completion process takes a notion of plausibility into account when it must determine which concepts were understood from the user. These weights also make it possible to associate a cost with each possible interpretation in the case of synonyms. Thus, a single statement will be retained by the comprehension module: the one of lowest cost. To facilitate semantic completion, it is also possible to specify that certain concept-relation or concept-concept pairs are implicit: if only one of the concepts has been evoked and if the statement studied is concerned, the corresponding relation will be added, since it is understood almost certainly. For example, in an application that gives stock exchange prices, the statement "I would like the CAC 40" will be implicitly completed into "I would like the price of the CAC 40". Moreover, the comprehension module 150 must take the context of the user's statement into account. For this, it keeps the concepts previously evoked both by the user and by the agent itself in its responses to the user; a part of them can be used at the time of the completion process. It will also be indicated, for each relation of the semantic network, whether it is pertinent to keep it in the context. The comprehension module 150 does not use a syntactic or grammatical analyzer; this allows syntactically incorrect statements to be interpreted correctly, which is particularly important in a context of oral dialogue (and of use of speech recognition), where the syntax of spontaneous speech is much freer. In addition, since the analysis is done by small syntactic components, it is not necessary to build a grammar that attempts to anticipate the set of possible user statements. Finally, the only part that depends on the user's language is the table that links the vocabulary used to the concepts of the semantic network; the semantic data of the network represent, in effect, universal notions. This point particularly facilitates the transfer of an application from one language to another. The generation module 160 fulfills the inverse task of the comprehension module. It is able to transcribe a sequence of communicative acts produced by the rational unit 100 into a statement in the user's natural language. The generation process operates in two phases. The first phase consists of making all the decisions regarding the linguistic choices offered to verbalize the sequence of communicative acts provided at the input of the module. For this, the generator 160 uses, among other things, the elements of the context of the dialogue to construct the statement best adapted to the current situation. Thus, in an agenda application, the module 160 must make a choice between equivalent formulations such as "Jean's phone number is ...", "Jean's number is ...", "her number is ..." or "it is ...", according to the context of the dialogue (a simplified sketch of such a choice is given below).
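Purely by way of illustration, such a context-dependent choice of formulation might be sketched as follows; the representation of the context and the word templates are assumptions and do not reflect the actual abstract linguistic resources of the module 160:

:- use_module(library(lists)).

% The dialogue context is a list of concepts already evoked by the user
% or by the agent itself in its previous responses.
already_mentioned(Concept, Context) :- member(Concept, Context).

% Choose one of several equivalent formulations for communicating a
% person's phone number, depending on the dialogue context.
formulation(inform_number(Person, Number), Context, Words) :-
    (   already_mentioned(number_of(Person), Context)
    ->  Words = [it, is, Number]
    ;   already_mentioned(Person, Context)
    ->  Words = [her, number, is, Number]
    ;   Words = [the, phone, number, of, Person, is, Number]
    ).

% ?- formulation(inform_number(jean, '01 23 45 67 89'), [jean], W).
% W = [her, number, is, '01 23 45 67 89'].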
The objective of this first phase is to construct an intermediate representation of the utterance, using a notion of abstract linguistic resources 133. An abstract linguistic resource represents either a lexical resource 135 (for example, common nouns, verbs, adjectives) or a grammatical resource, that is, a syntactic structure. The second phase uses this abstract representation to construct the final statement. It is a processing stage that handles the phenomena requiring only the strict application of grammar rules. Among these phenomena are, for example, the determination of the order of the constituents of the statement, the agreement between its constituents and the conjugation of the verbs. The comprehension module 150 and the generation module 160 use written text as input format and output format respectively. If one wishes to have a vocal interaction with a rational agent, speech recognition and synthesis modules must be attached to them. The recognition module 170 transcribes a voice signal of the user into a text corresponding to the pronounced utterance. This module 170 is, for example, indispensable when a rational agent is used as a telephone server: the only possible interaction is then vocal. The rational unit, together with the semantic network that models the data it manipulates, forms the core of a software agent. As such, this agent can communicate with other software agents, for example through a network. The ACL communication primitives defined by the theory of rational interaction constitute a communication language between agents that allows them to perform unambiguous interactions. The agents formed by a rational unit 100 and its semantic network 120, without the natural language interaction components (modules 150 and 160), are particularly well adapted to the use of the ACL communication language between software agents to form multi-agent systems, such as the one represented in Figure 4. The invention has been applied on a SUN Ultra 1 station (provided with a 166 MHz processor) and on a SUN Ultra 2 station (which has two 64-bit processors at a frequency of 300 MHz). A working memory (RAM) of at least 32 megabytes is used. The maximum response time of the system is 2 seconds on the Ultra 2 platform and 5 seconds on the Ultra 1. The connection with a digital network can be made by means of a digital network card with the integration of the ISDN (RNIS) Basic Rate Interface service. The three modules that have been described, the comprehension module 150, the generation module 160 and the rational unit 100, have been implemented in Prolog (Quintus version 3.3 for Solaris 2.5). The communication between the different modules and the speech recognition and synthesis systems is carried out by a program written in the C language. A prototype of the invention has been developed under Solaris, and a version that does not include the speech recognition and synthesis modules has been produced under Windows NT 4.0. ANNEX ELEMENTS OF THE LOGICAL FORMALIZATION FRAMEWORK: FORMAL SPECIFICATIONS The concepts of mental attitudes (belief, uncertainty, intention) that are manipulated are formalized within the framework of a first-order modal logic (see the publications Sadek 91a, 92 for the details of this logic). Only the aspects of the formalism needed for the following discussion are briefly introduced here.
In what follows, the symbols ¬, ∧, ∨ and => represent the classical logical connectives of negation, conjunction, disjunction and implication, and ∀ and ∃ the universal and existential quantifiers; p represents a closed formula (denoting a proposition), φ and ψ formula schemas, and i and j (sometimes h) schematic variables denoting agents; |= φ indicates that the formula φ is valid. The mental attitudes considered as semantically primitive, namely belief, uncertainty and choice (or preference), are formalized respectively by the modal operators K, U and C. Formulas such as K(i, p), U(i, p) and C(i, p) can be read respectively as "i believes (or thinks) that p (is true)", "i is uncertain of (the truth of) p" and "i wants p to be true now". The logical model adopted for the operator K accounts for properties of interest for a rational agent, such as the consistency of its beliefs or its capacity for introspection, formally characterized by the validity of logical schemes such as:

K(i, φ) => ¬K(i, ¬φ)
K(i, φ) => K(i, K(i, φ))
¬K(i, φ) => K(i, ¬K(i, φ))

For uncertainty, the logical model also guarantees the validity of desired properties such as, for example, the fact that an agent cannot be uncertain of its own mental attitudes: |= ¬U(i, M(i, φ)), where M belongs to {K, ¬K, C, ¬C, U, ¬U, etc.}. The logical model for choice entails properties such as the fact that an agent "assumes" the logical consequences of its choices, |= (C(i, φ) ∧ K(i, φ => ψ)) => C(i, ψ), or that an agent cannot but choose the course of events in which it believes it already is, |= K(i, φ) => C(i, φ). The attitude of intention, which is not semantically primitive, is formalized by the operator I, defined (in a complex way) from the operators C and K. A formula such as I(i, p) can be read as "i has the intention of bringing about p". The definition of intention imposes that an agent does not seek to achieve what it believes to be already achieved (|= I(i, φ) => ¬K(i, φ)), and guarantees the fact that an agent does not intend the side effects of its intentions (thus, "having the intention of connecting to a network, while knowing that this can contribute to congesting it, does not (necessarily) imply having the intention of contributing to congesting the network"). In order to allow reasoning about action, the domain of discourse includes, in addition to individual objects and agents, sequences of events. The language contains terms (in particular the variables e, e1, ...) ranging over these sequences; a sequence can be formed by a single event (which can be the empty event). In order to be able to speak of complex plans, action expressions are introduced: events, sequences a1;a2, or the non-deterministic choice a1|a2; the schematic variables a, a1, a2 are used to denote action expressions. The operators Feasible, Done and Agent are introduced, such that the formulas Feasible(a, p), Done(a, p) and Agent(i, a) respectively mean that: the action (or action expression) a can take place, after which p will be true; a has just taken place, before which p was true; and i denotes the only agent of the events appearing in a.
A fundamental property of the proposed logic is that the modeled agents are in perfect agreement with themselves about their own mental attitudes. Formally, the schema φ <=> K(i, φ), where φ is governed by a modal operator formalizing a mental attitude of the agent i, is valid (Sadek 91a, 92). The following abbreviations are used, True being the propositional constant that is always true:

Feasible(a) ≡ Feasible(a, True)
Done(a) ≡ Done(a, True)
Possible(φ) ≡ (∃e) Feasible(e, φ)
Kif(i, φ) ≡ K(i, φ) ∨ K(i, ¬φ)
Kref(i, ιx δ(x)) ≡ (∃y) K(i, ιx δ(x) = y) : agent i knows the (or an) object that is δ, where ι is the definite description operator (a term constructor) such that φ(ιx δ(x)) ≡ (∃y) (φ(y) ∧ δ(y) ∧ (∀z) (δ(z) => z = y))
Uref(i, ιx δ(x)) ≡ (∃y) U(i, ιx δ(x) = y)

PRINCIPLES OF RATIONALITY AND MODEL OF ACTION Two principles of rationality establish the link between the intentions of an agent and its action plans (Sadek 91a, 94b). The first principle stipulates that an agent cannot have the intention that a given proposition hold without having the intention that one of the actions which it believes to have that proposition as an effect, and to which it has no particular objection, be done. Formally, this is expressed by the validity of the following scheme:

I(i, p) => I(i, Done(a1 | ... | an))

where the ak are all the actions such that: p is the rational effect of ak (that is, the reason for which ak is planned); the agent i knows the action ak: Kref(i, ak); and ¬C(i, ¬Possible(Done(ak))). The second principle stipulates that an agent who intends that a given action be done necessarily adopts the intention that this action be feasible, if it does not believe that it already is. This is formally expressed by the validity of the following scheme:

I(i, Done(a)) => K(i, Feasible(a)) ∨ I(i, K(i, Feasible(a)))

The solution to the problem of the effect of an action is directly linked to the very expression of the principles of rationality. It is considered that, even if the real effects of an action cannot be predicted, it is possible to say (in a valid way) what is expected of the action, in other words the reason for which it is selected. In fact, that is precisely what the first principle of rationality above expresses. This semantics of the effect of an action, within the framework of a rational behavior model, makes it possible to overcome the problem of the predictability of effects. As an example, here is a simplified model (for the expression of the preconditions) of the communicative act of informing of the truth of a proposition:

<i, Inform(j, φ)>
Precondition: K(i, φ) ∧ ¬K(i, K(j, φ))
Effect: K(j, φ)

The model is axiomatized directly within the logical theory through the principles of rationality above and the following scheme (the acts are therefore not manipulated as data structures by a planning process, as is done in the framework of the classical plan-oriented method, but have a logical semantics within the theory itself):

K(h, Feasible(<i, Inform(j, φ)>) <=> K(i, φ) ∧ ¬K(i, K(j, φ)))

Note that the previous principles by themselves (without any extralogical artifice) specify a planning algorithm, which produces action plans by regression, by inferring the causal chains of intentions.
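As a purely illustrative sketch, the act model above and the two rationality principles suggest a very simple backward-chaining planner; the predicates act_model, plan_for and subgoal below are assumptions expressed in the term notation of the earlier sketches, not the actual mechanism of the invention:

% act_model(Act, Precondition, RationalEffect): simplified model of the
% communicative act <i, Inform(j, phi)>.
act_model(inform(I, J, Phi),
          and(k(I, Phi), not(k(I, k(J, Phi)))),
          k(J, Phi)).

% First principle (simplified): to achieve an intended proposition,
% plan an act of the agent whose rational effect is that proposition.
plan_for(i(I, Goal), do(I, Act)) :-
    act_model(Act, _Precondition, Goal),
    Act =.. [_Functor, I | _Args].

% Second principle (simplified): intending that an act be done leads to
% intending its feasibility, i.e. its precondition.
subgoal(do(_, Act), i(Agent, Precondition)) :-
    act_model(Act, Precondition, _Effect),
    Act =.. [_Functor, Agent | _Args].

% ?- plan_for(i(s, k(u, p)), Plan).
% Plan = do(s, inform(s, u, p)).
% ?- subgoal(do(s, inform(s, u, p)), SubGoal).
% SubGoal = i(s, and(k(s, p), not(k(s, k(u, p))))).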
FORMALIZATION OF SOME PRINCIPLES OF COOPERATIVE BEHAVIOR See (Sadek 91a, 94a) for a detailed proposal of a cooperative behavior model within a formal theory of rational interaction.
ADOPTION OF INTENTIONS, OR THE MINIMUM PRINCIPLE OF COOPERATION A priori, nothing specifically rational constrains or forces an agent to be (even slightly) cooperative and, in particular, to react to the requests of others (for example, to answer the questions asked). This minimal condition, which we call the minimum principle of cooperation, is a particular case of the following property of intention adoption: if an agent i believes that an agent j has the intention of bringing about a proposition p, and if i does not itself have the opposite intention, then i will adopt the intention that j (one day) know that p has been achieved. This property is formally translated by the validity of the following formula scheme:

(K(i, I(j, p)) ∧ ¬I(i, ¬p)) => I(i, K(j, p))

Taken together, these properties guarantee that an agent acts sincerely and therefore cooperates. In addition, it is important to note that they express much more than a principle of minimal cooperation; in fact, they express a principle of genuine cooperation: they translate the fact that, when an agent is aware of the objectives of another agent, it will help that agent to achieve them, as long as this does not contradict its own objectives.
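In the implementation described earlier, such a scheme would be one of the declarative axiom schemes given to the rational unit; a possible encoding, in the term notation of the earlier sketches (an assumption, not the actual clause of the invention), is:

% Minimum principle of cooperation (intention adoption):
%   (K(i, I(j, p)) and not I(i, not p))  =>  I(i, K(j, p))
% The consequence is produced only when the side condition holds in the
% agent's current base of mental attitudes.
axiom_scheme(intention_adoption, k(I, i(J, P)), i(I, k(J, P))) :-
    \+ holds(i(I, not(P))).

% holds/1 would query the agent's base of mental attitudes; it is left as
% a stub here so that the clause can be exercised in isolation.
holds(_) :- fail.

% ?- axiom_scheme(intention_adoption, k(s, i(u, done(p))), C).
% C = i(s, k(u, done(p))).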
RELEVANCE Most of the notable types of cooperative responses are manifested by the communication of a supplement of information with respect to what has been explicitly requested. However, the amount of additional information depends strongly on the presumed interest of the requester in this information and, in particular, on his stated intentions. The notion of interest is very contextual and is quite delicate to establish in the general case. Conversely, there is information that is evidently not relevant to the interlocutor, for example information (assumed to be) already known by him. In other words, avoiding redundancy is a component of cooperative behavior, which can be expressed in terms of the following elementary property (in fact, it is not primitive but is derived directly from the very definition of the concept of intention): if an agent i intends that an agent j come to know a proposition p, then i must believe that j does not know it already. This is formally translated by the validity of the following scheme:

I(i, K(j, p)) => K(i, ¬K(j, p))

THE ADJUSTMENT OF BELIEFS A corrective response is generated with the intention of correcting a belief of the interlocutor judged to be erroneous. This belief is generally a presupposition inferred (by implicature (Grice 75)) from the recognized communicative act. The intention in question is generated by an agent each time its belief regarding a proposition, for which it does not believe its interlocutor to be more competent, is in contradiction with that of its interlocutor. This is formally translated by the validity of the following scheme:

K(i, p ∧ K(j, ¬p)) => I(i, K(j, p))

REACTION TO SOLICITATIONS In a communicating system, an agent cannot resign itself to not recognizing a phenomenon that it has observed. To account for this character, the following double property is formulated. The first part of this property stipulates that, upon the production of a phenomenon that an agent perceives and with which it either cannot associate an intelligible event, or any event that it can associate with it is unacceptable in view of its beliefs, the agent will adopt the intention of coming to know what has been done, typically generating a request for repetition. The second part of this property, which is less general than the first, relates only to the case where the agent cannot, given its mental state, accept any event realizable by what it has observed; the agent will then adopt the intention of letting the author of the event know its attitude with respect to what it has "understood", which, in terms of linguistic utterances, can be manifested, for example, by stating that the agent cannot accept the act in question. Formally, the two parts of this property are expressed by the validity of the following two schemes, the predicates Observe(i, o) and Realize(o, e) respectively meaning that the agent i has just observed the observable entity o (such as a statement, for example), and that the observable entity o is a way of performing the event e:

(i) (∃e) Done(e) ∧ ¬Kref(i, ιe Done(e)) => I(i, Kref(i, ιe Done(e)))
(ii) (∀o)(∀e) [Observe(i, o) ∧ Realize(o, e) ∧ Agent(j, e) ∧ ¬K(i, Done(e))] => I(i, K(j, ¬K(i, Done(e))))

HARMONY WITH OTHERS The behavior of an agent in a universe of multiple cooperative agents appears, in its main components, as a generalization of its behavior toward itself. (For example, an agent must be sincere, coherent and "cooperative" with itself.)
Likewise, an agent must not act in such a way as to make the other agents lose information. In particular, it must not seek uncertainty for others as an end in itself, except possibly if it believes that this is the "right" attitude to adopt with respect to a given proposition, which presupposes that it has itself already adopted this attitude. To account for this behavior, the following property is proposed:

(i) C(i, Possible(U(j, φ))) => r1

where r1 can, for example, account for the fact that choosing for another agent a future in which that agent is uncertain of a proposition imposes this future only as a transitory stage towards a situation of knowledge. Formally, r1 can be:

C(i, (∀e) (Feasible(e, U(j, p)) => (∃e') Feasible(e; e', Kif(j, p)))) ∨ U(i, p)

A similar property can be stated with regard to seeking ignorance in others: for example, an agent i that wants an agent j to no longer believe (resp. to no longer be uncertain of) a given proposition p either does not itself believe p (resp. is not itself uncertain of p), or wishes that j adopt the same attitude as its own with respect to p. The following properties are then proposed:

(ii) C(i, Possible(¬K(j, φ))) => r2
(iii) C(i, Possible(¬U(j, φ))) => r3

where the conditions r2 and r3 have a form similar to that of the condition r1 (the proposed schemes (i), (ii) and (iii) remain valid if the choice operator C is replaced by the intention operator I). We deliberately leave these conditions incompletely specified, because their precise expression depends on the way in which one wants the modeled agent to behave. They can, for example, be simply reduced to the propositional constant False. In any case, they have no bearing on the rest of the theory. Depending on what one chooses to put in the conditions rk, one can validate schemes such as: ¬I(i, ¬Kif(j, φ)), ¬I(i, ¬Kref(j, ιx δ(x))), I(i, ¬Uif(j, φ)) => I(i, Kif(j, φ)), or I(i, ¬Uref(j, ιx δ(x))) => I(i, Kref(j, ιx δ(x))).
LOGICAL ACCESS TO THE "BLACK BOXES" FOR MANAGING THE CONSTRAINTS OF THE DOMAIN The "black box" functions for managing the constraints of the domain (relaxation, restriction, over-information) are directly "accessible" from the logical framework that formalizes the behavior of the rational agent (see (Bretier 95)). By way of illustration, "access" to the over-information procedure is done through the following scheme, where SURIF is a meta-predicate:

K(i, I(i, K(j, p)) ∧ SURIF(p, q)) => I(i, K(j, q))

This scheme expresses the following property: if an agent i intends that an agent j come to believe a proposition p, and if i believes (by virtue of its over-information function) that the proposition q can be a pertinent over-information with respect to p, then i will adopt the intention that j also believe the proposition q. It is noted that, as of this date, the best method known to the applicant for carrying out the aforementioned invention is that which is clear from the present description of the invention.

Claims (10)

Having described the invention as above, the content of the following claims is claimed as property: 1. A process for implementing a rational agent that dialogues as the core of a dialogue system and/or as an element (agent) of a multi-agent system, comprising the following stages: definition of a conceptual architecture of a rational agent that dialogues; formal specification of the different components of this architecture and of their combination, which allows a formal model to be obtained; mainly characterized in that it comprises the definition of a software architecture that implements the formal architecture, this definition consisting of: a definition of the mechanisms of application of the formal specifications that includes: data comprising predefined action schemas and axiom schemes that depend on the application; a knowledge base that depends on the desired application, comprising a semantic network and inter-concept distances; and an inference engine to apply the mechanisms of the formal specifications by means of the data and the knowledge base, in order to receive a logical statement, understand it and be able to provide a logical statement in response; the rational agent thus being able to dialogue with another agent or with a user of the system through any means of communication. 2. The implementation process according to claim 1, characterized in that the definition of the software architecture that implements the formal architecture is carried out by a rational unit comprising an implementation layer of the rationality axioms, an implementation layer of the communication axioms, and an implementation layer of the cooperation axioms, corresponding respectively to the axioms predefined by the formal model.
3. The implementation process according to claim 1 or 2, characterized in that the definition of the software architecture that implements the formal architecture also includes: a generation module to transcribe a sequence of communicative acts produced by the rational unit into a natural language statement for a user, and a comprehension module to interpret a statement of the user into a logical statement understandable by the rational unit; these modules consequently implementing a communication layer in natural language.
4. The implementation process according to the preceding claims, characterized in that the implementation of the mechanisms for applying the formal model is carried out by the rational unit, the generation module and the comprehension module.
5. A rational agent that dialogues, placed as the core of a dialogue system and/or as an element (agent) of a multi-agent system, comprising: a definition of a conceptual architecture; a formal specification of the different components of this architecture and of their combination, which allows a formal model to be obtained; characterized in that it comprises: a software architecture that implements the formal architecture and comprises a rational unit designed to implement the mechanisms of application of the formal specifications, this unit comprising for this purpose: data that include predefined action schemas and axiom schemes that depend on the desired application; a knowledge base that depends on the application, comprising a semantic network and inter-concept distances; and an inference engine to apply the mechanisms of the formal specifications by means of the data and the knowledge base, in order to receive a logical statement, understand it and be able to provide a logical statement in response.
6. A rational agent that dialogues, placed as the core of a dialogue system and/or as an element (agent) of a multi-agent system according to claim 5, characterized in that the data comprise implementation data of a formal model comprising: an implementation layer of the rationality axioms, an implementation layer of the communication axioms, and an implementation layer of the cooperation axioms, corresponding respectively to the axioms of the formal model.
7. A rational agent that dialogues, placed as the core of a dialogue system and/or as an element (agent) of a multi-agent system according to claim 5 or 6, characterized in that it also comprises: a generation module for generating a statement in natural language from a logical statement issued by the rational unit, and a comprehension module for providing a statement in logical language to the rational unit from a statement in natural language; these modules thus implementing a communication layer in natural language.
8. A man-machine dialogue system, characterized in that it comprises a rational agent that dialogues according to any of the preceding claims.
9. An information server, characterized in that it comprises means for applying a man-machine dialogue system according to claim 8.
10. A multi-agent system comprising communicating agents, each agent comprising means for applying an interaction, characterized in that it comprises at least one agent whose core is based on the implementation of a rational agent that dialogues according to any one of the preceding claims.
MXPA/A/2001/006431A 1998-12-23 2001-06-22 Model and method for using an interactive rational agent, multiagent server and system implementing same MXPA01006431A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR98/16374 1998-12-23

Publications (1)

Publication Number Publication Date
MXPA01006431A true MXPA01006431A (en) 2003-11-07

Family


Similar Documents

Publication Publication Date Title
AU773217B2 (en) Model and method for using an interactive rational agent, multiagent server and system implementing same
Chakrabarti et al. Artificial conversations for customer service chatter bots: Architecture, algorithms, and evaluation metrics
Truex et al. Deep structure or emergence theory: contrasting theoretical foundations for information systems development
Churcher et al. Dialogue management systems: a survey and overview
Appelgren et al. Interactive task learning via embodied corrective feedback
Jameson et al. Cooperating to be noncooperative: The dialog system PRACMA
Green et al. Interpreting and generating indirect answers
US11847575B2 (en) Knowledge representation and reasoning system and method using dynamic rule generator
Wilson Flogging a dead horse: the implications of epistemological relativism within information systems methodological practice
Logan et al. Modelling information retrieval agents with belief revision
Dimitrova et al. Maintaining a jointly constructed student model
Allen ARGOT: A system overview
Balkanski et al. Cooperative requests and replies in a collaborative dialogue model
MXPA01006431A (en) Model and method for using an interactive rational agent, multiagent server and system implementing same
Mazuel et al. Generic command interpretation algorithms for conversational agents
McRoy Achieving robust human–computer communication
Green et al. A computational mechanism for initiative in answer generation
Mejía et al. CHAT SPI: knowledge extraction proposal using DialogFlow for software process improvement in small and medium enterprises
Daradoumis et al. Using rhetorical relations in building a coherent conversational teaching session
Fum et al. Naive vs. Formal Grammars: A case for integration in the design of a foreign language tutor
Pallotta Computational dialogue models
McRoy et al. A practical, declarative theory of dialog
Ishizaki Mixed-initiative natural language dialogue with variable communicative modes
Shi et al. Xai Language Tutor–A Xai-Based Language Learning Chatbot Using Ontology and Transfer Learning Techniques
Enembreck et al. Dialog with a personal assistant