WO2005086051A1 - Dialogue system, dialogue robot, program, and recording medium - Google Patents
Dialogue system, dialogue robot, program, and recording medium
- Publication number
- WO2005086051A1 (PCT/JP2004/002942; JP2004002942W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- information
- user
- context
- utterance data
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Definitions
- The present invention relates to a dialogue robot that converses with a user by voice in cooperation with a life support robot system that autonomously controls appliances to support the user's life, and to a dialogue system comprising the life support robot system and the dialogue robot.
- In a ubiquitous environment, life support robot systems have been realized that are installed in a user's living space and autonomously support the user's life.
- A life support robot system premised on a ubiquitous environment is often an unconscious-type system that does not require the user to operate it consciously.
- An unconscious life support robot system infers the environment and the user's behavior from various kinds of sensor information in order to maintain a comfortable environment and support the user's activities.
- For example, the life support robot system analyzes the user's behavior and posture from sensor information and infers that the user is in a relaxed state. By playing music that suits the user's tastes, or by maintaining indoor temperature and humidity with an air conditioner, a comfortable living environment is realized without the user consciously operating these appliances.
- The unconscious life support robot system is convenient in that the user does not need to operate the appliances consciously.
- However, the services provided under such autonomous control will not always be accepted by users. As life support robot systems come to support life in general, the services they provide will become more sophisticated and multifunctional, and the operating principles of the appliances will become more complex. For this reason, autonomous control alone may not be able to respond to individual user needs.
- The user may also want to know things beyond the information stored in the life support robot system. For example, if a user wants to operate an appliance directly, a situation may arise in which the user cannot operate it well enough to exploit its advanced functions. In such a case, the user wants detailed information not covered by the life support robot system, so the dialogue robot needs to provide it.
- The unconscious life support robot system therefore needs a dialogue system that serves as a universal user interface: one that responds flexibly to various situations, has advanced situational processing capabilities, and feels familiar to the user.
- The present invention has been made in view of this need. Its object is to provide, as the user interface of a life support robot system, an interactive robot, and a dialogue system using it, that receives ambiguous requests from the user accurately and reflects them in the control of the system, and that provides useful services and information so that the user can better understand the system's control situation.
- To achieve this object, the present invention realizes a dialogue system with a high level of situational coordination in which the life support robot system and the dialogue robot cooperate. Further, a user-friendly interface is realized by implementing the user interface of the dialogue system as a visible dialogue robot that the user can recognize as a physical entity.
- Specifically, the present invention provides a dialogue system in which a life support robot system that autonomously controls appliances executing services to support the user's life cooperates with a dialogue robot that converses with the user by voice.
- The life support robot system constituting the dialogue system includes: 1) an inference system that infers the environment in a predetermined space and the user's actions from sensor information measured in that space and controls the appliances accordingly; and 2) a distributed environment behavior database that accumulates the environment information and behavior information resulting from inference by the inference system.
- The dialogue robot includes: 1) a dialogue strategy database that stores association information describing the degree of association between concepts; 2) voice synthesis means for converting robot utterance data generated in the dialogue robot into voice data and uttering it; 3) voice recognition means for recognizing the content of the user's voice data and converting it into user utterance data; 4) situation information acquisition means for acquiring the environment information and behavior information from the distributed environment behavior database and storing them in situation storage means; 5) dialogue control means that analyzes the user's situation from the environment information and behavior information, selects a service to be provided to the user based on that situation, identifies a concept related to the situation based on the association information, generates robot utterance data using a linguistic expression indicating the concept, infers the context of the user utterance data uttered in response, and determines, by reference to the association information, whether the context of the user utterance data is conceptually related to the context of the robot utterance data; and 6) execution request means that transmits a service execution request to the life support robot system or the appliance when the dialogue control means determines that the context of the user utterance data is related to the context of the robot utterance data.
- The dialogue robot has a dialogue strategy database storing the association information used to generate dialogue based on the "associative recall" behavior hypothesis, so that it can join or question the users' conversation and draw the user into a dialogue of its own.
- The dialogue robot acquires the environment information and behavior information from the distributed environment behavior database of the life support robot system via the situation information acquisition means and stores them in the situation storage means.
- The dialogue control means of the dialogue robot analyzes the user's situation from the environment information and behavior information, and selects a service to be provided to the user based on that situation. It then identifies a concept related to the situation based on the association information in the dialogue strategy database, and generates robot utterance data using a linguistic expression indicating the concept. The generated robot utterance data is converted into voice data by the voice synthesis means and uttered.
- the voice recognition means recognizes the content of the voice data of the user and converts it into user utterance data.
- The dialogue control means infers the context of the user utterance data with respect to the robot utterance data, and determines, by reference to the association information, whether the context of the user utterance data is conceptually related to the context of the robot utterance data. When it determines that they are related, the service execution request is transmitted to the life support robot system or the appliance.
- In this way, the dialogue robot speaks to the user to propose the service selected on the basis of the user's situation, and when the user's response shows that the user has been drawn into the dialogue with the robot, the service execution request is sent, so that latent services the user was not aware of can be provided.
- Alternatively, the dialogue robot uses the voice recognition means to recognize the content of the user's voice data and convert it into user utterance data.
- The dialogue control means analyzes the context of the user utterance data, analyzes the situation from the environment information and behavior information, and selects a service to be provided to the user based on that context and situation. It identifies a concept related to the context of the user utterance data based on the association information, and generates robot utterance data using a linguistic expression indicating the concept. The generated robot utterance data is then converted into voice data by the voice synthesis means and uttered.
- The voice recognition means recognizes the content of the user's voice data uttered in response to the robot utterance data and converts it into new user utterance data.
- The dialogue control means infers the context of the new user utterance data with respect to the robot utterance data, and determines, by reference to the association information, whether the context of the new user utterance data is conceptually related to the context of the robot utterance data. If it determines that they are related, the service execution request is transmitted to the life support robot system or the appliance.
- In this way, the dialogue robot joins the users' conversation by voice in order to provide the service selected from the context of the conversation and the user's situation, and by sending a service execution request to the life support robot system or the appliance once the user has been drawn into the dialogue, it can provide the service behind the user's vague request.
- When the above configuration is adopted, the dialogue robot of the dialogue system has a knowledge database for storing knowledge information on the appliance or the services executed on it, and when the dialogue control means determines that the context of the user utterance data is related to the context of the robot utterance data, it extracts knowledge information on the selected service from the knowledge database and generates robot utterance data using the extracted knowledge information.
- When the above configuration is adopted, the dialogue robot of the dialogue system can also acquire knowledge information on the selected service from another information providing server on the network according to a predetermined communication protocol.
- Further, the dialogue robot of the dialogue system has inter-robot communication means that, as part of the service selected by the dialogue control means, transmits and receives information on the user's situation and the selected service to and from other dialogue robots using a predetermined communication protocol.
- Dialogue robots can thereby cooperate with each other to provide services to users.
- When a user voices a question or complaint about the autonomous operation of the system, the visible dialogue robot can recognize it from the user's conversation, obtain the reason for the control from the life support robot system, and explain it to the user.
- The user can thus be relieved of the eeriness and dissatisfaction peculiar to the autonomous operation of the life support robot system.
- The dialogue robot advances the dialogue with the user and can make the user aware that the life support robot system can respond to latent requests. Furthermore, it can give concrete form to the user's vague requests and ask the life support robot system to execute services. As a result, the life support robot system can realize more flexible and advanced service control.
- By conducting the dialogue between the user and the unconscious life support robot system through a visible dialogue robot that the user can recognize as a physical entity, a natural interface that does not make the user feel uncomfortable can be realized.
- FIG. 1 is a diagram showing a configuration in an embodiment of a dialogue system of the present invention.
- FIG. 2 is a diagram showing an example of the configuration of a life support robot system and a dialogue robot.
- FIG. 3 is a diagram showing a configuration example of a distributed environment behavior database.
- FIG. 4 is a diagram showing an example of the processing flow of the dialogue robot when selecting a service from the users' conversation.
- FIG. 5 is a diagram showing an example of the processing flow of the dialogue robot when selecting a service from the user's situation.
- FIG. 6 is a diagram showing an example of the processing flow of the dialogue robot when providing knowledge information related to a service from the users' conversation.
- FIG. 1 is a diagram showing a configuration example of a dialogue system of the present invention.
- The dialogue system consists of an unconscious life support robot system 1 (hereinafter, robot system 1) applied to a family house (living space) occupied by a plurality of users 3 (3a, 3b, 3c), and a dialogue robot 2.
- The robot system 1 uses a ubiquitous environment to monitor the entire living space, provides services using home appliances 4 equipped with communication functions, and autonomously supports the life of the users 3.
- The dialogue robot 2 is installed in the living space of the users 3.
- The dialogue robot 2 may have a function of moving autonomously and may be configured to move freely within the living space of the users 3.
- The dialogue robot 2 is a single entity as a system, but it may have a plurality of housings, each provided with processing means of the same configuration.
- Each housing is small enough to be installed on a desk, for example. When the dialogue robot 2 has multiple housings installed in the rooms of the living space, the processing means in the housing that detects voice data starts operating as the dialogue robot 2. If the conversation partner moves while processing continues, or if it becomes necessary to interact with a user at a remote location, the processing means in the housing nearest to the conversation partner takes over the processing, so that one flow of processing can be linked across multiple housings (see the sketch below).
- The dialogue system of the present invention has the unconscious life support robot system 1 and the visible dialogue robot 2 cooperate in a distributed manner. This distributed cooperation can be likened to the relationship between a mother and a child.
- The entire living space in which the unconscious life support robot system 1 is embedded corresponds to the "mother metaphor": someone who is always in the house, watches over the family, appears when needed, and supports the family unobtrusively. The dialogue robot 2, which handles the conversation with the user 3, corresponds to the "child metaphor". Although the dialogue robot 2 lacks general social common sense, it understands to some extent the preferences of the specific users who share a small living environment such as a home and the relationships between them, and, like a child, it can appear to possess surprisingly deep knowledge about the things the family (users) care or are curious about.
- The child-metaphor dialogue robot 2 cooperates with the mother-metaphor life support robot system 1 to smooth the relationship between the user 3 and the motherly life support robot system 1, playing the role of the "youngest child" so that the family feels the system's participation and the mother's presence close at hand. The dialogue rests on natural language processing in speech recognition, but speech recognition ability alone may not be enough to continue a conversation with the user 3 naturally. The user 3 therefore needs a mental model that makes the robot's limited conversational ability understandable. For this reason, referring to knowledge from cognitive science, educational psychology, and developmental psychology, the dialogue ability of the dialogue robot 2 is set to that of a three-year-old child, and the "associative recall" behavior hypothesis was adopted for dialogue control.
- "Associative recall" is a behavior hypothesis under which, in a dialogue, a response is made using a linguistic expression indicating another concept that is conceptually related to the context of the utterance, and the partner in turn responds with an expression indicating yet another related concept, so that the dialogue continues.
- The dialogue robot 2 is thus positioned to have the affordance of a roughly three-year-old child, and the dialogue is controlled accordingly.
- The information provided to the user 3 includes, for example, manual information for the appliances 4 and detailed information on services; that is, fairly specialized information.
- The dialogue robot 2 accumulates situation information on the state of the user's living space, knowledge information in its knowledge database, information obtained from external information servers, and so on, and offers information the user 3 has not noticed, according to the subject of the dialogue.
- In this way, a so-called "nerd" presence is formed as the affordance of the dialogue robot 2.
- This makes it easier for the dialogue robot 2 to give the user 3 the image of a "friendly and useful" companion.
- FIG. 2 is a diagram showing a configuration example of the life support robot system 1 and the dialogue robot 2.
- The robot system 1 is composed of an inference system 11, sensors 12, a distributed environment behavior database 13, an event detection device 15, a service history database 17, and the like.
- The inference system 11 has a knowledge database. When the event detection device 15 detects an event, the inference system acquires context from the distributed environment behavior database 13; analyzes the environment in the living space, the movements of the users 3, the interactions between users 3, and the interactions between users 3 and objects; infers user behavior from the analysis results; determines the service to execute according to the conclusion; and controls the corresponding appliance 4. The analysis and inference results are accumulated in the distributed environment behavior database 13 as needed.
- the sensor 12 is a processing unit that has a communication function, measures and collects various data in the user's 3 living space, and transmits the collected sensor information to the distributed environmental behavior database 13.
- the sensor 12 is, for example, a TV camera, a microphone, a floor sensor, a monitor for an RFID tag, an internal sensor of the appliance 4, and the like.
- The sensor information is, for example, image data, audio data, pressure transition data, and ID tag data. Objects belonging to the users 3 or present in the living space are assumed to carry ID tags storing identification information readable by a non-contact radio system, for example RFID (Radio Frequency Identification) tags.
- the distributed environment behavior database 13 is a database system that accumulates and manages sensor information obtained from the sensors 12 and results analyzed or inferred by the inference system 11.
- FIG. 3 shows a configuration example of the distributed environment behavior database 13.
- The distributed environment behavior database 13 is composed of a distributed sensor information database 131, a distributed environment information database 132, a distributed motion information database 133, a distributed behavior information database 134, a human-object interaction database 135, a person-to-person interaction database 136, and a unique information database 137.
- The distributed sensor information database 131 accumulates the various types of sensor information transmitted from the sensors 12 at predetermined times or on each occasion.
- The distributed environment information database 132 stores environment information such as the position and posture of objects, the temperature and humidity of the living space, and the internal state of the appliances 4.
- The sensor information (time, ID data, position, image data, pressure data, internal sensor information of the appliances, etc.) is analyzed by the inference system 11, and the corresponding environment information is generated and stored.
- The distributed motion information database 133 stores motion information indicating the position, posture, and so on of the users 3. The sensor information is analyzed by the inference system 11, and the corresponding motion information is accumulated.
- The human-object interaction database 135 stores human-object interaction information.
- Human-object interaction information indicates a combination of a user and an object between which an interaction occurs. The environment information and motion information are analyzed by the inference system 11, and the corresponding human-object interaction information is accumulated.
- The person-to-person interaction database 136 stores person-to-person interaction information.
- Person-to-person interaction information indicates a combination of users who are interacting. For example, suppose that user 3a (father) and user 3b (daughter) are sitting on the couch watching TV together. The inference system 11 analyzes the interaction from the motion information (position, posture, etc.) of the two, and the corresponding person-to-person interaction information is accumulated.
- The distributed behavior information database 134 stores behavior information indicating the behavior of the users 3.
- The inference system 11 infers behavior information from the environment information, motion information, human-object interaction information, and person-to-person interaction information, and accumulates it.
- The unique information database 137 stores unique information indicating the attributes of each user 3.
- The unique information includes the physical characteristics, gender, and similar attributes of the user 3, as well as characteristics inferred by the inference system 11.
- For example, the inference system 11 infers the conversation tendencies of a user 3 from the speech data of conversations between the dialogue robot 2 and the user 3 acquired by the sensors 12, and accumulates the inference result as unique information. (A schematic sketch of these record types follows.)
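- The structure of the sub-databases 131-137 can be made concrete with the hedged sketch below; all record and field names are assumptions introduced for illustration, not schemas given in the patent.

```python
from dataclasses import dataclass

@dataclass
class SensorRecord:          # distributed sensor information database 131
    time: str
    sensor_id: str
    payload: dict            # image, audio, pressure, or ID-tag data

@dataclass
class EnvironmentRecord:     # distributed environment information database 132
    time: str
    object_id: str
    position: tuple
    temperature: float
    humidity: float

@dataclass
class MotionRecord:          # distributed motion information database 133
    time: str
    user_id: str
    position: tuple
    posture: str             # e.g. "sitting", "standing"

@dataclass
class BehaviorRecord:        # distributed behavior information database 134
    time: str
    user_id: str
    behavior: str            # e.g. "watching TV", "cleaning up"

@dataclass
class InteractionRecord:     # databases 135 (human-object) and 136 (person-person)
    time: str
    subject_id: str
    target_id: str           # object ID or another user's ID
    kind: str                # "human-object" or "person-person"

@dataclass
class UniqueRecord:          # unique information database 137
    user_id: str
    attribute: str           # e.g. "favorite_team"
    value: str               # e.g. "Hanshin"
```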
- The event detection device 15 is a processing device that notifies the inference system 11 when the information in the distributed environment behavior database 13 is updated or when a service execution request is received from the dialogue robot 2.
- The appliances 4 are home appliances with data transmission/reception functions that execute predetermined services under the control of the inference system 11 or by the user's own operation.
- the service history database 17 is a database that stores history information of services executed by the inference system 11.
- the inference system 11 also refers to information in the service history database 17 in inference processing.
- the distributed environment behavior database 13, the service history database 17, the situation information acquisition unit 25, and the situation storage unit 26 can be collectively configured as a distributed environment behavior database 13-1.
- The inference system 11, the distributed environment behavior database 13, the event detection device 15, and the service history database 17 of the robot system 1 can be implemented using known processing means or devices.
- The dialogue robot 2 consists of a voice recognition unit 21, a voice synthesis unit 22, a dialogue control unit 23, a dialogue strategy database 24, a situation information acquisition unit 25, a situation storage unit 26, a knowledge information acquisition unit 27, a knowledge database 28, an execution request unit 29, an inter-robot communication unit 210, and the like.
- The voice recognition unit 21 is processing means that inputs the voice data uttered by the user 3, recognizes its content, and converts it into user utterance data (utterance sentence data).
- The voice synthesis unit 22 is processing means that converts the robot utterance data generated by the dialogue control unit 23 (response sentence data for the user's conversation, or utterance sentence data for questioning the user) into voice data and utters it.
- The dialogue control unit 23 is processing means that analyzes the environment and the situation of the user 3 from the information stored in the situation storage unit 26, selects a service to be provided to the user 3 based on the analyzed situation, identifies a concept related to the user's situation based on the association information in the dialogue strategy database 24, and generates robot utterance data using a linguistic expression indicating the concept. It then infers the context of the user utterance data (response sentence data) uttered in response to the generated robot utterance data, and determines, by reference to the association information in the dialogue strategy database 24, whether the context of the user utterance data is conceptually related to the context of the robot utterance data.
- When the dialogue control unit 23 determines that the context of the user utterance data is related to the context of the robot utterance data, it extracts knowledge information on the selected service from the knowledge database 28 and generates robot utterance data using the extracted knowledge information.
- The dialogue strategy database 24 is a database storing the association information that the dialogue robot 2 uses to infer the dialogue with the user 3 and to generate, by "associative recall", responses for drawing the user 3 into its own dialogue.
- Association information describes the degree of association between concepts, which are themes extracted from the context of the dialogue.
- The association information uses measures indicating the degree of synonymy, similarity, or co-occurrence between concepts.
- The association information is defined in consideration of the affordance set for the dialogue robot 2. For example, when the affordance of a "three-year-old child" is set for the dialogue robot 2, the concepts and association information are defined based on a conceptual model corresponding to the inference and association abilities of a three-year-old child. (A hedged sketch follows.)
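- A minimal sketch of such association information, assuming it is held as a weighted graph of concepts; the concepts, weights, and threshold below are illustrative assumptions, not values from the patent.

```python
# Degree of association between concepts (synonymy / similarity / co-occurrence),
# as a nested dict: concept -> {related concept: degree}.
ASSOCIATION = {
    "baseball": {"professional baseball": 0.9, "baseball broadcast": 0.8},
    "professional baseball": {"baseball": 0.9, "Hanshin": 0.7},
}

def associated_concepts(concept, threshold=0.5):
    """Return the concepts related to `concept` with at least the given degree."""
    return [c for c, degree in ASSOCIATION.get(concept, {}).items()
            if degree >= threshold]

def is_conceptually_related(user_concept, robot_concept):
    """Is the user's context within the associative range of the robot's utterance?"""
    return (user_concept == robot_concept
            or user_concept in associated_concepts(robot_concept))
```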
- The situation information acquisition unit 25 is a processing unit that acquires information such as environment information, motion information, behavior information, and unique information from the distributed environment behavior database 13 and stores it in the situation storage unit 26.
- The knowledge information acquisition unit 27 is a processing unit that acquires knowledge information on the selected service from an information server 9 on the network 8 using a predetermined communication protocol, for example TCP/IP, and stores it in the knowledge database 28.
- The knowledge database 28 is a database that stores knowledge information on the appliances 4 or the services executed on them, for example the manual information of the appliances. (A hedged sketch of the acquisition step follows.)
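- The acquisition step can be sketched as follows, assuming the knowledge database holds the server URL and the server returns JSON; the URL, key names, and response shape are hypothetical, not the patent's protocol.

```python
import json
import urllib.request

# Hypothetical knowledge database 28: maps knowledge keys to server URLs / cached data.
KNOWLEDGE_DB = {
    "epg_url": "http://example.com/epg.json",   # assumed information server 9 URL
}

def acquire_knowledge(key="epg_url", timeout=5.0):
    """Fetch knowledge information for the selected service over the network
    and cache it in the knowledge database 28."""
    with urllib.request.urlopen(KNOWLEDGE_DB[key], timeout=timeout) as resp:
        info = json.load(resp)
    KNOWLEDGE_DB["epg"] = info   # store the acquired knowledge information
    return info
```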
- The execution request unit 29 is processing means that transmits the service execution request selected by the dialogue control unit 23 to the robot system 1. The execution request unit 29 may also transmit a service execution request directly to the appliance 4.
- The inter-robot communication unit 210 is processing means that, as part of the service selected by the dialogue control unit 23, transmits and receives information on the situation of the user 3 and the selected service, according to a predetermined communication protocol, to and from another dialogue robot 2' cooperating with a robot system 1' other than the robot system 1. (A hedged sketch follows.)
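- One way to picture this exchange is the sketch below, assuming a simple JSON message over TCP/IP; the message schema, host, and port are assumptions, since the patent only specifies "a predetermined communication protocol".

```python
import json
import socket

def send_service_request(host, port, user_id, situation, service):
    """Send the user's situation and the selected service to another
    dialogue robot's inter-robot communication unit (hypothetical format)."""
    message = json.dumps({
        "type": "service_request",
        "user": user_id,          # e.g. "3x"
        "situation": situation,   # e.g. "needs advice from mother"
        "service": service,       # e.g. "connect videophone call"
    }).encode("utf-8")
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall(message)
```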
- The voice recognition unit 21, voice synthesis unit 22, and dialogue control unit 23 of the dialogue robot 2 can be realized with an anthropomorphic spoken dialogue agent toolkit (Galatea Toolkit) (see, for example, http://hil.t.u-tokyo.ac.jp/~galatea/galatea-jp.html, and Prendinger, Helmut; Ishizuka, Mitsuru (Eds.), pp. 187-213, 2004, ISBN 3-540-00867-5).
- FIG. 4 is a diagram showing a flow of a dialogue robot process when a service is selected from a user's conversation.
- The dialogue robot 2 performs the following processing when it detects conversation between users 3 and proposes an appropriate service.
- Step S1 Voice data recognition processing
- The voice recognition unit 21 of the dialogue robot 2 detects conversation between the users 3, inputs the users' utterances, recognizes the voice data, and converts it into user utterance data.
- Step S2 Situation information acquisition processing
- In parallel, the situation information acquisition unit 25 acquires predetermined situation information from the distributed environment behavior database 13 and stores it in the situation storage unit 26.
- Step S3 Dialogue and situation inference processing
- The dialogue control unit 23 analyzes the user utterance data (the users' conversation) to infer its context. It then infers the user's situation from that context and the situation information, and selects the optimal service from the executable services. Furthermore, by referring to the association information in the dialogue strategy database 24, it extracts a concept associated with the context of the user utterance data and creates robot utterance data (a response of the dialogue robot) using a linguistic expression indicating the concept.
- Step S4 Voice synthesis processing
- The voice synthesis unit 22 synthesizes the robot utterance data into voice and responds to the user 3.
- Step S5 Voice data recognition processing
- When the voice recognition unit 21 receives the voice data of the user 3 responding to the utterance of step S4, it recognizes the voice data and converts it into new user utterance data (the user's new utterance).
- Step S6 Dialogue pull-in determination processing
- The dialogue control unit 23 infers the context of the new user utterance data and determines whether the user 3 has been drawn into the dialogue. In this determination, if the context of the new user utterance data is within the associative range of the extracted concept, or if the reply to the robot utterance data (response) is consent, the pull-in is judged successful. If the service to execute has been identified, the process proceeds to step S7; if not, response sentence data is created (step S6-2) and the process returns to step S4.
- If the context of the new user utterance data is outside the associative range of the extracted concept, or if the reply to the robot utterance data is not consent, the pull-in is judged to have failed. In this case, another concept is extracted by reference to the association information, the topic is corrected by the new association (step S6-1), response sentence data is generated using a linguistic expression indicating the corrected topic (step S6-2), and the process returns to step S4.
- The dialogue control unit 23 also judges that the pull-in has failed if no user utterance data is received from the voice recognition unit 21 within a predetermined time. (A hedged sketch of this determination follows.)
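- The pull-in determination of step S6 can be sketched as below, under the simplifying assumptions that contexts are reduced to concept labels and that consent is detected from a small word list; both are illustrative, not the patent's inference method.

```python
CONSENT_WORDS = {"yes", "yeah", "sure"}   # assumed consent vocabulary

def pull_in_succeeded(user_concept, robot_concept, user_text, association):
    """Pull-in succeeds when the user's context falls within the associative
    range of the robot's utterance concept, or the reply expresses consent."""
    related = association.get(robot_concept, {})
    return (user_concept == robot_concept
            or user_concept in related
            or user_text.strip().lower().rstrip(".!") in CONSENT_WORDS)

def correct_topic(robot_concept, association):
    """On failure, pick another associated concept as the corrected topic
    (step S6-1); here simply the most strongly related one."""
    related = association.get(robot_concept, {})
    return max(related, key=related.get) if related else None
```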
- Step S7 Service execution processing
- The execution request unit 29 sends an execution request for the selected service to the robot system 1, or directly to the corresponding appliance 4.
- FIG. 5 is a diagram showing a processing flow of the interactive robot when selecting a service from the situation of the user.
- the interactive robot 2 performs the following processing when selecting an appropriate service based on the status of the user 3 obtained from the robot system 1.
- Step S11 The situation information acquisition unit 25 of the dialogue robot 2 acquires predetermined situation information from the distributed environment behavior database 13 and stores it in the situation storage unit 26.
- Step S12 The dialogue control unit 23 analyzes the situation of the user 3 from the situation information in the situation storage unit 26 and selects the optimal service from the executable services. Furthermore, by referring to the association information in the dialogue strategy database 24, it extracts a concept associated with the situation of the user 3 and creates robot utterance data (a question from the dialogue robot) using a linguistic expression indicating the concept.
- Step S13 The voice synthesis unit 22 synthesizes the robot utterance data into voice and utters it to the user 3.
- Step S14 The voice recognition unit 21 inputs the voice data of the user 3's response to the robot utterance data, recognizes it, and converts it into user utterance data (the user's response).
- Step S15 The dialogue control unit 23 infers the context of the user's response and determines whether the user 3 has been drawn into the dialogue. If the pull-in is judged successful and the service to execute has been identified, the process proceeds to step S16; if the service has not been identified, response sentence data is created (step S15-2) and the process returns to step S13. If the pull-in fails, the topic is corrected with a new association (step S15-1), response sentence data is created (step S15-2), and the process returns to step S13.
- Step S16 When the pull-in succeeds, the execution request unit 29 sends a request to execute the selected service to the robot system 1 or the corresponding appliance 4. (A compact sketch of this flow follows.)
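- The steps S11 to S16 can be summarized by the following hypothetical driver; every `robot.*` method is an assumed stub standing in for the components described above, not an API defined by the patent.

```python
def situation_driven_dialogue(robot, max_turns=3):
    """Fig. 5 flow: situation acquisition (S11), service selection and
    question generation (S12), utterance (S13), recognition (S14), pull-in
    determination with topic correction (S15), execution request (S16)."""
    situation = robot.acquire_situation()                  # S11
    service, concept = robot.select_service(situation)     # S12
    utterance = robot.generate_question(concept)
    for _ in range(max_turns):
        robot.speak(utterance)                             # S13
        reply = robot.listen()                             # S14
        if robot.pulled_in(reply, concept):                # S15
            robot.request_execution(service)               # S16
            return True
        concept = robot.correct_topic(concept)             # S15-1
        utterance = robot.generate_question(concept)       # S15-2
    return False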
- FIG. 6 is a diagram showing a flow of an interactive robot process when providing knowledge information related to a service from a user's conversation.
- Step S21 The dialogue robot 2 detects conversation between the users 3 with the voice recognition unit 21, inputs the voice data of the users' utterances, recognizes it, and converts it into user utterance data (the users' conversation).
- Step S22 In parallel, the situation information acquisition unit 25 acquires predetermined situation information from the distributed environment behavior database 13 and stores it in the situation storage unit 26.
- Step S23 The dialogue control unit 23 analyzes the situation of the user 3 and the context of the users' conversation from the situation information and the user utterance data. Based on that context and situation, it selects the optimal service from the executable services. Furthermore, by referring to the association information in the dialogue strategy database 24, it extracts a concept associated with the context of the conversation and creates robot utterance data (a response of the robot) using a linguistic expression indicating the concept.
- Step S24 The voice synthesis unit 22 synthesizes the robot utterance data into voice and responds to the user 3.
- Step S25 The voice recognition unit 21 inputs the voice data of the user 3 responding to the utterance of step S24, recognizes it, and converts it into new user utterance data (the user's response).
- Step S26 The dialogue control unit 23 infers the context of the user's new response and determines whether the user 3 has been drawn into the dialogue. If the pull-in is judged successful and the service to execute has been identified, the process proceeds to step S27; if the service has not been identified, response sentence data is created (step S26-2) and the process returns to step S24. If the pull-in fails, the topic is corrected with a new association (step S26-1), response sentence data is created (step S26-2), and the process returns to step S24.
- Step S27 When the pull-in succeeds, the dialogue control unit 23 extracts knowledge information related to the selected service from the knowledge database 28.
- Step S28 The dialogue control unit 23 then generates robot utterance data (providing the knowledge information) using the extracted knowledge information.
- Step S29 The voice synthesis unit 22 synthesizes the robot utterance data into voice and utters it. The voice synthesis unit 22 may also convert the extracted knowledge information directly into voice data and utter it.
- The processing examples described above may also be combined. Further, in each processing example, before the execution request unit 29 transmits a service execution request, the dialogue control unit 23 may generate robot utterance data asking whether the service should be executed and inquire of the user.
- In addition, the inter-robot communication unit 210 transmits and receives information about the user's situation and the selected service to and from other dialogue robots.
- As a specific example, suppose the users (father and daughter) in the living room are having a conversation. The voice recognition unit 21 inputs the voice, recognizes the voice data, and converts it into utterance sentence data.
- In parallel, the situation information acquisition unit 25 acquires predetermined information such as unique information, environment information, and behavior information from the distributed environment behavior database 13 and stores it in the situation storage unit 26. From this information, the dialogue control unit 23 can understand the situation: user 3a (father) and user 3b (daughter) are relaxing in the living room (room1), and user 3c (mother) is cleaning up in the kitchen (room2).
- The dialogue control unit 23 infers the context of the recognized utterance sentence data. From the pair <Hanshin, win> contained in the utterance, it infers that the concept of the subject of the conversation is <baseball>. It then identifies, by reference to the association information in the dialogue strategy database 24, the highly related concept <professional baseball>. Based on the context and situation of the users' conversation, it selects a service: <extract baseball broadcast programs from the electronic TV program guide, and turn on the television if one is being broadcast>.
- The robot utterance data "Talking about baseball?" is then generated. (A sketch of the keyword-to-concept step follows.)
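- A toy version of this keyword-to-concept inference and service selection; the keyword table and service rule are assumptions chosen to mirror the example, not the patent's actual inference system.

```python
# Hypothetical mapping from keyword pairs spotted in an utterance to concepts.
KEYWORD_TO_CONCEPT = {
    ("hanshin", "win"): "baseball",
}

def infer_topic(utterance):
    """Infer the conversation's subject concept from spotted keywords."""
    words = set(utterance.lower().replace(".", "").split())
    for keywords, concept in KEYWORD_TO_CONCEPT.items():
        if all(k in words for k in keywords):
            return concept
    return None

def select_service(concept):
    """Select an executable service for the inferred concept."""
    if concept == "baseball":
        return "extract baseball broadcasts from the EPG and turn on the TV"
    return None

print(select_service(infer_topic("I think Hanshin will win today")))
```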
- The voice synthesis unit 22 synthesizes the data into voice and utters it.
- The dialogue robot 2 asks "Talking about baseball?" and joins the conversation between father and daughter.
- The voice recognition unit 21 continuously detects whether a user speaks in response to the robot utterance data. If a user speaks, the voice data is recognized and converted into new user utterance data. For example, if the new utterance is "Professional baseball.", the dialogue control unit 23 judges that the context is within the associative range and that the user 3 has been successfully drawn into the dialogue. If the new utterance is "Yes.", the context is an affirmative reply, and the pull-in is likewise judged successful.
- When the pull-in is judged successful, the dialogue control unit 23 executes the selected service.
- The electronic TV program guide is looked up in the knowledge database 28.
- The knowledge information acquisition unit 27 obtains the URL of the information server 9 providing the electronic TV program guide from the knowledge database 28, acquires the guide from the information server 9 through the network 8, and stores it in the knowledge database 28.
- The dialogue control unit 23 extracts the information on baseball broadcast programs from the TV program guide stored in the knowledge database 28 and finds, for example, that a broadcast of a "Hanshin vs. Chunichi" game is currently on air.
- From this program information (knowledge information), the dialogue control unit 23 generates new robot utterance data: "Hanshin is playing Chunichi right now."
- The dialogue control unit 23 also extracts, from the information stored in the situation storage unit 26, the unique information that user 3c (mother) is a Hanshin fan. From this unique information (situation information), it generates new robot utterance data: "Mom likes them too."
- the execution request unit 29 transmits an execution request for the selected service to the robot system 1.
- the event detection device 15 of the robot system 1 detects a service execution request from the interactive robot 2 and notifies the inference system 11.
- Next, suppose that the user 3c (mother) is using the dishwasher, one of the appliances 4, and that the dishwasher starts behaving abnormally.
- The situation information acquisition unit 25 of the dialogue robot 2 in the kitchen (room2) acquires predetermined information from the distributed environment behavior database 13 and stores it in the situation storage unit 26.
- The dialogue control unit 23 analyzes the situation from the stored information and selects the service <examine and report the cause of the failure>.
- The robot utterance data "Is the dishwasher acting strange?" is generated using a linguistic expression associated with the situation, and the voice synthesis unit 22 puts the question to the user by speech synthesis.
- The voice recognition unit 21 of the dialogue robot 2 inputs the mother's response to the question, for example "Why?", and the dialogue control unit 23 judges from the context of the response that the pull-in has succeeded and executes the selected service. That is, the dialogue control unit 23 acquires the state of the dishwasher from the environment information in the situation storage unit 26.
- The knowledge information acquisition unit 27 obtains the URL of the information server 9 of the manufacturer of the relevant appliance (dishwasher) 4 from the knowledge database 28 and, using the state of the dishwasher as a key, obtains information about the cause of the failure from the manufacturer's information server 9.
- Based on this information (knowledge information) about the cause of the failure, the dialogue control unit 23 creates utterance data such as "I found the cause of the failure. ..." and utters it through the voice synthesis unit 22.
- As a further example, the dialogue robot 2 infers from conversation with the user 3x (a married daughter living away from home) that the daughter needs the advice of the user 3y (her mother), and provides the service <connect the mother's videophone call to the daughter>. When the dialogue control unit 23 of the dialogue robot 2 determines that the pull-in has succeeded, the inter-robot communication unit 210 sends a service execution request to the inter-robot communication unit 210' of the dialogue robot 2'. The dialogue robot 2' utters the robot utterance data "Ms. __ is waiting to hear from you." to the user 3y (mother), and sends the telephone number of the user 3x and a call instruction to the appliance (videophone) 4.
- As described above, the dialogue robot 2 joins conversations between users based on the "associative recall" behavior hypothesis, advances the dialogue with the user, and can provide, according to the user's situation, services executable by the robot system 1 and related knowledge information that the user had not noticed.
- The dialogue robot 2 of the present invention can also be realized as a program that is read into a computer, installed, and executed.
- A program realizing the present invention can be stored in a computer-readable recording medium and provided on such a medium, or provided by transmission and reception over various communication networks via a communication interface.

Industrial applicability

- The present invention is suitable for a dialogue system that realizes a user interface with advanced situational ability in a life support robot system that autonomously supports human life on the premise of a ubiquitous environment.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Theoretical Computer Science (AREA)
- Tourism & Hospitality (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Marketing (AREA)
- Human Resources & Organizations (AREA)
- Economics (AREA)
- Primary Health Care (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Manipulator (AREA)
- Machine Translation (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006510592A JP4370410B2 (ja) | 2004-03-08 | 2004-03-08 | 対話システム、対話ロボット、プログラム及び記録媒体 |
PCT/JP2004/002942 WO2005086051A1 (ja) | 2004-03-08 | 2004-03-08 | 対話システム、対話ロボット、プログラム及び記録媒体 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2004/002942 WO2005086051A1 (ja) | 2004-03-08 | 2004-03-08 | 対話システム、対話ロボット、プログラム及び記録媒体 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005086051A1 true WO2005086051A1 (ja) | 2005-09-15 |
Family
ID=34917844
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/002942 WO2005086051A1 (ja) | 2004-03-08 | 2004-03-08 | 対話システム、対話ロボット、プログラム及び記録媒体 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP4370410B2 (ja) |
WO (1) | WO2005086051A1 (ja) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007323233A (ja) * | 2006-05-31 | 2007-12-13 | National Institute Of Information & Communication Technology | 対話ロボットを用いた理由説明サービス処理方法およびその装置,およびそのプログラム |
JP2009139250A (ja) * | 2007-12-07 | 2009-06-25 | Honda Motor Co Ltd | 検知対象検知サーバ |
JP2009269162A (ja) * | 2008-04-09 | 2009-11-19 | Yaskawa Electric Corp | ロボットの制御プログラム構築方法およびロボットシステム |
US9031692B2 (en) | 2010-08-24 | 2015-05-12 | Shenzhen Institutes of Advanced Technology Chinese Academy of Science | Cloud robot system and method of integrating the same |
JP2015152948A (ja) * | 2014-02-10 | 2015-08-24 | 日本電信電話株式会社 | ライフログ記録システム及びそのプログラム |
CN104898589A (zh) * | 2015-03-26 | 2015-09-09 | 天脉聚源(北京)传媒科技有限公司 | 一种用于智能管家机器人的智能应答方法和装置 |
JP2016045583A (ja) * | 2014-08-20 | 2016-04-04 | ヤフー株式会社 | 応答生成装置、応答生成方法及び応答生成プログラム |
JP2017167797A (ja) * | 2016-03-16 | 2017-09-21 | 富士ゼロックス株式会社 | ロボット制御システム |
US20180025727A1 (en) * | 2016-07-19 | 2018-01-25 | Toyota Jidosha Kabushiki Kaisha | Voice interactive device and utterance control method |
JP2018077553A (ja) * | 2016-11-07 | 2018-05-17 | Necプラットフォームズ株式会社 | 応対支援装置、方法、及びプログラム |
JP2018097185A (ja) * | 2016-12-14 | 2018-06-21 | パナソニックIpマネジメント株式会社 | 音声対話装置、音声対話方法、音声対話プログラム及びロボット |
WO2018143460A1 (ja) * | 2017-02-06 | 2018-08-09 | 川崎重工業株式会社 | ロボットシステム及びロボット対話方法 |
WO2019064650A1 (ja) * | 2017-09-28 | 2019-04-04 | 三菱自動車工業株式会社 | 車両用情報伝達支援システム |
JP2019061325A (ja) * | 2017-09-25 | 2019-04-18 | 富士ゼロックス株式会社 | 自走式サービス提供装置及びサービス提供システム |
WO2022049710A1 (ja) * | 2020-09-03 | 2022-03-10 | 日本電気株式会社 | サービス提供装置、サービス提供システム、サービス提供方法及び非一時的なコンピュータ可読媒体 |
US11617957B2 (en) | 2018-12-06 | 2023-04-04 | Samsung Electronics Co., Ltd. | Electronic device for providing interactive game and operating method therefor |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190057687A (ko) * | 2017-11-20 | 2019-05-29 | 삼성전자주식회사 | 챗봇 변경을 위한 위한 전자 장치 및 이의 제어 방법 |
CN115461749A (zh) * | 2020-02-29 | 2022-12-09 | 具象有限公司 | 用于机器人计算设备/数字伴侣与用户之间的短期和长期对话管理的系统和方法 |
US20230298568A1 (en) * | 2022-03-15 | 2023-09-21 | Drift.com, Inc. | Authoring content for a conversational bot |
- 2004-03-08 WO PCT/JP2004/002942 patent/WO2005086051A1/ja active Application Filing
- 2004-03-08 JP JP2006510592A patent/JP4370410B2/ja not_active Expired - Lifetime
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002199470A (ja) * | 2000-12-25 | 2002-07-12 | Yoichi Sakai | 対話型仮想ロボットシステムによるホームオートメーション |
JP2003036091A (ja) * | 2001-07-23 | 2003-02-07 | Matsushita Electric Ind Co Ltd | 電化情報機器 |
JP2004005481A (ja) * | 2002-03-15 | 2004-01-08 | Samsung Electronics Co Ltd | ホームネットワークに連結された電化製品を制御する方法及び装置 |
Non-Patent Citations (1)
Title |
---|
NAGAO A.: "Hito to Joho o Tsunagu Interface Robot", NINGEN SEIKATSU KOGAKU, vol. 3, no. 1, 15 January 2002 (2002-01-15), pages 22 - 26, XP002992684 * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007323233A (ja) * | 2006-05-31 | 2007-12-13 | National Institute Of Information & Communication Technology | 対話ロボットを用いた理由説明サービス処理方法およびその装置,およびそのプログラム |
JP2009139250A (ja) * | 2007-12-07 | 2009-06-25 | Honda Motor Co Ltd | 検知対象検知サーバ |
JP2009269162A (ja) * | 2008-04-09 | 2009-11-19 | Yaskawa Electric Corp | ロボットの制御プログラム構築方法およびロボットシステム |
US9031692B2 (en) | 2010-08-24 | 2015-05-12 | Shenzhen Institutes of Advanced Technology Chinese Academy of Science | Cloud robot system and method of integrating the same |
JP2015152948A (ja) * | 2014-02-10 | 2015-08-24 | 日本電信電話株式会社 | ライフログ記録システム及びそのプログラム |
JP2016045583A (ja) * | 2014-08-20 | 2016-04-04 | ヤフー株式会社 | 応答生成装置、応答生成方法及び応答生成プログラム |
CN104898589A (zh) * | 2015-03-26 | 2015-09-09 | 天脉聚源(北京)传媒科技有限公司 | 一种用于智能管家机器人的智能应答方法和装置 |
US10513038B2 (en) | 2016-03-16 | 2019-12-24 | Fuji Xerox Co., Ltd. | Robot control system |
CN107199571A (zh) * | 2016-03-16 | 2017-09-26 | 富士施乐株式会社 | 机器人控制系统 |
CN107199571B (zh) * | 2016-03-16 | 2021-11-05 | 富士胶片商业创新有限公司 | 机器人控制系统 |
JP2017167797A (ja) * | 2016-03-16 | 2017-09-21 | 富士ゼロックス株式会社 | ロボット制御システム |
US10304452B2 (en) | 2016-07-19 | 2019-05-28 | Toyota Jidosha Kabushiki Kaisha | Voice interactive device and utterance control method |
US20180025727A1 (en) * | 2016-07-19 | 2018-01-25 | Toyota Jidosha Kabushiki Kaisha | Voice interactive device and utterance control method |
JP2018013545A (ja) * | 2016-07-19 | 2018-01-25 | トヨタ自動車株式会社 | 音声対話装置および発話制御方法 |
JP2018077553A (ja) * | 2016-11-07 | 2018-05-17 | Necプラットフォームズ株式会社 | 応対支援装置、方法、及びプログラム |
JP2018097185A (ja) * | 2016-12-14 | 2018-06-21 | パナソニックIpマネジメント株式会社 | 音声対話装置、音声対話方法、音声対話プログラム及びロボット |
JP2018126810A (ja) * | 2017-02-06 | 2018-08-16 | 川崎重工業株式会社 | ロボットシステム及びロボット対話方法 |
DE112018000702B4 (de) * | 2017-02-06 | 2021-01-14 | Kawasaki Jukogyo Kabushiki Kaisha | Robotersystem und roboterdialogverfahren |
WO2018143460A1 (ja) * | 2017-02-06 | 2018-08-09 | 川崎重工業株式会社 | ロボットシステム及びロボット対話方法 |
JP2019061325A (ja) * | 2017-09-25 | 2019-04-18 | 富士ゼロックス株式会社 | 自走式サービス提供装置及びサービス提供システム |
JP7024282B2 (ja) | 2017-09-25 | 2022-02-24 | 富士フイルムビジネスイノベーション株式会社 | 自走式サービス提供装置及びサービス提供システム |
WO2019064650A1 (ja) * | 2017-09-28 | 2019-04-04 | 三菱自動車工業株式会社 | 車両用情報伝達支援システム |
US11617957B2 (en) | 2018-12-06 | 2023-04-04 | Samsung Electronics Co., Ltd. | Electronic device for providing interactive game and operating method therefor |
WO2022049710A1 (ja) * | 2020-09-03 | 2022-03-10 | 日本電気株式会社 | サービス提供装置、サービス提供システム、サービス提供方法及び非一時的なコンピュータ可読媒体 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2005086051A1 (ja) | 2008-01-24 |
JP4370410B2 (ja) | 2009-11-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005086051A1 (ja) | 対話システム、対話ロボット、プログラム及び記録媒体 | |
Arisio et al. | Deliverable 1.1 User Study, analysis of requirements and definition of the application task | |
US20180293221A1 (en) | Speech parsing with intelligent assistant | |
Vacher et al. | Evaluation of a context-aware voice interface for ambient assisted living: qualitative user study vs. quantitative system evaluation | |
US20160372138A1 (en) | Interactive home-appliance system, server device, interactive home appliance, method for allowing home-appliance system to interact, and nonvolatile computer-readable data recording medium encoded with program for allowing computer to implement the method | |
CN111869185B (zh) | 生成基于IoT的通知并提供命令以致使客户端设备的自动助手客户端自动呈现基于IoT的通知 | |
KR102599607B1 (ko) | 자동화된 어시스턴트를 호출하기 위한 다이내믹 및/또는 컨텍스트 특정 핫워드 | |
Hamill et al. | Development of an automated speech recognition interface for personal emergency response systems | |
JP6291303B2 (ja) | コミュニケーション支援ロボットシステム | |
CN114127710A (zh) | 利用对话搜索历史的歧义解决方案 | |
US20180277110A1 (en) | Interactive System, Terminal, Method of Controlling Dialog, and Program for Causing Computer to Function as Interactive System | |
Woo et al. | Robot partner system for elderly people care by using sensor network | |
JP6559079B2 (ja) | 対話型家電システム、および発話者との対話に基づいてメッセージを出力するためにコンピュータが実行する方法 | |
CN113287175A (zh) | 互动式健康状态评估方法及其系统 | |
CN114127694A (zh) | 用于会话系统的错误恢复 | |
JP2007011674A (ja) | 対話ロボットを用いた理由説明サービス処理方法およびその装置,およびそのプログラム | |
Chen et al. | Human-robot interaction based on cloud computing infrastructure for senior companion | |
JP2007323233A (ja) | 対話ロボットを用いた理由説明サービス処理方法およびその装置,およびそのプログラム | |
Akash et al. | Desktop based smart voice assistant using python language integrated with arduino | |
KR20210027991A (ko) | 전자장치 및 그 제어방법 | |
Goetze et al. | Multimodal human-machine interaction for service robots in home-care environments | |
KR20230156929A (ko) | 적응적 사용자 상호 작용을 갖는 로봇형 컴퓨팅 디바이스 | |
JPWO2018061346A1 (ja) | 情報処理装置 | |
Ueda et al. | Human-robot interaction in the home ubiquitous network environment | |
US10923140B2 (en) | Device, robot, method, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006510592 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |