WO2018000267A1 - Method, system and robot for generating robot interaction content - Google Patents

Method, system and robot for generating robot interaction content

Info

Publication number
WO2018000267A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
signal
generating
user
parameters
Prior art date
Application number
PCT/CN2016/087752
Other languages
English (en)
Chinese (zh)
Inventor
杨新宇
王昊奋
邱楠
Original Assignee
深圳狗尾草智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳狗尾草智能科技有限公司
Priority to PCT/CN2016/087752 (WO2018000267A1)
Priority to CN201680001745.7A (CN106462255A)
Publication of WO2018000267A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • The invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interactive content.
  • In the process of human interaction, expressions are made and reasonable expression feedback is given. A person moves through life scenes on a time axis, such as eating, sleeping and exercising, and changes in the values of these scenes affect the feedback of human expressions.
  • At present, robots are mainly made to produce expression feedback through pre-designed methods and deep-learning-trained corpora. This kind of feedback through pre-designed programs and corpus training has the following disadvantages: the output expression depends on the human's textual input, so that, much like a question-and-answer machine, different user words trigger different expressions; and the robot actually outputs expressions according to a human pre-designed interaction mode, which makes the robot appear machine-like rather than human-like.
  • The object of the present invention is to provide a method, a system and a robot for generating robot interactive content, which can improve the anthropomorphism of the robot's interactive content generation based on multimodal input and actively interacting variable parameters, improve the human-computer interaction experience, and improve intelligence.
  • A method for generating robot interactive content comprising: acquiring a multimodal signal; determining a user intent according to the multimodal signal; and generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters.
  • Preferably, the method for generating the robot variable parameters includes: fitting the robot's self-cognition parameters to the parameters of the scenes in the variable parameters to generate the robot variable parameters.
  • Preferably, the variable parameters include at least the user's original behavior, the behavior it changes into, and parameter values representing the original behavior and the change.
  • Preferably, the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, further comprises: generating the robot interaction content according to the user intent and the multimodal signal, in combination with the current robot variable parameters and the fitting curve of the parameter change probability.
  • Preferably, the method for generating the fitting curve of the parameter change probability comprises: using a probability algorithm to make a network-based probability estimate of the robot's parameters, and calculating, for the robot on the life time axis, the probability of each parameter change after a scene parameter on the time axis changes; these probabilities form the fitting curve of the parameter change probability.
  • Preferably, the multimodal signal includes at least an image signal, and the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, specifically includes: generating the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters.
  • Preferably, the multimodal signal includes at least a voice signal, and the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, specifically includes: generating the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
  • Preferably, the multimodal signal includes at least a gesture signal, and the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, specifically includes: generating the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
  • The invention also relates to a system for generating robot interactive content, comprising: an acquisition module, configured to acquire a multimodal signal; an intent identification module, configured to determine a user intent according to the multimodal signal; and a content generating module, configured to generate the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters.
  • Preferably, the system comprises a time-axis-based artificial intelligence cloud processing module configured to: fit the robot's self-cognition parameters to the parameters of the scenes in the variable parameters to generate the robot variable parameters. The variable parameters include at least the user's original behavior, the behavior it changes into, and parameter values representing the original behavior and the change.
  • Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: generate the robot interaction content according to the user intent and the multimodal signal, in combination with the current robot variable parameters and the fitting curve of the parameter change probability.
  • Preferably, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change of the robot on the life time axis after a time-axis scene parameter changes, forming the fitting curve.
  • Preferably, the multimodal signal includes at least an image signal, and the content generating module is specifically configured to: generate the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters.
  • Preferably, the multimodal signal includes at least a voice signal, and the content generating module is specifically configured to: generate the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
  • Preferably, the multimodal signal includes at least a gesture signal, and the content generating module is specifically configured to: generate the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
  • The invention also discloses a robot comprising a system for generating robot interactive content as described above.
  • The method for generating robot interactive content includes: acquiring a multimodal signal; determining a user intent according to the multimodal signal; and generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters. In this way, multimodal signals such as image signals and voice signals can be combined with the robot variable parameters to generate robot interaction content more accurately, and thus to interact and communicate with people more accurately and anthropomorphically.
  • The variable parameters are parameters that the user actively controls during the human-computer interaction process, for example: controlling the robot to move, controlling the robot to communicate, and so on.
  • The invention adds the robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content according to the previous variable parameters. For example, when the variable parameter records that the robot has already been moving for one hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean. In this way, the robot is more anthropomorphic when interacting with humans and has a human lifestyle on the life time axis. This method can enhance the anthropomorphism of the robot's interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
  • FIG. 1 is a flowchart of a method for generating robot interactive content according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of a system for generating robot interactive content according to Embodiment 2 of the present invention.
  • Computer devices include user devices and network devices.
  • The user equipment or client includes, but is not limited to, a computer, a smartphone, a PDA, and the like.
  • The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud based on cloud computing and composed of a large number of computers or network servers.
  • The computer device can operate alone to carry out the invention, or it can access a network and carry out the invention through interoperation with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • The terms "first," "second," and the like may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
  • the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
  • Embodiment 1 provides a method for generating robot interactive content, including: acquiring a multimodal signal; determining a user intent according to the multimodal signal; and generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters.
  • In this way, multimodal signals such as image signals and voice signals can be combined with the robot variable parameters to generate robot interaction content more accurately, and thus to interact and communicate with people more accurately and anthropomorphically.
  • The variable parameters are parameters that the user actively controls during the human-computer interaction process, for example: controlling the robot to move, controlling the robot to communicate, and so on.
  • The invention adds the robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content according to the previous variable parameters. For example, when the variable parameter records that the robot has already been moving for one hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean.
  • In this way, the robot is more anthropomorphic when interacting with humans, and the robot has a human lifestyle on the life time axis.
  • This method can enhance the anthropomorphism of the robot's interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
  • the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
  • The robot variable parameters 300 are generated and set in advance. Specifically, the robot variable parameters 300 are a collection of parameters, and these parameters are transmitted to the system for generating the interactive content.
  • The variable parameters are, specifically, sudden changes affecting the person and the machine; for example, a day on the time axis otherwise consists of eating, sleeping, interacting, running, eating and sleeping.
  • The life time axis and the variable parameters can be used to change the attributes of the robot's self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information, such as an anger value that did not exist before. Based on the life time axis and the scenes of the variable factors, scenes that previously simulated human self-cognition are automatically added to the robot's self-cognition, and the robot uses these as variable parameters.
  • For example, if the variable parameter records that the robot goes out shopping at 12 noon, then when the user interacts with the robot during this time period, the robot generates interactive content based on going out shopping at 12 noon, instead of on the previous 12-noon activity of having a meal. In the specific interaction, the robot combines the acquired multimodal signals, such as a combination of voice information and picture information, or a combination with a video signal, with the variable parameters to generate the interaction content.
  • The multimodal signal includes, for example, the user's expression and the emotion of the text, which can be obtained through voice input, video input or gesture input, alone or in combination. For example, the expression the robot perceives is happy while the text analysis is unhappy, and the user has already controlled the robot to exercise many times; the robot will then refuse to accept the instruction and interact as: I am very tired and need a rest now.
  • According to one example, the method for generating the robot variable parameters includes: fitting the robot's self-cognition parameters to the parameters of the scenes in the variable parameters to generate the robot variable parameters. Specifically, the parameters of the self-cognition are matched with the parameters of the scenes on the variable-parameter time axis to produce an anthropomorphic effect.
  • According to one example, the variable parameters include at least the user's original behavior, the behavior it changes into, and parameter values representing the original behavior and the change. The variable parameters describe the situation in which the user was originally in a planned state and a sudden change places the user in another state; the variable parameter then represents the change of behavior or state, and the user's state or behavior after the change. For example, the user was originally going running at 5 pm when something else suddenly came up, such as going to play; the change from running to playing is then a variable parameter, and the probability of such a change is also studied (a data-structure sketch of a variable parameter follows the detailed description below).
  • According to one example, the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, further comprises: generating the robot interaction content according to the user intent and the multimodal signal, in combination with the current robot variable parameters and the fitting curve of the parameter change probability.
  • In this way, the fitting curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
  • According to one example, the method for generating the fitting curve of the parameter change probability includes: using a probability algorithm to make a network-based probability estimate of the robot's parameters, and calculating, for the robot on the life time axis, the probability of each parameter change after a scene parameter on the time axis changes; these probabilities form the fitting curve of the parameter change probability. The probability algorithm can be a Bayesian probability algorithm (an illustrative sketch of such a fitting curve follows the detailed description below).
  • In this way, the robot's self-cognition is extended: the parameters of the self-cognition are matched with the parameters of the scenes on the variable-parameter time axis to produce an anthropomorphic effect. The robot will know its own geographical location and will change the way interactive content is generated according to the geographical environment in which it is located. A Bayesian probability algorithm is used to estimate the robot's parameters with a Bayesian network, and to calculate the probability of each parameter change on the life time axis after the robot's own time-axis scene parameters change; the resulting fitted curve dynamically affects the robot's self-cognition. This innovative module gives the robot itself a human lifestyle, and expressions, for example, can change according to the location scene.
  • According to one example, the multimodal signal includes at least an image signal, and the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, specifically includes: generating the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters. The image signal lets the robot grasp the user's intent; to understand that intent better, other signals, such as a voice signal or a gesture signal, are generally added, so that the robot can more accurately judge whether the user is expressing a genuine meaning or making a joke.
  • According to one example, the multimodal signal includes at least a voice signal, and the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, specifically includes: generating the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
  • According to one example, the multimodal signal includes at least a gesture signal, and the step of generating the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters, specifically includes: generating the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
  • For example, the robot has been singing continuously for a while when the user tells it by voice to continue singing. If the picture signal shows that the user is serious, the robot will reply "I am too tired, let me take a break" with a tired face. If the picture signal shows that the user is happy, the robot will reply "Master, let me take a break and then sing for you" with a happy face. Combining the voice signal and the picture signal lets the robot accurately understand the user's meaning and reply more accurately, and adding other signals, such as gesture signals and video signals, makes this more accurate still (a sketch of this example also follows the detailed description below).
  • A system for generating robot interactive content according to the present invention is disclosed, comprising:
  • an acquisition module 201, configured to acquire a multimodal signal;
  • an intent identification module 202, configured to determine a user intent according to the multimodal signal; and
  • a content generating module 203, configured to generate the robot interaction content according to the multimodal signal and the user intent, in combination with the current robot variable parameters.
  • The variable parameters are parameters that the user actively controls during the human-computer interaction process, for example: controlling the robot to move, controlling the robot to communicate, and so on.
  • The invention adds the robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content according to the previous variable parameters. For example, when the variable parameter records that the robot has already been moving for one hour and the user again sends a command such as cleaning, the robot will say that it is tired and refuse to clean.
  • the multi-modal signal is generally a combination of a plurality of signals, such as a picture signal plus a voice signal, or a picture signal plus a voice signal plus a gesture signal.
  • The multimodal information in this embodiment may be one or more of user expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, fingerprint information, and the like.
  • The variable parameters can record something the robot has done within a preset period of time, for example that the robot interacted with the user for an hour during the last time period. When the user then expresses, through the multimodal signal, the intent to continue the conversation, the robot can say that it is tired and needs a break, accompanied by tired-state content such as expressions. If the multimodal signal instead shows the user making a joke, the robot can say "don't tease me" with a happy expression.
  • According to one example, the system includes a time-axis-based artificial intelligence cloud processing module configured to: fit the robot's self-cognition parameters to the parameters of the scenes in the variable parameters to generate the robot variable parameters. The variable parameters include at least the user's original behavior, the behavior it changes into, and parameter values representing the original behavior and the change.
  • The variable parameters describe the situation in which the user was originally in a planned state and a sudden change places the user in another state; the variable parameter then represents the change of behavior or state, and the user's state or behavior after the change. For example, the user was originally going running at 5 pm when something else suddenly came up, such as going to play; the change from running to playing is then a variable parameter, and the probability of such a change is also studied.
  • According to one example, the time-axis-based artificial intelligence cloud processing module is further configured to: generate the robot interaction content according to the user intent and the multimodal signal, in combination with the current robot variable parameters and the fitting curve of the parameter change probability.
  • In this way, the fitting curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
  • According to one example, the time-axis-based artificial intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change of the robot on the life time axis after a time-axis scene parameter changes, forming the fitting curve.
  • In this way, the robot's self-cognition is extended: the parameters of the self-cognition are matched with the parameters of the scenes on the variable-parameter time axis to produce an anthropomorphic effect. The robot will know its own geographical location and will change the way interactive content is generated according to the geographical environment in which it is located. A Bayesian probability algorithm is used to estimate the robot's parameters with a Bayesian network, and to calculate the probability of each parameter change on the life time axis after the robot's own time-axis scene parameters change; the resulting fitted curve dynamically affects the robot's self-cognition. This innovative module gives the robot itself a human lifestyle, and expressions, for example, can change according to the location scene.
  • According to one example, the multimodal signal includes at least an image signal, and the content generating module is specifically configured to: generate the robot interaction content according to the image signal and the user intent, in combination with the current robot variable parameters. The image signal lets the robot grasp the user's intent; to understand that intent better, other signals, such as a voice signal or a gesture signal, are generally added, so that the robot can more accurately judge whether the user is expressing a genuine meaning or making a joke.
  • According to one example, the multimodal signal includes at least a voice signal, and the content generating module is specifically configured to: generate the robot interaction content according to the voice signal and the user intent, in combination with the current robot variable parameters.
  • According to one example, the multimodal signal includes at least a gesture signal, and the content generating module is specifically configured to: generate the robot interaction content according to the gesture signal and the user intent, in combination with the current robot variable parameters.
  • For example, the robot has been singing continuously for a while when the user tells it by voice to continue singing. If the picture signal shows that the user is serious, the robot will reply "I am too tired, let me take a break" with a tired face. If the picture signal shows that the user is happy, the robot will reply "Master, let me take a break and then sing for you" with a happy face. Combining the voice signal and the picture signal in this way lets the robot accurately understand the user's meaning and reply more accurately.
  • The invention also discloses a robot comprising a system for generating robot interactive content as described above.
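For illustration, the variable parameter described above (the user's original behavior, the behavior it changes into, and values representing the change) might be modeled as follows. This is a minimal sketch; the field names and types are assumptions added for illustration, not structures disclosed in the patent.

```python
from dataclasses import dataclass

@dataclass
class VariableParameter:
    """One variable parameter: a behavior planned on the life time axis that
    suddenly changed, plus the studied probability of that change.
    (Hypothetical structure; the patent defines only the fields' meaning.)"""
    time_of_day: str           # position on the life time axis, e.g. "17:00"
    original_behavior: str     # what was originally planned, e.g. "running"
    changed_behavior: str      # what the user actually did, e.g. "playing"
    change_probability: float  # studied likelihood of this change

# The description's example: running at 5 pm suddenly becomes playing.
vp = VariableParameter("17:00", "running", "playing", change_probability=0.2)
print(vp)
```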
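The fitting curve of the parameter change probability could be sketched as below. The patent names a Bayesian network for the probability estimation; this sketch substitutes plain frequency counts and a polynomial fit over the life time axis, so it illustrates the idea of a fitted change-probability curve rather than the patent's actual algorithm. All data and names are fabricated for illustration.

```python
import numpy as np
from collections import Counter

def change_probabilities(observations):
    """Estimate P(behavior changed | hour, original behavior) from observed
    (hour, original, changed) triples -- a frequency-count stand-in for the
    Bayesian-network estimation named in the patent."""
    totals, changes = Counter(), Counter()
    for hour, original, changed in observations:
        totals[(hour, original)] += 1
        if changed != original:
            changes[(hour, original)] += 1
    return {key: changes[key] / totals[key] for key in totals}

def fit_probability_curve(observations, original="running", degree=2):
    """Fit a polynomial curve of the change probability over the life time
    axis (hours), forming a 'fitting curve of the parameter change
    probability' for one original behavior."""
    probs = change_probabilities(observations)
    points = sorted((h, p) for (h, o), p in probs.items() if o == original)
    hours, ps = zip(*points)
    return np.poly1d(np.polyfit(hours, ps, deg=min(degree, len(points) - 1)))

# Fabricated observations: around 5 pm, "running" often changes to "playing".
obs = [(7, "running", "running"), (7, "running", "running"),
       (17, "running", "playing"), (17, "running", "running"),
       (17, "running", "playing"), (12, "eating", "shopping")]
curve = fit_probability_curve(obs)
print(round(float(curve(17)), 2))  # estimated change probability at 5 pm
```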
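The singing example can likewise be condensed into a small decision sketch that combines the voice intent (continue singing), the picture signal (the user's expression) and a variable parameter (how long the robot has already sung). The fatigue threshold and the exact wording are assumptions added for illustration.

```python
def reply_to_continue_singing(minutes_sung: int, user_expression: str):
    """Combine the voice intent ('keep singing'), the picture signal (the
    user's expression) and a variable parameter (prior singing time),
    following the worked example in the description."""
    if minutes_sung < 30:  # the fatigue threshold is an assumption
        return ("OK, one more song!", "happy face")
    if user_expression == "serious":
        return ("I am too tired, let me take a break.", "tired face")
    if user_expression == "happy":
        return ("Master, let me take a break and then sing for you.",
                "happy face")
    return ("Let me rest for a little while.", "neutral face")

print(reply_to_continue_singing(45, "serious"))
print(reply_to_continue_singing(45, "happy"))
```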

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

The present invention relates to a method for generating robot interaction content, comprising: acquiring a multimodal signal (S101); determining a user intent according to the multimodal signal (S102); and combining, according to the multimodal signal and the user intent, a current variable parameter of the robot so as to generate robot interaction content (S103). By incorporating a variable parameter of the robot into the generation of robot interaction content, the method of the present invention allows the robot to generate interaction content according to a previous variable parameter, so that the robot interacts with humans in a more human-like manner. In addition, the present invention gives the robot the daily routines of a human being arranged on a life time axis, thereby improving the human likeness of the generated robot interaction content and the human-machine interaction experience, and raising the level of intelligence.
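Read as a pipeline, steps S101 to S103 of the abstract could look like the following minimal sketch. All names (MultimodalSignal, determine_intent, generate_interaction_content) and the fatigue rule are illustrative assumptions, not an implementation disclosed by the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalSignal:
    """Container for the combined input signals (hypothetical structure)."""
    image: Optional[bytes] = None
    voice: Optional[bytes] = None
    gesture: Optional[str] = None

def determine_intent(signal: MultimodalSignal) -> str:
    """S102: derive the user intent from the multimodal signal (stub --
    a real system would run speech/vision/gesture recognizers here)."""
    return "continue_conversation" if signal.voice else "unknown"

def generate_interaction_content(signal: MultimodalSignal, intent: str,
                                 variable_params: dict) -> str:
    """S103: combine the intent with the current robot variable parameters.
    If they record prolonged prior activity, the robot refuses and
    expresses fatigue, per the description's example."""
    if variable_params.get("minutes_active", 0) >= 60:
        return "I am very tired and need a rest now."
    return f"OK, handling intent: {intent}"

# S101: acquire the multimodal signal (here, a fabricated sample).
sig = MultimodalSignal(voice=b"...", image=b"...")
print(generate_interaction_content(sig, determine_intent(sig),
                                   {"minutes_active": 75}))
```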
PCT/CN2016/087752 2016-06-29 2016-06-29 Method, system and robot for generating robot interaction content WO2018000267A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/087752 WO2018000267A1 (fr) 2016-06-29 2016-06-29 Method, system and robot for generating robot interaction content
CN201680001745.7A CN106462255A (zh) 2016-06-29 2016-06-29 Method, system and robot for generating robot interaction content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087752 WO2018000267A1 (fr) 2016-06-29 2016-06-29 Method, system and robot for generating robot interaction content

Publications (1)

Publication Number Publication Date
WO2018000267A1 (fr) 2018-01-04

Family

ID=58215718

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087752 WO2018000267A1 (fr) 2016-06-29 2016-06-29 Method, system and robot for generating robot interaction content

Country Status (2)

Country Link
CN (1) CN106462255A (fr)
WO (1) WO2018000267A1 (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107030691B (zh) * 2017-03-24 2020-04-14 华为技术有限公司 Data processing method and apparatus for a care robot
CN107564522A (zh) * 2017-09-18 2018-01-09 郑州云海信息技术有限公司 Intelligent control method and apparatus
CN108363492B (zh) * 2018-03-09 2021-06-25 南京阿凡达机器人科技有限公司 Human-computer interaction method and interaction robot
CN110154048B (zh) * 2019-02-21 2020-12-18 北京格元智博科技有限公司 Robot control method, control apparatus and robot
CN110228065A (zh) * 2019-04-29 2019-09-13 北京云迹科技有限公司 Robot motion control method and apparatus
CN112775991B (zh) * 2021-02-10 2021-09-07 溪作智能(深圳)有限公司 Robot head mechanism, robot, and robot control method
CN113450436B (zh) * 2021-06-28 2022-04-15 武汉理工大学 Face animation generation method and system based on multimodal correlation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103707A (zh) * 2009-12-16 2011-06-22 群联电子股份有限公司 Emotion engine, emotion engine system, and control method for an electronic device
CN103294725A (zh) * 2012-03-03 2013-09-11 李辉 Intelligent answering robot software
CN105511608A (zh) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 Intelligent-robot-based interaction method and apparatus, and intelligent robot

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1956528B1 (fr) * 2007-02-08 2018-10-03 Samsung Electronics Co., Ltd. Apparatus and method for expressing the behavior of a software robot
CN104951077A (zh) * 2015-06-24 2015-09-30 百度在线网络技术(北京)有限公司 Artificial-intelligence-based human-computer interaction method, apparatus and terminal device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103707A (zh) * 2009-12-16 2011-06-22 群联电子股份有限公司 Emotion engine, emotion engine system, and control method for an electronic device
CN103294725A (zh) * 2012-03-03 2013-09-11 李辉 Intelligent answering robot software
CN105511608A (zh) * 2015-11-30 2016-04-20 北京光年无限科技有限公司 Intelligent-robot-based interaction method and apparatus, and intelligent robot

Also Published As

Publication number Publication date
CN106462255A (zh) 2017-02-22

Similar Documents

Publication Publication Date Title
WO2018000267A1 (fr) Method, system and robot for generating robot interaction content
US11024294B2 (en) System and method for dialogue management
CN107030691B (zh) Data processing method and apparatus for a care robot
WO2021169431A1 (fr) Interaction method and apparatus, electronic device, and storage medium
WO2018000259A1 (fr) Method and system for generating robot interaction content, and robot
WO2018000268A1 (fr) Method and system for generating robot interaction content, and robot
US20190206402A1 (en) System and Method for Artificial Intelligence Driven Automated Companion
Severinson-Eklundh et al. Social and collaborative aspects of interaction with a service robot
US11003860B2 (en) System and method for learning preferences in dialogue personalization
WO2018006370A1 (fr) Interaction method and system for a virtual 3D robot, and robot
WO2018006369A1 (fr) Method and system for synchronizing voice and virtual actions, and robot
WO2018006374A1 (fr) Function recommendation method, system and robot based on automatic wake-up
WO2018006371A1 (fr) Method and system for synchronizing speech and virtual actions, and robot
Papaioannou et al. Hybrid chat and task dialogue for more engaging hri using reinforcement learning
WO2021003471A1 (fr) System and method for adaptive dialogue management in real and augmented reality
CN112204563A (zh) System and method for visual scene construction based on user communication
WO2018000266A1 (fr) Method and system for generating robot interaction content, and robot
KR20190075416A (ko) Digital agent mobile manipulator and operation method thereof
Nooraei et al. A real-time architecture for embodied conversational agents: beyond turn-taking
WO2018000258A1 (fr) Method and system for generating robot interaction content, and robot
WO2018000261A1 (fr) Method and system for generating robot interaction content, and robot
WO2018000260A1 (fr) Method, system and robot for generating robot interaction content
CN114303151A (zh) System and method for adaptive dialogue via scene modeling using a combinatorial neural network
Rach et al. Emotion recognition based preference modelling in argumentative dialogue systems
Thórisson et al. A multiparty multimodal architecture for realtime turntaking

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16906668

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 16906668

Country of ref document: EP

Kind code of ref document: A1