WO2018000266A1 - Method and system for generating robot interaction content, and robot - Google Patents

Method and system for generating robot interaction content, and robot

Info

Publication number
WO2018000266A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
information
parameter
user
generating
Prior art date
Application number
PCT/CN2016/087751
Other languages
English (en)
Chinese (zh)
Inventor
邱楠
杨新宇
王昊奋
Original Assignee
深圳狗尾草智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳狗尾草智能科技有限公司 filed Critical 深圳狗尾草智能科技有限公司
Priority to PCT/CN2016/087751 priority Critical patent/WO2018000266A1/fr
Priority to CN201680001751.2A priority patent/CN106462804A/zh
Publication of WO2018000266A1 publication Critical patent/WO2018000266A1/fr

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/02 - Knowledge representation; Symbolic representation
    • G06N5/022 - Knowledge engineering; Knowledge acquisition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • The invention relates to the field of robot interaction technology, and in particular to a method, a system, and a robot for generating robot interactive content.
  • Existing robots are generally limited to question-and-answer interaction in a fixed scene.
  • At present, robots are made to give expression feedback mainly through pre-designed methods and deep-learning training corpora.
  • Feedback produced through pre-designed programs and corpus training has the following shortcomings: the scene a person is in influences how that person's expressions change, for example, excited in a billiard room and very happy at home.
  • When a robot is to give expression feedback, it mainly does so in a pre-designed way and through deep learning.
  • The output expression depends on the human's textual input; that is, like a question-and-answer machine, different words from the user trigger different expressions.
  • In effect, the robot outputs expressions according to a pre-designed pattern of human interaction.
  • As a result, the robot cannot be very anthropomorphic: it cannot, as human beings do, express different expressions in different scenes.
  • Moreover, the generation of robot interactive content is completely passive, so generating expressions requires a great deal of human-computer interaction, which leads to poor robot intelligence.
  • The object of the present invention is to provide a method, a system, and a robot for generating robot interactive content, so that the robot itself can interact actively on the basis of variable parameters and thus have a human lifestyle, enhancing the anthropomorphism of robot interactive content generation, improving the human-computer interaction experience, and improving intelligence.
  • A method for generating robot interactive content is provided, comprising: acquiring user information and determining a user intent according to the user information; acquiring location scene information; and generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters.
  • The method for generating the robot variable parameters includes: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
  • The variable parameters include at least the act of changing the user's original behavior or state and the resulting change, and a parameter value representing that change.
  • The step of generating the robot interaction content according to the user intent and the location scene information in combination with the current robot variable parameters further comprises: generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and a fitting curve of the parameter change probability.
  • The method for generating the fitting curve of the parameter change probability comprises: using a probability algorithm to make a network-based probability estimate of the robot's parameters on the time axis, and, after a scene parameter of the robot on the life time axis changes, calculating the probability of each parameter change so as to form the fitting curve of the parameter change probability.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
  • the invention discloses a system for generating robot interactive content, comprising:
  • An intention identification module configured to acquire user information, and determine a user intention according to the user information
  • a scene recognition module configured to acquire location scene information
  • the content generating module is configured to generate the robot interaction content according to the user intent and the location scenario information, in combination with the current robot variable parameter.
  • the system includes an artificial intelligence cloud processing module, configured to: fit a self-cognitive parameter of the robot with a parameter of the scene in the variable parameter to generate a robot variable parameter.
  • an artificial intelligence cloud processing module configured to: fit a self-cognitive parameter of the robot with a parameter of the scene in the variable parameter to generate a robot variable parameter.
  • The variable parameters include at least the act of changing the user's original behavior or state and the resulting change, and a parameter value representing that change.
  • the content generating module is further configured to: generate the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameter and the fitting curve of the parameter change probability.
  • The system includes a fitting curve generation module, configured to: use a probability algorithm to make a network-based probability estimate of the robot's parameters on the time axis, and, after a scene parameter of the robot on the life time axis changes, calculate the probability of each parameter change so as to form the fitting curve of the parameter change probability.
  • the scene recognition module is specifically configured to acquire video information.
  • the scene recognition module is specifically configured to acquire picture information.
  • the scene recognition module is specifically configured to acquire gesture information.
  • the invention discloses a robot comprising a system for generating interactive content of a robot as described above.
  • Existing robots generally generate interactive content through question-and-answer interaction in a fixed scene, and cannot generate robot expressions more accurately on the basis of the current scene.
  • The generating method of the present invention comprises: acquiring user information and determining a user intent according to the user information; acquiring location scene information; and generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters. In this way, the robot interaction content can be generated more accurately from the current location scene information combined with the robot's variable parameters, so that the robot interacts and communicates with people more accurately and anthropomorphically.
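  • As an illustration of this flow, the following minimal Python sketch walks through the same steps (acquire user information, determine the intent, acquire the location scene, and combine both with the current variable parameters); the function names, keyword rules, and thresholds are assumptions made for illustration, not the patented implementation.

```python
# Illustrative sketch of the generation flow: user information -> intent,
# sensor input -> location scene, then both combined with the robot's
# current variable parameters to produce interaction content.
from dataclasses import dataclass, field


@dataclass
class VariableParameters:
    # e.g. {"moving_minutes": 60, "fatigue": 0.8}; contents are assumed
    values: dict = field(default_factory=dict)


def recognize_intent(user_information: str) -> str:
    """Toy intent recognizer standing in for expression/voice/gesture analysis."""
    text = user_information.lower()
    if "clean" in text:
        return "request_cleaning"
    if "talk" in text:
        return "request_conversation"
    return "chat"


def recognize_scene(sensor_input: str) -> str:
    """Toy location-scene recognizer; a real system would use video or pictures."""
    return "indoor" if "room" in sensor_input else "outdoor"


def generate_interaction_content(intent: str, scene: str,
                                 params: VariableParameters) -> str:
    """Combine intent, location scene and variable parameters into content."""
    if intent == "request_cleaning" and params.values.get("moving_minutes", 0) >= 60:
        return "I am tired, I do not want to clean right now."  # refusal example
    if intent == "request_conversation" and scene == "outdoor":
        return "I would love to chat while we are outside!"
    return "Okay, let's do that."


if __name__ == "__main__":
    params = VariableParameters({"moving_minutes": 60})
    intent = recognize_intent("Please clean the floor")
    scene = recognize_scene("living room camera frame")
    print(generate_interaction_content(intent, scene, params))
```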
  • variable parameters are: parameters that the user actively controls during the human-computer interaction process, for example, controlling the robot to perform motion, controlling the robot to perform communication, and the like.
  • The invention adds the robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content according to previous variable parameters; for example, when the variable parameter records that the robot has already been moving for an hour and the user again sends the robot a command such as cleaning, the robot will say that it is tired and refuse to clean.
  • In this way, the robot is more anthropomorphic when interacting with humans, and has a human lifestyle along its life time axis.
  • This method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
  • FIG. 1 is a flowchart of a method for generating interactive content of a robot according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of a system for generating interactive content of a robot according to a second embodiment of the present invention.
  • Computer devices include user devices and network devices.
  • the user equipment or the client includes but is not limited to a computer, a smart phone, a PDA, etc.;
  • The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing.
  • the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • The terms "first," "second," and the like may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
  • the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
  • a method for generating interactive content of a robot including:
  • Existing robots generally generate interactive content through question-and-answer interaction in a fixed scene, and cannot generate robot expressions more accurately on the basis of the current scene.
  • The generating method of the present invention comprises: acquiring user information and determining a user intent according to the user information; acquiring location scene information; and generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters.
  • In this way, the robot interaction content can be generated more accurately, so that the robot interacts and communicates with people more accurately and anthropomorphically.
  • variable parameters are: parameters that the user actively controls during the human-computer interaction process, for example, controlling the robot to perform motion, controlling the robot to perform communication, and the like.
  • The invention adds the robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content according to previous variable parameters; for example, when the variable parameter records that the robot has already been moving for an hour and the user again sends the robot a command such as cleaning, the robot will say that it is tired and refuse to clean. In this way, the robot is more anthropomorphic when interacting with humans, and has a human lifestyle along its life time axis.
  • The robot variable parameters 300 are prepared and set in advance. Specifically, the robot variable parameters 300 are a collection of parameters that is transmitted to the system that generates the interactive content.
  • The user information in this embodiment may be one or more of the user's expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, and fingerprint information.
  • The user's expression is preferred, since its recognition is accurate and efficient.
  • The variable parameters specifically capture sudden changes affecting the person or the machine; for example, one day on the time axis may consist of eating, sleeping, interacting, running, eating, and sleeping. If the robot's scene then changes suddenly, for instance it is taken to the beach at the time normally spent running, such human-initiated inputs act on the robot as variable parameters and cause the robot's self-cognition to change.
  • The life time axis and the variable parameters can be used to change attributes of the robot's self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information; for example, an anger value that did not exist before can be added automatically to the robot's self-cognition on the basis of the life time axis and the scenes of the variable factors, by analogy with the previously simulated human self-cognition.
  • The robot will use such a sudden change as a variable parameter.
  • For example, the robot will then generate interactive content on the basis of going out shopping at 12 noon, instead of combining what it previously did at 12 noon to generate the interactive content.
  • When generating the interactive content, the robot also combines the acquired user information, for example voice information, video information, and picture information, with the variable parameters. In this way, some unexpected events in human life can be added to the robot's life time axis, making the robot's interaction more anthropomorphic.
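  • A minimal sketch of this idea follows: variable parameters update existing self-cognition attributes (such as mood and fatigue values) and can introduce attributes that did not exist before (such as an anger value). The additive, clamped update rule is an assumption used only for illustration.

```python
# Variable parameters update self-cognition attributes and may add new ones.
def update_self_cognition(self_cognition: dict, scene_effects: dict) -> dict:
    updated = dict(self_cognition)
    for attribute, delta in scene_effects.items():
        # adds the attribute if it was not part of self-cognition yet,
        # and clamps every value to the range [0, 1]
        updated[attribute] = max(0.0, min(1.0, updated.get(attribute, 0.0) + delta))
    return updated


state = {"mood": 0.5, "fatigue": 0.25}
# an unexpected, unpleasant scene introduces an anger value
state = update_self_cognition(state, {"anger": 0.5, "mood": -0.25})
print(state)  # {'mood': 0.25, 'fatigue': 0.25, 'anger': 0.5}
```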
  • The method for generating the robot variable parameters includes: fitting the robot's self-cognition parameters to the scene parameters in the variable parameters to generate the robot variable parameters.
  • In this way, the parameters of the robot's self-cognition are matched with the parameters of the scenes on the variable-parameter axis, producing an anthropomorphic influence.
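  • The following sketch shows one way such a fit could be computed, by pulling each self-cognition value toward the value suggested by the scene parameters; the weighted blend and the parameter names are assumptions for illustration rather than the patented fitting method.

```python
# Illustrative fit of self-cognition parameters to scene parameters,
# producing updated robot variable parameters.
def fit_self_cognition(self_cognition: dict, scene_params: dict,
                       weight: float = 0.25) -> dict:
    """Pull each self-cognition value toward the value suggested by the scene."""
    fitted = dict(self_cognition)
    for key, scene_value in scene_params.items():
        old = fitted.get(key, 0.0)
        fitted[key] = (1 - weight) * old + weight * scene_value
    return fitted


# e.g. a beach scene raises the mood value and lowers the fatigue value
print(fit_self_cognition({"mood": 0.5, "fatigue": 0.8},
                         {"mood": 1.0, "fatigue": 0.0}))
```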
  • The variable parameters include at least the act of changing the user's original behavior or state and the resulting change, and a parameter value representing that change.
  • The user is originally in the state planned on the time axis; a sudden change then puts the user in another state.
  • The variable parameter thus represents the change of behavior or state, and the state or behavior the user is in after the change. For example, the user originally planned to run at 5 pm, but something else came up, such as going to play; the change from running to playing is then a variable parameter, and the probability of such a change is also studied, as the sketch below illustrates.
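  • A minimal data-structure sketch of such a variable parameter is shown below; the field names and the numeric value are assumptions used for illustration.

```python
from dataclasses import dataclass


@dataclass
class VariableParameter:
    original_behavior: str     # behavior planned on the life time axis
    changed_behavior: str      # behavior after the sudden change
    change_probability: float  # parameter value representing the change


# the running-to-playing example at 5 pm from the description above
example = VariableParameter(
    original_behavior="running",
    changed_behavior="playing",
    change_probability=0.2,  # assumed value; in practice it would be learned
)
print(example)
```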
  • The step of generating the robot interaction content in combination with the current robot variable parameters according to the user intent and the location scene information further comprises: generating the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters and the fitting curve of the parameter change probability.
  • The fitting curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
  • The method for generating the fitting curve of the parameter change probability includes: using a probability algorithm to make a network-based probability estimate of the robot's parameters on the time axis, and, after a scene parameter of the robot on the life time axis changes, calculating the probability of each parameter change so as to form the fitting curve of the parameter change probability.
  • the probability algorithm can adopt the Bayesian probability algorithm.
  • In this way, the parameters of the robot's self-cognition are matched with the parameters of the scenes on the variable-parameter axis, producing an anthropomorphic influence.
  • the robot will know its geographical location, and will change the way the interactive content is generated according to the geographical environment in which it is located.
  • For example, a Bayesian probability algorithm is used to estimate the robot's parameters with a Bayesian network, and the probability of each parameter change is calculated after a scene parameter of the robot itself on the life time axis changes, forming a fitting curve.
  • The curve dynamically affects the robot's own self-cognition.
  • This innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
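  • The sketch below illustrates the idea with plain conditional counts instead of a full Bayesian network: it estimates, for each point on the time axis, the probability that the planned activity changed, and strings these estimates into a simple fitting curve. The sample data, the counting estimator, and the 24-hour grid are assumptions for illustration only.

```python
# Estimate parameter-change probabilities along the life time axis
# and assemble them into a simple fitting curve.
from collections import Counter

# observed (hour_of_day, planned_activity, actual_activity) samples (assumed data)
observations = [
    (17, "running", "running"),
    (17, "running", "playing"),
    (17, "running", "running"),
    (17, "running", "playing"),
    (12, "eating", "shopping"),
    (12, "eating", "eating"),
]


def change_probability_by_hour(samples):
    """P(activity changed | hour), estimated by counting observed changes."""
    total, changed = Counter(), Counter()
    for hour, planned, actual in samples:
        total[hour] += 1
        if planned != actual:
            changed[hour] += 1
    return {hour: changed[hour] / total[hour] for hour in total}


probabilities = change_probability_by_hour(observations)
# a piecewise "fitting curve" over the 24-hour axis (0.0 where nothing was observed)
curve = [probabilities.get(hour, 0.0) for hour in range(24)]
print(probabilities)  # {17: 0.5, 12: 0.5}
```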
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using video information.
  • location scene information can be obtained through video, and the video acquisition is more accurate.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using picture information.
  • Image acquisition reduces the robot's computation and makes the robot's reaction faster.
  • the step of acquiring location scene information specifically includes: acquiring location scene information by using gesture information.
  • Acquiring gestures makes the robot applicable in more situations; for example, a disabled user, or an owner who sometimes does not want to talk, can use gestures to transmit information to the robot.
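  • The sketch below shows how the three acquisition options above (video, picture, gesture) could feed a single scene-acquisition entry point; the stub classifiers and the fallback order are assumptions, not real recognition models.

```python
# Acquire location scene information from whichever channel is available.
def scene_from_video(frames: list) -> str:
    # assumption: a video model would label the location scene from the frames
    return "billiard_room" if frames else "unknown"


def scene_from_picture(image_bytes: bytes) -> str:
    return "home" if image_bytes else "unknown"


def scene_from_gesture(gesture: str) -> str:
    # e.g. a user pointing outside could indicate an outdoor scene
    return "outdoor" if gesture == "point_outside" else "indoor"


def acquire_location_scene(video=None, picture=None, gesture=None) -> str:
    """Use whichever channel is available; the ordering here is an assumption."""
    if video is not None:
        return scene_from_video(video)
    if picture is not None:
        return scene_from_picture(picture)
    if gesture is not None:
        return scene_from_gesture(gesture)
    return "unknown"


print(acquire_location_scene(gesture="point_outside"))  # outdoor
```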
  • a system for generating interactive content of a robot includes:
  • the intent identification module 201 is configured to acquire user information, and determine a user intention according to the user information;
  • the scene recognition module 202 is configured to acquire location scene information.
  • The content generation module 203 is configured to generate the robot interaction content according to the user intent and the location scene information, in combination with the current robot variable parameters supplied by the robot variable parameters 301.
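  • A minimal wiring sketch of these modules follows; the class names mirror the reference numerals of FIG. 2, but the internal rules are placeholder assumptions rather than the disclosed implementation.

```python
# Illustrative wiring of the modules in FIG. 2.
class IntentIdentificationModule:  # 201
    def identify(self, user_information: str) -> str:
        return "request_cleaning" if "clean" in user_information.lower() else "chat"


class SceneRecognitionModule:  # 202
    def recognize(self, sensor_input: str) -> str:
        return "indoor" if "room" in sensor_input.lower() else "outdoor"


class ContentGenerationModule:  # 203
    def generate(self, intent: str, scene: str, variable_params: dict) -> str:
        if intent == "request_cleaning" and variable_params.get("moving_minutes", 0) >= 60:
            return "I am tired and need a break."
        return f"Happy to help here ({scene})."


class RobotInteractionSystem:
    def __init__(self, variable_params: dict):  # 301: current variable parameters
        self.variable_params = variable_params
        self.intent_module = IntentIdentificationModule()
        self.scene_module = SceneRecognitionModule()
        self.content_module = ContentGenerationModule()

    def interact(self, user_information: str, sensor_input: str) -> str:
        intent = self.intent_module.identify(user_information)
        scene = self.scene_module.recognize(sensor_input)
        return self.content_module.generate(intent, scene, self.variable_params)


system = RobotInteractionSystem({"moving_minutes": 60})
print(system.interact("please clean up", "living room"))
```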
  • In this way, the robot interaction content can be generated more accurately, so that the robot interacts and communicates with people more accurately and anthropomorphically.
  • the variable parameters are: parameters that the user actively controls during the human-computer interaction process, for example, controlling the robot to perform motion, controlling the robot to perform communication, and the like.
  • The invention adds the robot variable parameters to the robot's interactive content generation, so that the robot can generate interactive content according to previous variable parameters; for example, when the variable parameter records that the robot has already been moving for an hour and the user again sends the robot a command such as cleaning, the robot will say that it is tired and refuse to clean.
  • This method can enhance the anthropomorphism of robot interactive content generation, enhance the human-computer interaction experience, and improve intelligence.
  • A variable parameter can be something that the robot has done within a preset period of time, for example the robot having interacted with the user for an hour during the most recent period. When the user then asks the robot to keep talking, if the location is a room, the robot can say that it is tired and needs to take a break, accompanied by content in a tired state, such as a tired expression; if the location is outdoors, the robot can instead say that it wants to go out, with a happy expression.
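  • The sketch below makes this example concrete: the same variable parameter (an hour of recent interaction) yields different content and expressions depending on whether the location scene is a room or outdoors. The threshold and the wording are assumptions for illustration.

```python
# Location-dependent response for the same variable parameter value.
def respond_to_continue_talking(interaction_minutes: int, location_scene: str) -> dict:
    if interaction_minutes >= 60 and location_scene == "room":
        return {"text": "I am tired and need to take a break.", "expression": "tired"}
    if interaction_minutes >= 60 and location_scene == "outdoor":
        return {"text": "I want to go out!", "expression": "happy"}
    return {"text": "Sure, let's keep talking.", "expression": "neutral"}


print(respond_to_continue_talking(60, "room"))
print(respond_to_continue_talking(60, "outdoor"))
```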
  • the system includes an artificial intelligence cloud processing module for: fitting a self-cognitive parameter of the robot to a parameter of the scene in the variable parameter to generate a robot variable parameter.
  • an artificial intelligence cloud processing module for: fitting a self-cognitive parameter of the robot to a parameter of the scene in the variable parameter to generate a robot variable parameter.
  • The variable parameters include at least the act of changing the user's original behavior or state and the resulting change, and a parameter value representing that change.
  • The user is originally in the state planned on the time axis; a sudden change then puts the user in another state.
  • The variable parameter thus represents the change of behavior or state, and the state or behavior the user is in after the change. For example, the user originally planned to run at 5 pm, but something else came up, such as going to play; the change from running to playing is then a variable parameter, and the probability of such a change is also studied.
  • the content generation module is further configured to: generate the robot interaction content according to the current robot variable parameter and the fitting curve of the parameter change probability according to the user intention and the location scene information.
  • The fitting curve can be generated by probability training on the variable parameters, and the robot interaction content is then generated from it.
  • The system includes a fitting curve generation module, configured to: use a probability algorithm to make a network-based probability estimate of the robot's parameters on the time axis, and, after a scene parameter of the robot on the life time axis changes, calculate the probability of each parameter change so as to form the fitting curve of the parameter change probability.
  • the probability algorithm can adopt the Bayesian probability algorithm.
  • In this way, the parameters of the robot's self-cognition are matched with the parameters of the scenes on the variable-parameter axis, producing an anthropomorphic influence.
  • The robot knows its geographical location and changes the way the interactive content is generated according to the geographical environment in which it is located.
  • For example, a Bayesian probability algorithm is used to estimate the robot's parameters with a Bayesian network, and the probability of each parameter change is calculated after a scene parameter of the robot itself on the life time axis changes, forming a fitting curve.
  • The curve dynamically affects the robot's own self-cognition.
  • This innovative module gives the robot itself a human lifestyle; as for expressions, they can be changed according to the location scene.
  • the scene recognition module is specifically configured to acquire video information.
  • Such location scene information can be obtained through video, and the video acquisition is more accurate.
  • the scene recognition module is specifically configured to acquire picture information.
  • Image acquisition reduces the robot's computation and makes the robot's reaction faster.
  • the scene recognition module is specifically configured to acquire gesture information.
  • Acquiring gestures makes the robot applicable in more situations; for example, a disabled user, or an owner who sometimes does not want to talk, can use gestures to transmit information to the robot.
  • the invention discloses a robot comprising a system for generating interactive content of a robot as described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a method for generating robot interaction content, comprising: acquiring user information and determining a user intent according to the user information (S101); acquiring location scene information (S102); and generating robot interaction content by combining current robot variable parameters according to the user intent and the location scene information (S103). The robot variable parameters are added to the generation of the robot interaction content, so that the robot is more humanized when interacting with a person and has a human lifestyle within a life timeline. With this method, the humanization of robot interaction content generation, the human-robot interaction experience, and intelligence can be improved.
PCT/CN2016/087751 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot WO2018000266A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/087751 WO2018000266A1 (fr) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot
CN201680001751.2A CN106462804A (zh) 2016-06-29 2016-06-29 Method, system and robot for generating robot interaction content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087751 WO2018000266A1 (fr) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Publications (1)

Publication Number Publication Date
WO2018000266A1 true WO2018000266A1 (fr) 2018-01-04

Family

ID=58215747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087751 WO2018000266A1 (fr) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Country Status (2)

Country Link
CN (1) CN106462804A (fr)
WO (1) WO2018000266A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI649057B (zh) * 2018-04-10 2019-02-01 禾聯碩股份有限公司 Cleaning system with real-time environment scanning

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491511A (zh) * 2017-08-03 2017-12-19 深圳狗尾草智能科技有限公司 Robot self-cognition method and device
CN107799126B (zh) * 2017-10-16 2020-10-16 苏州狗尾草智能科技有限公司 Voice endpoint detection method and device based on supervised machine learning
CN108320021A (zh) * 2018-01-23 2018-07-24 深圳狗尾草智能科技有限公司 Method for determining robot actions and expressions, display synthesis method, and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7685518B2 (en) * 1998-01-23 2010-03-23 Sony Corporation Information processing apparatus, method and medium using a virtual reality space
CN102103707A (zh) * 2009-12-16 2011-06-22 群联电子股份有限公司 Emotion engine, emotion engine system, and control method for an electronic device
CN104951077A (zh) * 2015-06-24 2015-09-30 百度在线网络技术(北京)有限公司 Human-computer interaction method, device, and terminal equipment based on artificial intelligence
CN105490918A (zh) * 2015-11-20 2016-04-13 深圳狗尾草智能科技有限公司 System and method for a robot to actively interact with its owner
CN105701211A (zh) * 2016-01-13 2016-06-22 北京光年无限科技有限公司 Active interaction data processing method and system for question answering systems

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001277166A (ja) * 2000-03-31 2001-10-09 Sony Corp ロボット及びロボットの行動決定方法
CN1398214A (zh) * 2000-10-23 2003-02-19 索尼公司 有足机器人、用于有足机器人的动作控制方法、和存储介质
JP3988121B2 (ja) * 2002-03-15 2007-10-10 ソニー株式会社 学習装置、記憶方法及びロボット装置
JP2003340759A (ja) * 2002-05-20 2003-12-02 Sony Corp ロボット装置およびロボット制御方法、記録媒体、並びにプログラム
CN101587329A (zh) * 2009-06-18 2009-11-25 北京理工大学 机器人预测的方法和系统
CN104901873A (zh) * 2015-06-29 2015-09-09 曾劲柏 一种基于场景和动作的网络社交系统

Also Published As

Publication number Publication date
CN106462804A (zh) 2017-02-22

Similar Documents

Publication Publication Date Title
WO2018006370A1 (fr) Procédé et système d'interaction pour robot 3d virtuel, et robot
WO2018000259A1 (fr) Procédé et système pour générer un contenu d'interaction de robot et robot
WO2018000267A1 (fr) Procédé de génération de contenu d'interaction de robot, système et robot
WO2018000268A1 (fr) Procédé et système pour générer un contenu d'interaction de robot, et robot
WO2018006374A1 (fr) Procédé, système et robot de recommandation de fonction basés sur un réveil automatique
WO2018000277A1 (fr) Procédé et système de questions et réponses, et robot
WO2018000266A1 (fr) Procédé et système permettant de générer un contenu d'interaction de robot, et robot
WO2018006369A1 (fr) Procédé et système de synchronisation d'actions vocales et virtuelles, et robot
KR102423712B1 (ko) 루틴 실행 중에 클라이언트 디바이스간 자동화 어시스턴트 루틴 전송
US7574332B2 (en) Apparatus and method for generating behaviour in an object
WO2018006373A1 (fr) Procédé et système permettant de commander un appareil ménager sur la base d'une reconnaissance d'intention, et robot
CN111869185B (zh) 生成基于IoT的通知并提供命令以致使客户端设备的自动助手客户端自动呈现基于IoT的通知
WO2018006372A1 (fr) Procédé et système de commande d'appareil ménager sur la base de la reconnaissance d'intention, et robot
Koenig et al. Robot life-long task learning from human demonstrations: a Bayesian approach
WO2018006371A1 (fr) Procédé et système de synchronisation de paroles et d'actions virtuelles, et robot
KR20200024675A (ko) 휴먼 행동 인식 장치 및 방법
JP6867971B2 (ja) 会議支援装置及び会議支援システム
WO2018000258A1 (fr) Procédé et système permettant de générer un contenu d'interaction de robot et robot
JP2020009474A (ja) 人工ソーシャルネットワークを運用するためのシステム及び方法
CN110073405A (zh) 图像识别系统以及图像识别方法
JP2009262279A (ja) ロボット、ロボットプログラム共有システム、ロボットプログラム共有方法およびプログラム
WO2018000261A1 (fr) Procédé et système permettant de générer un contenu d'interaction de robot, et robot
Mavridis et al. FaceBots: Steps towards enhanced long-term human-robot interaction by utilizing and publishing online social information
Vishwanath et al. Humanoid co-workers: How is it like to work with a robot?
Nooraei et al. A real-time architecture for embodied conversational agents: beyond turn-taking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906667

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906667

Country of ref document: EP

Kind code of ref document: A1