WO2018000259A1 - Method and system for generating robot interaction content, and robot - Google Patents

Method and system for generating robot interaction content, and robot

Info

Publication number
WO2018000259A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
time axis
information
life time
life
Prior art date
Application number
PCT/CN2016/087738
Other languages
English (en)
Chinese (zh)
Inventor
王昊奋
邱楠
杨新宇
Original Assignee
深圳狗尾草智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳狗尾草智能科技有限公司
Priority to CN201680001753.1A (patent CN106537294A)
Priority to PCT/CN2016/087738 (patent WO2018000259A1)
Publication of WO2018000259A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/332 - Query formulation
    • G06F 16/3329 - Natural language query formulation or dialogue systems

Definitions

  • The invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interaction content.
  • Expressions are produced in the course of human interaction: when the eyes see something or the ears hear a sound, a person gives reasonable expression feedback, and the emotion in the other party's language is analyzed; changes detected by this sentiment analysis in turn affect the expression feedback a human gives.
  • For a robot to produce expression feedback, the main existing approaches are pre-designed rules and corpora trained by deep learning. Expression feedback based on preset programs and corpus training has the following shortcoming: the output depends entirely on the human's text, so the robot resembles a question-and-answer machine in which different user utterances trigger different expressions. The robot outputs expressions only according to interaction patterns pre-designed by humans, so it cannot become more anthropomorphic; unlike a human, it cannot show different expressions at different time points of a life scene. In other words, the robot's interactive content is generated completely passively, producing an expression requires a great deal of human-computer interaction, and the robot's intelligence remains very poor.
  • The object of the present invention is to provide a method, a system and a robot for generating robot interaction content, so that the robot itself has a human lifestyle along a life time axis, which can enhance the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
  • A method for generating robot interaction content comprises: acquiring expression information of a user; acquiring text emotion information of the user; determining a user intention according to the expression information and the text emotion information; and generating robot interaction content in accordance with the current robot life time axis, according to the expression information, the text emotion information, and the user intention.
  • Preferably, the expression information is collected through video.
  • Preferably, the text emotion information is collected through voice input.
  • Preferably, the method for generating the parameters of the robot's life time axis includes: expanding the self-cognition of the robot; and fitting the robot's self-cognition parameters to the parameters in the life time axis to generate the robot life time axis.
  • Preferably, the step of expanding the self-cognition of the robot specifically comprises: combining life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • Preferably, the step of fitting the robot's self-cognition parameters to the parameters in the life time axis comprises: using a probability algorithm to calculate, after a time-axis scene parameter changes, the probability of each parameter change of the robot on the life time axis, forming a fitted curve.
  • Preferably, the life time axis refers to a time axis covering the 24 hours of a day; the parameters in the life time axis include at least the daily-life behaviors performed by the user on the life time axis and the parameter values representing those behaviors, as in the data-structure sketch below.
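The 24-hour life time axis with per-slot behaviors and parameter values can be pictured as a simple schedule. Below is a minimal Python sketch; the hour keys, behavior names and all numeric values are invented for illustration and do not come from the patent:

```python
from dataclasses import dataclass

@dataclass
class TimeSlot:
    behavior: str   # daily-life behavior performed in this slot
    fatigue: float  # representative parameter values (illustrative)
    mood: float

# hour of day -> scheduled behavior; every entry is an invented example
LIFE_TIME_AXIS = {
    7:  TimeSlot("wake_up",  fatigue=0.2, mood=0.6),
    9:  TimeSlot("exercise", fatigue=0.5, mood=0.8),
    12: TimeSlot("lunch",    fatigue=0.3, mood=0.7),
    23: TimeSlot("sleep",    fatigue=0.9, mood=0.4),
}

def slot_for(hour: int) -> TimeSlot:
    """Return the most recent scheduled slot at or before the given hour,
    wrapping past midnight to the previous day's last slot."""
    earlier = [h for h in LIFE_TIME_AXIS if h <= hour]
    return LIFE_TIME_AXIS[max(earlier) if earlier else max(LIFE_TIME_AXIS)]

print(slot_for(10).behavior)  # -> exercise
```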
  • a system for generating interactive content of a robot comprising:
  • An expression visual processing module configured to acquire expression information of the user
  • a text analysis module configured to obtain text emotional information of the user
  • An intention identification module configured to determine a user intention according to the expression information and the text emotion information
  • a content generating module configured to generate robot interaction content in accordance with the current robot life time axis, according to the expression information, the text emotion information, and the user intention.
  • the expression visual processing module is specifically configured to: collect expression information by using video information.
  • the text analysis module is specifically configured to: collect text emotional information by using voice information.
  • Preferably, the system comprises an artificial intelligence cloud processing module configured to: expand the self-cognition of the robot; and fit the robot's self-cognition parameters to the parameters in the life time axis to generate the robot life time axis.
  • Preferably, the artificial intelligence cloud processing module is further configured to combine life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • Preferably, the artificial intelligence cloud processing module is further configured to use a probability algorithm to calculate, after a time-axis scene parameter changes, the probability of each parameter change of the robot on the life time axis, forming a fitted curve.
  • Preferably, the life time axis refers to a time axis covering the 24 hours of a day; the parameters in the life time axis include at least the daily-life behaviors performed by the user on the life time axis and the parameter values representing those behaviors.
  • The invention also discloses a robot comprising a system for generating robot interaction content as described above.
  • Compared with the prior art, in which expressions are generally generated on the basis of text emotion and expression recognition alone, the method for generating robot interaction content in the present invention includes: acquiring the user's expression information; acquiring the user's text emotion information; determining the user's intention according to the expression information and the text emotion information; and generating robot interaction content in accordance with the current robot life time axis, according to the expression information, the text emotion information, and the user intention.
  • In this way, robot interaction content can be generated from the user's expression and text emotion combined with the robot's life time axis, so that the robot interacts and communicates with people more accurately and anthropomorphically. For people, everyday life has a certain regularity.
  • The present invention adds the life time axis in which the robot is located to the generation of the robot's interactive content, making the robot more humanized when interacting with humans and giving it a human lifestyle along the life time axis; the method can enhance the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience, and improve intelligence.
  • FIG. 1 is a flowchart of a method for generating robot interaction content according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment 2 of the present invention.
  • Computer devices include user devices and network devices.
  • The user equipment or client includes, but is not limited to, a computer, a smartphone, a PDA, and the like;
  • the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud, based on cloud computing, consisting of computers or network servers.
  • the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • Terms such as "first" and "second" may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
  • the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
  • As shown in FIG. 1, Embodiment 1 discloses a method for generating robot interaction content, including:
  • S101, acquiring expression information of a user; S102, acquiring text emotion information of the user; S103, determining a user intention according to the expression information and the text emotion information; S104, generating robot interaction content in accordance with the current robot life time axis 300, according to the expression information, the text emotion information, and the user intention.
  • The interactive content may be an expression, text, or voice.
  • The robot life time axis 300 is fitted and set in advance; specifically, the robot life time axis 300 is a collection of parameters, and these parameters are transmitted to the system to generate the interactive content.
  • Specifically, the life time axis fits the robot's own self-cognition values onto the time axis of human daily life, in a human-like way, and the robot's behavior follows this fit; that is, the robot's own behavior over a day is obtained, so that the robot performs its own behavior based on the life time axis, for example generating interactive content and communicating with humans. If the robot is awake all day, it acts according to the behaviors on this time axis, and its self-cognition also changes according to this time axis.
  • The life time axis and variable parameters can change the attributes stored in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information: for example, where there was previously no anger value, a scene combining the life time axis with variable factors can automatically add an anger value to the robot's self-cognition, based on scenes that previously simulated human self-cognition, as sketched below.
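A minimal sketch of this update step, treating self-cognition as an attribute dictionary that timeline scenes modify and may extend with attributes it did not previously contain (the anger-value example above); the attribute names and deltas are invented for illustration:

```python
# Self-cognition as a mutable attribute store.
self_cognition = {"mood": 0.7, "fatigue": 0.3}

def apply_scene(state: dict, deltas: dict) -> dict:
    """Apply a timeline scene's parameter changes; attributes the robot
    has never had (e.g. anger) are created on first use."""
    for attr, delta in deltas.items():
        state[attr] = state.get(attr, 0.0) + delta
    return state

apply_scene(self_cognition, {"fatigue": +0.2, "anger": +0.1})
print(self_cognition)  # {'mood': 0.7, 'fatigue': 0.5, 'anger': 0.1}
```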
  • The video input captures the user's expression, and the voice input enters the text sentiment analysis engine; the user's expression input and text emotion input then enter the artificial intelligence system, which analyzes the user's intention and combines it with the robot's life time axis to generate interactive content, for example expressions.
  • For example, if text analysis finds the emotion to be unhappy while visual expression analysis finds it to be happy, and on the robot's life time axis the current time is 9:00 in the morning, just after getting up, the robot concludes that the owner is teasing it and replies with something like: "Are you joking with me?", and the expression system generates a joking expression.
  • As another example, if text analysis finds the emotion to be unhappy and visual expression analysis finds it to be sad, and on the life time axis the current time is 11 o'clock at night, the robot concludes that the owner may have insomnia and replies with something like: "Why so unhappy? Let me play a song for you," and the expression system generates a sympathetic expression. A decision-rule sketch of these two examples follows.
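The two examples restated as a small decision rule: the same unhappy text sentiment yields different replies depending on the recognized facial expression and the current point on the life time axis. The hour thresholds and the fallback branch are assumptions:

```python
def generate_reply(text_emotion: str, face_emotion: str, hour: int) -> tuple:
    if text_emotion == "unhappy" and face_emotion == "happy" and 7 <= hour < 12:
        # morning, just got up: the owner is probably joking
        return ("Are you joking with me?", "joking_expression")
    if text_emotion == "unhappy" and face_emotion == "sad" and hour >= 22:
        # late night: the owner may have insomnia
        return ("Why so unhappy? Let me play a song for you.", "sympathetic_expression")
    return ("I see.", "neutral_expression")  # assumed fallback

print(generate_reply("unhappy", "happy", 9))  # joking branch
print(generate_reply("unhappy", "sad", 23))   # sympathy branch
```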
  • In this embodiment, the expression information is preferably collected through video; video acquisition is more accurate, so the user's expression and related information can be determined more precisely.
  • In this embodiment, the text emotion information is preferably collected through voice input; analyzing text emotion from voice input is convenient and fast. A sketch of the two acquisition channels follows.
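The two acquisition channels as Python stubs: expression information from a video frame and text emotion from recognized speech. The classifier and ASR bodies are placeholders (real models are beyond the patent text), and the keyword-based sentiment is a toy stand-in for the text analysis engine:

```python
NEGATIVE_WORDS = {"sad", "tired", "unhappy", "bad"}

def recognize_expression(frame: bytes) -> str:
    return "happy"  # placeholder for a video expression classifier

def speech_to_text(audio: bytes) -> str:
    return "I feel bad today"  # placeholder for an ASR engine

def text_emotion(text: str) -> str:
    """Toy sentiment: the method only needs an unhappy/happy signal."""
    return "unhappy" if NEGATIVE_WORDS & set(text.lower().split()) else "happy"

print(recognize_expression(b""), text_emotion(speech_to_text(b"")))  # happy unhappy
```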
  • In this embodiment, the method for generating the parameters of the robot life time axis includes: expanding the self-cognition of the robot; and fitting the robot's self-cognition parameters to the parameters in the life time axis to generate the robot life time axis.
  • In this way the life time axis is added to the robot's own self-cognition, so that the robot leads an anthropomorphic life; for example, the cognition of having lunch is added to the robot.
  • In this embodiment, the step of expanding the self-cognition of the robot specifically includes: combining life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • In this way the life time axis can be added specifically to the robot's own parameters.
  • In this embodiment, the step of fitting the robot's self-cognition parameters to the parameters in the life time axis includes: using a probability algorithm to calculate, after a time-axis scene parameter changes, the probability of each parameter change of the robot on the life time axis, forming a fitted curve. The probability algorithm may be a Bayesian probability algorithm.
  • For example, over the 24 hours of a day, the robot performs actions such as sleeping, exercising, eating, dancing, reading and making up; each action affects the robot's own self-cognition, and the parameters on the life time axis are combined with the robot's own self-cognition. The robot's self-cognition includes mood, fatigue value, intimacy, goodness, number of interactions, three-dimensional cognition of the robot, age, height, weight, game scene value, game object value, location scene value, location object value, and so on; the location scene value allows the robot to identify the scene where it is located, such as a café or a bedroom.
  • Over the time axis of a day, the machine performs different actions, such as sleeping at night, eating at noon, and exercising during the day; every scene on the life time axis affects the self-cognition. These numerical changes are modeled by a dynamic fit of the probability model, fitting the probability that each of these actions occurs at each point on the time axis.
  • Scene recognition of this kind changes the value of the geographic scene in the self-cognition; a sketch of the probability fit follows.
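One way to realize the fit: estimate, from logged days of behavior, P(action | hour) with add-one (Laplace) smoothing, a simple Bayesian-style estimate, giving a fitted curve of action probability along the life time axis. The action set and logs below are invented sample data:

```python
from collections import Counter

ACTIONS = ["sleep", "eat", "exercise", "read"]

# hour -> actions observed at that hour on past days (invented logs)
logs = {
    12: ["eat", "eat", "read", "eat"],
    23: ["sleep", "sleep", "sleep", "read"],
}

def fitted_point(hour: int) -> dict:
    """P(action | hour) with add-one smoothing over the action set."""
    counts = Counter(logs.get(hour, []))
    total = sum(counts.values()) + len(ACTIONS)
    return {a: (counts[a] + 1) / total for a in ACTIONS}

curve = {h: fitted_point(h) for h in range(24)}  # the fitted curve
print(max(curve[12], key=curve[12].get))         # -> eat (most probable at noon)
```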
  • For example, if text analysis finds the emotion to be unhappy and the user's expression analysis finds it to be sad, the robot replies: "Why so unhappy? Let me play a song for you," and the system generates a sympathetic expression.
  • As shown in FIG. 2, Embodiment 2 discloses a system for generating robot interaction content, including:
  • an expression visual processing module 201, configured to acquire expression information of a user;
  • a text analysis module 202, configured to acquire text emotion information of the user;
  • an intention identification module 203, configured to determine a user intention according to the expression information and the text emotion information;
  • a content generating module 204, configured to generate robot interaction content in accordance with the current robot life time axis sent by the robot life time axis module 301, according to the expression information, the text emotion information, and the user intention.
  • In this way, robot interaction content can be generated from the user's expression and text emotion combined with the robot's life time axis, so that the robot interacts and communicates with people more accurately and anthropomorphically, enhancing the human-computer interaction experience and improving intelligence. A wiring sketch of the modules follows.
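A possible wiring of modules 201 to 204 together with the life time axis module 301; the class and method names and every returned value are assumptions made for illustration, with only the module roles taken from the description:

```python
class ExpressionVisualProcessing:            # module 201
    def process(self, video) -> str:
        return "happy"                        # stub expression classifier

class TextAnalysis:                           # module 202
    def process(self, speech) -> str:
        return "unhappy"                      # stub voice -> text emotion

class IntentionIdentification:                # module 203
    def identify(self, expression: str, emotion: str) -> str:
        return "banter" if (expression, emotion) == ("happy", "unhappy") else "chat"

class RobotLifeTimeline:                      # module 301
    def current_state(self, hour: int) -> str:
        return "just_got_up" if 7 <= hour < 10 else "active"

class ContentGeneration:                      # module 204
    def generate(self, expression, emotion, intention, timeline_state) -> str:
        if intention == "banter" and timeline_state == "just_got_up":
            return "Are you joking with me?"
        return "How can I help?"

expr = ExpressionVisualProcessing().process(None)
emo = TextAnalysis().process(None)
intent = IntentionIdentification().identify(expr, emo)
print(ContentGeneration().generate(expr, emo, intent,
                                   RobotLifeTimeline().current_state(9)))
```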
  • In this embodiment, the expression visual processing module is specifically configured to collect expression information through video; video acquisition is more accurate, so the user's expression and related information can be determined more precisely.
  • In this embodiment, the text analysis module is specifically configured to collect text emotion information through voice input; analyzing text emotion from voice input is convenient and fast.
  • In this embodiment, the system includes an artificial intelligence cloud processing module configured to: expand the self-cognition of the robot; and fit the robot's self-cognition parameters to the parameters in the life time axis to generate the robot life time axis.
  • In this way the life time axis is added to the robot's own self-cognition, so that the robot leads an anthropomorphic life; for example, the cognition of having lunch is added to the robot.
  • In this embodiment, the artificial intelligence cloud processing module is further configured to combine life scenes with the robot's self-cognition to form a self-cognition curve based on the life time axis.
  • In this way the life time axis can be added specifically to the robot's own parameters.
  • In this embodiment, the artificial intelligence cloud processing module is further configured to use the probability algorithm to calculate, after a time-axis scene parameter changes, the probability of each parameter change of the robot on the life time axis, forming a fitted curve. The probability algorithm may be a Bayesian probability algorithm.
  • The invention further discloses a robot comprising a system for generating robot interaction content as described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

A method for generating robot interaction content, comprising: acquiring expression information of a user (S101); acquiring text emotion information of the user (S102); determining a user intention according to the expression information and the text emotion information (S103); and generating robot interaction content by combining the expression information, the text emotion information, the user intention and a current robot life timeline (S104). In the method, the life timeline of a robot is added to the generation of the robot's interaction content, so that the robot is more humanized when interacting with a human and exhibits a human lifestyle along its life timeline. By means of the method, the humanization of robot interaction content generation, the human-robot interaction experience, and intelligence can be improved.
PCT/CN2016/087738 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot WO2018000259A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680001753.1A CN106537294A (zh) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot
PCT/CN2016/087738 WO2018000259A1 (fr) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087738 WO2018000259A1 (fr) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Publications (1)

Publication Number Publication Date
WO2018000259A1 (fr) 2018-01-04

Family

ID=58335804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087738 WO2018000259A1 (fr) 2016-06-29 2016-06-29 Method and system for generating robot interaction content, and robot

Country Status (2)

Country Link
CN (1) CN106537294A (fr)
WO (1) WO2018000259A1 (fr)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992935A (zh) * 2017-12-14 2018-05-04 深圳狗尾草智能科技有限公司 Method, device and medium for setting a life cycle for a robot
CN108334583B (zh) * 2018-01-26 2021-07-09 上海智臻智能网络科技股份有限公司 Emotion interaction method and apparatus, computer-readable storage medium, and computer device
CN110110169A (zh) * 2018-01-26 2019-08-09 上海智臻智能网络科技股份有限公司 Human-computer interaction method and human-computer interaction apparatus
CN110085221A (zh) * 2018-01-26 2019-08-02 上海智臻智能网络科技股份有限公司 Voice emotion interaction method, computer device and computer-readable storage medium
JP7199451B2 (ja) 2018-01-26 2023-01-05 Institute of Software, Chinese Academy of Sciences Affective interaction system, apparatus and method based on an affective computing user interface
CN108326855A (zh) * 2018-01-26 2018-07-27 上海器魂智能科技有限公司 Robot interaction method, apparatus, device and storage medium
CN108227932B (zh) * 2018-01-26 2020-06-23 上海智臻智能网络科技股份有限公司 Interaction intention determination method and apparatus, computer device and storage medium
CN108197115B (zh) 2018-01-26 2022-04-22 上海智臻智能网络科技股份有限公司 Intelligent interaction method and apparatus, computer device and computer-readable storage medium
CN110309254A (zh) * 2018-03-01 2019-10-08 富泰华工业(深圳)有限公司 Intelligent robot and human-computer interaction method
CN108363492B (zh) * 2018-03-09 2021-06-25 南京阿凡达机器人科技有限公司 Human-computer interaction method and interaction robot
CN109119077A (zh) * 2018-08-20 2019-01-01 深圳市三宝创新智能有限公司 Robot voice interaction system
CN110209792B (zh) * 2019-06-13 2021-07-06 思必驰科技股份有限公司 Dialogue easter egg generation method and system
CN113190661B (zh) * 2021-02-04 2023-05-26 上海幻引信息技术服务有限公司 Intelligent dialogue robot system and method with self-cognition capability

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104985599A (zh) * 2015-07-20 2015-10-21 百度在线网络技术(北京)有限公司 Artificial-intelligence-based intelligent robot control method and system, and intelligent robot
CN105082150A (zh) * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot human-computer interaction method based on user emotion and intention recognition
CN105345818A (zh) * 2015-11-04 2016-02-24 深圳好未来智能科技有限公司 3D video interactive robot with emotion and expression modules
CN105409197A (zh) * 2013-03-15 2016-03-16 趣普科技公司 Apparatus and method for providing a persistent companion device
CN105487663A (zh) * 2015-11-30 2016-04-13 北京光年无限科技有限公司 Intention recognition method and system for an intelligent robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3676969B2 (ja) * 2000-09-13 2005-07-27 株式会社エイ・ジー・アイ Emotion detection method, emotion detection apparatus, and recording medium
CN100583007C (zh) * 2006-12-21 2010-01-20 财团法人工业技术研究院 Movable device with surface information display and interaction function
CN101474481B (zh) * 2009-01-12 2010-07-21 北京科技大学 Emotional robot system
CN105798922B (zh) * 2016-05-12 2018-02-27 中国科学院深圳先进技术研究院 Home service robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105409197A (zh) * 2013-03-15 2016-03-16 趣普科技公司 Apparatus and method for providing a persistent companion device
CN104985599A (zh) * 2015-07-20 2015-10-21 百度在线网络技术(北京)有限公司 Artificial-intelligence-based intelligent robot control method and system, and intelligent robot
CN105082150A (zh) * 2015-08-25 2015-11-25 国家康复辅具研究中心 Robot human-computer interaction method based on user emotion and intention recognition
CN105345818A (zh) * 2015-11-04 2016-02-24 深圳好未来智能科技有限公司 3D video interactive robot with emotion and expression modules
CN105487663A (zh) * 2015-11-30 2016-04-13 北京光年无限科技有限公司 Intention recognition method and system for an intelligent robot

Also Published As

Publication number Publication date
CN106537294A (zh) 2017-03-22

Similar Documents

Publication Publication Date Title
WO2018000259A1 (fr) Method and system for generating robot interaction content, and robot
WO2018000268A1 (fr) Method and system for generating robot interaction content, and robot
US11670324B2 (en) Method for predicting emotion status and robot
CN106297789B (zh) Personalized interaction method and interaction system for an intelligent robot
CN105511608B (zh) Intelligent-robot-based interaction method and apparatus, and intelligent robot
WO2019144542A1 (fr) Affective interaction systems, devices, and methods based on an affective computing user interface
CN105843381B (zh) Data processing method for implementing multimodal interaction, and multimodal interaction system
CN107870994A (zh) Human-computer interaction method and system for an intelligent robot
WO2018000267A1 (fr) Method, system and robot for generating robot interaction content
CN107797663A (zh) Virtual-human-based multimodal interaction processing method and system
WO2018006371A1 (fr) Method and system for synchronizing speech and virtual actions, and robot
CN107301168A (zh) Intelligent robot and emotion interaction method and system therefor
CN107003997A (zh) Emotion type classification for an interactive dialog system
CN109789550A (zh) Control of a social robot based on prior character portrayal in fiction or performance
WO2018006370A1 (fr) Interaction method and system for a virtual 3D robot, and robot
WO2018006374A1 (fr) Function recommendation method, system and robot based on automatic wake-up
CN109176535A (zh) Intelligent-robot-based interaction method and system
WO2021217282A1 (fr) Method for implementing universal artificial intelligence
WO2018006369A1 (fr) Method and system for synchronizing voice and virtual actions, and robot
CN105912530A (zh) Information processing method and system for an intelligent robot
JP6366749B2 (ja) Dialogue interface
WO2018000258A1 (fr) Method and system for generating robot interaction content, and robot
JP2017037601A (ja) Dialogue interface
WO2018000261A1 (fr) Method and system for generating robot interaction content, and robot
WO2018000266A1 (fr) Method and system for generating robot interaction content, and robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906660

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906660

Country of ref document: EP

Kind code of ref document: A1