WO2018000258A1 - Procédé et système permettant de générer un contenu d'interaction de robot et robot - Google Patents

Procédé et système permettant de générer un contenu d'interaction de robot et robot

Info

Publication number
WO2018000258A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
time axis
information
user
life
Prior art date
Application number
PCT/CN2016/087736
Other languages
English (en)
Chinese (zh)
Inventor
邱楠
杨新宇
王昊奋
Original Assignee
深圳狗尾草智能科技有限公司
Application filed by 深圳狗尾草智能科技有限公司
Priority to CN201680001754.6A (CN106489114A)
Priority to PCT/CN2016/087736 (WO2018000258A1)
Publication of WO2018000258A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • The invention relates to the field of robot interaction technology, and in particular to a method, a system and a robot for generating robot interaction content.
  • Existing robots are generally based on question-and-answer interaction in a fixed scene. A person's life scenes on a given time axis, such as eating, sleeping and exercising, keep changing, and changes in the values of these life scenes affect the person's expression; the location scene the person is in also affects expression and feedback, for example excited in a billiard room and very happy at home.
  • When a robot is to produce expression feedback, it mainly relies on pre-set rules and deep learning, and there is no good solution for scene-aware question answering. As a result, the robot cannot be more anthropomorphic and cannot, like a human, show different expressions for life scenes at different time points and for different location scenes. In other words, the way the robot's interaction content is generated is entirely passive, so generating expressions requires a great deal of human-computer interaction, which results in very poor intelligence of the robot.
  • The object of the present invention is to provide a method, a system and a robot for generating robot interaction content, so that the robot has a human-like lifestyle on the life time axis. The method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience and improve intelligence.
  • A method for generating robot interaction content comprises: acquiring user information and determining the user's intention according to the user information; acquiring location scene information; and generating robot interaction content according to the user intention and the location scene information in combination with the current robot life time axis.
  • The method for generating the parameters of the robot's life time axis includes: expanding the self-cognition of the robot; and fitting the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • The step of expanding the self-cognition of the robot specifically comprises: combining the life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
  • The step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis comprises: using a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time axis scene parameter changes, so as to form a fitted curve.
  • The life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and the parameter values representing these behaviors.
  • The step of acquiring location scene information specifically includes: acquiring the location scene information by means of video information.
  • The step of acquiring location scene information specifically includes: acquiring the location scene information by means of picture information.
  • The step of acquiring location scene information specifically includes: acquiring the location scene information by means of gesture information.
  • The user information includes voice information, and the step of acquiring user information and determining the user's intention according to the user information specifically includes: acquiring voice information and determining the user's intention according to the voice information.
  • The invention discloses a system for generating robot interaction content, comprising:
  • an intention recognition module, configured to acquire user information and determine the user's intention according to the user information;
  • a scene recognition module, configured to acquire location scene information; and
  • a content generation module, configured to generate robot interaction content according to the user intention and the location scene information in combination with the current robot life time axis.
  • The system comprises a time-axis-based artificial intelligence cloud processing module, configured to fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • The time-axis-based artificial intelligence cloud processing module is specifically configured to combine the life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
  • The time-axis-based artificial intelligence cloud processing module is specifically configured to: use a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time axis scene parameter changes, so as to form a fitted curve.
  • The life time axis refers to a time axis covering the 24 hours of a day, and the parameters in the life time axis include at least the daily life behaviors performed by the user on the life time axis and the parameter values representing these behaviors.
  • The scene recognition module is specifically configured to acquire the location scene information by means of video information.
  • The scene recognition module is specifically configured to acquire the location scene information by means of picture information.
  • The scene recognition module is specifically configured to acquire the location scene information by means of gesture information.
  • The user information includes voice information, and the intention recognition module is specifically configured to: acquire voice information and determine the user's intention according to the voice information.
  • The invention further discloses a robot, comprising a system for generating robot interaction content as described above.
  • Compared with the prior art, the present invention has the following advantages: existing robots generally generate interaction content through question-and-answer interaction in a fixed scene and cannot generate robot interaction content accurately on the basis of the current scene. The generation method of the present invention includes: acquiring user information and determining the user's intention according to the user information; acquiring location scene information; and generating robot interaction content according to the user intention and the location scene information in combination with the current robot life time axis.
  • In this way, according to the current location scene information combined with the robot's life time axis, the robot interaction content can be generated more accurately, so that the robot interacts and communicates with people in a more accurate and anthropomorphic way. For people, everyday life has a certain regularity. The present invention adds the life time axis on which the robot is located to the generation of the robot's interaction content, making the robot more human-like when interacting with people and giving the robot a human-like lifestyle on the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience and improve intelligence.
  • FIG. 1 is a flowchart of a method for generating robot interaction content according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic diagram of a system for generating robot interaction content according to Embodiment 2 of the present invention.
  • Computer devices include user devices and network devices.
  • The user equipment or client includes, but is not limited to, a computer, a smart phone, a PDA and the like; the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing.
  • the computer device can operate alone to carry out the invention, and can also access the network and implement the invention through interoperation with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • The terms "first", "second" and the like may be used herein to describe various elements, but the elements should not be limited by these terms; the terms are used only to distinguish one element from another.
  • the term “and/or” used herein includes any and all combinations of one or more of the associated listed items. When a unit is referred to as being “connected” or “coupled” to another unit, it can be directly connected or coupled to the other unit, or an intermediate unit can be present.
  • A method for generating robot interaction content includes: acquiring user information and determining the user's intention according to the user information; acquiring location scene information; and generating robot interaction content according to the user intention and the location scene information in combination with the current robot life time axis.
  • Existing robots generally generate interaction content through question-and-answer interaction in a fixed scene and cannot generate robot interaction content accurately on the basis of the current scene.
  • The generation method of the present invention includes: acquiring user information and determining the user's intention according to the user information; acquiring location scene information; and generating robot interaction content according to the user intention and the location scene information in combination with the current robot life time axis. In this way, according to the current location scene information combined with the robot's life time axis, the robot interaction content can be generated more accurately, so that the robot interacts and communicates with people in a more accurate and anthropomorphic way. For people, everyday life has a certain regularity.
  • The present invention adds the life time axis on which the robot is located to the generation of the robot's interaction content, making the robot more human-like when interacting with people and giving the robot a human-like lifestyle on the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience and improve intelligence.
  • The interaction content can be an expression, text or voice.
  • The robot life time axis 300 is fitted and set in advance. Specifically, the robot life time axis 300 is a collection of parameters, and these parameters are transmitted to the system to generate the interaction content.
  • The user information in this embodiment may be one or more of the user's expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light sensing information and fingerprint information.
  • Among these, the user's expression is preferably used, so that recognition is accurate and efficient.
  • The life time axis is obtained as follows: following the time axis of human daily life and in a human-like manner, the robot's own self-cognition values are fitted to the time axis of daily life, so that the robot's behavior over a day is derived from this fitting; the robot can then carry out its own behavior based on the life time axis, for example generating interaction content and communicating with humans. If the robot stays awake all day, it acts according to the behaviors on this time axis, and the robot's self-cognition is also changed according to this time axis.
  • The life time axis and the variable parameters can be used to change attributes in the self-cognition, such as the mood value and the fatigue value, and can also automatically add new self-cognition information. For example, if there was previously no anger value, a scene based on the life time axis and the variable factors can automatically add an anger value to the robot's self-cognition, in the same way that human self-cognition was previously simulated.
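As an illustration of how such a parameter collection might be organized, the following is a minimal Python sketch of a life time axis holding hourly life-scene labels and a few self-cognition values. The names (`LifeTimeAxis`, `SelfCognition`, the hourly schedule) are hypothetical; the patent does not specify any data format.

```python
from dataclasses import dataclass, field

@dataclass
class SelfCognition:
    # Illustrative self-cognition attributes; the patent lists more
    # (intimacy, interaction count, location scene value, etc.).
    mood: float = 0.5        # 0.0 = low spirits, 1.0 = very cheerful
    fatigue: float = 0.3     # 0.0 = fully rested, 1.0 = exhausted

@dataclass
class LifeTimeAxis:
    """A 24-hour life time axis: one life-scene label per hour plus
    the robot's current self-cognition values (hypothetical layout)."""
    schedule: dict = field(default_factory=lambda: {
        7: "wake_up", 8: "breakfast", 12: "lunch",
        15: "exercise", 19: "dinner", 22: "sleep",
    })
    cognition: SelfCognition = field(default_factory=SelfCognition)

    def scene_at(self, hour: int) -> str:
        """Return the life-scene label scheduled for a given hour,
        falling back to 'free_time' when nothing is scheduled."""
        return self.schedule.get(hour, "free_time")

axis = LifeTimeAxis()
print(axis.scene_at(12), axis.cognition.mood)   # -> lunch 0.5
```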
  • For example, if the user says to the robot "I'm so sleepy", the robot understands that the user is very sleepy and combines this with the collected location scene information, which indicates that the user is in the room, and with the robot life time axis. If the current time is 9:00 a.m., the robot knows that the owner has just got up and should greet the owner with a morning greeting, for example replying "good morning", possibly accompanied by an expression, a picture and so on. The interaction content in the present invention can be understood as the robot's reply.
  • If instead the current time is 9:00 p.m., the robot, again combining the user's sleepiness with the location scene information (the room) and the robot life time axis, knows that the owner needs to sleep, and replies with words such as "good night, master, sleep well", possibly accompanied by expressions, pictures and so on. This approach is more anthropomorphic than generating replies and expressions purely from scene recognition, and is closer to people's daily lives.
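A minimal sketch of how such a reply could be selected is given below. It assumes the upstream modules have already produced an intent label, a location scene label and the current hour; the reply rules and the function name `generate_reply` are invented for illustration only.

```python
def generate_reply(intent: str, location_scene: str, hour: int) -> str:
    """Pick a reply from user intent + location scene + the current
    position on the life time axis (the hour). Purely illustrative rules."""
    if intent == "sleepy" and location_scene == "bedroom":
        if 6 <= hour < 11:          # morning on the life time axis
            return "Good morning!"
        if hour >= 21 or hour < 5:  # night on the life time axis
            return "Good night, master, sleep well."
    return "I see."  # fallback reply

print(generate_reply("sleepy", "bedroom", 9))    # -> Good morning!
print(generate_reply("sleepy", "bedroom", 21))   # -> Good night, master, sleep well.
```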
  • The method for generating the parameters of the robot life time axis includes: expanding the self-cognition of the robot; and fitting the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • In this way the life time axis is added to the robot's own self-cognition, so that the robot leads an anthropomorphic life; for example, the cognition of having lunch is added to the robot.
  • The step of expanding the self-cognition of the robot specifically includes: combining the life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
  • Specifically, the life time axis is added to the robot's own parameters.
  • The step of fitting the self-cognitive parameters of the robot to the parameters in the life time axis comprises: using a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time axis scene parameter changes, so as to form a fitted curve. The probability algorithm may be a Bayesian probability algorithm.
  • Over a day the robot carries out actions such as sleeping, exercising, eating, dancing, reading, making up and so on. Each action affects the robot's own self-cognition, and the parameters on the life time axis are combined with the robot's own self-cognition.
  • The robot's self-cognition includes values such as mood, fatigue, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value and location object value; the location values allow the robot to identify the location scene in which it is situated, such as a café or a bedroom.
  • The robot performs different actions at different points on the day's time axis, such as sleeping at night, eating at noon and exercising during the day, and all of these scenes on the life time axis have an impact on the self-cognition. These numerical changes are modeled by dynamically fitting a probability model, that is, by fitting the probability that each of these actions occurs at each point on the time axis.
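One simple way to realize such a dynamic fit, sketched below under the assumption that logged behavior data is available, is to estimate for each hour the relative frequency with which each action was observed. The patent only requires "a probability algorithm" (for example Bayesian), so this frequency estimate is an illustrative stand-in, not the patented algorithm itself.

```python
from collections import Counter, defaultdict

def fit_action_probabilities(observations):
    """observations: list of (hour, action) pairs from logged behavior.
    Returns {hour: {action: probability}} - a fitted curve over the
    24-hour life time axis (simple frequency estimate)."""
    counts = defaultdict(Counter)
    for hour, action in observations:
        counts[hour][action] += 1
    curve = {}
    for hour, actions in counts.items():
        total = sum(actions.values())
        curve[hour] = {a: c / total for a, c in actions.items()}
    return curve

logs = [(12, "eat"), (12, "eat"), (12, "read"), (23, "sleep"), (23, "sleep")]
curve = fit_action_probabilities(logs)
print(curve[12])   # -> {'eat': 0.666..., 'read': 0.333...}
```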
  • Scene recognition of this kind changes the geographic scene value in the self-cognition.
  • The step of acquiring location scene information specifically includes: acquiring the location scene information by means of video information.
  • In this way the location scene information is obtained from video, and acquisition from video is more accurate.
  • The step of acquiring location scene information specifically includes: acquiring the location scene information by means of picture information.
  • Acquisition from pictures requires less computation from the robot and makes the robot's reaction faster.
  • The step of acquiring location scene information specifically includes: acquiring the location scene information by means of gesture information.
  • Acquiring the scene by gesture makes the robot applicable to more users; for example, a disabled user, or an owner who sometimes does not want to talk, can use gestures to convey information to the robot.
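The patent does not disclose a concrete recognition algorithm for video, picture or gesture input. The sketch below assumes an upstream vision model has already produced object labels for a frame or picture, and maps them to a location scene label with a simple lookup; the mapping table and function name are invented placeholders, not the claimed method.

```python
# Hypothetical mapping from detected objects to a location scene label.
SCENE_HINTS = {
    "bed": "bedroom", "pillow": "bedroom",
    "espresso_machine": "cafe", "menu": "cafe",
    "billiard_table": "billiard_room",
}

def recognize_location_scene(detected_objects):
    """Vote for a location scene from object labels produced by an
    upstream detector (video frame, picture, or gesture-selected tag)."""
    votes = {}
    for obj in detected_objects:
        scene = SCENE_HINTS.get(obj)
        if scene:
            votes[scene] = votes.get(scene, 0) + 1
    return max(votes, key=votes.get) if votes else "unknown"

print(recognize_location_scene(["bed", "pillow", "lamp"]))  # -> bedroom
```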
  • The user information includes voice information, and the step of acquiring user information and determining the user's intention according to the user information specifically includes: acquiring voice information and determining the user's intention according to the voice information.
  • In this way the user's intention is obtained from the user's voice, so that the robot grasps the user's intention more accurately.
  • Of course, other methods, such as text input, may also be used to let the robot know the user's intention.
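A minimal keyword-based sketch of determining the user's intention from transcribed voice (or typed text) is shown below. A real system would use speech recognition plus a trained intent classifier; the keyword table here is purely illustrative.

```python
# Hypothetical keyword table; a production system would use a trained
# intent classifier on top of speech recognition output.
INTENT_KEYWORDS = {
    "sleepy": ["sleepy", "tired", "exhausted"],
    "hungry": ["hungry", "starving"],
    "greeting": ["hello", "hi", "good morning"],
}

def determine_intent(utterance: str) -> str:
    """Map a transcribed utterance (or typed text) to an intent label."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "unknown"

print(determine_intent("It's so sleepy today"))  # -> sleepy
```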
  • The robot's own self-cognition is shaped by the scenes on the robot's life time axis, such as the normal life scenes within a day: eating, sleeping, exercising. These life scenes affect the robot's own mood value and fatigue value.
  • These effects are fitted to form a self-cognitive curve based on the time axis.
  • A Bayesian probability algorithm estimates the robot's parameters, using a Bayesian network to calculate the probabilities on the life time axis. After the robot's own time axis parameters change, the probability of change of each parameter forms a fitted curve, which dynamically affects the robot's own self-cognition.
  • In this way the life time axis produces regular changes in the robot itself over the time period.
  • The change comes from the fitting of the self-cognition to the life scenes by the above algorithm, which produces the anthropomorphic effect.
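To illustrate how a fitted curve might dynamically affect the self-cognition, the sketch below nudges the mood and fatigue values according to the probability that a given life scene is active at the current hour. The per-scene effect table and update rule are assumptions: the patent describes the idea (a probabilistic fit driving self-cognition) without giving formulas.

```python
# Hypothetical per-scene effects on self-cognition values.
SCENE_EFFECTS = {
    "exercise": {"mood": +0.10, "fatigue": +0.20},
    "sleep":    {"mood": +0.05, "fatigue": -0.40},
    "eat":      {"mood": +0.05, "fatigue": -0.05},
}

def update_self_cognition(cognition, hour, curve):
    """Weight each scene's effect by its fitted probability at this hour
    and apply it to the mood/fatigue values (clamped to [0, 1])."""
    for scene, prob in curve.get(hour, {}).items():
        for attr, delta in SCENE_EFFECTS.get(scene, {}).items():
            cognition[attr] = min(1.0, max(0.0, cognition[attr] + prob * delta))
    return cognition

state = {"mood": 0.5, "fatigue": 0.6}
fitted = {23: {"sleep": 0.9, "eat": 0.1}}        # e.g. output of the earlier fit
print(update_self_cognition(state, 23, fitted))  # fatigue drops, mood rises slightly
```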
  • In addition, the robot knows its geographic location and changes the way the interaction content is generated according to the geographic environment in which it is located. The geographic changes are based on a geographic scene recognition algorithm that allows the robot to know where it is located, such as a café or a bedroom.
  • This module gives the robot itself a human-like lifestyle; the expression output, for example, can be changed according to the location scene.
  • A system for generating robot interaction content includes:
  • the intention recognition module 201, configured to acquire user information and determine the user's intention according to the user information;
  • the scene recognition module 202, configured to acquire location scene information; and
  • the content generation module 203, configured to generate robot interaction content according to the user intention and the location scene information, in combination with the current robot life time axis provided by the robot life time axis module 301.
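The sketch below shows one possible wiring of these modules (201, 202, 203 and the life time axis module 301). The class name, interfaces and tiny stand-in modules are illustrative assumptions, not the patent's implementation.

```python
class RobotInteractionSystem:
    """Illustrative composition of the intention recognition module (201),
    scene recognition module (202), content generation module (203) and
    life time axis module (301); all interfaces are assumptions."""

    def __init__(self, intent_module, scene_module, timeline_module):
        self.intent_module = intent_module        # 201: text/voice -> intent
        self.scene_module = scene_module          # 202: sensor data -> scene
        self.timeline_module = timeline_module    # 301: () -> current hour

    def generate(self, user_input, sensor_objects):
        """Content generation module (203): combine intent, location scene
        and the current position on the life time axis into a reply."""
        intent = self.intent_module(user_input)
        scene = self.scene_module(sensor_objects)
        hour = self.timeline_module()
        if intent == "sleepy" and scene == "bedroom" and hour >= 21:
            return "Good night, master, sleep well."
        return f"({intent} / {scene} / {hour}h) I see."

# Tiny stand-in modules so the sketch runs on its own.
system = RobotInteractionSystem(
    intent_module=lambda text: "sleepy" if "sleepy" in text.lower() else "unknown",
    scene_module=lambda objs: "bedroom" if "bed" in objs else "unknown",
    timeline_module=lambda: 21,
)
print(system.generate("I'm so sleepy", ["bed", "lamp"]))  # -> Good night, ...
```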
  • In this way, according to the current location scene information combined with the robot's life time axis, the robot interaction content can be generated more accurately, so that the robot interacts and communicates with people in a more accurate and anthropomorphic way.
  • For people, everyday life has a certain regularity. The present invention adds the life time axis on which the robot is located to the generation of the robot's interaction content, making the robot more human-like when interacting with people and giving the robot a human-like lifestyle on the life time axis; the method can improve the anthropomorphism of robot interaction content generation, enhance the human-computer interaction experience and improve intelligence.
  • For example, if the user says to the robot "I'm so sleepy", the robot understands that the user is very sleepy and combines this with the collected location scene information, which indicates that the user is in the room, and with the robot life time axis. If the current time is 9:00 a.m., the robot knows that the owner has just got up and should greet the owner with a morning greeting, for example replying "good morning", possibly accompanied by expressions, pictures and so on.
  • The interaction content in the present invention can be understood as the robot's reply.
  • If instead the current time is 9:00 p.m., the robot, again combining the user's sleepiness with the location scene information and the robot life time axis, knows that the owner needs to sleep and replies with words such as "good night, master, sleep well", possibly accompanied by expressions, pictures and so on. This approach is more anthropomorphic than generating replies and expressions purely from scene recognition, and is closer to people's daily lives.
  • The system includes a time-axis-based artificial intelligence cloud processing module, configured to fit the self-cognitive parameters of the robot to the parameters in the life time axis to generate the robot life time axis.
  • In this way the life time axis is added to the robot's own self-cognition, so that the robot leads an anthropomorphic life; for example, the cognition of having lunch is added to the robot.
  • The time-axis-based artificial intelligence cloud processing module is specifically configured to combine the life scenes with the robot's self-cognition to form a self-cognitive curve based on the life time axis.
  • Specifically, the life time axis is added to the robot's own parameters.
  • The time-axis-based artificial intelligence cloud processing module is specifically configured to: use a probability algorithm to calculate the probability of change of each parameter of the robot on the life time axis after a time axis scene parameter changes, so as to form a fitted curve. The probability algorithm may be a Bayesian probability algorithm.
  • Over a day the robot carries out actions such as sleeping, exercising, eating, dancing, reading, making up and so on. Each action affects the robot's own self-cognition, and the parameters on the life time axis are combined with the robot's own self-cognition.
  • The robot's self-cognition includes values such as mood, fatigue, intimacy, favorability, number of interactions, the robot's three-dimensional cognition, age, height, weight, game scene value, game object value, location scene value and location object value; the location values allow the robot to identify the location scene in which it is situated, such as a café or a bedroom.
  • The robot performs different actions at different points on the day's time axis, such as sleeping at night, eating at noon and exercising during the day, and all of these scenes on the life time axis have an impact on the self-cognition. These numerical changes are modeled by dynamically fitting a probability model, that is, by fitting the probability that each of these actions occurs at each point on the time axis.
  • Scene recognition of this kind changes the geographic scene value in the self-cognition.
  • The scene recognition module is specifically configured to acquire the location scene information by means of video information.
  • In this way the location scene information is obtained from video, and acquisition from video is more accurate.
  • The scene recognition module is specifically configured to acquire the location scene information by means of picture information.
  • Acquisition from pictures requires less computation from the robot and makes the robot's reaction faster.
  • The scene recognition module is specifically configured to acquire the location scene information by means of gesture information.
  • Acquiring the scene by gesture makes the robot applicable to more users; for example, a disabled user, or an owner who sometimes does not want to talk, can use gestures to convey information to the robot.
  • The user information includes voice information, and the intention recognition module is specifically configured to: acquire voice information and determine the user's intention according to the voice information.
  • In this way the user's intention is obtained from the user's voice, so that the robot grasps the user's intention more accurately.
  • Of course, other methods, such as text input, may also be used to let the robot know the user's intention.
  • A robot is further disclosed, comprising a system for generating robot interaction content according to any one of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

The invention relates to a method for generating robot interaction content, comprising: obtaining user information and determining a user intention according to the user information; obtaining location scene information; and generating robot interaction content by combining the user intention, the location scene information and a current robot life timeline. The life timeline of the robot is added to the generation of the robot interaction content, so that the robot is more humanized when interacting with a human and has a human lifestyle within the life timeline. By means of the method, the humanization of robot interaction content generation, the human-robot interaction experience and intelligence can be improved.
PCT/CN2016/087736 2016-06-29 2016-06-29 Procédé et système permettant de générer un contenu d'interaction de robot et robot WO2018000258A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680001754.6A CN106489114A (zh) 2016-06-29 2016-06-29 一种机器人交互内容的生成方法、系统及机器人
PCT/CN2016/087736 WO2018000258A1 (fr) 2016-06-29 2016-06-29 Procédé et système permettant de générer un contenu d'interaction de robot et robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087736 WO2018000258A1 (fr) 2016-06-29 2016-06-29 Procédé et système permettant de générer un contenu d'interaction de robot et robot

Publications (1)

Publication Number Publication Date
WO2018000258A1 true WO2018000258A1 (fr) 2018-01-04

Family

ID=58285363

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087736 WO2018000258A1 (fr) 2016-06-29 2016-06-29 Procédé et système permettant de générer un contenu d'interaction de robot et robot

Country Status (2)

Country Link
CN (1) CN106489114A (fr)
WO (1) WO2018000258A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992935A (zh) * 2017-12-14 2018-05-04 深圳狗尾草智能科技有限公司 为机器人设置生活周期的方法、设备及介质
CN108733741A (zh) * 2018-03-07 2018-11-02 北京猎户星空科技有限公司 一种交互方法及装置、智能设备和计算机可读存储介质
CN108363492B (zh) * 2018-03-09 2021-06-25 南京阿凡达机器人科技有限公司 一种人机交互方法及交互机器人
CN112099630B (zh) * 2020-09-11 2024-04-05 济南大学 一种多模态意图逆向主动融合的人机交互方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105058389A (zh) * 2015-07-15 2015-11-18 深圳乐行天下科技有限公司 一种机器人系统、机器人控制方法及机器人
CN105082150A (zh) * 2015-08-25 2015-11-25 国家康复辅具研究中心 一种基于用户情绪及意图识别的机器人人机交互方法
CN105409197A (zh) * 2013-03-15 2016-03-16 趣普科技公司 用于提供持久伙伴装置的设备和方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105409197A (zh) * 2013-03-15 2016-03-16 趣普科技公司 用于提供持久伙伴装置的设备和方法
CN105058389A (zh) * 2015-07-15 2015-11-18 深圳乐行天下科技有限公司 一种机器人系统、机器人控制方法及机器人
CN105082150A (zh) * 2015-08-25 2015-11-25 国家康复辅具研究中心 一种基于用户情绪及意图识别的机器人人机交互方法

Also Published As

Publication number Publication date
CN106489114A (zh) 2017-03-08

Similar Documents

Publication Publication Date Title
WO2018000268A1 (fr) Procédé et système pour générer un contenu d'interaction de robot, et robot
WO2018000259A1 (fr) Procédé et système pour générer un contenu d'interaction de robot et robot
Tang et al. A novel multimodal communication framework using robot partner for aging population
US20180231653A1 (en) Entity-tracking computing system
WO2018000258A1 (fr) Procédé et système permettant de générer un contenu d'interaction de robot et robot
WO2018006370A1 (fr) Procédé et système d'interaction pour robot 3d virtuel, et robot
WO2018000267A1 (fr) Procédé de génération de contenu d'interaction de robot, système et robot
WO2018006374A1 (fr) Procédé, système et robot de recommandation de fonction basés sur un réveil automatique
WO2018006372A1 (fr) Procédé et système de commande d'appareil ménager sur la base de la reconnaissance d'intention, et robot
WO2018006371A1 (fr) Procédé et système de synchronisation de paroles et d'actions virtuelles, et robot
WO2018006373A1 (fr) Procédé et système permettant de commander un appareil ménager sur la base d'une reconnaissance d'intention, et robot
WO2018006369A1 (fr) Procédé et système de synchronisation d'actions vocales et virtuelles, et robot
KR20200024675A (ko) 휴먼 행동 인식 장치 및 방법
Jain et al. Objective self
Lee et al. Context-aware inference in ubiquitous residential environments
WO2018000266A1 (fr) Procédé et système permettant de générer un contenu d'interaction de robot, et robot
WO2018192567A1 (fr) Procédé de détermination de seuil émotionnel et dispositif d'intelligence artificielle
Thakur et al. A complex activity based emotion recognition algorithm for affect aware systems
WO2018000261A1 (fr) Procédé et système permettant de générer un contenu d'interaction de robot, et robot
JP6937723B2 (ja) 行動パターンからの乖離度に基づいて感情を推定可能なプログラム、装置及び方法
Chen et al. Cp-robot: Cloud-assisted pillow robot for emotion sensing and interaction
Zhu et al. A human-centric framework for context-aware flowable services in cloud computing environments
JP2022517457A (ja) 感情認識機械を定義するための方法及びシステム
Taniguchi et al. Semiotically adaptive cognition: toward the realization of remotely-operated service robots for the new normal symbiotic society
WO2018000260A1 (fr) Procédé servant à générer un contenu d'interaction de robot, système et robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906659

Country of ref document: EP

Kind code of ref document: A1