CN109033179B - Reply information generation method and device based on emotional state of robot

Reply information generation method and device based on emotional state of robot

Info

Publication number
CN109033179B
CN109033179B
Authority
CN
China
Prior art keywords
robot
emotion
information
knowledge graph
current user
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810668689.3A
Other languages
Chinese (zh)
Other versions
CN109033179A (en)
Inventor
宋亚楠
邱楠
陈甜
Current Assignee
Shenzhen Gowild Robotics Co ltd
Original Assignee
Shenzhen Gowild Robotics Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Gowild Robotics Co ltd filed Critical Shenzhen Gowild Robotics Co ltd
Publication of CN109033179A
Application granted
Publication of CN109033179B

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J11/001Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means with emotions simulating means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the technical field of intelligent robots and provides a reply information generation method and device based on the emotional state of a robot. The method comprises: obtaining the emotion factors of the robot; determining a robot emotion label according to the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user; and generating reply information under the guidance of the robot emotion label, or generating the reply information directly under the guidance of the emotion factors. The method and device can guide the generation of reply information in combination with the real interaction scene and realize a diversified human-computer interaction process.

Description

Reply information generation method and device based on emotional state of robot
Technical Field
The invention relates to the technical field of intelligent robots, and in particular to a reply information generation method and device based on the emotional state of a robot.
Background
At present, there are many products and platforms involving human-computer interaction technology. Most of these products process and analyze the user's voice or multi-modal input to obtain various kinds of information, extract or generate reply information from a target database or knowledge base according to that information, and return the reply to the user.
However, the prior art has a potential problem: regardless of when and under what circumstances the user interacts with the product, the product will in most cases give the same reply to the same user input. This is clearly not how interaction between humans works.
How to guide the generation of reply information in combination with the real interaction scene and realize a diversified human-computer interaction process is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a reply information generation method and device based on the emotional state of a robot, which can guide the generation of reply information in combination with the real interaction scene and realize a diversified human-computer interaction process.
In a first aspect, a reply information generation method based on a robot emotional state includes:
acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
generating a robot emotion label according to the emotion factors of the robot;
and generating reply information under the guidance of the robot emotion label.
Further, acquiring the emotion factors of the robot includes:
collecting statistics on the robot's current remaining battery level and usage duration;
detecting the robot's current network condition and activity condition;
and determining the current mood state of the robot according to the remaining battery level, the usage duration, the network condition, the activity condition, or mood-specific information received in advance.
Further, acquiring the emotion factors of the robot includes:
constructing a knowledge graph; the knowledge graph comprises a plurality of knowledge graph subgraphs; the knowledge graph subgraph comprises user data and user historical interaction information;
acquiring voice information or picture information of the robot interacting with a current user;
determining the current user ID or the current user name according to the voice information or the picture information;
extracting a knowledge graph sub-graph of which the user data is matched with the current user ID or the current user name from the knowledge graph;
determining the familiarity of the robot with the current user according to the degree of completeness of the knowledge graph subgraph;
and determining the emotional history of the robot with the current user according to the user historical interaction information of the extracted knowledge graph subgraph.
Further, the knowledge graph subgraph also comprises robot environment data; acquiring the emotion factors of the robot then includes:
acquiring multi-mode information of interaction between the robot and a current user;
determining an environment name or environment ID from the multimodal information;
extracting a knowledge graph subgraph of which the robot environment data is matched with the environment name or the environment ID from the knowledge graph;
and determining the emotional history of the robot toward the current environment according to the extracted knowledge graph subgraph.
Further, the knowledge graph subgraph also comprises user emotion data; the user emotion data comprises one or more of the tone data, facial expression data, action data, and wording data input by the user; acquiring the emotion factors of the robot then includes:
acquiring multi-mode information of interaction between the robot and a current user;
extracting tone information, expression information or action information of the current user from the multi-mode information;
extracting a knowledge graph subgraph of which the emotion data of the user is matched with the tone information, the expression information or the action information from a knowledge graph;
and determining the emotional state of the current user according to the extracted knowledge graph subgraph.
Furthermore, the emotion factors of the robot are described by multi-dimensional data; generating the robot emotion label according to the emotion factors of the robot includes:
converting the emotion factors into one-dimensional data to obtain the robot emotion label.
Further, guiding the generation of reply information according to the robot emotion label includes:
taking the robot emotion label as one of the inputs of a training model to guide the generation of the training model, then guiding the generation of reply information according to the generated training model and determining the form category specifically adopted in the reply;
or guiding the generation of reply information according to the robot emotion label and preset rules, and determining the form category specifically adopted in the reply;
the form categories comprise tone, action, facial expression, and wording.
In a second aspect, a reply information generation method based on a robot emotional state includes:
acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
and directly guiding the generation of reply information according to the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user.
In a third aspect, a reply information generation apparatus based on the emotional state of a robot includes:
the emotion factor acquisition unit is used for acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
the robot emotion label determination unit is used for generating a robot emotion label according to the emotion factors of the robot;
and the robot emotion label application unit is used for guiding the generation of reply information according to the robot emotion label.
In a fourth aspect, a reply information generation apparatus based on a robot emotional state includes:
the emotion factor acquisition unit is used for acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
and the emotion factor application unit is used for directly guiding the generation of reply information according to the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user.
As can be seen from the foregoing technical solutions, the reply information generation method and apparatus based on the robot emotional state provided by these embodiments comprehensively analyze various emotion factors, such as the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user, determine the robot emotion label, and guide the generation of reply information through the emotion label or directly through the emotion factors, thereby realizing a diversified human-computer interaction process and helping the robot reply to the current user in a way that varies with the person, time, event, and place.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a flowchart of the reply information generation method provided in embodiment one;
FIG. 2 is a flowchart of the method for determining the current mood state of the robot in embodiment two;
FIG. 3 is a flowchart of the method for determining the familiarity of the robot with the current user and the emotional history of the robot with the current user in embodiment two;
FIG. 4 is a flowchart of the method for determining the robot's emotional history toward the current environment in embodiment two;
FIG. 5 is a flowchart of the method for determining the emotional state of the current user in embodiment two;
FIG. 6 is a flowchart of the reply information generation method provided in embodiment four;
FIG. 7 is a schematic connection diagram of the reply information generation apparatus provided in embodiment five;
FIG. 8 is a schematic connection diagram of the reply information generation apparatus provided in embodiment six.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only used as examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the present invention belongs.
Embodiment one:
This embodiment provides a reply information generation method based on the emotional state of a robot. Referring to fig. 1, the method includes:
Step S101, acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
specifically, the emotional factors of the robot are described using multidimensional data, such as: and the description can be realized by adopting a multidimensional vector and a multidimensional linked list. The current mood state of the robot refers to the current mood of the robot, for example: open heart, depressed heart, heart injury, etc. Familiarity of the robot with current users includes: familiar, generally familiar, unfamiliar, etc. The emotion history of the robot and the current user refers to the historical emotion of the robot to the user, which is judged according to the historical interaction data. The current user emotional state refers to the current emotion of the user.
Step S102, generating a robot emotion label according to the emotion factors of the robot;
the method specifically comprises the following steps: and converting the emotion factors into one-dimensional data to obtain the robot emotion label.
Specifically, the multidimensional emotion factors are converted into one-dimensional emotion tags, so that only one-dimensional emotion tags need to be considered in subsequent reply information, multidimensional emotion factors do not need to be considered, and reply information can be generated more quickly. The emotion labels may be described in the form of one-dimensional vectors, one-dimensional linked lists, and the like.
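As an illustration of this conversion, the following Python sketch collapses the five emotion factors into a single label. The equal weighting and the quantization into five levels are illustrative assumptions; the description only requires that multi-dimensional data be reduced to one dimension.

```python
# A minimal sketch of step S102: collapsing multi-dimensional emotion
# factors into a one-dimensional emotion label. The weights and the
# binning into five discrete labels are assumptions, not prescribed here.

def emotion_label(factors, weights=None):
    """factors: dict of emotion factors, each normalized to [0, 1]."""
    if weights is None:
        # Assumed equal weighting of the five factors named above.
        weights = {k: 1.0 / len(factors) for k in factors}
    score = sum(weights[k] * v for k, v in factors.items())
    return min(int(score * 5) + 1, 5)  # quantize to labels 1..5

factors = {
    "mood_state": 0.8,    # current mood state of the robot
    "familiarity": 0.6,   # familiarity with the current user
    "user_history": 0.7,  # emotional history with the current user
    "env_history": 0.5,   # emotional history toward the environment
    "user_emotion": 0.9,  # current user's emotional state
}
print(emotion_label(factors))  # 4
```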
Step S103, generating reply information under the guidance of the robot emotion label.
According to the technical scheme, the reply information generation method based on the robot emotional state provided by this embodiment comprehensively analyzes various emotion factors, such as the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user, determines the robot emotion label, and guides the generation of reply information through the emotion label, realizing a diversified human-computer interaction process and helping the robot reply to the current user in a way that varies with the person, time, event, and place.
Embodiment two:
Embodiment two adds, on the basis of embodiment one, methods for acquiring the emotion factors.
1. The current mood state of the robot.
For the current mood state of the robot, referring to fig. 2, the specific acquisition process is as follows:
step S201, counting the current residual electric quantity and the service life of the robot.
Step S202, the current network state and the current activity state of the robot are detected.
Step S203, determining the current mood state of the robot according to the remaining power, the service life, the network condition, the activity condition or the pre-received mood specific information.
Specifically, the robot may determine its current mood from the remaining battery level, the usage duration, the network condition, the activity condition, or mood-specific information. For example, if the robot's remaining battery level is below 10%, it is in a "hungry" state, and its current mood state is to request the user to charge it: regardless of what the user inputs, the robot will ask the user to charge it before replying. Similarly, when the robot's network condition is bad, its current mood state is to request the user to check its network, so regardless of what the user inputs, the robot will ask the user to check its network before replying.
The mood-specific information may be pushed to the robot in advance by the developer. For example, it may be "happy" information pushed by the developer during the Spring Festival; the robot then stops updating its mood state from information such as the remaining battery level, stays happy throughout the Spring Festival, and interacts with the user in a happier, livelier mood.
Besides the above factors, other information may also influence the robot's current mood. Such information can generally be set according to product requirements; an expert system can also be adopted, in which experts directly specify the rules for determining the robot's current mood according to psychological research results.
In this embodiment of the reply information generation method based on the robot emotional state, various states of the robot can be detected and counted, and the robot's current mood state is determined from the remaining battery level, the network condition, the robot's ongoing activities, its usage duration, or mood-specific information pushed by the developer.
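A minimal Python sketch of such mood rules is given below. The 10% threshold comes from the example above; the priority order and the remaining rules are illustrative assumptions.

```python
# A sketch of mood-state determination (step S203). Developer-pushed
# mood-specific information overrides the other signals, as in the
# Spring Festival example; thresholds and rule order are assumptions.

def current_mood(battery_pct, network_ok, pushed_mood=None):
    if pushed_mood is not None:
        return pushed_mood            # e.g. "happy" during Spring Festival
    if battery_pct < 10:
        return "hungry: ask the user for charging before replying"
    if not network_ok:
        return "uneasy: ask the user to check the network before replying"
    return "normal"

print(current_mood(battery_pct=8, network_ok=True))
print(current_mood(battery_pct=80, network_ok=True, pushed_mood="happy"))
```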
2. Familiarity of the robot with the current user and the emotional history of the robot with the current user.
For the familiarity of the robot with the current user and the emotional history of the robot with the current user, referring to fig. 3, the specific acquisition process is as follows:
Step S301, establishing a knowledge graph; the knowledge graph comprises a plurality of knowledge graph subgraphs; the knowledge graph subgraph comprises user data and user historical interaction information;
specifically, the knowledge graph comprises attributes of various users or robots, and the knowledge graph subgraphs are formed by extracting partial attributes from the knowledge graph. The storage mode of the knowledge graph can be two types: unified storage and block storage. The unified storage means that all the attributes of the robot and the attributes of the user are stored in a graph library, so that when the knowledge graph subgraph is extracted, only the subgraph needs to be extracted from the graph library. The storage in blocks means that all the robot attributes and user attributes are stored in a plurality of storage blocks. For example: dividing all the robot attributes into a group, and storing the group as a robot gallery; all user attributes are grouped together and stored as a user gallery. Thus, when extracting the knowledge graph subgraph, extracting the knowledge graph subgraph of the robot from the robot graph library; and extracting a knowledge graph subgraph of the user from the user graph library.
Step S302, acquiring voice information or picture information of the robot interacting with the current user;
step S303, determining the current user ID or the current user name according to the voice information or the picture information;
Step S304, extracting a knowledge graph subgraph with user data matched with the current user ID or the current user name from the knowledge graph;
s305, determining the familiarity of the robot to the current user according to the perfection degree of the knowledge graph subgraph;
Specifically, completeness refers to the number of attributes filled in the knowledge graph subgraph. For example, in an education scenario, suppose the robot is used to provide course-related auxiliary education to students in the compulsory education stage. In this scenario, the information that can be filled into the knowledge graph subgraph can be listed as follows:
The first type of information: name (ID), identifying information (voiceprint, fingerprint, face image, etc., used by the robot to identify the user), current grade, home region. This information is closely related to the education function: knowing the user's grade and region tells the robot which subjects and knowledge the user has learned, is learning, and will learn.
The second type of information: age, sex, class. This information assists the education function: students of different ages and sexes have different characteristics, and class information helps the robot know the user's teachers and the specific teaching progress.
The third type of information: historical information such as past scores, interaction history, and wrongly answered questions. This information also assists the education function; it is obtained by the robot during teaching and interaction, and is used to track the user's learning, guide teaching, and customize review.
In this product scenario, if the first type of information is completely filled in, the familiarity between the robot and the user reaches a passing level (sixty percent); if the second type of information is also completely filled in, the familiarity reaches a good level (eighty percent); and if the third type of information is complete and new information is periodically added to the historical information, the familiarity reaches an excellent level (ninety-five percent). Thus, the more completely the knowledge graph subgraph is filled in, the more content there is describing the interaction between the robot and the user, and the higher the familiarity between the robot and the user.
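The tiered levels above can be read as a simple completeness check; the following Python sketch assumes the field names from the education example and treats the three tiers as prerequisites for the 60%/80%/95% levels.

```python
# A sketch of step S305: familiarity from the completeness of the
# knowledge graph subgraph. Field names and the "recently_updated"
# flag are assumptions drawn from the education example.

TIER1 = {"name", "identifying_info", "grade", "region"}
TIER2 = {"age", "sex", "class"}
TIER3 = {"scores", "interaction_history", "wrong_questions"}

def familiarity(subgraph):
    filled = {k for k, v in subgraph.items() if v}
    if not TIER1 <= filled:
        return 0.0    # below the passing level
    if not TIER2 <= filled:
        return 0.60   # passing level
    if not (TIER3 <= filled and subgraph.get("recently_updated")):
        return 0.80   # good level
    return 0.95       # excellent level

student = {"name": "A", "identifying_info": "voiceprint", "grade": 5,
           "region": "Shenzhen", "age": 11, "sex": "F", "class": "5-2"}
print(familiarity(student))  # 0.8
```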
Step S306, determining the emotional history of the robot with the current user according to the user historical interaction information of the extracted knowledge graph subgraph.
Specifically, the user historical interaction information includes the tone data, facial expression data, action data, wording data, and the like input by the user. From this feedback, the emotional history of the robot with the current user can be determined. For example, suppose the robot serves as a mobile phone assistant. During working hours, the robot reads out by voice a message received on the user's phone, seriously disturbing the work of the user or others; the user therefore feeds back an angry emotion to the robot by voice or message, for example replying with a message complaining that the robot should not have read the message aloud at that moment. The robot records the user's emotion in that scene.
In this reply information generation method based on the robot emotional state, after identifying the current user from the received voice or picture information, the robot can extract from its knowledge graph the subgraph containing the history of its interaction with that user, and thereby obtain its affection for the current user, its intimacy with the current user, its understanding of and familiarity with the current user, and the emotional history between them; this information then influences the robot's emotion.
3. The robot's emotional history of the current environment.
For the robot's emotional history toward the current environment, referring to fig. 4, the specific acquisition process is as follows:
Step S401, acquiring multi-modal information of the interaction with the current user.
Step S402, determining an environment name or environment ID from the multi-modal information.
Step S403, extracting from the knowledge graph a knowledge graph subgraph whose robot environment data matches the environment name or environment ID.
Step S404, determining the robot's emotional history toward the current environment according to the extracted knowledge graph subgraph.
Specifically, the robot derives the influence of the current environment on its emotion from its interaction history with the user. For example, in the scene above, when the user is at work and the phone receives a new message, the robot knows from the interaction history that the message must not be played aloud in this scene. The robot can thus identify the current scene from the multi-modal information, extract from the knowledge graph the subgraph related to the current environment, and determine the influence of the current environment on its emotion.
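The environment lookup can be sketched as a subgraph match keyed on the environment ID; the per-environment record of emotion events below is an assumed representation.

```python
# A sketch of steps S401-S404: matching the current environment against
# knowledge graph subgraphs and recovering the robot's emotional history
# for that environment. The subgraph representation is an assumption.

subgraphs = [
    {"env_id": "office_daytime",
     "emotion_events": ["scolded for reading a message aloud"]},
    {"env_id": "home_evening",
     "emotion_events": ["praised for a timely reminder"]},
]

def env_emotion_history(env_id):
    for sg in subgraphs:                 # extract the matching subgraph
        if sg["env_id"] == env_id:
            return sg["emotion_events"]  # history for this scene
    return []                            # no history: neutral attitude

# In the office scene the history tells the robot not to play messages aloud.
print(env_emotion_history("office_daytime"))
```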
4. The current user's emotional state.
For the emotional state of the current user, the knowledge graph subgraph further includes user emotion data; the user emotion data includes one or more of the tone data, facial expression data, action data, and wording data input by the user. Referring to fig. 5, the specific acquisition process is as follows:
step S501, multi-modal information interacted with a current user is acquired.
Step S502, the tone information, expression information or action information of the current user is extracted from the multi-modal information.
Step S503, extracting a knowledge graph subgraph of which the emotion data of the user is matched with the tone information, the expression information or the action information from a knowledge graph;
and step S504, determining the emotional state of the current user according to the extracted knowledge graph subgraph.
For example, the robot can obtain from the multi-modal information the tone, facial expression, action, and wording the user used when providing the input; by extracting the subgraph related to this user from its knowledge graph, it can analyze the emotion these cues represent and the user's current emotion toward the robot, and then adjust its own emotion accordingly.
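A minimal Python sketch of this matching is shown below; the cue-to-emotion table stands in for the user emotion data stored in the knowledge graph subgraphs and is purely illustrative.

```python
# A sketch of steps S501-S504: mapping tone and facial-expression cues
# from multi-modal input to an emotional state. The cue table is an
# assumed stand-in for the matched knowledge graph subgraph.

emotion_data = {
    ("raised_voice", "frown"): "angry",
    ("soft_voice", "smile"): "content",
}

def user_emotion(tone, expression):
    return emotion_data.get((tone, expression), "neutral")

print(user_emotion("raised_voice", "frown"))  # angry
```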
For brevity, where this embodiment is not described in detail, reference may be made to the corresponding content in the foregoing method embodiment.
Embodiment three:
Embodiment three adds, on the basis of the above embodiments, methods for generating the reply information.
For the guidance control of reply information generation, the specific implementation process is as follows:
First, the robot emotion label is used as one of the inputs of a training model to guide the generation of the training model; the generation of reply information is then guided according to the generated training model, which determines the form category specifically adopted in the reply.
Specifically, the training model is an artificial intelligence model obtained by learning over the robot emotion labels. One can start from learning from humans: model human emotional responses with an artificial intelligence method such as machine learning, determine the human emotional response when the above information takes different values, then take this information as the input of the artificial intelligence model and train the model so that its output approaches the real human emotional response.
For the selection of the form category, the specific implementation process is as follows:
The candidates of each form category are ranked according to the state of the robot emotion label and the context information, where the context information is the interaction information between the robot and the current user.
The final option of each form category is then determined according to the ranking result and used to interact with the current user.
For example, when the robot emotion label is 1, the robot may choose formal wording such as the polite "you", "please", and "sorry to trouble you"; when the emotion label is 2, it may choose intimate wording such as "dear" and casual sentence-final particles. Each robot emotion label has corresponding candidate tones, intonations, actions, facial expressions, and wordings, and each form category may contain several selectable items. In actual interaction, the robot ranks the candidates according to the context, then selects the highest-scoring candidate as the option for interacting with the user, thereby presenting a diversified human-computer interaction process and helping the robot reply to the current user in a way that varies with the person, time, event, and place.
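A small sketch of this per-category ranking is given below; the candidate lists and the overlap-based scorer are illustrative assumptions (the description only requires ranking by emotion label and context and picking the top candidate).

```python
# A sketch of candidate ranking: each form category keeps candidates per
# emotion label; the highest-scoring candidate under the current context
# is selected. The scoring function is an assumption.

candidates = {
    "wording": {1: ["you (polite)", "please", "sorry to trouble you"],
                2: ["dear", "ba", "o"]},
    "tone": {1: ["formal"], 2: ["playful"]},
}

def score(option, context):
    # Assumed scorer: prefer options whose words overlap the context.
    return len(set(option.split()) & set(context.split()))

def select(form, label, context):
    pool = candidates[form][label]
    return max(pool, key=lambda opt: score(opt, context))

print(select("wording", 1, "please set an alarm"))  # "please"
```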
Second, in practical applications it may not be technically feasible to train a model that meets the requirements, or the raw training data on human responses may be insufficient. A second guidance control method for reply information generation is therefore provided: the generation of reply information is guided according to the robot emotion label and preset rules, which determine the form category specifically adopted in the reply.
The form categories comprise tone, action, facial expression, and wording.
Specifically, the rules mainly refer to grammar rules. For example, when the robot's emotional state is good and the user inputs the demand "Help me set an alarm for eight o'clock tomorrow morning", the robot processes the input, recognizes that the user intends to set an alarm, and extracts the time "eight o'clock tomorrow morning". The robot sets the alarm for the user, and the grammar rule found for alarm setting is: [mood word] + "I have set your alarm for" + [time point] + [custom part]. According to the robot's emotional state, positive, upbeat words should be chosen for the mood word and the custom part, so the robot may finally generate a reply such as: "Sure! I have set your alarm for eight o'clock tomorrow morning; I'll call you on time."
If the robot's emotional state is bad and the user inputs the same demand, the robot likewise recognizes the intent, extracts the time, sets the alarm, and finds the same grammar rule: [mood word] + "I have set your alarm for" + [time point] + [custom part]. According to the robot's emotional state, negative, downbeat words should be chosen for the mood word and the custom part, so the robot may finally generate a reply such as: "Humph. I have set your alarm for eight o'clock tomorrow morning. I'm in a bad mood, so don't bother me."
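The two alarm-clock replies can be produced from one grammar rule by swapping the emotion-dependent slots, as in the Python sketch below; the exact word lists are assumptions in the spirit of the example.

```python
# A sketch of rule-based generation: a grammar rule of the form
# [mood word] + fixed confirmation + [time point] + [custom part],
# with the mood word and custom part chosen by the robot's emotional
# state. The fill words are assumptions based on the example above.

RULE = "{mood_word}, I have set your alarm for {time_point}{custom_part}"

FILLS = {
    "good": {"mood_word": "Sure",
             "custom_part": ", I'll call you on time!"},
    "bad":  {"mood_word": "Humph",
             "custom_part": ". Don't bother me again."},
}

def generate_reply(emotional_state, time_point):
    return RULE.format(time_point=time_point, **FILLS[emotional_state])

print(generate_reply("good", "eight o'clock tomorrow morning"))
print(generate_reply("bad", "eight o'clock tomorrow morning"))
```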
For brevity, where this embodiment is not described in detail, reference may be made to the corresponding content in the foregoing method embodiments.
Embodiment four:
This embodiment of the invention provides another reply information generation method based on the emotional state of a robot. Referring to fig. 6, the method includes the following steps:
Step S601, acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
Step S602, directly guiding the generation of reply information according to the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user.
In practical application, this reply information generation method based on the robot emotional state inputs the emotion factors into a pre-constructed training model to guide the generation of reply information, or guides the generation of reply information according to pre-established rules.
For example, suppose the robot once reminded the user to charge it at night with the lights off, frightening the user, who was resting, and making the user dislike the robot. Later, in the same situation or scene, even if the robot's battery is low and this affects its current mood state, it will not actively initiate a request for the user to charge it.
According to the technical scheme, the reply information generation method based on the robot emotional state provided by this embodiment comprehensively analyzes various emotion factors, such as the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user, and directly guides the generation of reply information through these emotion factors, realizing a diversified human-computer interaction process and helping the robot reply to the current user in a way that varies with the person, time, event, and place.
For brevity, where this embodiment is not described in detail, reference may be made to the corresponding content in the foregoing method embodiments.
Embodiment five:
This embodiment of the invention provides a reply information generation apparatus based on the robot emotional state. Referring to fig. 7, the apparatus comprises an emotion factor acquisition unit 101, a robot emotion label determination unit 102, and a robot emotion label application unit 103. The emotion factor acquisition unit 101 is used for acquiring the emotion factors of the robot, where the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user. The robot emotion label determination unit 102 is used for determining the robot emotion label according to these factors. The robot emotion label application unit 103 is used for guiding the generation of reply information according to the robot emotion label.
As can be seen from the foregoing technical solution, the reply information generation apparatus based on the robot emotional state provided by this embodiment comprehensively analyzes various emotion factors, such as the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user, determines the robot emotion label, and guides the generation of reply information through the emotion label, realizing a diversified human-computer interaction process and helping the robot reply to the current user in a way that varies with the person, time, event, and place.
For a brief description of the system provided in this embodiment, reference may be made to the corresponding contents in the foregoing method embodiments.
Embodiment six:
This embodiment of the invention provides another reply information generation apparatus based on the emotional state of a robot. Referring to fig. 8, the apparatus comprises an emotion factor acquisition unit 101 and an emotion factor application unit 201. The emotion factor acquisition unit 101 is used for acquiring the emotion factors of the robot, where the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user, and the emotion factors are expressed by vectors. The emotion factor application unit 201 is used for directly guiding the generation of reply information according to the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user.
As can be seen from the foregoing technical solution, the reply information generation apparatus based on the robot emotional state provided by this embodiment comprehensively analyzes various emotion factors, such as the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user, and directly guides the generation of reply information through these emotion factors, realizing a diversified human-computer interaction process and helping the robot reply to the current user in a way that varies with the person, time, event, and place.
For a brief description of the apparatus provided in this embodiment, reference may be made to the corresponding content in the foregoing method embodiments.
In the description of the present invention, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention and should be construed as falling within the scope of the claims of the present invention.

Claims (8)

1. A reply information generation method based on a robot emotional state, characterized by comprising:
acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
generating a robot emotion label according to the emotion factors of the robot;
generating reply information under the guidance of the robot emotion label;
acquiring the emotion factors of the robot, including:
constructing a knowledge graph; the knowledge graph comprises a plurality of knowledge graph subgraphs; the knowledge graph subgraph comprises user data and user historical interaction information;
acquiring voice information or picture information of the robot interacting with a current user;
determining the current user ID or the current user name according to the voice information or the picture information;
extracting a knowledge graph sub-graph of which the user data is matched with the current user ID or the current user name from the knowledge graph;
determining the familiarity of the robot with the current user according to the degree of completeness of the knowledge graph subgraph;
determining the emotional history of the robot with the current user according to the user historical interaction information of the extracted knowledge graph subgraph;
the knowledge graph subgraph also comprises robot environment data; acquiring the emotion factors of the robot further comprises:
acquiring multi-mode information of interaction between the robot and a current user;
determining an environment name or environment ID from the multimodal information;
extracting a knowledge graph subgraph of which the robot environment data is matched with the environment name or the environment ID from the knowledge graph;
and determining the emotional history of the robot toward the current environment according to the extracted knowledge graph subgraph.
2. The reply information generation method based on the robot emotional state according to claim 1, wherein acquiring the emotion factors of the robot comprises:
collecting statistics on the robot's current remaining battery level and usage duration;
detecting the robot's current network condition and activity condition;
and determining the current mood state of the robot according to the remaining battery level, the usage duration, the network condition, the activity condition, or mood-specific information received in advance.
3. The reply information generation method based on the robot emotional state according to claim 1, wherein the knowledge graph subgraph further comprises user emotion data; the user emotion data comprises one or more of the tone data, facial expression data, action data, and wording data input by the user; acquiring the emotion factors of the robot comprises:
acquiring multi-mode information of interaction between the robot and a current user;
extracting the current user's tone information, facial expression information, or action information from the multi-modal information;
extracting from the knowledge graph a knowledge graph subgraph whose user emotion data matches the tone information, facial expression information, or action information;
and determining the emotional state of the current user according to the extracted knowledge graph subgraph.
4. The reply information generation method based on the robot emotional state according to claim 1, wherein the emotion factors of the robot are described by multi-dimensional data; generating the robot emotion label according to the emotion factors of the robot comprises:
and converting the emotion factors into one-dimensional data to obtain the robot emotion label.
5. The reply information generation method based on the robot emotional state according to claim 1, wherein guiding the generation of reply information according to the robot emotion label comprises:
taking the robot emotion label as one of the inputs of a training model to guide the generation of the training model, then guiding the generation of reply information according to the generated training model and determining the form category specifically adopted in the reply;
or guiding the generation of reply information according to the robot emotion label and preset rules, and determining the form category specifically adopted in the reply;
the form categories comprising tone, action, facial expression, and wording.
6. A reply information generation method based on a robot emotional state, characterized by comprising:
acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
directly guiding the generation of reply information according to the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
acquiring the emotion factors of the robot, including:
constructing a knowledge graph; the knowledge graph comprises a plurality of knowledge graph subgraphs; the knowledge graph subgraph comprises user data and user historical interaction information;
acquiring voice information or picture information of the robot interacting with a current user;
determining the current user ID or the current user name according to the voice information or the picture information;
extracting a knowledge graph sub-graph of which the user data is matched with the current user ID or the current user name from the knowledge graph;
determining the familiarity of the robot with the current user according to the degree of completeness of the knowledge graph subgraph;
determining the emotional history of the robot with the current user according to the user historical interaction information of the extracted knowledge graph subgraph;
the knowledge graph subgraph also comprises robot environment data; acquiring the emotion factors of the robot further comprises:
acquiring multi-mode information of interaction between the robot and a current user;
determining an environment name or environment ID from the multimodal information;
extracting a knowledge graph subgraph of which the robot environment data is matched with the environment name or the environment ID from the knowledge graph;
and determining the emotional history of the robot toward the current environment according to the extracted knowledge graph subgraph.
7. A reply information generation apparatus based on a robot emotional state, comprising:
the emotion factor acquisition unit is used for acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
the robot emotion label determining unit is used for generating a robot emotion label according to the emotion factor of the robot;
the robot emotion label application unit is used for guiding to generate reply information according to the robot emotion label;
The emotion factor acquisition unit is specifically configured to:
constructing a knowledge graph; the knowledge graph comprises a plurality of knowledge graph subgraphs; the knowledge graph subgraph comprises user data and user historical interaction information;
acquiring voice information or picture information of the robot interacting with a current user;
determining the current user ID or the current user name according to the voice information or the picture information;
extracting a knowledge graph sub-graph of which the user data is matched with the current user ID or the current user name from the knowledge graph;
determining the familiarity of the robot with the current user according to the degree of completeness of the knowledge graph subgraph;
determining the emotional history of the robot with the current user according to the user historical interaction information of the extracted knowledge graph subgraph;
the knowledge graph subgraph also comprises robot environment data; the emotion factor acquisition unit is specifically configured to:
acquiring multi-mode information of interaction between the robot and a current user;
determining an environment name or environment ID from the multimodal information;
extracting a knowledge graph subgraph of which the robot environment data is matched with the environment name or the environment ID from the knowledge graph;
and determining the emotional history of the robot toward the current environment according to the extracted knowledge graph subgraph.
8. A reply information generation apparatus based on the emotional state of a robot, comprising:
the emotion factor acquisition unit is used for acquiring the emotion factors of the robot, wherein the emotion factors comprise the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
the emotion factor application unit is used for directly guiding the generation of reply information according to the current mood state of the robot, the familiarity of the robot with the current user, the emotional history of the robot with the current user, the emotional history of the robot toward the current environment, and the emotional state of the current user;
the emotion factor acquisition unit is specifically configured to:
constructing a knowledge graph; the knowledge graph comprises a plurality of knowledge graph subgraphs; the knowledge graph subgraph comprises user data and user historical interaction information;
acquiring voice information or picture information of the robot interacting with a current user;
determining the current user ID or the current user name according to the voice information or the picture information;
extracting a knowledge graph sub-graph of which the user data is matched with the current user ID or the current user name from the knowledge graph;
determining the familiarity of the robot with the current user according to the degree of completeness of the knowledge graph subgraph;
determining the emotional history of the robot with the current user according to the user historical interaction information of the extracted knowledge graph subgraph;
the knowledge graph subgraph also comprises robot environment data; the emotion factor acquisition unit is specifically configured to:
acquiring multi-mode information of interaction between the robot and a current user;
determining an environment name or environment ID from the multimodal information;
extracting a knowledge graph subgraph of which the robot environment data is matched with the environment name or the environment ID from the knowledge graph;
and determining the emotional history of the robot toward the current environment according to the extracted knowledge graph subgraph.
CN201810668689.3A 2018-02-27 2018-06-26 Reply information generation method and device based on emotional state of robot Expired - Fee Related CN109033179B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810162745 2018-02-27
CN2018101627456 2018-02-27

Publications (2)

Publication Number Publication Date
CN109033179A CN109033179A (en) 2018-12-18
CN109033179B true CN109033179B (en) 2022-07-29

Family

ID=64610907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810668689.3A Expired - Fee Related CN109033179B (en) 2018-02-27 2018-06-26 Reply information generation method and device based on emotional state of robot

Country Status (2)

Country Link
CN (1) CN109033179B (en)
WO (1) WO2019165732A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831875B (en) * 2019-04-11 2024-05-31 阿里巴巴集团控股有限公司 Data processing method, device, equipment and storage medium
CN110605724B (en) * 2019-07-01 2022-09-23 青岛联合创智科技有限公司 Intelligence endowment robot that accompanies
CN112809694B (en) * 2020-03-02 2023-12-29 腾讯科技(深圳)有限公司 Robot control method, apparatus, storage medium and computer device
CN112148846A (en) * 2020-08-25 2020-12-29 北京来也网络科技有限公司 Reply voice determination method, device, equipment and storage medium combining RPA and AI

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003340757A (en) * 2002-05-24 2003-12-02 Mitsubishi Heavy Ind Ltd Robot
CN105807933A (en) * 2016-03-18 2016-07-27 北京光年无限科技有限公司 Man-machine interaction method and apparatus used for intelligent robot
CN105824935A (en) * 2016-03-18 2016-08-03 北京光年无限科技有限公司 Method and system for information processing for question and answer robot
CN106462384A (en) * 2016-06-29 2017-02-22 深圳狗尾草智能科技有限公司 Multi-modal based intelligent robot interaction method and intelligent robot
CN107491511A (en) * 2017-08-03 2017-12-19 深圳狗尾草智能科技有限公司 The autognosis method and device of robot
CN107563517A (en) * 2017-08-25 2018-01-09 深圳狗尾草智能科技有限公司 Robot autognosis real time updating method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106297789B (en) * 2016-08-19 2020-01-14 北京光年无限科技有限公司 Personalized interaction method and system for intelligent robot
CN106773923B (en) * 2016-11-30 2020-04-21 北京光年无限科技有限公司 Multi-mode emotion data interaction method and device for robot
CN106695839A (en) * 2017-03-02 2017-05-24 青岛中公联信息科技有限公司 Bionic intelligent robot for toddler education
CN106914903B (en) * 2017-03-02 2019-09-13 长威信息科技发展股份有限公司 A kind of interactive system towards intelligent robot
CN107301168A (en) * 2017-06-01 2017-10-27 深圳市朗空亿科科技有限公司 Intelligent robot and its mood exchange method, system


Also Published As

Publication number Publication date
CN109033179A (en) 2018-12-18
WO2019165732A1 (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN109033179B (en) Reply information generation method and device based on emotional state of robot
CN106378781A (en) Service robot guide system and method
CN105493130A (en) Adaptive learning environment driven by real-time identification of engagement level
CN116303949B (en) Dialogue processing method, dialogue processing system, storage medium and terminal
Lin et al. Is it a good move? Mining effective tutoring strategies from human–human tutorial dialogues
CN110807566A (en) Artificial intelligence model evaluation method, device, equipment and storage medium
Sherwani et al. Orality-grounded HCID: Understanding the oral user
US20230119860A1 (en) Matching system, matching method, and matching program
CN111554276B (en) Speech recognition method, device, equipment and computer readable storage medium
Wilks et al. A prototype for a conversational companion for reminiscing about images
CN117035074B (en) Multi-modal knowledge generation method and device based on feedback reinforcement
CN113268610A (en) Intent skipping method, device and equipment based on knowledge graph and storage medium
CN112199486A (en) Task type multi-turn conversation method and system for office scene
CN117172978A (en) Learning path information generation method, device, electronic equipment and medium
CN111353290A (en) Method and system for automatically responding to user inquiry
KR20220133665A (en) Apparatus and method for providing characteristics information
CN117876090A (en) Risk identification method, electronic device, storage medium, and program product
CN117352132A (en) Psychological coaching method, device, equipment and storage medium
Nair HR based Chatbot using deep neural network
CN115617975B (en) Intention recognition method and device for few-sample multi-turn conversation
CN115905475A (en) Answer scoring method, model training method, device, storage medium and equipment
CN113468306A (en) Voice conversation method, device, electronic equipment and storage medium
CN115129971A (en) Course recommendation method and device based on capability evaluation data and readable storage medium
CN111522914A (en) Method and device for acquiring marking data, electronic equipment and storage medium
Tennakoon et al. An interactive application for university students to reduce the industry-academia skill gap in the software engineering field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20220729