CN109033179A - Reply information generation method and device based on a robot's emotional state - Google Patents
Reply information generation method and device based on a robot's emotional state
- Publication number
- CN109033179A (application CN201810668689.3A)
- Authority
- CN
- China
- Prior art keywords
- robot
- emotion
- active user
- state
- mood
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- B25J11/001—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means with emotions simulating means
Abstract
The invention belongs to the field of intelligent robotics and provides a reply information generation method and device based on a robot's emotional state. The method obtains the robot's emotion factors: the robot's current mood state, its familiarity with the current user, its emotion history with the current user, its emotion history for the current environment, and the current user's emotional state. From these factors it determines a robot emotion label and uses the label to guide the generation of reply information, or uses the emotion factors to guide that generation directly. The reply information generation method and device based on a robot's emotional state can thus take the real interaction scenario into account when guiding reply generation and realize a diversified human-computer interaction process.
Description
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a reply information generation method and device based on a robot's emotional state.
Background art
At present, many products and platforms involve human-computer interaction technology. Most of them process and analyze the user's speech or multimodal input, obtain various kinds of information from it, and then extract or generate reply information from a target database or knowledge base according to that information and return it to the user.
A latent problem of the prior art, however, is that regardless of when, where, and with which product the user interacts, the product will in most cases give the same reply to the same user input. This obviously does not match the way people interact with one another.
How to take the real interaction scenario into account to guide the generation of reply information and realize a diversified human-computer interaction process is therefore a problem urgently awaiting a solution by those skilled in the art.
Summary of the invention
In view of the defects in the prior art, the present invention provides a reply information generation method and device based on a robot's emotional state, which can take the real interaction scenario into account to guide the generation of reply information and realize a diversified human-computer interaction process.
In a first aspect, a reply information generation method based on a robot's emotional state comprises:
obtaining the robot's emotion factors, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state;
generating a robot emotion label according to the robot's emotion factors; and
guiding the generation of reply information according to the robot emotion label.
Further, obtaining the robot's emotion factors comprises:
counting the robot's current remaining battery level and usage duration;
detecting the robot's current network status and ongoing activity; and
determining the robot's current mood state according to the remaining battery level, the usage duration, the network status, the ongoing activity, or mood-specific information received in advance.
Further, obtaining the robot's emotion factors comprises:
constructing a knowledge graph comprising multiple knowledge graph subgraphs, each subgraph including user data and user history interaction information;
obtaining voice information or picture information from the robot's interaction with the current user;
determining the current user's ID or name according to the voice or picture information;
extracting from the knowledge graph the subgraph whose user data matches the current user's ID or name;
determining the robot's familiarity with the current user according to the completeness of the extracted subgraph; and
determining the robot's emotion history with the current user according to the user history interaction information of the extracted subgraph.
Further, the knowledge graph subgraphs also include robot environment data, and obtaining the robot's emotion factors comprises:
obtaining multimodal information from the robot's interaction with the current user;
determining an environment name or environment ID from the multimodal information;
extracting from the knowledge graph the subgraph whose robot environment data matches the environment name or environment ID; and
determining the robot's emotion history for the current environment according to the extracted subgraph.
Further, the knowledge graph subgraphs also include user emotion data, the user emotion data comprising one or more of the tone, expression, action, and wording data of the user's input, and obtaining the robot's emotion factors comprises:
obtaining multimodal information from the robot's interaction with the current user;
extracting the current user's tone, expression, or action information from the multimodal information;
extracting from the knowledge graph the subgraph whose user emotion data matches the tone, expression, or action information; and
determining the current user's emotional state according to the extracted subgraph.
Further, the robot's emotion factors are described with multidimensional data, and generating the robot emotion label according to the emotion factors comprises converting the emotion factors into one-dimensional data to obtain the robot emotion label.
Further, guiding the generation of reply information according to the robot emotion label comprises:
using the robot emotion label as one of the inputs of a training model, guiding the generation of the training model, and guiding the generation of reply information according to the generated training model, thereby determining the form categories used in the reply information;
or guiding the generation of reply information according to the robot emotion label and pre-established rules, thereby determining the form categories used in the reply information;
the form categories including tone, intonation, action, wording, and expression.
In a second aspect, a reply information generation method based on a robot's emotional state comprises:
obtaining the robot's emotion factors, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state; and
directly guiding the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state.
In a third aspect, a reply information generation device based on a robot's emotional state comprises:
an emotion factor acquisition unit for obtaining the robot's emotion factors, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state;
a robot emotion label determination unit for generating a robot emotion label according to the robot's emotion factors; and
a robot emotion label application unit for guiding the generation of reply information according to the robot emotion label.
In a fourth aspect, a reply information generation device based on a robot's emotional state comprises:
an emotion factor acquisition unit for obtaining the robot's emotion factors, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state; and
an emotion factor application unit for directly guiding the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state.
As can be seen from the above technical solutions, the reply information generation method and device based on a robot's emotional state provided by the embodiments can analyze multiple emotion factors, for example the robot's current mood state, its familiarity with the current user, its emotion history with the current user, its emotion history for the current environment, and the current user's emotional state, carry out a comprehensive analysis to determine a robot emotion label, and guide the generation of reply information by the robot emotion label or directly by the emotion factors. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a way that suits the person, the time, the matter, and the place.
Brief description of the drawings
To illustrate the specific embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly described below. In all the drawings, similar elements or parts are generally identified by similar reference signs, and the elements or parts are not necessarily drawn to actual scale.
Fig. 1 shows the flow chart of the reply information generation method provided by embodiment one;
Fig. 2 shows the flow chart for confirming the robot's current mood state provided by embodiment two;
Fig. 3 shows the flow chart for confirming the robot's familiarity with the current user and the robot's emotion history with the current user provided by embodiment two;
Fig. 4 shows the flow chart for confirming the robot's emotion history for the current environment provided by embodiment two;
Fig. 5 shows the flow chart for confirming the current user's emotional state provided by embodiment two;
Fig. 6 shows the flow chart of the reply information generation method provided by embodiment four;
Fig. 7 shows the connection schematic diagram of the reply information generation device provided by embodiment five;
Fig. 8 shows the connection schematic diagram of the reply information generation device provided by embodiment six.
Specific embodiments
The technical solutions of the present invention are described in detail below in conjunction with the drawings and the embodiments. The following embodiments are only used to illustrate the technical solutions clearly; they serve only as examples and cannot be used to limit the protection scope of the invention.
It should be noted that, unless otherwise indicated, the technical or scientific terms used in this application have the ordinary meaning understood by one of ordinary skill in the art of the invention.
Embodiment one:
This embodiment provides a reply information generation method based on a robot's emotional state. Referring to Fig. 1, the method includes:
Step S101: obtain the robot's emotion factors, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state.
Specifically, the robot's emotion factors are described with multidimensional data, for example a multidimensional vector or a multidimensional linked list. The robot's current mood state refers to the robot's mood at the moment, e.g. happy, gloomy, or sad. The robot's familiarity with the current user may be: very familiar, generally familiar, or unfamiliar. The robot's emotion history with the current user refers to the robot's historical emotion toward the user as judged from historical interaction data. The current user's emotional state refers to the user's emotion at the moment.
Step S102: generate a robot emotion label according to the robot's emotion factors.
This specifically includes converting the emotion factors into one-dimensional data to obtain the robot emotion label.
Specifically, the multidimensional emotion factors are converted into a one-dimensional emotion label. In subsequent reply generation only the one-dimensional label needs to be considered, not the multidimensional emotion factors, so reply information can be generated faster. The emotion label can be described in forms such as a one-dimensional vector or a one-dimensional linked list.
Step S103: guide the generation of reply information according to the robot emotion label.
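As a rough illustration of how Step S102 might collapse the multidimensional emotion factors into a one-dimensional label: the weights, the thresholds, and the three label values below are illustrative assumptions, not values given in the patent.

```python
def mood_factors_to_label(factors):
    """Collapse a multidimensional emotion-factor vector into one scalar label.

    `factors` is assumed to hold scores in [0, 1] for: current mood state,
    familiarity with the user, emotion history with the user, emotion
    history for the environment, and the user's current emotional state.
    """
    weights = [0.3, 0.2, 0.2, 0.1, 0.2]  # illustrative weights, sum to 1.0
    score = sum(w * f for w, f in zip(weights, factors))
    # Quantize the weighted score into a small set of discrete labels.
    if score >= 0.66:
        return 2   # positive
    if score >= 0.33:
        return 1   # neutral
    return 0       # negative
```

A downstream reply generator would then branch on the single label instead of the full factor vector, which is the speed-up the text describes.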
As can be seen from the above technical solution, the reply information generation method based on a robot's emotional state provided by this embodiment can analyze multiple emotion factors, such as the robot's current mood state, its familiarity with the current user, its emotion history with the current user, its emotion history for the current environment, and the current user's emotional state, carry out a comprehensive analysis to determine a robot emotion label, and guide the generation of reply information by that label, realizing a diversified human-computer interaction process and helping the robot reply to the current user in a way that suits the person, the time, the matter, and the place.
Embodiment two:
Embodiment two, on the basis of embodiment one, adds acquisition methods for the emotion factors.
1. The robot's current mood state.
For the robot's current mood state, referring to Fig. 2, the specific acquisition process is as follows:
Step S201: count the robot's current remaining battery level and usage duration.
Step S202: detect the robot's current network status and ongoing activity.
Step S203: determine the robot's current mood state according to the remaining battery level, the usage duration, the network status, the ongoing activity, or mood-specific information received in advance.
Specifically, the robot can determine its mood at the moment from the remaining battery level, usage duration, network status, ongoing activity, or mood-specific information. For example, if the robot's remaining battery level is below 10%, i.e. it is "starving", its current mood state is to request the user to charge it; then, whatever content the user inputs, the robot will first ask the user to charge it before replying to the input. Likewise, when the robot's network state is bad, its current mood state is to request the user to check its network, and whatever the user inputs, the robot will first ask the user to check its network before replying.
Mood-specific information can be pushed to the robot in advance by the developer. For example, it can be happy-mood information pushed by the developer during the Spring Festival: the robot then no longer updates its mood state from information such as the remaining battery level, so it maintains a happy mood throughout the Spring Festival and interacts with the user in a more cheerful, positive emotional state.
Besides the above factors, which other information influences the robot's current mood can generally be set according to product requirements; an expert system can also be used, in which experts directly specify the rules for determining the robot's current mood according to the results of psychological research.
With the reply information generation method based on a robot's emotional state of this embodiment, the robot's various states can be detected and counted, and the robot's current mood state is determined by the remaining battery level, the network condition, the robot's ongoing activity, the usage duration, or the mood-specific information pushed by the developer.
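The mood-state determination described above can be sketched as follows. The 10% battery threshold comes from the example in the text; the function name, the mood-state names, and the priority order (developer push first, then battery, then network) are assumptions.

```python
def current_mood_state(battery_pct, network_ok, pushed_mood=None):
    """Determine the robot's current mood state from device status.

    A mood pushed by the developer (e.g. a Spring Festival happy mood)
    overrides the status-derived mood, as described in the text.
    """
    if pushed_mood is not None:
        return pushed_mood          # developer push takes priority
    if battery_pct < 10:
        return "request_charge"     # 'starving': ask the user to charge it
    if not network_ok:
        return "request_network"    # ask the user to check its network
    return "normal"
```

In the "request_charge" and "request_network" states the robot would, per the text, voice that request before replying to whatever the user actually input.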
2. The robot's familiarity with the current user and its emotion history with the current user.
For the robot's familiarity with the current user and its emotion history with the current user, referring to Fig. 3, the specific acquisition process is as follows:
Step S301: construct a knowledge graph comprising multiple knowledge graph subgraphs, each subgraph including user data and user history interaction information.
Specifically, the knowledge graph contains the attributes of various users and robots, and a knowledge graph subgraph is formed by extracting part of those attributes. The knowledge graph can be stored in two ways: unified storage and block storage. Unified storage means that all robot attributes and user attributes are stored in one graph store, so that a subgraph only needs to be extracted from that one store. Block storage means that all robot attributes and user attributes are divided into multiple storage blocks, for example all robot attributes grouped into a robot graph store and all user attributes into a user graph store; a robot subgraph is then extracted from the robot store and a user subgraph from the user store.
Step S302: obtain voice information or picture information from the robot's interaction with the current user.
Step S303: determine the current user's ID or name according to the voice or picture information.
Step S304: extract from the knowledge graph the subgraph whose user data matches the current user's ID or name.
Step S305: determine the robot's familiarity with the current user according to the completeness of the subgraph.
Specifically, completeness refers to the number of attributes contained in the knowledge graph subgraph. Take an education scenario as an example, where the robot's purpose is supplementary tutoring for students in the compulsory-education stage. The information that can be filled into the subgraph in this scenario can be listed as follows:
Class 1 information: name (ID), identity information (voiceprint, fingerprint, face image, etc., used by the robot to identify the user), school grade, and region. This information is closely bound to the education function: knowing the user's grade and region tells the robot which subjects and knowledge the user has learned, is learning, and will learn.
Class 2 information: age, gender, and class. This information assists the education function: students of different ages and genders have their own characteristics, and class information helps the robot understand the user's teaching staff and the specific teaching progress.
Class 3 information: historical information such as past results, interaction history, and wrong-answer records. This information assists the education function; it is obtained by the robot through teaching, interaction, and other means, and is used to track the user's learning situation and to guide teaching and the customization of review.
In this product scenario, if the class 1 information is filled in completely, the robot's familiarity with the user is at a passing level (60 out of 100); if the class 2 information is also complete, the familiarity is at a good level (80 out of 100); and if the class 3 information is complete and new entries are regularly filled into the history, the familiarity is at an excellent level (95 out of 100). It can be seen that the more completely the knowledge graph subgraph is filled in, the more content the robot has interacted with the user about, and the higher the robot's familiarity with the user.
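The three-class completeness scoring in the education example can be sketched as a small lookup. The 60/80/95 levels come from the text; the boolean interface, the function name, and the score of 0 for incomplete class 1 information are assumptions.

```python
def familiarity_score(class1_complete, class2_complete, class3_updating):
    """Map knowledge-graph-subgraph completeness to a familiarity score.

    class1_complete: name/ID, identity info, grade, and region all filled in.
    class2_complete: age, gender, and class also filled in.
    class3_updating: history present and regularly receiving new entries.
    """
    if class1_complete and class2_complete and class3_updating:
        return 95  # excellent level
    if class1_complete and class2_complete:
        return 80  # good level
    if class1_complete:
        return 60  # passing level
    return 0       # not stated in the text; treated here as 'unfamiliar'
```

The monotone laddering mirrors the observation above: a fuller subgraph means more accumulated interaction and higher familiarity.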
Step S306: determine the robot's emotion history with the current user according to the user history interaction information of the extracted subgraph.
Specifically, the user history interaction information includes the tone, expression, action, and wording data of the user's input. The robot's emotion history with the current user can be determined from this user feedback. For example, suppose the robot's purpose is a mobile-phone assistant. During working hours the robot broadcast a message received on the user's phone aloud, and this behavior seriously disturbed the user's or other people's work, so the user fed back an angry emotion to the robot by voice or message, e.g. replying "You are so stupid, broadcasting it at a time like this". The robot then records the user's emotion in that scenario.
With the reply information generation method based on a robot's emotional state of this embodiment, after analyzing the current user through the received voice or picture information, the robot can extract its interaction history with that user and the corresponding knowledge graph subgraph from its knowledge graph, and so obtain its liking for the current user, its closeness to the current user, its degree of understanding of and familiarity with the current user, and its emotion history with the current user. All of this information can influence the robot's mood.
3. The robot's emotion history for the current environment.
For the robot's emotion history for the current environment, referring to Fig. 4, the specific acquisition process is as follows:
Step S401: obtain multimodal information from the interaction with the current user.
Step S402: determine an environment name or environment ID from the multimodal information.
Step S403: extract from the knowledge graph the subgraph whose robot environment data matches the environment name or environment ID.
Step S404: determine the robot's emotion history for the current environment according to the extracted subgraph.
Specifically, the robot learns the influence of the current environment on its emotion from the user's interaction history. In the scenario above, for example, the robot learns from the interaction history that when the user is at work and the phone receives a new message, it must not broadcast the message aloud. In this way, the robot can identify the current scenario from the multimodal information, extract the content of its knowledge graph related to the current environment as a subgraph, and determine the influence of the current environment on its emotion.
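Steps S401 to S404 can be sketched with an in-memory dictionary standing in for the graph store; the data layout (environment ID keyed to a subgraph carrying an "emotion_history" list) is an assumption made for illustration.

```python
def environment_emotion_history(graph, env_id):
    """Look up the robot's emotion history for the current environment.

    `graph` maps environment IDs to subgraphs; an unseen environment
    yields an empty history rather than an error.
    """
    subgraph = graph.get(env_id)
    if subgraph is None:
        return []
    return subgraph.get("emotion_history", [])

# Tiny illustrative store: the office scenario from the text, where
# broadcasting a new message aloud once annoyed the user.
graph = {
    "office": {"emotion_history": ["user_annoyed_by_broadcast"]},
}
```

With this history retrieved, the robot can suppress the behavior (aloud broadcasting) that produced the negative record when it next recognizes the same environment.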
4. The current user's emotional state.
For the current user's emotional state, the knowledge graph subgraphs also include user emotion data, the user emotion data comprising one or more of the tone, expression, action, and wording data of the user's input. Referring to Fig. 5, the specific acquisition process is as follows:
Step S501: obtain multimodal information from the interaction with the current user.
Step S502: extract the current user's tone, expression, or action information from the multimodal information.
Step S503: extract from the knowledge graph the subgraph whose user emotion data matches the tone, expression, or action information.
Step S504: determine the current user's emotional state according to the extracted subgraph.
For example, the robot can obtain the tone, expression, and actions of the user's input from the multimodal information, extract the subgraph related to that user from its knowledge graph, and analyze the user emotion represented by the tone, expression, actions, and wording of the input as well as the user's current emotion toward the robot; the robot can then adjust its own mood according to this information.
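The matching of multimodal cues against stored user emotion data (Steps S502 to S504) might look like the following sketch. The majority-vote fusion rule and the flat cue-to-emotion mapping are illustrative assumptions, not taken from the patent.

```python
def infer_user_emotion(cues, emotion_subgraph):
    """Infer the user's current emotional state from multimodal cues.

    `cues` maps modality names (tone, expression, action) to the cue
    value extracted from the input; `emotion_subgraph` maps cue values
    to emotions, standing in for the user emotion data in the graph.
    Each matched cue casts one vote; the majority emotion wins.
    """
    votes = {}
    for cue in cues.values():
        emotion = emotion_subgraph.get(cue)
        if emotion:
            votes[emotion] = votes.get(emotion, 0) + 1
    if not votes:
        return "unknown"
    return max(votes, key=votes.get)
```

A real system would fuse modalities with learned weights rather than equal votes, but the lookup-then-fuse shape is the same.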
For brevity, where the method provided by this embodiment does not mention a detail, reference can be made to the corresponding content in the foregoing method embodiments.
Embodiment three:
Embodiment three, on the basis of the above embodiments, adds generation methods for the reply information.
For guiding and controlling the generation of reply information, the specific implementation processes are as follows.
The first uses the robot emotion label as one of the inputs of a training model, guides the generation of the training model, and then guides the generation of reply information according to the generated model, determining the form categories used in the reply information.
Specifically, the training model is an artificial intelligence model obtained by having an artificial intelligence model learn over all robot emotion labels. This approach learns from humans: using artificial intelligence methods such as machine learning, it models people's emotional reactions and determines a person's emotional reaction under different values of the above information; that information is then fed into the artificial intelligence model as input, and through training the model's output values come ever closer to people's true emotional-reaction values.
For the selection of the form categories, the specific implementation process is as follows:
according to the state of the robot emotion label and the context information, rank the candidates of each form category, the context information being the interaction information of the robot and the current user; and
according to the ranking results of each form category, determine the final option of that form category and interact with the current user.
For example, when the robot emotion label is 1, the wordings the robot can select are polite ones such as the respectful "you", "please", and "sorry to trouble you"; when the robot emotion label is 2, the wordings it can select are casual ones such as "dear", "eh", and chatty interjections. Each robot emotion label has corresponding candidate tones, intonations, actions, wordings, and expressions, and each form category can contain many options. In actual interaction the robot ranks the candidates according to the context, then selects the highest-scoring candidate as the option for interacting with the user, presenting a diversified human-computer interaction process and helping the robot reply to the current user in a way that suits the person, the time, the matter, and the place.
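The candidate ranking described above can be sketched as follows for a single form category (here, wording). The scoring rule, an emotion-label match plus context-word overlap, and the candidate layout are illustrative assumptions.

```python
def pick_option(candidates, emotion_label, context_words):
    """Rank one form category's candidates and return the best option.

    Each candidate carries the emotion label it suits and a set of
    context words it fits; the score sums an emotion-label match
    bonus and the context-word overlap.
    """
    def score(c):
        emotion_match = 2 if c["label"] == emotion_label else 0
        overlap = len(c["context"] & set(context_words))
        return emotion_match + overlap
    return max(candidates, key=score)["text"]

# Illustrative wording candidates for two emotion labels.
candidates = [
    {"text": "please", "label": 1, "context": {"formal"}},
    {"text": "hey", "label": 2, "context": {"casual"}},
]
```

The same ranking would be run independently for each form category (tone, intonation, action, wording, expression), and the winners combined into the final reply.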
Second: in practical applications it may be technically impossible to train a satisfactory model, and the original training data for a human model may be insufficient. A second guidance control method for reply generation is therefore provided: guide the generation of reply information according to the robot emotion label and pre-established rules, determining the form categories used in the reply information, the form categories including tone, intonation, action, wording, and expression.
Specifically, the rules mainly refer to grammar rules. For example, when the robot's emotional state is good and the user inputs the demand "Set me an alarm for 8 o'clock tomorrow morning", the robot processes the input, recognizes that the user intends to set an alarm, and extracts the time 8 a.m. tomorrow. The robot sets the alarm for the user and finds the corresponding alarm-setting grammar rule: [modal particle] + "I've set you an alarm for" + [time point] + [custom part]. According to the robot's emotional state, the custom part and the modal particle are filled with positive, upbeat words, and the robot may finally generate a reply like: "Sure! I've set your alarm for 8 o'clock tomorrow morning; I'll wake you right on time."
If the robot's emotional state is bad and the user inputs the same demand, the robot again recognizes the alarm-setting intent, extracts the time 8 a.m. tomorrow, sets the alarm, and finds the same grammar rule. According to its emotional state, the custom part and the modal particle are filled with negative, passive words, and the robot may finally generate a reply like: "Hmph, I've set your alarm for 8 o'clock tomorrow morning. I'm feeling down, don't bother me again."
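The grammar-rule filling in this example can be sketched as a template function. The English fills are rough renderings of the examples in the text, and the function name and two-valued mood flag are assumptions.

```python
def alarm_reply(time_str, mood_good):
    """Fill the alarm-setting grammar rule:
    [modal particle] + confirmation + [time point] + [custom part],
    choosing positive words when the robot's mood is good and
    negative, passive ones otherwise.
    """
    if mood_good:
        particle, tail = "Sure!", "I'll wake you right on time."
    else:
        particle, tail = "Hmph.", "I'm feeling down, don't bother me again."
    return f"{particle} I've set you an alarm for {time_str}. {tail}"
```

Only the slot fills vary with the emotional state; the intent recognition and time extraction upstream of this function are unchanged between the two cases.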
For brevity, where the method provided by this embodiment does not mention a detail, reference can be made to the corresponding content in the foregoing method embodiments.
Embodiment four:
This embodiment of the invention provides another reply information generation method based on a robot's emotional state. In conjunction with Fig. 6, the method includes:
Step S601: obtain the robot's emotion factors, the emotion factors including the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state.
Step S602: directly guide the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the robot's emotion history with the current user, the robot's emotion history for the current environment, and the current user's emotional state.
In practical application, the reply-information generation method based on the robot's emotional state of this embodiment feeds the mood factors into a pre-built training model to guide the generation of reply information, or guides the generation of reply information according to pre-established rules.
For example, suppose the robot once reminded the user to charge it at night with the lights off, startling the resting user and causing the user to scold the robot. Later, under a similar condition or scene, even if the robot's battery is low, which affects the robot's current mood state, the robot will not actively initiate a request for the user to charge it.
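A minimal sketch of how such mood factors could directly gate a candidate action, as in the charging example above; the field names, threshold, and gating logic are assumptions, not the patent's implementation.

```python
# Hedged sketch of Embodiment 4's direct guidance: the mood-factor vector
# gates which requests the robot generates. Field names and the battery
# threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MoodFactors:
    current_mood: float        # -1 (bad) .. +1 (good)
    user_familiarity: float    # 0 .. 1
    user_affinity: float       # emotion history with the current user
    env_affinity: float        # emotion history with the current environment
    user_state: float          # current user's emotional state

def may_request_charging(m: MoodFactors, battery: float) -> bool:
    # Even with low battery, a negative emotion history in this environment
    # (e.g. having startled the user here before) suppresses the request.
    return battery < 0.2 and m.env_affinity >= 0.0

# Negative history with this environment: stay silent despite low battery.
print(may_request_charging(MoodFactors(0.1, 0.8, 0.5, -0.6, 0.2), battery=0.1))  # False
```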
As shown by the above technical solution, the reply-information generation method based on the robot's emotional state provided by this embodiment can comprehensively analyze multiple mood factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the emotion history with the current environment, and the current user's emotional state, and use these mood factors to directly guide the generation of reply information. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a manner suited to the person, the time, the matter, and the place.
The method provided by this embodiment is described only briefly; for details not mentioned here, refer to the corresponding content of the preceding method embodiments.
Embodiment five:
The embodiment of the present invention provides a reply-information generation apparatus based on the robot's emotional state. With reference to Fig. 7, the apparatus includes a mood factor acquisition unit 101, a robot emotion label determination unit 102, and a robot emotion label application unit 103. The mood factor acquisition unit 101 is used to obtain the mood factors of the robot, the mood factors including the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state. The robot emotion label determination unit 102 is used to determine the robot emotion label according to the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state. The robot emotion label application unit 103 is used to guide the generation of reply information according to the robot emotion label.
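The three units described above can be composed as a skeletal pipeline; the unit names mirror the text (units 101, 102, 103), while every method body is an assumed placeholder for illustration only.

```python
# Illustrative skeleton of the Fig. 7 apparatus. Unit names mirror the text
# (units 101/102/103); all method bodies are assumed placeholder logic.

class MoodFactorAcquisitionUnit:            # unit 101
    def acquire(self, robot_state: dict) -> dict:
        """Collect the mood factors (here: whatever the state exposes)."""
        return {k: float(v) for k, v in robot_state.items()}

class EmotionLabelDeterminationUnit:        # unit 102
    def determine(self, factors: dict) -> str:
        """Collapse the mood factors into a one-dimensional emotion label."""
        return "positive" if sum(factors.values()) >= 0 else "negative"

class EmotionLabelApplicationUnit:          # unit 103
    def generate_reply(self, label: str, content: str) -> str:
        """Guide the reply's surface form with the emotion label."""
        prefix = "Sure!" if label == "positive" else "Hmm..."
        return f"{prefix} {content}"

factors = MoodFactorAcquisitionUnit().acquire({"mood": 0.5, "familiarity": 0.7})
label = EmotionLabelDeterminationUnit().determine(factors)
print(EmotionLabelApplicationUnit().generate_reply(label, "Alarm set."))  # Sure! Alarm set.
```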
As shown by the above technical solution, the reply-information generation apparatus based on the robot's emotional state provided by this embodiment can comprehensively analyze multiple mood factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the emotion history with the current environment, and the current user's emotional state, determine the robot emotion label, and guide the generation of reply information with the robot emotion label. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a manner suited to the person, the time, the matter, and the place.
The apparatus provided by this embodiment is described only briefly; for details not mentioned here, refer to the corresponding content of the preceding method embodiments.
Embodiment six:
The embodiment of the present invention provides another reply-information generation apparatus based on the robot's emotional state. With reference to Fig. 8, the apparatus includes a mood factor acquisition unit 101 and a mood factor application unit 201. The mood factor acquisition unit 101 is used to obtain the mood factors of the robot, the mood factors including the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state; the mood factors are represented as a vector. The mood factor application unit 201 is used to directly guide the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state.
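Since this embodiment represents the mood factors as a vector, a minimal sketch of that serialization might look as follows; the field names and their ordering are assumptions.

```python
# Minimal sketch of serializing the five mood factors into a fixed-order
# vector, as Embodiment 6 describes; the field names are assumptions.

FACTOR_ORDER = ("current_mood", "user_familiarity",
                "user_emotion_history", "env_emotion_history", "user_state")

def mood_vector(factors: dict) -> list:
    """Pack the mood-factor dict into a fixed-order numeric vector,
    defaulting missing factors to 0.0 (neutral)."""
    return [float(factors.get(name, 0.0)) for name in FACTOR_ORDER]

v = mood_vector({"current_mood": 0.3, "user_familiarity": 0.8,
                 "user_emotion_history": 0.5, "env_emotion_history": -0.2,
                 "user_state": 0.6})
print(len(v))  # 5
```

A fixed feature order lets the same vector feed either a pre-built training model or a rule engine without renegotiating the input layout.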
As shown by the above technical solution, the reply-information generation apparatus based on the robot's emotional state provided by this embodiment can comprehensively analyze multiple mood factors, for example the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the emotion history with the current environment, and the current user's emotional state, and use these mood factors to directly guide the generation of reply information. This realizes a diversified human-computer interaction process and helps the robot reply to the current user in a manner suited to the person, the time, the matter, and the place.
The apparatus provided by this embodiment is described only briefly; for details not mentioned here, refer to the corresponding content of the preceding method embodiments.
In the specification of the present invention, numerous specific details are set forth. It is to be understood, however, that the embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine the features of the different embodiments or examples described in this specification.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced with equivalents; such modifications or replacements do not depart the essence of the corresponding technical solutions from the scope of the technical solutions of the various embodiments of the present invention, and shall all be covered by the scope of the claims and the description of the present invention.
Claims (10)
1. A reply-information generation method based on the emotional state of a robot, characterized by comprising:
obtaining the mood factors of the robot, the mood factors including the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state;
generating a robot emotion label according to the mood factors of the robot;
guiding the generation of reply information according to the robot emotion label.
2. The reply-information generation method based on the emotional state of a robot according to claim 1, characterized in that obtaining the mood factors of the robot comprises:
counting the robot's current remaining battery and usage duration;
detecting the robot's current network status and busy/idle status;
determining the robot's current mood state according to the remaining battery, the usage duration, the network status, the busy/idle status, or pre-received specific information.
3. The reply-information generation method based on the emotional state of a robot according to claim 1 or 2, characterized in that obtaining the mood factors of the robot comprises:
constructing a knowledge graph, the knowledge graph including multiple knowledge-graph subgraphs, each knowledge-graph subgraph including user data and user-history interaction information;
obtaining voice information or picture information from the robot's interaction with the current user;
determining the current user ID or the current user name according to the voice information or the picture information;
extracting from the knowledge graph the knowledge-graph subgraph whose user data matches the current user ID or the current user name;
determining the robot's familiarity with the current user according to the degree of completeness of the knowledge-graph subgraph;
determining the emotion history between the robot and the current user according to the user-history interaction information of the extracted knowledge-graph subgraph.
4. The reply-information generation method based on the emotional state of a robot according to claim 3, characterized in that the knowledge-graph subgraph further includes robot environment data, and obtaining the mood factors of the robot comprises:
obtaining multi-modal information from the robot's interaction with the current user;
determining an environment name or environment ID from the multi-modal information;
extracting from the knowledge graph the knowledge-graph subgraph whose robot environment data matches the environment name or environment ID;
determining the robot's emotion history with the current environment according to the extracted knowledge-graph subgraph.
5. The reply-information generation method based on the emotional state of a robot according to claim 3, characterized in that the knowledge-graph subgraph further includes user emotion data, the user emotion data including one or more of tone data, expression data, action data, and wording data input by the user, and obtaining the mood factors of the robot comprises:
obtaining multi-modal information from the robot's interaction with the current user;
extracting the current user's tone information, expression information, or action information from the multi-modal information;
extracting from the knowledge graph the knowledge-graph subgraph whose user emotion data matches the tone information, the expression information, or the action information;
determining the current user's emotional state according to the extracted knowledge-graph subgraph.
6. The reply-information generation method based on the emotional state of a robot according to claim 1, characterized in that the mood factors of the robot are described using multidimensional data, and generating a robot emotion label according to the mood factors of the robot comprises:
converting the mood factors into one-dimensional data to obtain the robot emotion label.
7. The reply-information generation method based on the emotional state of a robot according to claim 1, characterized in that guiding the generation of reply information according to the robot emotion label comprises:
using the robot emotion label as one of the inputs of a training model to guide the generation of the training model, and guiding the generation of reply information according to the generated training model to determine the form categories specifically used in the reply information;
or guiding the generation of reply information according to the robot emotion label and pre-established rules to determine the form categories specifically used in the reply information;
the form categories including tone, intonation, action, wording, and expression.
8. A reply-information generation method based on the emotional state of a robot, characterized by comprising:
obtaining the mood factors of the robot, the mood factors including the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state;
directly guiding the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state.
9. A reply-information generation apparatus based on the emotional state of a robot, characterized by comprising:
a mood factor acquisition unit for obtaining the mood factors of the robot, the mood factors including the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state;
a robot emotion label determination unit for generating a robot emotion label according to the mood factors of the robot;
a robot emotion label application unit for guiding the generation of reply information according to the robot emotion label.
10. A reply-information generation apparatus based on the emotional state of a robot, characterized by comprising:
a mood factor acquisition unit for obtaining the mood factors of the robot, the mood factors including the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state;
a mood factor application unit for directly guiding the generation of reply information according to the robot's current mood state, the robot's familiarity with the current user, the emotion history between the robot and the current user, the robot's emotion history with the current environment, and the current user's emotional state.
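As an illustration of the reduction and rule-based guidance described in claims 6 and 7, one plausible (assumed) implementation reduces the multidimensional mood factor to a one-dimensional label via a weighted sum, then maps the label to form categories; the weights, thresholds, and rule table below are invented for the example.

```python
# Hedged sketch of claims 6 and 7: collapse the multidimensional mood factor
# into a one-dimensional emotion label, then pick form categories from it.
# Weights, thresholds, and the rule table are illustrative assumptions.

WEIGHTS = (0.4, 0.1, 0.2, 0.1, 0.2)   # one weight per mood factor

def emotion_label(factors) -> str:
    """Claim 6: multidimensional factors -> one-dimensional label."""
    score = sum(w * f for w, f in zip(WEIGHTS, factors))
    return "positive" if score > 0.2 else ("negative" if score < -0.2 else "neutral")

FORM_RULES = {                          # claim 7's pre-established rules
    "positive": {"tone": "cheerful", "wording": "warm",  "expression": "smile"},
    "neutral":  {"tone": "even",     "wording": "plain", "expression": "calm"},
    "negative": {"tone": "subdued",  "wording": "brief", "expression": "frown"},
}

def reply_form(factors) -> dict:
    """Claim 7 (rule path): label -> form categories used in the reply."""
    return FORM_RULES[emotion_label(factors)]

print(reply_form([0.9, 0.8, 0.6, 0.5, 0.7])["tone"])  # cheerful
```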
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2018101627456 | 2018-02-27 | ||
CN201810162745 | 2018-02-27 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109033179A true CN109033179A (en) | 2018-12-18 |
CN109033179B CN109033179B (en) | 2022-07-29 |
Family
ID=64610907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810668689.3A Active CN109033179B (en) | 2018-02-27 | 2018-06-26 | Reply information generation method and device based on emotional state of robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109033179B (en) |
WO (1) | WO2019165732A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110605724A (en) * | 2019-07-01 | 2019-12-24 | 青岛联合创智科技有限公司 | Intelligence endowment robot that accompanies |
CN111831875A (en) * | 2019-04-11 | 2020-10-27 | 阿里巴巴集团控股有限公司 | Data processing method, device, equipment and storage medium |
CN112148846A (en) * | 2020-08-25 | 2020-12-29 | 北京来也网络科技有限公司 | Reply voice determination method, device, equipment and storage medium combining RPA and AI |
CN111831875B (en) * | 2019-04-11 | 2024-05-31 | 阿里巴巴集团控股有限公司 | Data processing method, device, equipment and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112809694B (en) * | 2020-03-02 | 2023-12-29 | 腾讯科技(深圳)有限公司 | Robot control method, apparatus, storage medium and computer device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003340757A (en) * | 2002-05-24 | 2003-12-02 | Mitsubishi Heavy Ind Ltd | Robot |
CN105807933A (en) * | 2016-03-18 | 2016-07-27 | 北京光年无限科技有限公司 | Man-machine interaction method and apparatus used for intelligent robot |
CN105824935A (en) * | 2016-03-18 | 2016-08-03 | 北京光年无限科技有限公司 | Method and system for information processing for question and answer robot |
CN106462384A (en) * | 2016-06-29 | 2017-02-22 | 深圳狗尾草智能科技有限公司 | Multi-modal based intelligent robot interaction method and intelligent robot |
CN107491511A (en) * | 2017-08-03 | 2017-12-19 | 深圳狗尾草智能科技有限公司 | The autognosis method and device of robot |
CN107563517A (en) * | 2017-08-25 | 2018-01-09 | 深圳狗尾草智能科技有限公司 | Robot autognosis real time updating method and system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106297789B (en) * | 2016-08-19 | 2020-01-14 | 北京光年无限科技有限公司 | Personalized interaction method and system for intelligent robot |
CN106773923B (en) * | 2016-11-30 | 2020-04-21 | 北京光年无限科技有限公司 | Multi-mode emotion data interaction method and device for robot |
CN106695839A (en) * | 2017-03-02 | 2017-05-24 | 青岛中公联信息科技有限公司 | Bionic intelligent robot for toddler education |
CN106914903B (en) * | 2017-03-02 | 2019-09-13 | 长威信息科技发展股份有限公司 | A kind of interactive system towards intelligent robot |
CN107301168A (en) * | 2017-06-01 | 2017-10-27 | 深圳市朗空亿科科技有限公司 | Intelligent robot and its mood exchange method, system |
2018
- 2018-06-26 CN CN201810668689.3A patent/CN109033179B/en active Active
- 2018-06-26 WO PCT/CN2018/092877 patent/WO2019165732A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109033179B (en) | 2022-07-29 |
WO2019165732A1 (en) | 2019-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Stahl | Responsible innovation ecosystems: Ethical implications of the application of the ecosystem concept to artificial intelligence | |
Liddicoat et al. | Intercultural language teaching and learning | |
Durupinar et al. | How the ocean personality model affects the perception of crowds | |
US8700620B1 (en) | Artificial intelligence method and apparatus | |
Cook et al. | Framing In Computational Creativity-A Survey And Taxonomy. | |
Derksen et al. | Social technology | |
Sherwani et al. | Orality-grounded HCID: Understanding the oral user | |
McKelvey | 2a From Fields to Science: Can Organization Studies make the Transition? | |
CN109033179A (en) | Based on the return information generation method of robot emotion state, device | |
Durupınar et al. | The impact of the ocean personality model on the perception of crowds | |
Rehm et al. | Too close for comfort? Adapting to the user's cultural background | |
Lugrin et al. | Combining a data-driven and a theory-based approach to generate culture-dependent behaviours for virtual characters | |
Nagao | Artificial intelligence accelerates human learning: Discussion data analytics | |
Bing | An epistemic framing analysis of upper level physics students' use of mathematics | |
Akharraz et al. | To context-aware learner modeling based on ontology | |
Armstrong | Big Data, Big Design: Why Designers Should Care about Artificial Intelligence | |
Huang et al. | Human-Computer Collaborative Visual Design Creation Assisted by Artificial Intelligence | |
Baccari et al. | Design for a context-aware and collaborative mobile learning system | |
Kelly | Feminist mapping: Content, form, and process | |
de Rosa | Knowledge Acquisition Analytical Games: games for cognitive systems design. | |
Khanom et al. | Icons: Visual representation to enrich requirements engineering work | |
Ledington et al. | Decision-variable partitioning: An alternative modelling approach in soft systems methodology | |
Fiore et al. | Narrative theory and distributed training: Using the narrative form for debriefing distributed simulation-based exercises. | |
Yang et al. | The Application of Interactive Humanoid Robots in the History Education of Museums Under Artificial Intelligence | |
Frydenlund et al. | Modeler in a box: how can large language models aid in the simulation modeling process? |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||