CN116665273A - Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation - Google Patents
- Publication number: CN116665273A
- Application number: CN202310634720.2A
- Authority: CN (China)
- Prior art keywords: emotion, interactive, robot, discrete points, calculation
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation, which comprises the following steps. Step P100: recognizing the expression of the user. Step P200: performing emotion quantification and calculation analysis on the expression recognition result of step P100 to obtain a current emotion quantitative calculation analysis result. Step P300: transmitting the current emotion quantitative calculation analysis result to the interactive robot, which determines the corresponding interactive content from that result and outputs it in a man-machine interaction mode so as to carry out emotion adjustment on the user. The method can recognize several emotions that the user may hold at the same time, so that the user's current emotional state is identified more accurately; meanwhile, the interactive robot reacts to that state in a targeted manner, better achieving emotion adjustment of the user.
Description
Technical Field
The invention relates to the field of interactive robots, in particular to a robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation.
Background
To address these problems to some extent, some researchers have proposed using interactive robots in place of humans to carry out emotion adjustment for college students, so as to alleviate their emotional health problems and enrich the means of emotion adjustment available to them. Robots are an important carrier of artificial intelligence; as AI technology develops rapidly and transforms daily life, robots are becoming increasingly common in households and are gradually being valued for emotional companionship and communication, medical rehabilitation and educational assistance. Some interactive robot products with emotion recognition capability are already on the market, but when an interactive robot replaces a human for emotion adjustment, the interaction must be driven by the college student user's current real-time emotion. Most existing products separate the emotion recognition function from the interaction function, so each can only operate independently; the two are not well fused, and emotion adjustment interaction cannot be carried out according to the emotion recognition result.
Furthermore, many studies indicate that human emotion is conveyed in essentially four ways: speech semantics, gestures and actions, facial expressions, and physiological signals. In communication, facial expression carries about 55% of the transmitted information, intonation about 38%, and language only about 7%. Facial expression therefore conveys more emotional information than the other channels, and emotional state is often determined most effectively through expression recognition. However, the expression recognition function of current interactive robots typically recognizes only a single expression and cannot identify several emotions that the user may hold simultaneously, so the interactive robot cannot accurately know the user's emotional state.
In summary, how to give the interactive robot the ability to identify multiple emotions simultaneously and to combine that ability with the man-machine interaction function, so as to accurately identify the current emotion of the college student user and carry out emotion adjustment interaction accordingly, is a problem that researchers in this field need to solve.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the defects of the prior art, a robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation which can recognize several emotions that the user may hold at the same time, so that the user's current emotional state is identified more accurately, while the interactive robot reacts to that state in a targeted manner, better achieving emotion adjustment of the user.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
the robot man-machine interaction method based on expression recognition and emotion quantitative analysis calculation comprises the following steps:
step P100: identifying the expression of the user;
step P200: carrying out emotion quantification and calculation analysis according to the expression recognition result in the step P100 to obtain a current emotion quantification calculation analysis result;
step P300: and transmitting the current emotion quantitative calculation analysis result to the interactive robot, determining corresponding interactive contents by the interactive robot according to the current emotion quantitative calculation analysis result, and outputting the interactive contents in a man-machine interaction mode so as to carry out emotion adjustment on the user.
As a further improved technical solution of the present invention, the step P100 specifically includes:
step P101: acquiring facial expression images of a user through a PC end camera;
step P102: realizing expression recognition on the facial expression image of the user through a deep neural network, recognizing the expression as neutral, anger, surprise, fear, aversion, happiness and/or sadness, and obtaining the probability value, output by the deep neural network, of the expression belonging to each category.
As a further improved technical solution of the present invention, in step P102:
the probability values of the categories comprise the probability value E1 of anger, the probability value E2 of aversion, the probability value E3 of fear, the probability value E4 of happiness, the probability value E5 of neutral, the probability value E6 of sadness and the probability value E7 of surprise, wherein each Ei lies between 0 and 1 for i = 1, 2, 3, ..., 7, and E1 + E2 + ... + E7 = 1.
As a further improved technical solution of the present invention, the step P200 specifically includes:
step P201: constructing an emotion quantitative calculation analysis model, and determining the expression vectors of 125 discrete emotion states in a three-dimensional space of the emotion quantitative calculation analysis model;
step P2011: determining 13 basic emotional states including happy, sad, anger, surprise, aversion, fear, somewhat happy, somewhat sad, somewhat anger, somewhat surprise, somewhat aversive, somewhat fear, and neutral;
step P2012: establishing a three-dimensional space coordinate system for the emotion quantitative calculation analysis model based on Euclidean space, wherein the origin is neutral, the positive X-axis direction is fear, the negative X-axis direction is surprise, the positive Y-axis direction is sadness, the negative Y-axis direction is happiness, the positive Z-axis direction is anger and the negative Z-axis direction is aversion; the closer a point is to the origin, the calmer the emotion it represents, and the farther from the origin, the stronger the emotion;
step P2013: determining, in the three-dimensional space coordinate system, the coordinate values of the discrete points corresponding to the 13 basic emotion states: neutral corresponds to (0, 0, 0); somewhat fear to (0.5, 0, 0); somewhat sad to (0, 0.5, 0); somewhat anger to (0, 0, 0.5); fear to (1, 0, 0); sadness to (0, 1, 0); anger to (0, 0, 1); somewhat surprise to (-0.5, 0, 0); somewhat happy to (0, -0.5, 0); somewhat aversion to (0, 0, -0.5); surprise to (-1, 0, 0); happiness to (0, -1, 0); and aversion to (0, 0, -1). The vectors formed by the 13 discrete points and the origin represent the 13 basic emotion states; combining these basic representation vectors with one another forms 112 composite three-dimensional vectors representing 112 composite emotion states, and combining the 13 basic emotion state representation vectors with the 112 composite emotion state representation vectors finally gives 125 discrete emotion state representation vectors;
step P202: inputting the probability value tensor output in step P102 into the emotion quantitative calculation analysis model for decomposition into a neutral representation vector, an anger representation vector, a surprise representation vector, a fear representation vector, an aversion representation vector, a happiness representation vector and a sadness representation vector;
step P203: performing weighted calculation on each representation vector in the step P202;
step P204: finding, among the representation vectors of the 125 discrete emotion states in the three-dimensional space of the emotion quantitative calculation analysis model, the discrete emotion state representation vector with the highest similarity to the output of step P203, and taking it as the current emotion quantitative calculation analysis result.
As a further improved technical solution of the present invention, the step P300 specifically includes:
step P301, the PC encodes the current emotion quantitative calculation analysis result and then transmits the encoded result to the interactive robot;
step P302, the interactive robot receives the encoded current emotion quantitative calculation analysis result and decodes the analysis result;
step P303, the interactive robot determines the corresponding interactive content according to the current emotion quantitative calculation analysis result;
step P304, the interactive robot outputs the interactive content in a man-machine interaction mode according to the determined interactive content;
the output of the interactive content specifically comprises the following steps:
step P3041, the interactive robot calls the body API;
step P3042, the interactive robot searches the interactive content file corresponding to the current emotion quantitative calculation analysis result in the library;
step P3043, the interactive robot loads and executes the interactive content file through the API;
step P3044, according to the action instructions of the interactive content file, the interactive robot completes limb actions with its servo steering engines, and according to the voice instructions of the file it plays the interactive voice through its body loudspeaker, completing action interaction and voice interaction so as to carry out emotion adjustment on the user.
As a further improved technical scheme of the present invention, in the step P301,
the PC and the interactive robot are in the same local area network, and the PC and the interactive robot are in wireless communication through the local area network.
The beneficial effects of the invention are as follows:
1. Through the emotion quantitative analysis and calculation model, the invention can identify several emotions that the user may hold and quantitatively calculate them to obtain the current emotion quantitative calculation analysis result, so that the user's current emotional state is identified more accurately.
2. The invention enables the interactive robot to react in a targeted manner to the user's current emotion and to carry out man-machine interaction accordingly, so that emotion adjustment of the user is better achieved.
Drawings
Fig. 1 is a general flow chart of a robot man-machine interaction method based on expression recognition and emotion quantitative analysis calculation according to an embodiment of the present invention.
Fig. 2 is a flowchart of expression recognition and emotion quantitative analysis calculation based on an embodiment of the present invention.
Fig. 3 is a flowchart of robot man-machine interaction execution provided by an embodiment of the present invention.
Fig. 4 is a graph of an emotion quantification calculation analysis model provided by an embodiment of the present invention.
Detailed Description
The following is a further description of embodiments of the invention, with reference to the accompanying drawings:
This embodiment provides a robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation which, as shown in figs. 1 and 2, comprises the following steps:
step P100: and recognizing the expression of the user.
The step P100 specifically comprises:
step P101: acquiring facial expression images of a user through a PC end camera;
step P102: realizing expression recognition on the facial expression image of the user through a deep neural network, recognizing the expression as neutral, anger, surprise, fear, aversion, happiness and/or sadness, and obtaining the probability value, output by the deep neural network, of the expression belonging to each category.
The deep neural network is deployed on the PC side, where both its invocation and the expression recognition process take place. Before prediction, the deep neural network is trained on a training set, and the trained network is then used to recognize expressions in the user's facial expression images. The training set comprises a plurality of user facial expression images and their corresponding expression category labels.
The probability values, output by the deep neural network, of the expression belonging to each category comprise the probability value E1 of anger, E2 of aversion, E3 of fear, E4 of happiness, E5 of neutral, E6 of sadness and E7 of surprise.
After the deep neural network performs classification prediction on the user's facial expression image, it outputs a corresponding normalized tensor [E1, E2, E3, E4, E5, E6, E7], wherein each Ei lies between 0 and 1 for i = 1, 2, 3, ..., 7, and E1 + E2 + ... + E7 = 1.
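As an illustrative sketch (not taken from the patent), the normalized tensor described above has exactly the form produced by a softmax output layer; the raw class scores below are hypothetical:

```python
import math

def softmax(logits):
    """Normalize raw class scores into a probability tensor.

    Mirrors the property stated above: each E_i lies in [0, 1] and
    the seven values sum to 1. The scores are illustrative only.
    """
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for [anger, aversion, fear, happiness, neutral, sadness, surprise]
E = softmax([0.2, -1.0, 0.5, 2.0, 1.0, -0.5, 0.1])
```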
Step P200: and (5) carrying out emotion quantification and calculation analysis according to the expression recognition result in the step (P100) to obtain a current emotion quantification calculation analysis result.
The step P200 specifically comprises the following steps:
Step P201: constructing the emotion quantitative calculation analysis model based on Ekman's emotion theory and Euclidean space: Euclidean space is used to quantify emotion, and Ekman's emotion theory is used to calculate the emotion quantification result. Finally, the representation vectors of 125 discrete emotion states are determined in the three-dimensional space of the emotion quantitative calculation analysis model.

Here the Euclidean space is a three-dimensional coordinate system space, and Ekman's emotion theory holds that each emotion can be represented as a vector with one basic emotion per dimension, where each dimension of the vector represents the intensity of the corresponding basic emotion.
The step P201 specifically includes:
step P2011: 13 basic emotional states were determined, including happy, sad, anger, surprise, aversion, fear, somewhat happy, somewhat sad, somewhat anger, somewhat surprise, somewhat aversive, somewhat fear, and neutral.
Step P2012: establishing a three-dimensional space coordinate system of an emotion quantitative calculation analysis model based on Euclidean space, wherein an origin is neutral, an X-axis positive direction is fear, an X-axis negative direction is surprise, a Y-axis positive direction is sadness, a Y-axis negative direction is happy, a Z-axis positive direction is anger, a Z-axis negative direction is aversion, the closer the origin tends to, the calm emotion is represented, and the farther the origin is from, the stronger the emotion is represented; the expression mode based on the European space can more intuitively express abstract emotion and intensity thereof.
Step P2013: as shown in fig. 4, the coordinate values of discrete points corresponding to 13 basic emotion states are determined in a three-dimensional space coordinate system, the discrete points (0, 0) corresponding to the neutral, the discrete points (0.5, 0) corresponding to the somewhat fear, the discrete points (0,0.5,0) corresponding to the somewhat sad, the discrete points (0, 0.5) corresponding to the somewhat anger, the point of the three points discrete points corresponding to fear (1, 0), discrete points corresponding to sadness (0, 1, 0), discrete points corresponding to anger (0, 1), discrete points corresponding to surprise (-0.5, 0), discrete points corresponding to happiness (0, -0.5, 0), discrete points (0, -0.5) corresponding to a point aversion, discrete points (-1, 0) corresponding to a surprise, discrete points (0, -1, 0) corresponding to a happiness, and discrete points (0, -1) corresponding to a point aversion. The emotion quantitative calculation analysis model takes 13 emotion discrete points as basic quantity, and the vector formed by a certain emotion discrete point and an origin point is used for representing a single basic emotion state, namely, the vector formed by 13 discrete points and the origin point is used for respectively representing 13 basic emotion state representing vectors, and the 13 basic emotion state representing vectors are mutually combined to form 112 composite three-dimensional vectors, so that 112 composite emotion state representing vectors are represented. And combining the 13 basic emotion state expression vectors with 112 composite emotion state expression vectors to finally obtain 125 discrete emotion state expression vectors. 
Because the space of the emotion quantitative calculation analysis model is three-dimensional, a complex emotional state can be quantified as a three-dimensional vector and expressed in a quantitative representation based on Euclidean space.

The vectors formed by the 13 discrete points and the origin represent the 13 basic emotion states; each of the remaining 112 composite emotion states is mapped by combining two or three of the 12 non-neutral basic vectors taken from different axes (48 two-axis combinations plus 64 three-axis combinations).
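The counts above can be verified with a short sketch: each axis takes one of five intensity levels (-1, -0.5, 0, 0.5, 1), giving 5^3 = 125 points, of which 13 have at most one non-zero coordinate (the basic states) and 112 combine two or three axes (the composite states). The code below is illustrative, not part of the patent:

```python
from itertools import product

# The five intensity levels each axis can take in the model described
# above: -1 and 1 for the full emotions, -0.5 and 0.5 for the
# "somewhat" emotions, and 0 for no contribution from that axis.
LEVELS = (-1.0, -0.5, 0.0, 0.5, 1.0)

# All 125 discrete emotion states: every combination of levels on the
# fear/surprise (X), sadness/happiness (Y) and anger/aversion (Z) axes.
states = list(product(LEVELS, repeat=3))

# Basic states have at most one non-zero coordinate (origin + 12 points).
basic = [s for s in states if sum(1 for c in s if c != 0.0) <= 1]

# Composite states combine two or three axes.
composite = [s for s in states if sum(1 for c in s if c != 0.0) >= 2]
```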
Step P202: normalized probability value tensor [ E ] output in step P102 1 ,E 2 ,E 3 ,E 4 ,E 5 ,E 6 ,E 7 ]Inputting into emotion quantitative calculation analysis model for decomposition to obtain anger, aversion, fear, happiness, neutrality, sadness and surprise expression vector, namely [0, E ] 1 ],[0,0,E 2 ],[E 3 ,0,0],[0,E 4 ,0],[0,0,E 5 ],[0,E 6 ,0],[E 7 ,0,0]Since the neutral state corresponds to the origin, the following weighting calculation is omitted.
Step P203: and (3) carrying out weighted calculation on each representation vector in the step P202 to obtain a three-dimensional vector E.
E = [α1E3 - α2E7, β1E6 - β2E4, γ1E1 - γ2E2], where α1, α2, β1, β2, γ1 and γ2 are the weights, and the minus signs account for surprise, happiness and aversion lying on the negative axes.
Step P204: and (3) calculating the representative vector of the discrete emotion state with the highest similarity with the three-dimensional vector output in the step (P203) from 125 kinds of representative vectors of the discrete emotion states, and taking the obtained representative vector of the discrete emotion state as a current emotion quantization calculation analysis result.
Since all the vectors start at the origin, and since they are required to represent emotion intensity, it suffices to calculate the Euclidean distance between the two vector end points directly: the smaller this distance, the more similar the two vectors.
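Steps P203 and P204 can be sketched as follows; the unit weight values and the sign convention for the negative axes are assumptions consistent with the coordinate system of step P2012, not values fixed by the patent:

```python
import math
from itertools import product

def fuse(E, alpha=(1.0, 1.0), beta=(1.0, 1.0), gamma=(1.0, 1.0)):
    """Weighted fusion of the decomposed representation vectors (step P203).

    E = [E1..E7] = [anger, aversion, fear, happiness, neutral, sadness,
    surprise]; the unit weights are placeholders. Surprise, happiness
    and aversion enter with a minus sign because they lie on the
    negative axes of the coordinate system.
    """
    E1, E2, E3, E4, E5, E6, E7 = E
    return (alpha[0] * E3 - alpha[1] * E7,   # X axis: fear vs surprise
            beta[0] * E6 - beta[1] * E4,     # Y axis: sadness vs happiness
            gamma[0] * E1 - gamma[1] * E2)   # Z axis: anger vs aversion

def nearest_state(v):
    """Step P204: the discrete emotion state whose end point has the
    smallest Euclidean distance to v is the most similar one."""
    levels = (-1.0, -0.5, 0.0, 0.5, 1.0)
    return min(product(levels, repeat=3), key=lambda s: math.dist(s, v))
```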
Step P300: and transmitting the current emotion quantitative calculation analysis result to the interactive robot, determining corresponding interactive contents by the interactive robot according to the current emotion quantitative calculation analysis result, and outputting the interactive contents in a man-machine interaction mode so as to carry out emotion adjustment on the user.
As shown in fig. 3, step P300 specifically includes:
Step P301, the PC encodes the current emotion quantitative calculation analysis result and transmits it to the interactive robot.

Step P302, the interactive robot receives the encoded current emotion quantitative calculation analysis result and decodes it.

Step P303, the interactive robot determines the corresponding interactive content according to the current emotion quantitative calculation analysis result.

Step P304, the interactive robot outputs the determined interactive content in a man-machine interaction mode.
The output of the interactive content specifically comprises the following steps:
step P3041, the interactive robot calls the body API.
Step P3042, the interactive robot searches the library for the interactive content file corresponding to the current emotion quantitative calculation analysis result.

Step P3043, the interactive robot loads and executes the interactive content file through the API.
Step P3044, according to the action instructions of the interactive content file, the interactive robot completes limb actions with its servo steering engines, and according to the voice instructions of the file it plays the interactive voice through its body loudspeaker, completing action interaction and voice interaction so as to carry out emotion adjustment on the user.
In step P300, the PC and the interactive robot are located in the same local area network and communicate wirelessly over it. The wireless LAN communication is based on the UDP protocol, and the transmitted information is JSON-encoded. In this embodiment, the interactive actions are set up in advance in the action-editing software supplied with the interactive robot, the interactive voice is imported, the resulting action and voice files are used as the interactive content files, and they are stored in the robot's built-in internal storage. In the interactive robot, servo steering engines control the corresponding manipulators, mechanical arms, legs, feet and head, imitating actions such as advancing, retreating, kissing, covering the face, shaking hands and boxing.
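A minimal sketch of the JSON-over-UDP transmission described above; the address, port and message schema are assumptions, not specified by the patent:

```python
import json
import socket

# Hypothetical robot address on the shared LAN (not from the patent).
ROBOT_ADDR = ("192.168.1.50", 9000)

def send_result(vector, addr=ROBOT_ADDR):
    """JSON-encode the emotion quantitative calculation analysis result
    and send it to the robot over UDP, as the LAN transmission above
    describes. UDP is connectionless: no handshake, no delivery guarantee.
    """
    payload = json.dumps({"emotion": list(vector)}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, addr)

def decode_result(datagram):
    """Robot-side decoding of a received datagram (step P302)."""
    return json.loads(datagram.decode("utf-8"))["emotion"]
```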
The scope of the present invention includes, but is not limited to, the above embodiments, and any alterations, modifications, and improvements made by those skilled in the art are intended to fall within the scope of the invention.
Claims (6)
1. The robot man-machine interaction method based on expression recognition and emotion quantitative analysis calculation is characterized by comprising the following steps of:
step P100: identifying the expression of the user;
step P200: carrying out emotion quantification and calculation analysis according to the expression recognition result in the step P100 to obtain a current emotion quantification calculation analysis result;
step P300: and transmitting the current emotion quantitative calculation analysis result to the interactive robot, determining corresponding interactive contents by the interactive robot according to the current emotion quantitative calculation analysis result, and outputting the interactive contents in a man-machine interaction mode so as to carry out emotion adjustment on the user.
2. The robot man-machine interaction method based on expression recognition and emotion quantitative analysis calculation according to claim 1, wherein the step P100 specifically includes:
step P101: acquiring facial expression images of a user through a PC end camera;
step P102: realizing expression recognition on the facial expression image of the user through a deep neural network, recognizing the expression as neutral, anger, surprise, fear, aversion, happiness and/or sadness, and obtaining the probability value, output by the deep neural network, of the expression belonging to each category.
3. The robot man-machine interaction method based on expression recognition and emotion quantitative analysis calculation according to claim 2, wherein in step P102:
the probability values of the categories comprise the probability value E1 of anger, the probability value E2 of aversion, the probability value E3 of fear, the probability value E4 of happiness, the probability value E5 of neutral, the probability value E6 of sadness and the probability value E7 of surprise, wherein each Ei lies between 0 and 1 for i = 1, 2, 3, ..., 7, and E1 + E2 + ... + E7 = 1.
4. The robot man-machine interaction method based on expression recognition and emotion quantitative analysis calculation according to claim 2, wherein the step P200 specifically includes:
step P201: constructing an emotion quantitative calculation analysis model, and determining the expression vectors of 125 discrete emotion states in a three-dimensional space of the emotion quantitative calculation analysis model;
step P2011: determining 13 basic emotional states including happy, sad, anger, surprise, aversion, fear, somewhat happy, somewhat sad, somewhat anger, somewhat surprise, somewhat aversive, somewhat fear, and neutral;
step P2012: establishing a three-dimensional space coordinate system of the emotion quantitative calculation analysis model based on Euclidean space, wherein the origin is neutral, the positive X-axis direction is fear, the negative X-axis direction is surprise, the positive Y-axis direction is sadness, the negative Y-axis direction is happiness, the positive Z-axis direction is anger, and the negative Z-axis direction is aversion; the closer a point is to the origin, the calmer the emotion it represents, and the farther a point is from the origin, the stronger the emotion it represents;
step P2013: determining, in the three-dimensional space coordinate system, the coordinate values of the discrete points respectively corresponding to the 13 basic emotion states: the discrete point (0, 0, 0) corresponding to neutral, (0.5, 0, 0) to somewhat fear, (0, 0.5, 0) to somewhat sad, (0, 0, 0.5) to somewhat anger, (1, 0, 0) to fear, (0, 1, 0) to sadness, (0, 0, 1) to anger, (-0.5, 0, 0) to somewhat surprise, (0, -0.5, 0) to somewhat happy, (0, 0, -0.5) to somewhat aversion, (-1, 0, 0) to surprise, (0, -1, 0) to happiness and (0, 0, -1) to aversion; the vectors from the origin to these 13 discrete points serve as the representation vectors of the 13 basic emotion states; the representation vectors of the 13 basic emotion states are combined with one another to form 112 composite three-dimensional vectors, which represent 112 composite emotion states; the representation vectors of the 13 basic emotion states together with the representation vectors of the 112 composite emotion states finally give the representation vectors of 125 discrete emotion states;
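Since each axis takes only the values -1, -0.5, 0, 0.5 and 1, the 125 representation vectors (13 basic states plus 112 composites) can be enumerated as a Cartesian product. The sketch below follows the construction in steps P2011–P2013, with the basic-state coordinates taken from the claim; the enumeration order and state names are illustrative:

```python
from itertools import product

# Each axis level used by the model: -1, -0.5, 0 (origin), 0.5, 1
LEVELS = (-1.0, -0.5, 0.0, 0.5, 1.0)

# All 125 discrete emotion states: 5 levels on each of the 3 axes
STATES = [p for p in product(LEVELS, repeat=3)]

# The 13 basic states named in step P2013 (axis points and the origin)
BASIC = {
    "neutral":           (0.0, 0.0, 0.0),
    "somewhat fear":     (0.5, 0.0, 0.0),   "fear":      (1.0, 0.0, 0.0),
    "somewhat surprise": (-0.5, 0.0, 0.0),  "surprise":  (-1.0, 0.0, 0.0),
    "somewhat sad":      (0.0, 0.5, 0.0),   "sadness":   (0.0, 1.0, 0.0),
    "somewhat happy":    (0.0, -0.5, 0.0),  "happiness": (0.0, -1.0, 0.0),
    "somewhat anger":    (0.0, 0.0, 0.5),   "anger":     (0.0, 0.0, 1.0),
    "somewhat aversion": (0.0, 0.0, -0.5),  "aversion":  (0.0, 0.0, -1.0),
}
```

Removing the 13 basic states from the 125 enumerated points leaves exactly the 112 composite states the claim describes.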
step P202: inputting the probability value tensor output in step P102 into the emotion quantitative calculation analysis model for decomposition, wherein the probability value tensor is decomposed into a neutral representation vector, an anger representation vector, a surprise representation vector, a fear representation vector, an aversion representation vector, a happiness representation vector and a sadness representation vector;
step P203: performing weighted calculation on each representation vector in the step P202;
step P204: finding, from the representation vectors of the 125 discrete emotion states in the three-dimensional space of the emotion quantitative calculation analysis model, the representation vector of the discrete emotion state with the highest similarity to the output result of step P203, and taking the found representation vector of that discrete emotion state as the current emotion quantitative calculation analysis result.
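Steps P202–P204 can be sketched as a weighted sum of the six non-neutral axis representation vectors followed by a nearest-neighbour search over the 125 discrete states. The claim does not disclose the weighting scheme of step P203 or the similarity measure of step P204, so uniform weights and Euclidean distance are used below as assumptions:

```python
import math
from itertools import product

LEVELS = (-1.0, -0.5, 0.0, 0.5, 1.0)
STATES = list(product(LEVELS, repeat=3))  # the 125 discrete emotion states

# Representation vectors of the six non-neutral basic emotions
# (neutral is the origin and contributes nothing to the sum)
AXES = {
    "fear": (1, 0, 0), "surprise": (-1, 0, 0),
    "sadness": (0, 1, 0), "happiness": (0, -1, 0),
    "anger": (0, 0, 1), "aversion": (0, 0, -1),
}

def quantify(probs, weights=None):
    """Map class probabilities to the nearest discrete emotion state.

    probs   : dict mapping class name to probability E_i (step P202)
    weights : optional per-class weights for step P203 (uniform if omitted)
    """
    weights = weights or {}
    v = [0.0, 0.0, 0.0]
    for name, axis in AXES.items():
        w = weights.get(name, 1.0) * probs.get(name, 0.0)
        for k in range(3):
            v[k] += w * axis[k]
    # Step P204: "highest similarity" taken here as smallest Euclidean distance
    return min(STATES, key=lambda s: math.dist(s, v))
```

For example, a 0.4 probability of happiness alone maps to the point (0, -0.5, 0), i.e. the "somewhat happy" state, while a pure happiness output maps to (0, -1, 0).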
5. The robot man-machine interaction method based on expression recognition and emotion quantitative analysis calculation of claim 1, wherein the step P300 specifically includes:
step P301, the PC encodes the current emotion quantitative calculation analysis result and then transmits the encoded result to the interactive robot;
step P302, the interactive robot receives the encoded current emotion quantitative calculation analysis result and decodes the analysis result;
step P303, the interactive robot determines corresponding interactive content according to the current emotion quantitative calculation analysis result;
step P304, the interactive robot outputs the determined interactive content in a man-machine interaction mode;
the output of the interactive content specifically comprises the following steps:
step P3041, the interactive robot calls an entity API;
step P3042, the interactive robot searches the interactive content file corresponding to the current emotion quantitative calculation analysis result in the library;
step P3043, the interactive robot loads and executes the interactive content file through the API;
step P3044, the interactive robot completes limb actions using a servo steering engine according to the action instructions of the interactive content file, and plays interactive voice through an on-board loudspeaker according to the voice instructions of the interactive content file, thereby completing action interaction and voice interaction so as to adjust the emotion of the user.
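A minimal sketch of steps P3042–P3044 — looking up the interactive-content file for a quantified emotion and dispatching its instructions through the robot's API. The file names, the library layout, the instruction format and the `api` method names are all assumptions; the claim only requires that a matching file be found, loaded and executed:

```python
# Hypothetical content library mapping emotion labels to routine files
CONTENT_LIBRARY = {
    "sadness": "comfort_routine.json",
    "anger": "calming_routine.json",
    "happiness": "celebrate_routine.json",
}

def select_content(emotion_label):
    """Step P3042: find the content file for the current emotion result."""
    return CONTENT_LIBRARY.get(emotion_label, "default_routine.json")

def execute_content(content_file, api):
    """Steps P3043/P3044: hand each instruction in the file to the robot API."""
    for instruction in api.load(content_file):  # api.load is a hypothetical call
        if instruction["type"] == "action":
            api.move_servo(instruction)   # limb action via servo steering engine
        elif instruction["type"] == "voice":
            api.play_voice(instruction)   # speech via the on-board loudspeaker
```

Unrecognized emotion labels fall through to a default routine so the robot always has something to output.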
6. The method for robot man-machine interaction based on expression recognition and emotion quantitative analysis calculation of claim 5, wherein in step P301,
the PC and the interactive robot are in the same local area network, and the PC and the interactive robot are in wireless communication through the local area network.
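Steps P301–P302 only require that the result be encoded on the PC side and decoded on the robot side over the local-area network. One possible serialization is sketched below, assuming a JSON payload; the actual wire format is not disclosed in the claim:

```python
import json

def encode_result(label, vector):
    """Step P301 (PC side): serialize the quantified emotion result."""
    return json.dumps({"label": label, "vector": list(vector)}).encode("utf-8")

def decode_result(payload):
    """Step P302 (robot side): recover the result from the received bytes."""
    msg = json.loads(payload.decode("utf-8"))
    return msg["label"], tuple(msg["vector"])

# Round trip: what the PC sends is exactly what the robot recovers
payload = encode_result("somewhat sad", (0.0, 0.5, 0.0))
label, vector = decode_result(payload)
```

The byte payload could then be carried over any LAN transport (e.g. a TCP socket) between the PC and the robot.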
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310634720.2A CN116665273B (en) | 2023-05-31 | 2023-05-31 | Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310634720.2A CN116665273B (en) | 2023-05-31 | 2023-05-31 | Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116665273A true CN116665273A (en) | 2023-08-29 |
CN116665273B CN116665273B (en) | 2023-11-17 |
Family
ID=87727338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310634720.2A Active CN116665273B (en) | 2023-05-31 | 2023-05-31 | Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116665273B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130012349A (en) * | 2011-07-25 | 2013-02-04 | 한국생산기술연구원 | Apparatus and method for generating emotion of robot |
CN103488293A (en) * | 2013-09-12 | 2014-01-01 | 北京航空航天大学 | Man-machine motion interaction system and method based on expression recognition |
CN108564007A (en) * | 2018-03-27 | 2018-09-21 | 深圳市智能机器人研究院 | A kind of Emotion identification method and apparatus based on Expression Recognition |
CN110301117A (en) * | 2017-11-24 | 2019-10-01 | 微软技术许可有限责任公司 | Response is provided in a session |
- 2023-05-31: CN CN202310634720.2A patent/CN116665273B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130012349A (en) * | 2011-07-25 | 2013-02-04 | 한국생산기술연구원 | Apparatus and method for generating emotion of robot |
CN103488293A (en) * | 2013-09-12 | 2014-01-01 | 北京航空航天大学 | Man-machine motion interaction system and method based on expression recognition |
CN110301117A (en) * | 2017-11-24 | 2019-10-01 | 微软技术许可有限责任公司 | Response is provided in a session |
CN108564007A (en) * | 2018-03-27 | 2018-09-21 | 深圳市智能机器人研究院 | A kind of Emotion identification method and apparatus based on Expression Recognition |
Non-Patent Citations (1)
Title |
---|
XU, Guizhi et al.: "Research on real-time interaction of emotion recognition robots based on depthwise separable convolution", Chinese Journal of Scientific Instrument, vol. 40, no. 10 *
Also Published As
Publication number | Publication date |
---|---|
CN116665273B (en) | 2023-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110688911B (en) | Video processing method, device, system, terminal equipment and storage medium | |
Bretan et al. | Emotionally expressive dynamic physical behaviors in robots | |
WO2017168870A1 (en) | Information processing device and information processing method | |
US6526395B1 (en) | Application of personality models and interaction with synthetic characters in a computing system | |
CN108942919B (en) | Interaction method and system based on virtual human | |
CN109086860B (en) | Interaction method and system based on virtual human | |
CN108009573B (en) | Robot emotion model generation method, emotion model and interaction method | |
CN114995657B (en) | Multimode fusion natural interaction method, system and medium for intelligent robot | |
CN109308466A (en) | The method that a kind of pair of interactive language carries out Emotion identification | |
CN106294726A (en) | Based on the processing method and processing device that robot role is mutual | |
CN109101663A (en) | A kind of robot conversational system Internet-based | |
EP4144425A1 (en) | Behavior control device, behavior control method, and program | |
Paleari et al. | Toward multimodal fusion of affective cues | |
CN110125932B (en) | Dialogue interaction method for robot, robot and readable storage medium | |
Rodriguez et al. | Spontaneous talking gestures using generative adversarial networks | |
JP6201212B2 (en) | Character generating apparatus and program | |
CN109800295A (en) | The emotion session generation method being distributed based on sentiment dictionary and Word probability | |
KR102573465B1 (en) | Method and system for providing emotion correction during video chat | |
CN116665273B (en) | Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation | |
CN109961152B (en) | Personalized interaction method and system of virtual idol, terminal equipment and storage medium | |
Pérez-Espinosa et al. | Emotion recognition: from speech and facial expressions | |
KR20230103665A (en) | Method, device, and program for providing text to avatar generation | |
Srinivasan et al. | A reference architecture for social head gaze generation in social robotics | |
Sönmez et al. | The necessity of emotion recognition from speech signals for natural and effective human-robot interaction in society 5.0 | |
CN113246156A (en) | Child accompanying robot based on intelligent emotion recognition and control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||