CN112906555B - Artificial intelligence mental robot and method for recognizing expressions from person to person - Google Patents


Info

Publication number
CN112906555B
Authority
CN
China
Prior art keywords
emotion
person
expression
recognized
normal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110182806.7A
Other languages
Chinese (zh)
Other versions
CN112906555A (en)
Inventor
朱定局
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University
Priority to CN202110182806.7A
Publication of CN112906555A
Application granted
Publication of CN112906555B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Abstract

An artificial intelligence mental robot and a method for recognizing expressions differently from person to person, comprising: a usual time range obtaining step; a usual expression obtaining step; a usual emotion recognizing step; an expression-to-be-recognized obtaining step; an emotion-to-be-recognized recognizing step; and an emotion-to-be-recognized correcting step. The method, system and robot make full use of the relationship between the normal emotions and the usual expressions of different people to obtain the usual emotion corresponding to each person's usual expression, so that the real emotion represented by each person's expression can be known.

Description

Artificial intelligence mental robot and method for recognizing expressions from person to person
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an artificial intelligence psychological robot and a method for recognizing expressions from person to person.
Background
In the prior art, emotion recognition technology enables a robot to recognize seven emotions (happiness, anger, surprise, disgust, fear, sadness and neutrality) through its camera, and if it detects that the user holds a negative emotion, the robot can proactively initiate a chat. In language interaction, the robot can give the user psychological support for anxiety, depression, anger and fear caused by work, family and relationships, for example by providing psychological knowledge and preliminary psychological counseling services.
In the process of implementing the invention, the inventor found at least the following problem in the prior art: the robot does not distinguish the characteristics of different people and judges emotion uniformly from expression alone, that is, it treats all people as the same person. This inevitably ignores the differences between individuals, so the result of emotion judgment is inaccurate and deviates from reality.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
Based on this, it is necessary to provide an artificial intelligence mental robot and a method for recognizing expressions differently from person to person, which recognize emotion from expression on the basis that different people have different usual expressions, so as to solve the problem in the prior art that the usual expression is not considered when recognizing emotion from expression. For example, some athletes look depressed in their natural expression; they wear that expression whenever they are not competing, and they wear it before a match as well. If an expression-based emotion test is performed on such an athlete before a match, the prior art would wrongly judge the athlete's pre-match psychological state as depression, whereas the technology of the present application would correctly judge it as normal. For another example, some athletes naturally look happy; that is their usual appearance when there is no match. If, before a match, such an athlete's expression no longer looks happy but merely like an ordinary person's neutral expression, the prior art would wrongly judge his pre-match psychological state as normal, whereas the technology of the present application would correctly judge it as depression. As another example, a very introverted person may look depressed in his usual expression while not actually being depressed; the prior art would recognize him as depressed, while the technology of the present application would recognize his psychological state as normal.
For another example, consider a very outgoing person whose usual expression looks very happy. When he becomes depressed he no longer looks so happy; his expression merely looks like that of a normal person, but at that point he already has a depressive condition. The prior art would recognize his psychological state as normal, while the technology of the present application would recognize his depression. That is to say, by taking the tested person's usual expression as a reference, the change in the tested person's expression can be judged more accurately, the change in the tested person's emotion is identified through the change in expression, and, assuming the tested person's usual emotion is normal, the tested person's current emotion can be inferred.
In a first aspect, an embodiment of the present invention provides an artificial intelligence method, where the method includes:
a usual time range obtaining step: obtaining the time range corresponding to usual times;
a usual expression obtaining step: obtaining the usual expression of the recognized person within the time range;
a usual emotion recognizing step: recognizing the usual emotion of the recognized person from the usual expression of the recognized person;
an expression-to-be-recognized obtaining step: obtaining the expression to be recognized of the recognized person;
an emotion-to-be-recognized recognizing step: recognizing the emotion to be recognized of the recognized person from the expression to be recognized of the recognized person;
an emotion-to-be-recognized correcting step: comparing the emotion to be recognized of the recognized person with the usual emotion of the recognized person, calculating the change of the emotion to be recognized relative to the usual emotion, and taking the usual emotion as the normal emotion, thereby obtaining the corrected emotion to be recognized.
Preferably, the usual emotion recognizing step specifically includes: inputting each facial picture or video of the recognized person's usual expression into a preset emotion recognition model to obtain the emotion corresponding to each facial picture or video, and taking a weighted average of the emotions corresponding to the facial pictures or videos to obtain the emotion corresponding to the recognized person's usual expression.
Preferably, the emotion-to-be-recognized recognizing step specifically includes: inputting each facial picture or video of the recognized person's expression to be recognized into the preset emotion recognition model to obtain the emotion corresponding to each facial picture or video, and taking a weighted average of the emotions corresponding to the facial pictures or videos to obtain the emotion corresponding to the recognized person's expression to be recognized.
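The weighted-average step above can be sketched as follows. This is an illustrative assumption, not the patent's specified implementation: `frame_scores` stands for the per-picture (or per-frame) outputs of the preset emotion recognition model, here assumed to be probability vectors over the seven emotions named in the background section.

```python
import numpy as np

# Hypothetical class order; the patent does not fix a representation.
EMOTIONS = ["happiness", "anger", "surprise", "disgust", "fear", "sadness", "neutrality"]

def aggregate_emotion(frame_scores, weights=None):
    """Weighted average of per-frame emotion score vectors."""
    scores = np.asarray(frame_scores, dtype=float)  # shape: (n_frames, 7)
    if weights is None:
        weights = np.ones(len(scores))              # uniform weights by default
    weights = np.asarray(weights, dtype=float)
    return weights @ scores / weights.sum()         # weighted mean per emotion
```

With uniform weights this reduces to the plain mean of the per-frame scores; non-uniform weights could, for instance, favor sharper face images.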
Preferably, the emotion-to-be-recognized correcting step specifically includes: denote the emotion to be recognized of the recognized person as Y, the usual emotion of the recognized person as X, the normal emotion of the recognized person as P, and the corrected emotion to be recognized of the recognized person as Q; f is an emotion change calculation function that takes 2 emotions as input and outputs the degree of change between the 2 emotions; then f(X, Y) = f(P, Q), and since X, Y and P are known, Q can be solved.
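The patent leaves the change function f abstract. Under the illustrative assumption that emotions are score vectors and f is the plain difference f(a, b) = b - a, the equation f(X, Y) = f(P, Q) can be solved for Q in closed form:

```python
import numpy as np

def correct_emotion(X, Y, P):
    """Solve f(X, Y) = f(P, Q) for Q, assuming f(a, b) = b - a.

    X: usual emotion of the recognized person, Y: emotion to be recognized,
    P: normal emotion, Q: corrected emotion to be recognized. The difference
    form of f is an assumption for illustration; the patent only requires f
    to output the degree of change between two emotions.
    """
    X, Y, P = (np.asarray(v, dtype=float) for v in (X, Y, P))
    return P + (Y - X)  # Y - X = Q - P  =>  Q = P + (Y - X)
```

For the athlete whose usual expression already scores high on sadness (X equals Y on the sadness axis), the correction maps the pre-match reading back to the normal baseline P, matching the first example in the disclosure.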
Preferably, the step of constructing the preset emotion recognition model includes:
a data acquisition step: acquiring a plurality of facial expressions of a plurality of persons (the more data acquired, the better), and manually labeling all or some of the facial expressions with emotion labels;
a model initialization step: initializing the emotion recognition model as a deep learning model, a convolutional neural network model, or another machine learning model;
an unsupervised training step: if the emotion recognition model supports unsupervised learning, performing unsupervised training on the emotion recognition model with each facial expression as input;
a supervised training step: performing supervised training on the emotion recognition model with each emotion-labeled facial expression as input data and its emotion label as expected output data;
a testing step: testing the trained emotion recognition model; if the test passes, using it as the preset emotion recognition model; if the test fails, acquiring and labeling more facial expressions and retraining the emotion recognition model.
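The construction steps above (initialize, train, test, retrain on failure) can be sketched as below. This is a deliberately tiny nearest-centroid classifier standing in for the deep learning or CNN model; the features, model choice, train/test split, and pass threshold are all assumptions for illustration.

```python
import numpy as np

class CentroidEmotionModel:
    """Toy stand-in for the emotion recognition model: nearest class centroid."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Distance from each sample to each class centroid; pick the nearest.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]

def build_emotion_model(expressions, labels, pass_threshold=0.8, test_frac=0.25):
    """Supervised training step plus testing step on a held-out split."""
    X = np.asarray(expressions, dtype=float)
    y = np.asarray(labels)
    n_test = max(1, int(len(X) * test_frac))
    model = CentroidEmotionModel().fit(X[n_test:], y[n_test:])   # training step
    accuracy = (model.predict(X[:n_test]) == y[:n_test]).mean()  # testing step
    # Test failed: the disclosure says to collect more labeled expressions
    # and retrain; here we just signal failure to the caller.
    return model if accuracy >= pass_threshold else None
```

The unsupervised pre-training step would slot in before `fit` for models that support it; a centroid classifier does not, so it is skipped, exactly as the conditional in the disclosure allows.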
Preferably, the step of obtaining the usual expression further includes:
a similar expression substituting step: if obtaining the usual expression of the identified person within the time range fails, obtaining the usual expression of a person belonging to the same subclass as the identified person as the identified person's usual expression; if that also fails, obtaining the usual expression of a person belonging to the same larger class as the identified person as the identified person's usual expression, and so forth, making a preset number of attempts or continuing until acquisition succeeds;
a relative expression substituting step: if obtaining the usual expression of the identified person within the time range fails, and obtaining the usual expression of a person belonging to the same class as the identified person also fails, obtaining the usual expression of a relative of the identified person as the identified person's usual expression.
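The fallback order of the two substituting steps above can be sketched as follows: try the identified person first, then increasingly large classes of similar people, then relatives ordered by closeness of kinship. The `lookup` function and candidate lists are illustrative assumptions; the patent does not fix a data source.

```python
def get_usual_expression(person, similar_classes, relatives, lookup, max_attempts=None):
    """lookup(subject) returns the subject's usual expression, or None on failure."""
    candidates = [person] + list(similar_classes) + list(relatives)
    if max_attempts is not None:            # "a preset number of attempts"
        candidates = candidates[:max_attempts]
    for subject in candidates:
        expression = lookup(subject)
        if expression is not None:
            return expression               # first successful acquisition wins
    return None                             # every fallback failed
```

For the skier example in the detailed description, `similar_classes` would be `["other skiers", "other athletes", "sports enthusiasts"]` and `relatives` would start with siblings, then parents, then children.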
Preferably, the collected facial expressions of the plurality of people are facial expressions of arbitrary people, not an expression collection made for a specific population, and the emotion labels are manually assigned by ordinary judgment of the general population, not by emotion judgment made for a specific population.
In a second aspect, an embodiment of the present invention provides an artificial intelligence system, where the system includes:
the usual time range obtaining module: obtaining the time range corresponding to usual times;
the usual expression obtaining module: obtaining the usual expression of the recognized person within the time range;
the usual emotion recognizing module: recognizing the usual emotion of the recognized person from the usual expression of the recognized person;
the expression-to-be-recognized obtaining module: obtaining the expression to be recognized of the recognized person;
the emotion-to-be-recognized recognizing module: recognizing the emotion to be recognized of the recognized person from the expression to be recognized of the recognized person;
the emotion-to-be-recognized correcting module: comparing the emotion to be recognized of the recognized person with the usual emotion of the recognized person, calculating the change of the emotion to be recognized relative to the usual emotion, and taking the usual emotion as the normal emotion, thereby obtaining the corrected emotion to be recognized.
Preferably, the usual emotion recognizing module specifically includes: inputting each facial picture or video of the recognized person's usual expression into a preset emotion recognition model to obtain the emotion corresponding to each facial picture or video, and taking a weighted average of the emotions corresponding to the facial pictures or videos to obtain the emotion corresponding to the recognized person's usual expression.
Preferably, the emotion-to-be-recognized recognizing module specifically includes: inputting each facial picture or video of the recognized person's expression to be recognized into the preset emotion recognition model to obtain the emotion corresponding to each facial picture or video, and taking a weighted average of the emotions corresponding to the facial pictures or videos to obtain the emotion corresponding to the recognized person's expression to be recognized.
Preferably, the emotion-to-be-recognized correcting module specifically includes: denote the emotion to be recognized of the recognized person as Y, the usual emotion of the recognized person as X, the normal emotion of the recognized person as P, and the corrected emotion to be recognized of the recognized person as Q; f is an emotion change calculation function that takes 2 emotions as input and outputs the degree of change between the 2 emotions; then f(X, Y) = f(P, Q), and since X, Y and P are known, Q can be solved.
Preferably, the construction module of the preset emotion recognition model includes:
the data acquisition module: acquiring a plurality of facial expressions of a plurality of persons (the more data acquired, the better), and manually labeling all or some of the facial expressions with emotion labels;
the model initialization module: initializing the emotion recognition model as a deep learning model, a convolutional neural network model, or another machine learning model;
the unsupervised training module: if the emotion recognition model supports unsupervised learning, performing unsupervised training on the emotion recognition model with each facial expression as input;
the supervised training module: performing supervised training on the emotion recognition model with each emotion-labeled facial expression as input data and its emotion label as expected output data;
the test module: testing the trained emotion recognition model; if the test passes, using it as the preset emotion recognition model; if the test fails, acquiring and labeling more facial expressions and retraining the emotion recognition model.
Preferably, the usual expression obtaining module further includes:
the similar expression substituting module: if obtaining the usual expression of the identified person within the time range fails, obtaining the usual expression of a person belonging to the same subclass as the identified person as the identified person's usual expression; if that also fails, obtaining the usual expression of a person belonging to the same larger class as the identified person, and so on, making a preset number of attempts or continuing until acquisition succeeds;
the relative expression substituting module: if obtaining the usual expression of the identified person within the time range fails, and obtaining the usual expression of a person belonging to the same class as the identified person also fails, obtaining the usual expression of a relative of the identified person as the identified person's usual expression.
Preferably, the collected facial expressions of the plurality of people are facial expressions of arbitrary people, not an expression collection made for a specific population, and the emotion labels are manually assigned by ordinary judgment of the general population, not by emotion judgment made for a specific population.
In a third aspect, an embodiment of the present invention provides an artificial intelligence apparatus, where the apparatus includes the modules of the system in any one of the embodiments of the second aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method according to any one of the embodiments of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a robot, including a memory, a processor, and an artificial intelligence robot program stored in the memory and executable on the processor, where the processor executes the program to implement the steps of the method according to any one of the embodiments of the first aspect.
The artificial intelligence mental robot and the method for recognizing expressions from person to person provided by the embodiments include: a usual time range obtaining step; a usual expression obtaining step; a usual emotion recognizing step; an expression-to-be-recognized obtaining step; an emotion-to-be-recognized recognizing step; and an emotion-to-be-recognized correcting step. The method, system and robot make full use of the relationship between the normal emotions and the usual expressions of different people to obtain the usual emotion corresponding to each person's usual expression, so that the real emotion represented by each person's expression can be known. The prior art cannot take into account that different people have different usual expressions under normal emotion. If the prior art were improved directly, a separate emotion recognition model would have to be trained for each person, so thousands of people would require thousands of emotion recognition models; the amount of data to be collected per person would be too large, and since no single person can supply enough training data, a per-person emotion recognition model cannot be obtained at all, and even if training succeeded, the workload and computation of so many models would make it impracticable. The present application only needs to train one emotion recognition model, and corrects the emotional state of the expression to be recognized according to the emotional state of the usual expression, so that the emotional state of the facial expression can be recognized more accurately.
Drawings
FIG. 1 is a flow diagram of an artificial intelligence method provided by one embodiment of the invention;
FIG. 2 is a flow diagram of an artificial intelligence method according to an embodiment of the invention;
FIG. 3 is a flow chart of an artificial intelligence method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described in detail below with reference to specific embodiments.
Basic embodiment of the invention
One embodiment of the present invention provides an artificial intelligence method, as shown in FIG. 1, the method including: a usual time range obtaining step; a usual expression obtaining step; a usual emotion recognizing step; an expression-to-be-recognized obtaining step; an emotion-to-be-recognized recognizing step; and an emotion-to-be-recognized correcting step. The technical effects are as follows: the method recognizes the expression to be recognized with the emotion of the usual expression as a baseline and reference, so that the same expression on different people is no longer always recognized as the same emotion; instead, recognition is targeted to each person's characteristics. This improves the accuracy of emotion recognition, reduces misjudgment, allows psychological problems to be found as early as possible, and avoids people without psychological problems being misjudged as having them, which is of great value and significance for psychological diagnosis.
In a preferred embodiment, as shown in FIG. 2, the step of constructing the preset emotion recognition model includes: a data acquisition step; a model initialization step; an unsupervised training step; a supervised training step; and a testing step. The technical effects are as follows: the method does not need to establish an emotion recognition model for each person, which would be far too costly, since a per-person model requires collecting a large amount of data from each person. This is often impossible, because such data are hard for any single person to provide, and data collection from special populations such as athletes is even harder to get approved. Only one general emotion recognition model needs to be established, and through emotion correction, person-specific emotion recognition and correction can still be performed for each person, yielding a more accurate emotion recognition result.
In a preferred embodiment, as shown in FIG. 3, the step of obtaining the usual expression further includes: a similar expression substituting step; and a relative expression substituting step. The technical effects are as follows: similar expression substitution and relative expression substitution avoid the situation where the usual expression cannot be acquired, so the application range is wider; and because the expressions of similar people and relatives resemble the identified person's expression, the technical solution of the application still works well.
PREFERRED EMBODIMENTS OF THE PRESENT INVENTION
A usual time range obtaining step: obtaining the time range corresponding to usual times. For example, if the identified person is an athlete, the usual time range excludes the periods before, during and after a match. If the identified person is a student, the usual time range excludes the periods before, during and after an examination. If the identified person is a patient, the usual time range excludes the periods before, during and after an operation. If the identified person is a pregnant woman, the usual time range excludes the periods before, during and after delivery. If the identified person is a soldier, the usual time range excludes the periods before, during and after a battle.
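A minimal sketch of the usual time range idea: "usual times" exclude the stress periods listed above (before/during/after a match, exam, operation, delivery, or battle). The concrete dates and the margin of days around each event are assumptions for illustration.

```python
from datetime import date, timedelta

def in_usual_range(day, events, margin_days=7):
    """True if `day` lies outside every event period, including a margin of
    `margin_days` before and after each (start, end) event, so that the
    "before" and "after" periods around the event are excluded too."""
    pad = timedelta(days=margin_days)
    return all(not (start - pad <= day <= end + pad) for start, end in events)

match = (date(2021, 2, 10), date(2021, 2, 12))  # hypothetical competition dates
```

Expressions captured on days where `in_usual_range` is true would then feed the usual expression obtaining step.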
A usual expression obtaining step: obtaining the usual expression of the identified person within the time range. Specifically, one or more facial pictures or videos of the identified person's usual expression are obtained. The usual expression obtaining step further includes: a similar expression substituting step: if obtaining the usual expression of the identified person within the time range fails, the usual expression of a person belonging to the same subclass as the identified person is obtained as the identified person's usual expression; if that also fails, the usual expression of a person belonging to the same larger class as the identified person is obtained, and so forth, making a preset number of attempts or continuing until acquisition succeeds. For example, if the identified person is a skier, other skiers belong to the same subclass as the identified person, other athletes belong to the same class, other sports enthusiasts belong to the same larger class, and other people belong to the same largest class. A relative expression substituting step: if obtaining the usual expression of the identified person within the time range fails, and obtaining the usual expression of a person in the same class as the identified person also fails, the usual expression of a relative of the identified person is obtained as the identified person's usual expression; among relatives, siblings are preferred first, then parents, then children, then blood relatives from near to far.
A usual emotion recognizing step: recognizing the usual emotion of the identified person from the identified person's usual expression. Specifically, each facial picture or video of the identified person's usual expression is input into the emotion recognition model to obtain the emotion corresponding to each facial picture or video, and the emotions corresponding to the facial pictures or videos are weighted and averaged to obtain the emotion corresponding to the identified person's usual expression. The step further includes training the emotion recognition model: a data acquisition step: acquiring a plurality of facial expressions of a plurality of persons (the more data acquired, the better), and manually labeling all or some of the facial expressions with emotion labels; a model initialization step: initializing the emotion recognition model as a deep learning model, a convolutional neural network model, or another machine learning model; an unsupervised training step: if the emotion recognition model supports unsupervised learning, performing unsupervised training on the emotion recognition model with each facial expression as input; a supervised training step: performing supervised training on the emotion recognition model with each emotion-labeled facial expression as input data and its emotion label as expected output data; a testing step: testing the trained emotion recognition model; if the test passes, using it as the preset emotion recognition model; if the test fails, acquiring and labeling more facial expressions and retraining the emotion recognition model.
The collected facial expressions of the multiple persons are facial expressions of arbitrary people, not an expression collection made for a specific population; the emotion labels are manually assigned by ordinary judgment of the general population, not by emotion judgment made for a specific population.
An expression-to-be-recognized obtaining step: obtaining the expression to be recognized of the recognized person. Specifically, one or more facial pictures or videos of the recognized person's expression to be recognized are obtained.
An emotion-to-be-recognized recognizing step: recognizing the emotion to be recognized of the recognized person from the recognized person's expression to be recognized. Specifically, each facial picture or video of the recognized person's expression to be recognized is input into the emotion recognition model to obtain the emotion corresponding to each facial picture or video, and the emotions corresponding to the facial pictures or videos are weighted and averaged to obtain the emotion corresponding to the recognized person's expression to be recognized.
An emotion-to-be-recognized correcting step: comparing the emotion to be recognized of the recognized person with the usual emotion of the recognized person, calculating the change of the emotion to be recognized relative to the usual emotion, and taking the usual emotion as the normal emotion, thereby obtaining the corrected emotion to be recognized. Specifically, denote the emotion to be recognized of the recognized person as Y, the usual emotion of the recognized person as X, the normal emotion of the recognized person as P, and the corrected emotion to be recognized as Q; f is an emotion change calculation function that takes 2 emotions as input and outputs the degree of change between them; then f(X, Y) = f(P, Q), and since X, Y and P are known, Q can be solved.
The above embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present invention. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the present invention, and these changes and modifications fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. An artificial intelligence method, the method comprising:
a usual time range obtaining step: obtaining the time range corresponding to usual times;
a usual expression obtaining step: obtaining the usual expression of the recognized person within the time range;
a usual emotion recognizing step: recognizing the usual emotion of the recognized person from the usual expression of the recognized person;
an expression-to-be-recognized obtaining step: obtaining the expression to be recognized of the recognized person;
an emotion-to-be-recognized recognizing step: recognizing the emotion to be recognized of the recognized person from the expression to be recognized of the recognized person;
an emotion-to-be-recognized correcting step: comparing the emotion to be recognized of the recognized person with the usual emotion of the recognized person, calculating the change of the emotion to be recognized relative to the usual emotion, and taking the usual emotion as the normal emotion, thereby obtaining the corrected emotion to be recognized;
the step of recognizing the usual emotion specifically comprises the following steps: inputting each facial picture or video of the recognized person in the normal expression into a preset emotion recognition model to obtain the emotion corresponding to each facial picture or video, and carrying out weighted average on the emotion corresponding to each facial picture or video to obtain the emotion corresponding to the recognized person in the normal expression;
the emotion recognition step to be recognized specifically includes: inputting each facial picture or video of the expression to be recognized of the recognized person into a preset emotion recognition model to obtain the emotion corresponding to each facial picture or video, and carrying out weighted average on the emotion corresponding to each facial picture or video to obtain the emotion corresponding to the expression to be recognized of the recognized person;
the emotion correction step to be recognized specifically includes: the emotion to be recognized of the recognized person is recorded as Y, the normal emotion of the recognized person is recorded as X, the normal emotion of the recognized person is recorded as P, the emotion to be recognized after the recognized person is corrected is recorded as Q, f is an emotion change calculation function, 2 emotions are input into f, the f outputs the change degree of the 2 emotions, and then f (X, Y) = f (P, Q) is solved through the known X, Y and P;
the construction steps of the preset emotion recognition model comprise:
the method comprises the steps of data acquisition, wherein a plurality of facial expressions of a plurality of persons are acquired, and all or part of the facial expressions are labeled manually;
model initialization: initializing the emotion recognition model into a deep learning model or other machine learning models;
unsupervised training: if the emotion recognition model supports unsupervised learning, each facial expression is used as input, and unsupervised training is carried out on the emotion recognition model;
a step of supervised training: each facial expression with the emotion label is used as input data, the emotion label of the facial expression is used as expected output data, and supervised training is carried out on an emotion recognition model;
the testing steps are as follows: and testing the trained emotion recognition model, if the test is passed, using the emotion recognition model as the preset emotion recognition model, and if the test is not passed, acquiring more facial expressions, labeling and then re-training the emotion recognition model.
2. The artificial intelligence method of claim 1, wherein the usual expression obtaining step further comprises:
a similar-person expression substituting step: if the usual expression of the recognized person cannot be obtained within the time range, obtaining the usual expression of a person belonging to the same subclass as the recognized person as the usual expression of the recognized person; if that also fails, obtaining the usual expression of a person belonging to the same larger class as the recognized person as the usual expression of the recognized person, and so forth, making a preset number of attempts or continuing until obtaining succeeds;
a relative expression substituting step: if the usual expression of the recognized person cannot be obtained within the time range and the usual expression of a person belonging to the same class as the recognized person also cannot be obtained, obtaining the usual expression of a relative of the recognized person as the usual expression of the recognized person.
3. The artificial intelligence method of claim 1, wherein the collected facial expressions of the plurality of persons are facial expressions of arbitrary persons rather than expressions collected for a specific group of persons, and the manually labeled emotional states are emotional states obtained by general judgment over the general population rather than emotional states judged for a specific group of persons.
4. An artificial intelligence device, wherein the device is configured to implement the steps of the method of any of claims 1-3.
5. A robot comprising a memory, a processor and an artificial intelligence robot program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 3 are carried out when the program is executed by the processor.
6. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN202110182806.7A 2021-02-10 2021-02-10 Artificial intelligence mental robot and method for recognizing expressions from person to person Active CN112906555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110182806.7A CN112906555B (en) 2021-02-10 2021-02-10 Artificial intelligence mental robot and method for recognizing expressions from person to person


Publications (2)

Publication Number Publication Date
CN112906555A CN112906555A (en) 2021-06-04
CN112906555B true CN112906555B (en) 2022-08-05

Family

ID=76123360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110182806.7A Active CN112906555B (en) 2021-02-10 2021-02-10 Artificial intelligence mental robot and method for recognizing expressions from person to person

Country Status (1)

Country Link
CN (1) CN112906555B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650621A (en) * 2016-11-18 2017-05-10 Guangdong Polytechnic Normal University Deep learning-based emotion recognition method and system
CN106919903A (en) * 2017-01-19 2017-07-04 Institute of Software, Chinese Academy of Sciences Robust deep-learning-based continuous emotion tracking method
CN107463874A (en) * 2017-07-03 2017-12-12 South China Normal University Emotion recognition method and system, and intelligent monitoring system applying the method
KR20190056792A (en) * 2017-11-17 2019-05-27 Korea Institute of Industrial Technology System and method for deep-learning-based face detection and emotion recognition
CN110134316A (en) * 2019-04-17 2019-08-16 Huawei Technologies Co., Ltd. Model training method, emotion recognition method, and related apparatus and device
CN111353366A (en) * 2019-08-19 2020-06-30 Shenzhen Honghe Innovation Information Technology Co., Ltd. Emotion detection method and device, and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9231989B2 (en) * 2012-02-06 2016-01-05 Milligrace Productions, LLC Experience and emotion online community system and method
KR101531664B1 (en) * 2013-09-27 2015-06-25 Korea University Research and Business Foundation Emotion recognition ability test system using multi-sensory information, emotion recognition training system using multi-sensory information
WO2019216504A1 (en) * 2018-05-09 2019-11-14 Korea Advanced Institute of Science and Technology (KAIST) Method and system for human emotion estimation using deep physiological affect network for human emotion recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Guizhi, et al. Research on real-time interaction of an emotion recognition robot based on depthwise separable convolution. Chinese Journal of Scientific Instrument, 2019, Vol. 40, No. 10, pp. 164-171. *
Huang Yongrui, et al. Emotion recognition technology combining facial images and EEG. Computer Systems & Applications, 2018, Vol. 27, No. 2, pp. 9-15. *


Similar Documents

Publication Publication Date Title
CN108256433B (en) Motion attitude assessment method and system
Kalantarian et al. A mobile game for automatic emotion-labeling of images
CN106295313B (en) Object identity management method and device and electronic equipment
CN110704732A (en) Cognitive diagnosis-based time-sequence problem recommendation method
CN110464367B (en) Psychological anomaly detection method and system based on multi-channel cooperation
CN113837153B (en) Real-time emotion recognition method and system integrating pupil data and facial expressions
CN111080624B (en) Sperm movement state classification method, device, medium and electronic equipment
CN112232276B (en) Emotion detection method and device based on voice recognition and image recognition
CN111401105A (en) Video expression recognition method, device and equipment
CN112906555B (en) Artificial intelligence mental robot and method for recognizing expressions from person to person
CN111951950B (en) Three-dimensional data medical classification system based on deep learning
CN108197593B (en) Multi-size facial expression recognition method and device based on three-point positioning method
US20230034709A1 (en) 2023-02-02 Method and apparatus for analyzing experienced difficulty
Gervasi et al. A method for predicting words by interpreting labial movements
Melgare et al. Investigating emotion style in human faces and avatars
KR102548970B1 (en) Method, system and non-transitory computer-readable recording medium for generating a data set on facial expressions
Cacciatori et al. On Developing Facial Stress Analysis and Expression Recognition Platform
CN112927681B (en) Artificial intelligence psychological robot and method for recognizing speech according to person
CN115810099B (en) Image fusion device for virtual immersion type depression treatment system
CN117671774B (en) Face emotion intelligent recognition analysis equipment
CN112613436B (en) Examination cheating detection method and device
WO2023102880A1 (en) Method and system for processing tracheal intubation images and method for evaluating tracheal intubation effectiveness
Hupont et al. From a discrete perspective of emotions to continuous, dynamic, and multimodal affect sensing
Sam et al. Doodle Detection to Spot Level of Autism
CN117179767A (en) Method and equipment for assisting in assessing depression based on facial frequency domain features of patient

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant