CN110399836A - User emotion recognition method, device, and computer-readable storage medium - Google Patents
- Publication number
- CN110399836A CN110399836A CN201910679779.7A CN201910679779A CN110399836A CN 110399836 A CN110399836 A CN 110399836A CN 201910679779 A CN201910679779 A CN 201910679779A CN 110399836 A CN110399836 A CN 110399836A
- Authority
- CN
- China
- Prior art keywords
- user
- emotional state
- recognition methods
- facial image
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a user emotion recognition method comprising the following steps: obtaining a facial image of a user; determining the location information of multiple preset key points according to the facial image; and determining the emotional state of the user according to the location information. The invention also discloses an emotion recognition device and a computer-readable storage medium. The invention achieves accurate recognition of the user's emotion from the facial image, improving the accuracy of emotion recognition.
Description
Technical field
The present invention relates to the field of robotics and the Internet of Things, and more particularly to a user emotion recognition method, device, and computer-readable storage medium.
Background technique
Emotion recognition is necessary in many scenarios. Current robots can provide simple companionship for people of different age groups, for example home-doctor robots that care for the elderly and cooperative robots for teenagers; such robots need to understand the emotional changes of the person they accompany. Current emotion recognition mainly matches the whole face against a standard facial image to perform emotion recognition, so the accuracy of recognition is low.
The above content is provided only to facilitate understanding of the technical scheme of the present invention and does not constitute an admission that the above content is prior art.
Summary of the invention
The main purpose of the present invention is to provide a user emotion recognition method, device, and computer-readable storage medium, with the intent of solving the technical problem of improving the accuracy of emotion recognition.
To achieve the above object, the present invention provides a user emotion recognition method comprising the following steps:
Obtain the facial image of a user;
Determine the location information of multiple preset key points according to the facial image;
Determine the emotional state of the user according to the location information.
Optionally, the step of determining the location information of multiple preset key points according to the facial image includes:
Establish a face three-dimensional coordinate model according to the facial image;
Determine the corresponding coordinate values of the multiple preset key points according to the face three-dimensional coordinate model.
Optionally, the step of determining the emotional state of the user according to the location information includes:
Calculate the distance information between every two of the key points according to the corresponding coordinate values of the multiple preset key points;
Determine the emotional state of the user according to the distance information.
Optionally, the step of determining the emotional state of the user according to the distance information includes:
Compare the distance information with the corresponding preset range information;
Determine the emotional state of the user according to the comparison result.
Optionally, the step of determining the emotional state of the user according to the comparison result includes:
When the comparison result is that the distance value corresponding to the distance information is in the first preset range, determine that the emotional state of the user is a negative emotion;
When the comparison result is that the distance value corresponding to the distance information is in the second preset range, determine that the emotional state of the user is a positive emotion;
When the comparison result is that the distance value corresponding to the distance information is in the third preset range, determine that the emotional state of the user is a neutral emotion.
Optionally, the step of determining the emotional state of the user according to the location information includes:
Obtain the location information of the preset key points corresponding to each region according to facial image regions divided in advance;
Calculate the local emotional state corresponding to each region according to the acquired location information;
Determine the emotional state of the user according to each local emotional state.
Optionally, the step of determining the emotional state of the user according to each local emotional state includes:
Judge whether a negative emotion is included in the local emotional states;
If so, take the negative emotion as the emotional state of the user;
If not, calculate the emotional state of the user according to the weight of each local emotional state.
Optionally, the preset key points include the eye corners, eyebrows, and mouth corners.
In addition, to achieve the above object, the present invention also provides a user emotion recognition device. The user emotion recognition device includes a memory, a processor, and a user emotion recognition program stored in the memory and runnable on the processor; when the processor executes the user emotion recognition program, the steps of the above user emotion recognition method are realized.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium. A user emotion recognition program is stored on the computer-readable storage medium; when the user emotion recognition program is executed by a processor, the steps of the above user emotion recognition method are realized.
In the user emotion recognition method proposed by the embodiment of the present invention, a facial image is obtained and the location information of multiple preset key points in the facial image is determined. When the user is in different emotional states, the location information of the preset key points differs; the emotional state of the user is recognized according to the changes in the location information of these preset key points, thereby improving the accuracy of emotion recognition.
Detailed description of the invention
Fig. 1 is a flow diagram of an embodiment of the user emotion recognition method of the present invention;
Fig. 2 is a flow diagram of determining the location information of the key points in an embodiment of the invention;
Fig. 3 is a flow diagram of determining the user's emotional state according to the location information in an embodiment of the invention;
Fig. 4 is a flow diagram of determining the user's emotional state according to the distance information in Fig. 3;
Fig. 5 is a flow diagram of determining the user's emotional state according to the comparison result in Fig. 4;
Fig. 6 is a flow diagram of another embodiment of the user emotion recognition method of the present invention;
Fig. 7 is a flow diagram of determining the user's emotional state according to each local emotional state in Fig. 6;
Fig. 8 is a schematic structural diagram of a terminal in the hardware running environment involved in the embodiment of the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in combination with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The primary solution of the embodiment of the present invention is: obtain a facial image of the user; determine the location information of key points according to the facial image; and determine the emotional state of the user according to the location information of the key points. The prior art performs emotion recognition by matching the whole face against a standard facial image, but human micro-expressions cannot be accurately recognized merely by comparison with a standard facial image.
As shown in Fig. 8, Fig. 8 is a schematic structural diagram of an electronic device in the hardware running environment involved in the embodiment of the present invention. The electronic device may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 realizes connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory, or it may be a stable non-volatile memory, such as a magnetic disk memory. The memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the terminal structure shown in Fig. 8 does not constitute a limitation on the electronic device; it may include more or fewer components than illustrated, combine certain components, or have a different component layout. As shown in Fig. 8, the memory 1005, as a computer-readable storage medium, may include an image acquisition module, a key point information acquisition module, a recognition module, and a user emotion recognition application.
Referring to Fig. 1, an embodiment of the present invention provides a user emotion recognition method comprising the steps of:
S10: obtain the facial image of the user;
In the present embodiment, the facial image of the user is acquired with a client, which may be a nursing robot; the camera of the nursing robot acquires the facial image. The client can pre-process the facial image, for example with grayscale conversion, normalization, and gamma correction, so that facial features are more prominent and interference from external light sources on the facial image is weakened. Furthermore, the acquired image may be denoised to reduce interference from external signals. Face detection technology can be used to detect the face in the image and crop it, obtaining only the part of the image containing the face, which reduces the pressure of data transmission and the pressure on the processor to handle the data. It is understood that video of the user within a preset time may be acquired, facial images of the user extracted from the video at a preset temporal frequency, and these facial images analyzed.
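The pre-processing described above (grayscale conversion followed by gamma correction) can be sketched as follows. This is a minimal illustration only: the pixel values, the gamma value, and the helper names are assumptions for the example, not details from the patent.

```python
# Minimal sketch of the pre-processing step: grayscale conversion followed
# by gamma correction, so facial features stand out and lighting bias is
# weakened. Pixel values and gamma are illustrative assumptions.

def to_grayscale(rgb_pixels):
    """Convert (R, G, B) triples to grayscale using the common BT.601 weights."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def gamma_correct(gray_pixels, gamma=0.8):
    """Apply gamma correction to 0-255 grayscale values (gamma < 1 brightens)."""
    return [255.0 * (p / 255.0) ** gamma for p in gray_pixels]

pixels = [(120, 60, 30), (200, 200, 200), (10, 10, 10)]
gray = to_grayscale(pixels)
corrected = gamma_correct(gray)
```

In practice this would run on the full cropped face image rather than a short pixel list, typically via an image library.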
S20: determine the location information of multiple preset key points according to the facial image;
When a person's emotion changes, the facial expression also changes as the facial muscles change. For example, when a person smiles, the mouth corners turn up, the eyes narrow, and the cheek muscles also change; when angry, the person frowns and the mouth corners change as well. Of course, in addition to these clearly visible changes there are also small changes, and the key points and their positions are determined according to these changes in the facial image. Specifically, the preset key points may be the forehead, eyebrows, eyes, nose, cheeks, ears, and so on. The position changes of the eye corners, mouth corners, and eyebrows in particular are closely related to a person's emotional changes, so key points at these locations are especially useful.
S30: determine the emotional state of the user according to the location information.
The key points in the facial image are determined, and the user's emotion is analyzed according to the changes in the positions of those key points. The key points in the facial image are compared with an emotional state judgment model. The emotional state judgment model is set in advance, and the key points in the emotional state judgment model are also preset. The emotional state judgment model can perform deep learning through an RNN (recurrent neural network) to adjust itself according to the user's emotional changes, generating an emotional state judgment model tailored to the emotional changes of that user and thereby improving the accuracy of emotion recognition.
In the present embodiment, a facial image of the user is acquired and the location information of multiple preset key points in the facial image is determined. When the user is in different emotional states, the location information of the preset key points differs; recognizing the emotional state of the user according to the changes in these pieces of location information avoids the one-sidedness of identifying the user's emotion from a single still image or from the facial contour alone, thereby improving the accuracy of emotion recognition.
Referring to Fig. 2, the step of determining the location information of multiple preset key points according to the facial image includes:
S21: establish a face three-dimensional coordinate model according to the facial image;
S22: determine the corresponding coordinate values of the multiple preset key points according to the face three-dimensional coordinate model.
In one embodiment, face detection technology can determine the transverse width, longitudinal height, and depth (along the height direction of the nose) of the facial image, from which the three-dimensional (length, width, height) range of the facial image can be determined and a three-dimensional coordinate model established. Each key point of each organ then corresponds to a different location in the three-dimensional coordinate model, from which the coordinate values of the key points are determined. For example, with the center of the face as the origin, the transverse direction as the x-axis, the longitudinal direction as the y-axis, and the depth direction as the z-axis, a three-dimensional coordinate model is established. When a person is angry or happy, not only do the abscissas of the two eyebrows change but the ordinates change as well, and the z-axis coordinates of the cheeks relative to the nose also change; the position coordinates of the key points are determined according to the three-dimensional coordinate model. For organs that occur in pairs, such as the eyes, ears, cheeks, and eyebrows, one key point can be chosen in each of the pair, for example one key point in each eye. For organs that do not occur in pairs, such as the nose and mouth, two key points can be chosen on the organ, for example one at each mouth corner. Using the same three-dimensional coordinate model makes it convenient to determine the positions of the key points and to compare them.
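The face three-dimensional coordinate model above can be sketched as a mapping from key point names to (x, y, z) coordinates with the face center as origin. The key point names and coordinate values below are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the face 3-D coordinate model: face center is the
# origin, x transverse, y longitudinal, z depth. Names and coordinates
# are illustrative assumptions.
import math

face_keypoints = {
    "left_eyebrow":       (-3.0,  2.5, 0.5),
    "right_eyebrow":      ( 3.0,  2.5, 0.5),
    "left_mouth_corner":  (-2.0, -3.0, 0.3),
    "right_mouth_corner": ( 2.0, -3.0, 0.3),
}

def distance(p, q):
    """Euclidean distance between two key points in the 3-D model."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

brow_gap = distance(face_keypoints["left_eyebrow"],
                    face_keypoints["right_eyebrow"])
```

Because all key points share one coordinate model, any two of them can be compared directly, which is the convenience the paragraph above refers to.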
Referring to Fig. 3, the step of determining the emotional state of the user according to the location information includes:
S31: calculate the distance information between every two of the key points according to the corresponding coordinate values of the multiple preset key points;
S32: determine the emotional state of the user according to the distance information.
In one embodiment, the distance information of the key points is calculated according to the three-dimensional coordinate model. It should be noted that multiple key points may be chosen on each organ, configured according to actual needs, but when calculating a distance, only two of the key points are selected at a time. For example, when a person is angry or happy, the distance between the two eyebrows changes; a key point is selected on each of the two eyebrows, and the distance between the eyebrows is calculated from these two key points. Of course, key points can also be chosen at different locations on the same eyebrow, for example one key point at the brow corner and one at the brow peak of the same eyebrow, and the distance between these two key points calculated.
Referring to Fig. 4, the step of determining the emotional state of the user according to the distance information includes:
S321: compare the distance information with the corresponding preset range information;
S322: determine the emotional state of the user according to the comparison result.
When a person is in different emotional states, the distances between different key points differ, and correspondingly the emotional state obtained also differs; distance information and emotional state correspond one to one. According to this feature, the distance information is compared with the preset range information in the emotional state judgment model to obtain the user's emotion, which is simple and intuitive. Specifically, when the emotion recognition method is used for the nursing of mental patients, whether the user's emotion is in an abnormal state can be determined according to the comparison result, and whether human intervention or other immediate measures are needed.
The emotional state judgment model is generated as follows: the distance information of the key points of facial images whose sampled emotional states are positive, negative, and neutral is obtained in advance; the sampled distance information is input into a convolutional neural network and a deep neural network (DNN) for training, and the emotional state judgment model is obtained. That is, the emotional state judgment model is trained from the distance information of the key points. As the time spent with the user grows, the model can understand the user better and correct itself, thereby accurately identifying the emotional state of the user.
Specifically, referring to Fig. 5, the step of determining the emotional state of the user according to the comparison result includes:
S321: when the comparison result is that the distance value corresponding to the distance information is in the first preset range, determine that the emotional state of the user is a negative emotion;
S322: when the comparison result is that the distance value corresponding to the distance information is in the second preset range, determine that the emotional state of the user is a positive emotion;
S323: when the comparison result is that the distance value corresponding to the distance information is in the third preset range, determine that the emotional state of the user is a neutral emotion.
Emotions are usually broadly divided into three classes: positive emotions, negative emotions, and neutral emotions. For example, positive emotions include happiness, excitement, and appreciation; negative emotions include indignation, anger, hatred, anxiety, and worry; a neutral emotion shows an expressionless face. When a person is in different emotional states, the distance information between key points also differs, so the emotional state of the user is determined according to the difference in distance values. Multiple pieces of distance information of key points under different emotional states are gathered in advance, and the distance values are divided into three preset ranges: the first preset range, the second preset range, and the third preset range; a distance value falling in a different range corresponds to a different emotional state. For example, when happy, the eyes tend to narrow and the distance between the two eyebrows becomes smaller; when angry, the distance between the eyebrows becomes larger; when there is no mood swing, the distance between the eyebrows is in the normal range. In addition, after determining which of these three states the user's emotion belongs to, a further subdivision can be made to judge which specific emotion within the class it is; for example, after judging from the distance value that the user's emotion is negative, the distance value can be used to further determine whether the user is angry, anxious, and so on.
Referring to Fig. 6, the step of determining the emotional state of the user according to the location information includes:
S301: obtain the location information of the preset key points corresponding to each region according to facial image regions divided in advance;
S302: calculate the local emotional state corresponding to each region according to the acquired location information;
S303: determine the emotional state of the user according to each local emotional state.
When a user shows some emotions, some regions of the face may display a masking phenomenon. For example, when a person is angry, the mouth and eyebrows may appear to smile while the eyes show indignation; if the whole face is compared with a standard face model at this time, the problem of taking a part for the whole appears, causing emotion recognition errors. To reduce this error, the facial image is divided into regions, each region containing different key points; the distance information of the key points in each region is compared with the emotional state judgment model separately, yielding the local emotional state corresponding to each region, and the final emotional state of the user is then obtained from the multiple local emotional states. For example, the facial image is divided into upper, middle, and lower regions: the upper region is above the eyes, including the eyes, eyebrows, and forehead; the middle region is between the eyes and mouth, including the cheeks, ears, and nose; the lower region is below the nose, including the mouth and lower jaw. Of course, region division is not limited to this manner. The number of key points chosen in each region can differ; for example, a person's small expression changes all bring changes of the eyes and eyebrows, so more key points can be arranged in the upper region, improving the accuracy of emotion recognition.
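The upper/middle/lower division described above can be sketched as grouping key points by region. The region membership lists and key point names are illustrative assumptions.

```python
# Minimal sketch of the region division: key points are grouped into the
# pre-divided upper / middle / lower facial regions. Membership lists and
# names are illustrative assumptions.

REGIONS = {
    "upper":  ["left_eye", "right_eye", "left_brow", "right_brow", "forehead"],
    "middle": ["left_cheek", "right_cheek", "nose", "left_ear", "right_ear"],
    "lower":  ["left_mouth_corner", "right_mouth_corner", "jaw"],
}

def keypoints_by_region(keypoint_positions):
    """Group a {name: (x, y, z)} dict into the pre-divided facial regions."""
    return {
        region: {name: keypoint_positions[name]
                 for name in names if name in keypoint_positions}
        for region, names in REGIONS.items()
    }

sample = {"left_brow": (-3.0, 2.5, 0.5), "jaw": (0.0, -5.0, 0.0)}
grouped = keypoints_by_region(sample)
```

Each region's grouped key points would then feed the per-region distance comparison of step S302.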
Referring to Fig. 7, the step of determining the emotional state of the user according to each local emotional state includes:
S3031: judge whether a negative emotion is included in the local emotional states;
S3032: if so, take the negative emotion as the emotional state of the user;
S3033: if not, calculate the emotional state of the user according to the weight of each local emotional state.
After the distance information of the key points in each region is compared with the emotional state judgment model, the emotional state of each region is obtained, and whether any region shows a negative emotion is judged. As long as any region contains a negative emotion, the current emotional state of the user is negative. This is because people are in a positive or neutral emotional state in most cases, and when a person is in a positive or neutral state the face does not usually show negative signs; therefore, when the local emotional states include a negative emotion, it shows that the user needs particular care at that moment. The robot can judge the user's next action according to the negative emotion, communicate to comfort the user, or transfer the result to the user's family members to remind them to pay more attention to the user's situation in the near future. When no region contains a negative emotion, a value is calculated according to the proportion of each region; this value is compared with the pre-stored emotional state judgment model to obtain the final emotional state of the user.
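The Fig. 7 decision above can be sketched as a negative-override followed by a weighted combination. The region weights and the numeric scores assigned to each state are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the Fig. 7 decision: any negative local emotion decides
# the overall state; otherwise local states are combined by region weight.
# Weights and state scores are illustrative assumptions.

REGION_WEIGHTS = {"upper": 0.5, "middle": 0.2, "lower": 0.3}
STATE_SCORES = {"positive": 1.0, "neutral": 0.0}

def overall_emotion(local_states):
    """local_states: {region: 'positive' | 'neutral' | 'negative'}."""
    if "negative" in local_states.values():
        return "negative"  # any negative region overrides the other regions
    score = sum(REGION_WEIGHTS[r] * STATE_SCORES[s]
                for r, s in local_states.items())
    return "positive" if score >= 0.5 else "neutral"

state = overall_emotion({"upper": "positive",
                         "middle": "neutral",
                         "lower": "negative"})
```

Giving the upper region the largest weight mirrors the earlier observation that the eyes and eyebrows carry the most emotional information.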
From the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general hardware platform; naturally, they can also be realized by hardware, but in many cases the former is the better embodiment. Based on this understanding, the part of the technical solution of the present invention that contributes to the prior art can essentially be embodied in the form of a software product, which is stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc) and includes instructions that cause a terminal device (which may be a server, robot, network device, etc.) to execute the methods described in the embodiments of the present invention.
The present invention also provides a user emotion recognition device. The user emotion recognition device includes a memory, a processor, and a user emotion recognition program stored in the memory and runnable on the processor; when the processor executes the user emotion recognition program, the steps of the above user emotion recognition method are realized. The specific embodiment of the user emotion recognition device of the present invention is essentially the same as the embodiments of the above user emotion recognition method and will not be repeated.
When identifying the user's emotion, the pre-processing of the facial image and the acquisition of the location information of the key points are performed locally; the location information of the key points is then uploaded to a cloud database over the network, where it is compared with the emotional state judgment model, and the result is fed back to the terminal, which can be a robot or an application in a mobile phone. Processing the facial image locally to obtain the key points helps ease the pressure of data transmission and the processing pressure on the cloud database.
The present invention also provides a computer-readable storage medium. A user emotion recognition program is stored on the computer-readable storage medium; when the user emotion recognition program is executed by a processor, the steps of the above user emotion recognition method are realized. The specific embodiment of the computer-readable storage medium of the present invention is essentially the same as the embodiments of the above user emotion recognition method and will not be repeated.
The above are only preferred embodiments of the present invention and do not limit the scope of the invention. All equivalent structures or equivalent process transformations made using the contents of the specification and accompanying drawings of the present invention, applied directly or indirectly in other relevant technical fields, are included within the scope of the present invention.
Claims (10)
1. A user emotion recognition method, characterized by comprising the following steps:
Obtain the facial image of a user;
Determine the location information of multiple preset key points according to the facial image;
Determine the emotional state of the user according to the location information.
2. The user emotion recognition method according to claim 1, characterized in that the step of determining the location information of multiple preset key points according to the facial image includes:
Establish a face three-dimensional coordinate model according to the facial image;
Determine the corresponding coordinate values of the multiple preset key points according to the face three-dimensional coordinate model.
3. The user emotion recognition method according to claim 2, characterized in that the step of determining the emotional state of the user according to the location information includes:
Calculate the distance information between every two of the key points according to the corresponding coordinate values of the multiple preset key points;
Determine the emotional state of the user according to the distance information.
4. The user emotion recognition method according to claim 3, characterized in that the step of determining the emotional state of the user according to the distance information includes:
Compare the distance information with the corresponding preset range information;
Determine the emotional state of the user according to the comparison result.
5. The user emotion recognition method according to claim 4, characterized in that the step of determining the emotional state of the user according to the comparison result includes:
When the comparison result is that the distance value corresponding to the distance information is in the first preset range, determine that the emotional state of the user is a negative emotion;
When the comparison result is that the distance value corresponding to the distance information is in the second preset range, determine that the emotional state of the user is a positive emotion;
When the comparison result is that the distance value corresponding to the distance information is in the third preset range, determine that the emotional state of the user is a neutral emotion.
6. The user emotion recognition method according to claim 1, characterized in that the step of determining the emotional state of the user according to the location information includes:
Obtain the location information of the preset key points corresponding to each region according to facial image regions divided in advance;
Calculate the local emotional state corresponding to each region according to the acquired location information;
Determine the emotional state of the user according to each local emotional state.
7. The user emotion recognition method according to claim 6, wherein the step of determining the emotional state of the user according to each local emotional state comprises:
judging whether the local emotional states include a negative emotion;
if so, taking the negative emotion as the emotional state of the user; and
if not, calculating the emotional state of the user according to the weight of each local emotional state.
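The fusion rule of claim 7 can be sketched as follows: any negative local state dominates; otherwise the local states are combined by region weight. The region names, weights, and the tie-breaking choice (largest total weight wins) are assumptions for illustration, not details fixed by the patent.

```python
def fuse_local_states(local_states, weights):
    """Combine per-region emotional states into one overall state.

    local_states: dict mapping region name -> local emotional state
    weights:      dict mapping region name -> weight (default 1.0)
    returns:      overall emotional state string
    """
    # Claim 7: if any region reads as negative, that wins outright.
    if "negative" in local_states.values():
        return "negative"
    # Otherwise accumulate the weight behind each remaining state
    # and return the state with the largest total.
    totals = {}
    for region, state in local_states.items():
        totals[state] = totals.get(state, 0.0) + weights.get(region, 1.0)
    return max(totals, key=totals.get)

# Hypothetical pre-divided regions and their weights
states = {"eyes": "neutral", "mouth": "positive", "brows": "positive"}
w = {"eyes": 0.3, "mouth": 0.5, "brows": 0.2}
overall = fuse_local_states(states, w)
```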
8. The user emotion recognition method according to any one of claims 2 to 7, wherein the preset key points include eye corners, eyebrows, and mouth corners.
9. A user emotion recognition device, comprising a memory, a processor, and a user emotion recognition program stored in the memory and executable on the processor, wherein when the processor executes the user emotion recognition program, the steps of the user emotion recognition method according to any one of claims 1 to 8 are implemented.
10. A computer-readable storage medium, wherein a user emotion recognition program is stored on the computer-readable storage medium, and when the user emotion recognition program is executed by a processor, the steps of the user emotion recognition method according to any one of claims 1 to 8 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910679779.7A CN110399836A (en) | 2019-07-25 | 2019-07-25 | User emotion recognition methods, device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110399836A true CN110399836A (en) | 2019-11-01 |
Family
ID=68325058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910679779.7A Pending CN110399836A (en) | 2019-07-25 | 2019-07-25 | User emotion recognition methods, device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110399836A (en) |
2019-07-25: Application CN201910679779.7A filed with the CN patent office; published as CN110399836A; legal status: active, Pending
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850847A (en) * | 2015-06-02 | 2015-08-19 | 上海斐讯数据通信技术有限公司 | Image optimization system and method with automatic face thinning function |
CN105847734A (en) * | 2016-03-30 | 2016-08-10 | 宁波三博电子科技有限公司 | Face recognition-based video communication method and system |
CN106570496A (en) * | 2016-11-22 | 2017-04-19 | 上海智臻智能网络科技股份有限公司 | Emotion recognition method and device and intelligent interaction method and device |
CN106980848A (en) * | 2017-05-11 | 2017-07-25 | 杭州电子科技大学 | Facial expression recognizing method based on warp wavelet and sparse study |
CN108875464A (en) * | 2017-05-16 | 2018-11-23 | 南京农业大学 | A kind of light music control system and control method based on three-dimensional face Emotion identification |
CN107220624A (en) * | 2017-05-27 | 2017-09-29 | 东南大学 | A kind of method for detecting human face based on Adaboost algorithm |
CN107392124A (en) * | 2017-07-10 | 2017-11-24 | 珠海市魅族科技有限公司 | Emotion identification method, apparatus, terminal and storage medium |
CN107595301A (en) * | 2017-08-25 | 2018-01-19 | 英华达(上海)科技有限公司 | Intelligent glasses and the method based on Emotion identification PUSH message |
CN107895146A (en) * | 2017-11-01 | 2018-04-10 | 深圳市科迈爱康科技有限公司 | Micro- expression recognition method, device, system and computer-readable recording medium |
CN107862292A (en) * | 2017-11-15 | 2018-03-30 | 平安科技(深圳)有限公司 | Personage's mood analysis method, device and storage medium |
CN109858215A (en) * | 2017-11-30 | 2019-06-07 | 腾讯科技(深圳)有限公司 | Resource acquisition, sharing, processing method, device, storage medium and equipment |
CN108062546A (en) * | 2018-02-11 | 2018-05-22 | 厦门华厦学院 | A kind of computer face Emotion identification system |
CN108319933A (en) * | 2018-03-19 | 2018-07-24 | 广东电网有限责任公司中山供电局 | A kind of substation's face identification method based on DSP technologies |
CN109447001A (en) * | 2018-10-31 | 2019-03-08 | 深圳市安视宝科技有限公司 | A kind of dynamic Emotion identification method |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021082045A1 (en) * | 2019-10-29 | 2021-05-06 | 平安科技(深圳)有限公司 | Smile expression detection method and apparatus, and computer device and storage medium |
CN110889908A (en) * | 2019-12-10 | 2020-03-17 | 吴仁超 | Intelligent sign-in system integrating face recognition and data analysis |
CN110889908B (en) * | 2019-12-10 | 2020-11-27 | 苏州鱼得水电气科技有限公司 | Intelligent sign-in system integrating face recognition and data analysis |
CN111191609A (en) * | 2019-12-31 | 2020-05-22 | 上海能塔智能科技有限公司 | Face emotion recognition method and device, electronic equipment and storage medium |
CN111582708A (en) * | 2020-04-30 | 2020-08-25 | 北京声智科技有限公司 | Medical information detection method, system, electronic device and computer-readable storage medium |
CN112784733A (en) * | 2021-01-21 | 2021-05-11 | 敖客星云(北京)科技发展有限公司 | Emotion recognition method and device based on online education and electronic equipment |
CN113144374A (en) * | 2021-04-09 | 2021-07-23 | 上海探寻信息技术有限公司 | Method and device for adjusting user state based on intelligent wearable device |
CN116682159A (en) * | 2023-06-07 | 2023-09-01 | 广东辉杰智能科技股份有限公司 | Automatic stereo recognition method |
CN116682159B (en) * | 2023-06-07 | 2024-02-02 | 广东辉杰智能科技股份有限公司 | Automatic stereo recognition method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110399836A (en) | User emotion recognition methods, device and computer readable storage medium | |
Cavallo et al. | Emotion modelling for social robotics applications: a review | |
WO2017193497A1 (en) | Fusion model-based intellectualized health management server and system, and control method therefor | |
CN107894833B (en) | Multi-modal interaction processing method and system based on virtual human | |
CN110460837A (en) | With central fovea display and the electronic equipment for watching prediction attentively | |
CN109298779A (en) | Virtual training System and method for based on virtual protocol interaction | |
CN107392124A (en) | Emotion identification method, apparatus, terminal and storage medium | |
CN110399837A (en) | User emotion recognition methods, device and computer readable storage medium | |
US20120007859A1 (en) | Method and apparatus for generating face animation in computer system | |
CN110147729A (en) | User emotion recognition methods, device, computer equipment and storage medium | |
CN113240778B (en) | Method, device, electronic equipment and storage medium for generating virtual image | |
CN107427233A (en) | Pulse wave detection device and pulse wave detection program | |
CN116755558A (en) | Pupil modulation as cognitive control signal | |
CN105955490A (en) | Information processing method based on augmented reality, information processing device based on augmented reality and mobile terminal | |
CN109949438A (en) | Abnormal driving monitoring model method for building up, device and storage medium | |
KR101734845B1 (en) | Emotion classification apparatus using visual analysis and method thereof | |
CN110147822A (en) | A kind of moos index calculation method based on the detection of human face action unit | |
WO2020261977A1 (en) | Space proposal system and space proposal method | |
CN115702436A (en) | Animating physiological characteristics on 2D or 3D avatars | |
CN113035000A (en) | Virtual reality training system for central integrated rehabilitation therapy technology | |
Joshi | An automated framework for depression analysis | |
CN116269385A (en) | Method, device, equipment and storage medium for monitoring equipment use experience | |
KR102437583B1 (en) | System And Method For Providing User-Customized Color Content For Preferred Colors Using Biosignals | |
JP2021033359A (en) | Emotion estimation device, emotion estimation method, program, information presentation device, information presentation method and emotion estimation system | |
US20230095350A1 (en) | Focus group apparatus and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||