CN108960024A - An emotion recognition method for individual users - Google Patents
An emotion recognition method for individual users
- Publication number
- CN108960024A CN201710845941.9A CN201710845941A
- Authority
- CN
- China
- Prior art keywords
- library
- personal
- face
- newly
- method based
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- User Interface Of Digital Computer (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Biology (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- Bioinformatics & Computational Biology (AREA)
Abstract
The present invention provides an emotion recognition method for individual users, comprising the steps of: collecting facial features from the user; building a new feature library from those features; matching the new feature library against a historical feature library; prompting the matching differences; and prompting the matching result. As facial features change, the invention continuously discovers salient facial features and enriches the database accordingly. After identity confirmation through face recognition, the facial features under the individual's different emotions are matched, yielding higher emotion recognition accuracy.
Description
Technical field
The present invention relates to the field of image processing applications, and in particular to an emotion recognition method for individual users.
Background technique
With the rapid development of artificial intelligence, how to enable computers to recognize human expressions and thereby obtain human affective states has drawn increasing attention from disciplines such as computer science and psychology.
There are currently many emotion models in the field of affective computing, but most are limited to training on pooled images from many subjects. Their results are not accurate enough and cannot satisfy emotion-state judgment under subtle expressions. In particular, the emotional expression of an individual user cannot be obtained quickly, and improvement is urgently needed.
Summary of the invention
To solve the above problems, the present invention provides an emotion recognition method for individual users. As facial features change, the invention continuously discovers salient facial features and enriches the database. After identity confirmation through face recognition, the facial features under the individual's different emotions are matched, so emotion recognition accuracy is higher.
To achieve this technical purpose, the technical scheme is an emotion recognition method for individual users, comprising the following steps:
S1: collect facial features from the user;
S2: build a new feature library from the facial features;
S3: match the new feature library against the historical feature library;
S4: prompt the matching differences;
S5: prompt the matching result.
Further, the new feature library and the historical feature library each include at least one of a skin-color library, a face-shape library, a facial-features library, and a difference library.
Further, the difference library holds the distinguishing features of an individual's face, obtained by comparative analysis of the new feature library and the historical feature library, that fall outside the skin-color, face-shape, and facial-features libraries.
Further, the historical feature library is stored with personal ID and emotion category as its directory structure.
Further, the historical feature library is built by the following steps:
T1: play a video to evoke the user's emotion;
T2: collect the user's facial features under different emotion categories;
T3: with the personal ID as the first-level directory and the emotion category as the second-level directory, store the skin-color library, face-shape library, facial-features library, and difference library of the user's face.
Further, prompting the matching differences and confirming the matching result comprise the following steps:
P1: match the new feature library against the historical feature library, prompt the difference library, and prompt and confirm the personal-ID matching result;
P2: match the new feature library against the skin-color library, face-shape library, facial-features library, and difference library under the personal-ID directory from step T3, and confirm the emotion-category matching result.
Further, in the step of matching the new feature library against the historical feature library, prompting the difference library, and prompting and confirming the personal-ID matching result, the matching is performed with the difference library as the dominant weight.
Further, after the new feature library is matched against the historical feature library and the personal-ID matching result is prompted and confirmed, the new feature library is added to the historical feature library under the personal-ID directory.
As a preference of the present invention, facial feature collection may use either photography or 3D scanning.
As a preference of the present invention, the facial-features library includes the proportional position relationships of the "three sections and five eyes".
The beneficial effects of the present invention are:
1) As the face changes, the invention continuously discovers salient facial features for the difference library and deeply mines features that serve better as high-weight features for face matching, substantially improving the efficiency and precision of personal-ID recognition.
2) Before performing emotion recognition, the invention first performs emotion elicitation and records each facial feature change under the user's different emotion categories. Emotion recognition then relies on facial-feature matching, so the emotion-category recognition result is highly credible, which is more suitable for application scenarios requiring precise identification of emotion categories.
In summary, as facial features change, the invention continuously discovers salient facial features and enriches the database. After identity confirmation through face recognition, the facial features under the individual's different emotions are matched, so emotion recognition accuracy is higher.
Detailed description of the invention
Fig. 1 is a structural diagram of the historical feature library of the present invention.
Specific embodiment
The technical solution of the present invention will be described clearly and completely below.
An emotion recognition method for individual users, comprising the following steps:
S1: collect facial features from the user;
S2: build a new feature library from the facial features;
S3: match the new feature library against the historical feature library;
S4: prompt the matching differences;
S5: prompt the matching result.
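The five steps above can be sketched as a minimal pipeline. This is only an illustrative outline, not the patent's implementation: `collect_features` is a hypothetical stand-in for a real camera or 3D-scan capture stage, and the dict-based feature libraries are simplified assumptions.

```python
# Minimal sketch of steps S1-S5, assuming simple dict-based feature libraries.
# collect_features() stands in for a real face-capture pipeline (hypothetical).

def collect_features(image):
    # S1: a real system would run face detection + feature extraction here.
    return {"skin_color": image["skin"], "face_shape": image["shape"]}

def match(new_lib, history_lib):
    # S3/S4: report which features differ between the new and historical libraries.
    return {k: (v, history_lib.get(k)) for k, v in new_lib.items()
            if history_lib.get(k) != v}

history = {"skin_color": "fair", "face_shape": "oval"}
new_lib = collect_features({"skin": "fair", "shape": "round"})   # S1 + S2
differences = match(new_lib, history)                            # S3 + S4
result = "match" if not differences else "mismatch"              # S5
print(differences, result)
```

A real system would replace the string-valued features with numeric descriptors, but the control flow of collect, build, match, prompt stays the same.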
Further, the new feature library and the historical feature library each include at least one of a skin-color library, a face-shape library, a facial-features library, and a difference library. Facial feature collection uses image processing techniques to describe features such as the user's skin color, face shape, and facial-feature shapes, and stores these feature parameters in the new feature library. The historical feature library holds the facial feature parameters of all previously seen users.
Further, the difference library holds the distinguishing features of an individual's face, obtained by comparative analysis of the new feature library and the historical feature library, that fall outside the skin-color, face-shape, and facial-features libraries, for example a facial scar or facial spots.
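As a rough illustration (not taken from the patent), a difference library can be derived by keeping only those measured features that fall outside the core libraries and deviate from the person's historical baseline. The feature names and the 0.1 threshold below are illustrative assumptions.

```python
# Sketch: derive a difference library by comparing new feature measurements with
# the historical baseline; feature names and the threshold are hypothetical.

def build_difference_library(new_features, baseline, core_keys, threshold=0.1):
    # Keep features outside the core skin-color/face-shape/facial-features
    # libraries whose values deviate noticeably from the stored baseline.
    return {k: v for k, v in new_features.items()
            if k not in core_keys and abs(v - baseline.get(k, 0.0)) > threshold}

core = {"skin_tone", "face_width", "eye_distance"}
baseline = {"skin_tone": 0.60, "face_width": 0.50, "cheek_mark": 0.0}
new = {"skin_tone": 0.62, "face_width": 0.50, "cheek_mark": 0.8}  # e.g. a scar/spot
print(build_difference_library(new, baseline, core))
```

In this sketch the prominent `cheek_mark` reading survives into the difference library while ordinary skin-tone drift is ignored.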
Further, the historical feature library is stored with personal ID and emotion category as its directory structure. Building the database with the personal ID as the directory facilitates comparison and improves the efficiency and precision of emotion matching.
Further, as shown in Fig. 1, the historical feature library is built by the following steps:
T1: play a video to evoke the user's emotion;
T2: collect the user's facial features under different emotion categories, for example skin-color changes under different emotions, changes in overall face shape, changes in the "three sections and five eyes" proportional relationships in the facial-features library, and changes in each salient feature in the difference library. Before performing emotion recognition, the invention first performs emotion elicitation and records each facial feature change under the user's different emotion categories, so that emotion recognition can rely on facial-feature matching; the emotion-category recognition result is therefore highly credible, which suits application scenarios that require precise identification of emotion categories.
T3: with the personal ID as the first-level directory and the emotion category as the second-level directory, store the skin-color library, face-shape library, facial-features library, and difference library of the user's face.
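The T1-T3 enrollment flow above amounts to populating a two-level directory: personal ID at the first level, emotion category at the second. A hedged sketch, in which `capture_features` is a hypothetical stand-in for the video-elicitation and capture stages:

```python
# Sketch of T1-T3: a two-level directory keyed by personal ID (first level) and
# emotion category (second level). capture_features() is a hypothetical stand-in.

def capture_features(emotion):
    # T2: placeholder capture; a real system would extract these from camera
    # frames recorded while the emotion-eliciting video (T1) plays.
    return {"skin_color_lib": {}, "face_shape_lib": {},
            "facial_features_lib": {}, "difference_lib": {}}

def enroll(history, person_id, emotions):
    history.setdefault(person_id, {})                 # T3: first-level directory
    for emotion in emotions:
        history[person_id][emotion] = capture_features(emotion)  # second level
    return history

history = enroll({}, "user-001", ["happy", "sad", "neutral"])
print(sorted(history["user-001"].keys()))
```

Each leaf holds the four per-emotion sub-libraries, matching the Fig. 1 structure described in the text.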
Further, prompting the matching differences and confirming the matching result comprise the following steps:
P1: match the new feature library against the historical feature library and confirm whether the face has other salient features beyond skin color, facial features, and face shape, such as spots or scars; these form the difference library. The differing content is prompted to the individual user, who confirms whether the salient feature exists, which largely avoids personal-ID misidentification and the addition of spurious entries to the difference library. The personal-ID matching result is then prompted and confirmed. Meanwhile, when there is no personal-ID match, the invention can automatically create a new personal ID and add it to the personal-ID directory in the database.
P2: match the new feature library against the skin-color library, face-shape library, facial-features library, and difference library under the personal-ID directory from step T3, and confirm the emotion-category matching result. With identity already confirmed, the facial features under the individual's different emotions are matched, so the emotion category can be matched more accurately.
The present invention first matches the personal ID in the large database, and then matches the emotion category under that personal-ID directory, which greatly reduces computation and improves the reliability of emotion recognition.
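The two-stage match (P1 identity over the whole database, P2 emotion only under the resolved ID) can be sketched as follows. The nearest-neighbour scoring and the numeric features are illustrative assumptions, not the patent's matching algorithm.

```python
# Sketch of the two-stage match: resolve the personal ID over the whole
# database (P1), then match emotion categories under that ID only (P2).

def score(a, b):
    # Illustrative similarity: negative sum of absolute feature differences.
    return -sum(abs(a[k] - b.get(k, 0.0)) for k in a)

def two_stage_match(new_features, history):
    # Stage 1 (P1): best-matching personal ID across all stored emotions.
    best_id = max(history, key=lambda pid: max(
        score(new_features, feats) for feats in history[pid].values()))
    # Stage 2 (P2): within that ID, best-matching emotion category.
    best_emotion = max(history[best_id],
                       key=lambda emo: score(new_features, history[best_id][emo]))
    return best_id, best_emotion

history = {
    "user-001": {"happy": {"mouth_curve": 0.8}, "sad": {"mouth_curve": -0.5}},
    "user-002": {"happy": {"mouth_curve": 0.3}, "sad": {"mouth_curve": -0.9}},
}
print(two_stage_match({"mouth_curve": 0.75}, history))
```

Restricting stage 2 to one ID's directory is what keeps the emotion comparison cheap even when the database holds many users.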
Further, in the step of matching the new feature library against the historical feature library, prompting the difference library, and prompting and confirming the personal-ID matching result, the matching is performed with the difference library as the dominant weight. Using the salient features of an individual's face as the primary basis for identity recognition and personal-ID matching allows the personal ID to be matched faster, after which emotion recognition proceeds. Matching with the difference library as the dominant weight greatly improves the efficiency of personal-ID matching, and the matching result is more accurate.
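Giving the difference library the dominant weight can be sketched as a weighted agreement score. The specific weight values below are illustrative assumptions; the patent only states that the difference library dominates.

```python
# Sketch of matching with the difference library as the dominant weight.
# The weight values are illustrative, not specified by the patent.

WEIGHTS = {"skin_color": 1.0, "face_shape": 1.0,
           "facial_features": 1.0, "difference": 5.0}  # difference dominates

def weighted_similarity(new_lib, stored_lib):
    # Fraction of total weight on which the two libraries agree.
    total = sum(WEIGHTS.values())
    agree = sum(w for k, w in WEIGHTS.items()
                if new_lib.get(k) == stored_lib.get(k))
    return agree / total

stored = {"skin_color": "fair", "face_shape": "oval",
          "facial_features": "setA", "difference": "scar_L"}
candidate = {"skin_color": "dark", "face_shape": "round",
             "facial_features": "setA", "difference": "scar_L"}
print(weighted_similarity(candidate, stored))  # scar match outweighs skin/shape changes
```

Here a matching scar keeps the similarity high even though lighting-sensitive skin color and face shape disagree, which is the rationale for weighting stable distinguishing marks heavily.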
Further, after the new feature library is matched against the historical feature library and the personal-ID matching result is prompted and confirmed, the new feature library is added to the historical feature library under the personal-ID directory. The historical feature library is thus continuously updated; in particular, as the face changes, salient facial features in the difference library are continuously discovered, deeply mining features that serve better as high-weight features for face matching.
As a preference of the present invention, facial feature collection may use either photography or 3D scanning.
As a preference of the present invention, the facial-features library includes the proportional position relationships of the "three sections and five eyes".
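The "three sections and five eyes" rule divides face height into three roughly equal sections (hairline to brow, brow to nose base, nose base to chin) and face width into five eye-widths. A hedged sketch of measuring deviation from these ideal proportions; the sample measurements are made up:

```python
# Sketch: measure deviation from the "three sections, five eyes" proportions.
# Landmark measurements below are illustrative sample values.

def proportion_deviation(section_heights, face_width, eye_width):
    # Ideal face: three equal vertical sections, width = five eye-widths.
    mean_h = sum(section_heights) / 3.0
    section_dev = max(abs(h - mean_h) / mean_h for h in section_heights)
    width_dev = abs(face_width - 5.0 * eye_width) / (5.0 * eye_width)
    return section_dev, width_dev

sections = [6.1, 5.9, 6.0]   # cm: hairline-brow, brow-nose base, nose base-chin
dev = proportion_deviation(sections, face_width=15.5, eye_width=3.0)
print(dev)
```

Such ratios are scale-invariant, which is presumably why proportional relationships, rather than absolute distances, are stored in the facial-features library.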
For those of ordinary skill in the art, several modifications and improvements may be made without departing from the concept of the present invention, and these all fall within the scope of protection of the present invention.
Claims (10)
1. An emotion recognition method for individual users, characterized by comprising the following steps:
S1: collect facial features from the user;
S2: build a new feature library from the facial features;
S3: match the new feature library against the historical feature library;
S4: prompt the matching differences;
S5: prompt the matching result.
2. The emotion recognition method for individual users according to claim 1, characterized in that the new feature library and the historical feature library each include at least one of a skin-color library, a face-shape library, a facial-features library, and a difference library.
3. The emotion recognition method for individual users according to claim 2, characterized in that the difference library contains, as its content, the distinguishing features of the individual's face, obtained by comparative analysis of the new feature library and the historical feature library, that fall outside the skin-color, face-shape, and facial-features libraries.
4. The emotion recognition method for individual users according to claim 3, characterized in that the historical feature library is stored with personal ID and emotion category as its directory structure.
5. The emotion recognition method for individual users according to claim 4, characterized in that the historical feature library is built by the following steps:
T1: play a video to evoke the user's emotion;
T2: collect the user's facial features under different emotion categories;
T3: with the personal ID as the first-level directory and the emotion category as the second-level directory, store under each emotion category at least one of the skin-color library, face-shape library, facial-features library, and difference library.
6. The emotion recognition method for individual users according to claim 5, characterized in that prompting the matching differences in step S4 and prompting the matching result in step S5 comprise the following steps:
P1: match the new feature library against the historical feature library, and prompt the personal-ID matching result;
P2: match the new feature library against the skin-color library, face-shape library, facial-features library, and difference library under the personal-ID directory from step T3, and prompt the emotion-category matching result.
7. The emotion recognition method for individual users according to claim 6, characterized in that the step of matching the new feature library against the historical feature library and prompting the difference library and the personal-ID matching result is performed with the difference library as the dominant weight.
8. The emotion recognition method for individual users according to claim 5, characterized in that after the new feature library is matched against the historical feature library and the personal-ID matching result is prompted, the new feature library is added to the historical feature library under the personal-ID directory.
9. The emotion recognition method for individual users according to claim 1, characterized in that facial feature collection may use either photography or 3D scanning.
10. The emotion recognition method for individual users according to claim 1, characterized in that the facial-features library includes the proportional position relationships of the "three sections and five eyes".
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710845941.9A | 2017-09-19 | 2017-09-19 | An emotion recognition method for individual users |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710845941.9A | 2017-09-19 | 2017-09-19 | An emotion recognition method for individual users |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108960024A (en) | 2018-12-07 |
Family
ID=64494755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710845941.9A | An emotion recognition method for individual users | 2017-09-19 | 2017-09-19 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108960024A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109672937A (en) * | 2018-12-28 | 2019-04-23 | 深圳Tcl数字技术有限公司 | Television application theme switching method, television, readable storage medium, and system |
CN110197677A (en) * | 2019-05-16 | 2019-09-03 | 北京小米移动软件有限公司 | Playback control method, device, and playback equipment |
CN111717219A (en) * | 2020-06-03 | 2020-09-29 | 智车优行科技(上海)有限公司 | Method and system for switching a sunroof pattern, and automobile |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103871200A (en) * | 2012-12-14 | 2014-06-18 | 深圳市赛格导航科技股份有限公司 | Safety warning system and method used for automobile driving |
CN103916536A (en) * | 2013-01-07 | 2014-07-09 | 三星电子株式会社 | Mobile device user interface method and system |
CN105938543A (en) * | 2016-03-30 | 2016-09-14 | 乐视控股(北京)有限公司 | Addiction-prevention-based terminal operation control method, device, and system |
CN106909873A (en) * | 2016-06-21 | 2017-06-30 | 湖南拓视觉信息技术有限公司 | The method and apparatus of recognition of face |
CN107038413A (en) * | 2017-03-08 | 2017-08-11 | 合肥华凌股份有限公司 | recipe recommendation method, device and refrigerator |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jain et al. | Hybrid deep neural networks for face emotion recognition | |
US20230049135A1 (en) | Deep learning-based video editing method, related device, and storage medium | |
CN110348387A (en) | A kind of image processing method, device and computer readable storage medium | |
CN107423398A (en) | Exchange method, device, storage medium and computer equipment | |
CN111444873A (en) | Method and device for detecting authenticity of person in video, electronic device and storage medium | |
CN111126280B (en) | Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method | |
CN109278051A (en) | Exchange method and system based on intelligent robot | |
CN110503076A (en) | Video classification methods, device, equipment and medium based on artificial intelligence | |
CN109176535A (en) | Exchange method and system based on intelligent robot | |
CN111160264A (en) | Cartoon figure identity recognition method based on generation of confrontation network | |
CN108960024A (en) | An emotion recognition method for individual users | |
CN109935294A (en) | Text report output method, text report output device, storage medium and terminal | |
CN109409199B (en) | Micro-expression training method and device, storage medium and electronic equipment | |
CN111028216A (en) | Image scoring method and device, storage medium and electronic equipment | |
CN105631456B (en) | A kind of leucocyte method for extracting region based on particle group optimizing ITTI model | |
CN110472495A (en) | A kind of deep learning face identification method based on graphical inference global characteristics | |
Wang et al. | Exploring multimodal video representation for action recognition | |
CN112307975A (en) | Multi-modal emotion recognition method and system integrating voice and micro-expressions | |
Fu et al. | Learning semantic-aware spatial-temporal attention for interpretable action recognition | |
CN109063643A (en) | A kind of facial expression pain degree recognition methods under the hidden conditional for facial information part | |
CN113392781A (en) | Video emotion semantic analysis method based on graph neural network | |
Shengtao et al. | Facial expression recognition based on global and local feature fusion with CNNs | |
CN110287912A (en) | Method, apparatus and medium are determined based on the target object affective state of deep learning | |
Swathi et al. | Emotion classification using feature extraction of facial expression | |
CN111062345B (en) | Training method and device for vein recognition model and vein image recognition device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181207 |