CN108596011A - A facial attribute recognition method and device based on a combined deep network - Google Patents
A facial attribute recognition method and device based on a combined deep network
- Publication number
- CN108596011A (application CN201711498120.9A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
- G06N3/045—Combinations of networks (under G06N3/04—Architecture, e.g. interconnection topology; G06N—Computing arrangements based on specific computational models)
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06V40/174—Facial expression recognition
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Abstract
The present invention provides a facial attribute recognition method based on a combined deep neural network, comprising the following steps. S1: perform region extraction on an input face image. S2: based on the region-extraction result, locate the key facial parts and extract the related regions, obtaining face-region features and key-part features. S3: using the region-extraction results as training samples, label the true age and gender of each face, train a deep neural network, and output an age estimate and a gender recognition result. S4: based on the face-region features and the local eye and lip features output by S2, use a random forest algorithm to output prediction probabilities for expression and eyewear respectively; for each attribute, add the corresponding probabilities and output the facial expression and eyewear results. The invention also provides a facial attribute recognition device based on a combined deep neural network.
Description
Technical field
The invention belongs to the technical field of image recognition in artificial intelligence, and relates to a facial attribute recognition method and device based on a combined deep network.
Background art
With social development and technological progress, facial attribute recognition is being applied ever more widely in finance, the military, medical care, public security and many other sectors. Biometric characteristics are inherent attributes specific to each person; because they are complex, rich in detail, and exhibit both individual distinctiveness and stability, they can serve as a reliable basis for identity authentication. The face is a particularly important biometric feature, able to characterize attributes such as a person's mood, gender and age. Compared with other biometric features, facial features can be acquired more directly and unobtrusively, making the face the most natural and convenient means of identity authentication and information analysis.
Mainstream facial attribute recognition currently covers gender recognition, ethnicity recognition, age estimation, expression recognition, and so on. The main steps are face detection, face extraction, sample training and attribute recognition. Recognition methods fall into two classes: the first extracts hand-crafted local features and applies a conventional classifier; the second extracts global features with a deep convolutional neural network and classifies on those. The first class depends heavily on expert experience, is ill-suited to multi-source big data, and generalizes poorly, so it has rarely been used since the advent of deep learning. Deep convolutional neural networks, though now widely applied, depend for their accuracy on the volume of sample data, and struggle to achieve good results when samples are insufficient.
Summary of the invention
The present invention aims to provide a facial attribute recognition method and device based on a combined deep neural network, for detecting a user's age, gender, expression and eyewear.
The facial attribute recognition method based on a combined deep neural network provided by the invention comprises the following steps. S1: perform region extraction on an input face image. S2: based on the region-extraction result, locate the key facial parts and extract the related regions, obtaining face-region features and key-part features. S3: using the region-extraction results as training samples, label the true age and gender of each face, train a deep neural network, and output an age estimate and a gender recognition result. S4: based on the face-region features and the local eye and lip features output by S2, use a random forest algorithm to output prediction probabilities for expression and eyewear respectively; for each attribute, add the corresponding probabilities and output the facial expression and eyewear results.
In one aspect, S1 comprises: S11: detect and extract the face with three cascaded deep convolutional neural networks, obtaining the coordinates of the regressed face rectangle; S12: centred on the centre of the regressed rectangle, extract the face region, enlarging it by 15% of the original rectangle; S13: crop the face and resize it to a uniform predetermined size.
The scales of the cascaded deep convolutional neural networks are 12, 24 and 48 in turn.
In one aspect, S2 comprises: S21: using a 6-layer deep convolutional neural network, locate the facial parts in the face regions obtained in step S1, yielding nine keypoint coordinates, namely the corners and centres of the left and right eyes, the nose tip, and the two mouth corners; S22: extract the eye, nose and lip regions from the nine keypoints; S23: extract the face-region features and key-part features of each convolutional layer.
In S22, the width of the eye region is the distance between the corners of the left and right eyes, and its height is one quarter of the height of the initially detected face rectangle; the width of the mouth region is the distance between the left and right mouth corners, and its height is one sixth of the height of the initially detected face rectangle.
In one aspect, in S3, outputting the age estimate comprises: S31: collect a sample database and label the true age of each sample; S32: train on the samples with a 16-layer deep convolutional neural network, obtaining a parameter model; S33: for an input face, output the probability of each age, take the weighted sum of the ages and their probabilities as the final age estimate, and map the age to child, teenager, youth, middle-aged or elderly according to its range.
The age range in the sample database is 1-100 years, and the samples come from the IMDB-WIKI database, in which the picture headers record the shooting time and the date of birth of the corresponding person.
The weighted sum is computed as Age = Σ_i y_i·O_i, where y_i is the predicted probability of age i in 0-100 and O_i is the corresponding age value.
In one aspect, in S3, outputting the gender recognition result comprises: S41: collect a sample database and label the true gender of each sample; S42: train on the samples with a 16-layer deep convolutional neural network, obtaining a parameter model; S43: for an input face, output the gender probabilities and take the gender with the higher probability as the recognition result.
In one aspect, in S4, classifying the facial expression and outputting the prediction probability of each expression comprises: S51: collect a sample database and label the expression attribute of each sample as no smile, smile or laugh; S52: train on the samples with a VGG deep neural network, obtaining a parameter model; S53: input the face-region picture and output the probability of each expression attribute; S54: from the per-layer features extracted in S2, extract the local convolutional features of the eyes and mouth; S55: classify the expression with a random forest algorithm using the per-layer features and the eye/mouth convolutional features from S2, outputting a prediction probability for each expression; S56: add the deep network's output probabilities to those of the random forest and, according to the summed probabilities, classify the expression as no smile, smile or laugh.
In one aspect, in S4, classifying the eyewear attribute of the face and outputting the eyewear result comprises: S61: collect a sample database and label the eyewear attribute of each sample as no glasses, ordinary glasses or sunglasses; S62: train on the samples with a VGG deep neural network, obtaining a parameter model; S63: input the face-region picture and output the probability of each eyewear attribute; S64: from the per-layer features extracted in step S2, extract the local convolutional features of the eyes; S65: classify the eyewear attribute with a random forest algorithm using the per-layer features and the eye convolutional features from step S2; S66: add the deep network's output probabilities to those of the random forest and, according to the summed probabilities, classify the eyewear attribute as no glasses, ordinary glasses or sunglasses.
The invention further provides a facial attribute recognition device based on a combined deep neural network, comprising: an extraction unit, which performs region extraction on the input face image; a deep neural network, which, based on the extraction result, locates the key facial parts and extracts the related regions to obtain face-region features and key-part features, is trained on the extraction unit's results using a sample database, and, combining the face-region features and key-part features, recognizes the face's gender, age, expression-attribute probabilities and eyewear-attribute probabilities; and a computing unit, which computes the face's expression attribute and eyewear attribute from the key-part features, expression-attribute probabilities and eyewear-attribute probabilities produced by the deep neural network.
The deep neural network comprises five sub-networks, which respectively detect the facial keypoints, obtain the expression-attribute probabilities, obtain the eyewear probabilities, recognize the face's gender, and estimate the face's age.
The extraction unit detects and extracts the face with three cascaded deep convolutional networks, obtains the coordinates of the regressed face rectangle, extracts the face region centred on the rectangle's centre, enlarges it by 15% of the original rectangle, crops the face, and resizes it to a uniform predetermined size.
The computing unit contains two random forest modules, which classify the expression and eyewear attributes respectively and output their prediction probabilities. The computing unit adds the expression-attribute probabilities to the random forest's expression predictions and, according to the summed probabilities, classifies the expression as no smile, smile or laugh; it likewise adds the eyewear-attribute probabilities to the random forest's eyewear predictions and, according to the summed probabilities, classifies the eyewear as no glasses, ordinary glasses or sunglasses.
The method and device of the present invention can recognize multiple facial attributes simultaneously, exhibit good robustness, and are applicable to fields such as commerce and security. The method does not depend excessively on large sample volumes, yet maintains high accuracy. Its network structure extends well, making it easy to integrate the recognition of further facial attributes.
Description of the drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain its principles. In the drawings:
Fig. 1 is a flow chart of the facial attribute recognition method based on a combined deep neural network provided by the invention;
Fig. 2 is a block diagram of the facial attribute recognition device based on a combined deep neural network provided by the invention.
Detailed description of the embodiments
Specific embodiments of the invention are described below with reference to the drawings.
An embodiment of the invention provides a facial attribute recognition method based on a combined deep neural network, for detecting a user's age, gender, expression and eyewear. According to the facial features, the invention divides age into five stages: child (0-10 years), teenager (10-18 years), youth (18-35 years), middle-aged (35-60 years) and elderly (over 60 years); divides expression into no smile, smile and laugh; and identifies whether glasses are worn and their type (ordinary glasses or sunglasses).
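The five age stages above amount to a simple threshold mapping. A minimal sketch follows; the function name and the half-open treatment of boundary ages are my own assumptions, since the embodiment does not specify which stage a boundary age belongs to:

```python
def age_stage(age: float) -> str:
    """Map a numeric age estimate to one of the five stages of the
    embodiment; half-open boundaries are an assumption."""
    if age < 10:
        return "child"        # 0-10 years
    if age < 18:
        return "teenager"     # 10-18 years
    if age < 35:
        return "youth"        # 18-35 years
    if age < 60:
        return "middle-aged"  # 35-60 years
    return "elderly"          # over 60 years
```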
Fig. 1 is a flow diagram of the facial attribute recognition of this embodiment, comprising the following steps.
S1: perform region extraction on the input face image.
S2: based on the region-extraction result, locate the key facial parts and extract the related regions through sub-network 1 shown in Fig. 2, obtaining face-region features and key-part features.
S3: using the region-extraction results as training samples, label the true age and gender of each face, train through sub-network 4 and sub-network 5 shown in Fig. 2 respectively, and output the corresponding age estimate and gender recognition result.
S4: based on the face-region features and the local eye and lip features extracted by sub-network 1 in S2, use a random forest algorithm to output the prediction probability of each expression and eyewear attribute; output the expression and eyewear probabilities through sub-network 2 and sub-network 3 in Fig. 2 respectively; for each attribute, add the corresponding probabilities and output the facial expression and eyewear results.
Face-region extraction in S1: identify the valid face in the acquired picture and crop the valid face region.
S11: detect and extract the face with three cascaded deep convolutional neural networks. The three cascaded models yield the coordinates of the regressed face rectangle; their scales are 12, 24 and 48 in turn.
S12: centred on the centre of the regressed rectangle, extract the face region, enlarging it by 15% of the original rectangle.
S13: crop the face and resize it uniformly to 40 × 40 pixels.
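Steps S11-S13 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: it assumes the 15% enlargement applies to the rectangle's width and height, clips at the image borders, and uses nearest-neighbour indexing in place of a proper resampler such as cv2.resize:

```python
import numpy as np

def enlarge_and_crop(image: np.ndarray, box: tuple, scale: float = 0.15,
                     out_size: int = 40) -> np.ndarray:
    """Enlarge the detected face rectangle (x, y, w, h) by `scale` around
    its centre, clip to the image, crop, and resize to out_size x out_size
    via nearest-neighbour indexing (assumption: scale applies per side)."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    w2, h2 = w * (1 + scale), h * (1 + scale)
    x0 = max(int(round(cx - w2 / 2)), 0)
    y0 = max(int(round(cy - h2 / 2)), 0)
    x1 = min(int(round(cx + w2 / 2)), image.shape[1])
    y1 = min(int(round(cy + h2 / 2)), image.shape[0])
    crop = image[y0:y1, x0:x1]
    # nearest-neighbour resize to the uniform 40 x 40 size of S13
    ys = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    xs = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[ys][:, xs]
```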
Facial-part region and feature extraction in S2: based on the region-extraction result, locate the key facial parts, extract the related regions, and extract the facial features of every layer of the deep network.
S21: using a 6-layer deep convolutional network (sub-network 1) with four convolutional layers and two fully connected layers, locate the facial parts (eyes, nose and mouth) in the face regions obtained in step S1, yielding nine keypoint coordinates: the corners and centres of the left and right eyes, the nose tip, and the two mouth corners.
S22: extract the eye, nose and lip regions from the nine keypoints. The width of the eye region is the distance between the corners of the left and right eyes, and its height is one quarter of the height of the initially detected face rectangle; the width of the mouth region is the distance between the left and right mouth corners, and its height is one sixth of the height of the initially detected face rectangle.
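The region geometry of S22 can be sketched as follows; the keypoint names and dictionary layout are my own assumptions, since the patent only enumerates the nine points:

```python
import numpy as np

def eye_mouth_boxes(keypoints: dict, face_h: float):
    """Derive eye and mouth rectangles (x, y, w, h) from landmarks.
    Eye box: width = distance between the outer eye corners,
             height = face_h / 4 (S22).
    Mouth box: width = distance between the mouth corners,
               height = face_h / 6 (S22)."""
    le = np.array(keypoints["left_eye_outer"], dtype=float)
    re = np.array(keypoints["right_eye_outer"], dtype=float)
    lm = np.array(keypoints["mouth_left"], dtype=float)
    rm = np.array(keypoints["mouth_right"], dtype=float)

    def centred_box(p, q, height):
        # box centred on the midpoint of p and q
        c = (p + q) / 2.0
        w = float(np.linalg.norm(q - p))
        return (c[0] - w / 2, c[1] - height / 2, w, height)

    return centred_box(le, re, face_h / 4.0), centred_box(lm, rm, face_h / 6.0)
```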
S23: extract the face-region features and key-part features of each convolutional layer.
Face age recognition in S3: using the extracted valid face regions as training samples, label the true age of each face, train on them, and map the output age to one of the five stages (child, teenager, youth, middle-aged, elderly) according to its range.
S31: collect a sample database and label the true age of each sample. The age range is 1-100 years; the samples come from the IMDB-WIKI database, which mainly collects face pictures of celebrities and whose picture headers record the shooting time and the date of birth of the corresponding person.
S32: train on the samples with a 16-layer deep neural network (sub-network 4), obtaining a parameter model.
S33: for an input face, output the probability of each age and take the weighted sum of the ages and their probabilities as the final age estimate; map the age to child, teenager, youth, middle-aged or elderly according to its range. The weighted sum is
Age = Σ_i y_i·O_i
where y_i is the predicted probability of age i in 0-100 and O_i is the corresponding age value.
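The weighted sum of S33 is simply the expectation of the predicted age distribution; a minimal sketch:

```python
import numpy as np

def expected_age(probs) -> float:
    """Weighted-sum (expectation) age estimate: sum_i y_i * O_i,
    with O_i = i the age value and y_i its predicted probability."""
    probs = np.asarray(probs, dtype=float)
    ages = np.arange(len(probs))  # O_i = 0..100 for a 101-way output
    return float(np.dot(probs, ages))
```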
Face gender recognition in S3: using the extracted valid face regions as training samples, label the gender of each face, train a deep neural network on them, and recognize the face's gender.
S41: collect a sample database and label the true gender of each sample. 200,000 valid face pictures were selected from the IMDB-WIKI database as the training set.
S42: train on the samples with a 16-layer VGG deep neural network (sub-network 5), obtaining a parameter model.
S43: for an input face, output the gender probabilities and take the gender with the higher probability as the recognition result.
Facial expression recognition in S4: based on the extracted valid face-region features and the local eye and lip features, classify the expression as no smile, smile or laugh with a random forest algorithm.
S51: collect a sample database and label the expression attribute of each sample as no smile, smile or laugh.
S52: train on the samples with a VGG deep neural network (sub-network 2), obtaining a parameter model.
S53: input the face-region picture and output the probability of each expression attribute.
S54: from the per-layer features extracted in S2, extract the local convolutional features of the eyes and mouth.
S55: classify the expression with a random forest algorithm using the per-layer features and the eye/mouth convolutional features extracted in step S2, outputting a prediction probability for each expression.
S56: add the deep network's output probabilities to those of the random forest and, according to the summed probabilities, classify the expression as no smile, smile or laugh.
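The fusion of S56 can be sketched as elementwise addition of the two probability vectors followed by an argmax. In practice the forest's probabilities could come from e.g. scikit-learn's RandomForestClassifier.predict_proba; the label names below are translations of the patent's three classes, and plain unweighted addition is used because the patent specifies no weighting:

```python
import numpy as np

EXPRESSIONS = ["no smile", "smile", "laugh"]  # the patent's three classes

def fuse_and_classify(cnn_probs, rf_probs, labels=EXPRESSIONS) -> str:
    """Step S56: add the deep network's softmax output to the random
    forest's predicted probabilities and pick the argmax class."""
    combined = np.asarray(cnn_probs, dtype=float) + np.asarray(rf_probs, dtype=float)
    return labels[int(np.argmax(combined))]
```

The same fusion applies unchanged to the eyewear branch of S66, with the three eyewear labels substituted.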
Face eyewear recognition in S4: based on the extracted valid face-region features and local eye-region features, classify the input face as wearing no glasses, ordinary glasses or sunglasses with a random forest algorithm.
S61: collect a sample database and label the eyewear attribute of each sample as no glasses, ordinary glasses or sunglasses.
S62: train on the samples with a VGG deep neural network (sub-network 3), obtaining a parameter model.
S63: input the face-region picture and output the probability of each eyewear attribute.
S64: from the per-layer features extracted in step S2, extract the local convolutional features of the eyes.
S65: classify the eyewear attribute with a random forest algorithm using the per-layer features and the eye convolutional features extracted in step S2.
S66: add the deep network's output probabilities to those of the random forest and, according to the summed probabilities, classify the eyewear attribute as no glasses, ordinary glasses or sunglasses.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from its spirit or scope. The present invention is therefore intended to cover such modifications and variations, provided they fall within the scope of the appended claims and their equivalents.
Claims (15)
1. A facial attribute recognition method based on a combined deep neural network, comprising the following steps:
S1: performing region extraction on an input face image;
S2: based on the region-extraction result, locating the key facial parts and extracting the related regions, obtaining face-region features and key-part features;
S3: using the region-extraction results as training samples, labelling the true age and gender of each face, training a deep neural network, and outputting an age estimate and a gender recognition result;
S4: based on the face-region features and the local eye and lip features output by S2, using a random forest algorithm to output prediction probabilities for expression and eyewear respectively; for each attribute, adding the corresponding probabilities and outputting the facial expression and eyewear results.
2. The method of claim 1, wherein S1 comprises:
S11: detecting and extracting the face with three cascaded deep convolutional neural networks, obtaining the coordinates of the regressed face rectangle;
S12: centred on the centre of the regressed rectangle, extracting the face region and enlarging it by 15% of the original rectangle;
S13: cropping the face and resizing it to a uniform predetermined size.
3. The method of claim 2, wherein the scales of the cascaded deep convolutional neural networks are 12, 24 and 48 in turn.
4. The method of claim 1, wherein S2 comprises:
S21: using a 6-layer deep convolutional neural network, locating the facial parts in the face regions obtained in step S1, yielding nine keypoint coordinates, namely the corners and centres of the left and right eyes, the nose tip, and the two mouth corners;
S22: extracting the eye, nose and lip regions from the nine keypoints;
S23: extracting the face-region features and key-part features of each convolutional layer.
5. The method of claim 4, wherein in S22 the width of the eye region is the distance between the corners of the left and right eyes and its height is one quarter of the height of the initially detected face rectangle, and the width of the mouth region is the distance between the left and right mouth corners and its height is one sixth of the height of the initially detected face rectangle.
6. The method of claim 1, wherein in S3 outputting the age estimate comprises:
S31: collecting a sample database and labelling the true age of each sample;
S32: training on the samples with a 16-layer deep convolutional neural network, obtaining a parameter model;
S33: for an input face, outputting the probability of each age, taking the weighted sum of the ages and their probabilities as the final age estimate, and mapping the age to child, teenager, youth, middle-aged or elderly according to its range.
7. The method of claim 6, wherein the age range in the sample database is 1-100 years and the samples come from the IMDB-WIKI database, in which the picture headers record the shooting time and the date of birth of the corresponding person.
8. The method of claim 6, wherein the weighted sum is Age = Σ_i y_i·O_i, where y_i is the predicted probability of age i in 0-100 and O_i is the corresponding age value.
9. The method of claim 1, wherein in S3 outputting the gender recognition result comprises:
S41: collecting a sample database and labelling the true gender of each sample;
S42: training on the samples with a 16-layer deep convolutional neural network, obtaining a parameter model;
S43: for an input face, outputting the gender probabilities and taking the gender with the higher probability as the recognition result.
10. The method of claim 1, wherein in S4 classifying the facial expression and outputting the prediction probability of each expression comprises:
S51: collecting a sample database and labelling the expression attribute of each sample as no smile, smile or laugh;
S52: training on the samples with a VGG deep neural network, obtaining a parameter model;
S53: inputting the face-region picture and outputting the probability of each expression attribute;
S54: from the per-layer features extracted in S2, extracting the local convolutional features of the eyes and mouth;
S55: classifying the expression with a random forest algorithm using the per-layer features and the eye/mouth convolutional features from S2, outputting a prediction probability for each expression;
S56: adding the deep network's output probabilities to those of the random forest and, according to the summed probabilities, classifying the expression as no smile, smile or laugh.
11. The method of claim 1, wherein in S4 classifying the eyewear attribute of the face and outputting the eyewear result comprises:
S61: collecting a sample database and labelling the eyewear attribute of each sample as no glasses, ordinary glasses or sunglasses;
S62: training on the samples with a VGG deep neural network, obtaining a parameter model;
S63: inputting the face-region picture and outputting the probability of each eyewear attribute;
S64: from the per-layer features extracted in step S2, extracting the local convolutional features of the eyes;
S65: classifying the eyewear attribute with a random forest algorithm using the per-layer features and the eye convolutional features from step S2;
S66: adding the deep network's output probabilities to those of the random forest and, according to the summed probabilities, classifying the eyewear attribute as no glasses, ordinary glasses or sunglasses.
12. A facial attribute recognition device based on a combined deep neural network, comprising:
an extraction unit, which performs region extraction on an input face image;
a deep neural network, which, based on the extraction result, locates the key facial parts and extracts the related regions to obtain face-region features and key-part features, is trained on the extraction unit's results using a sample database, and, combining the face-region features and key-part features, recognizes the face's gender, age, expression-attribute probabilities and eyewear-attribute probabilities;
a computing unit, which computes the face's expression attribute and eyewear attribute from the key-part features, expression-attribute probabilities and eyewear-attribute probabilities produced by the deep neural network.
13. The device of claim 12, wherein the deep neural network comprises five sub-networks, which respectively detect the facial keypoints, obtain the expression-attribute probabilities, obtain the eyewear probabilities, recognize the face's gender, and estimate the face's age.
14. The device of claim 12, wherein the extraction unit detects and extracts the face with three cascaded deep convolutional networks, obtains the coordinates of the regressed face rectangle, extracts the face region centred on the rectangle's centre, enlarges it by 15% of the original rectangle, crops the face, and resizes it to a uniform predetermined size.
15. The device of claim 12, wherein the computing unit comprises two random-forest modules, used respectively to classify the expression and the wearing attribute and to output a prediction probability for the expression and a prediction probability for the wearing attribute; the computing unit adds the expression attribute probability to the prediction probability of the expression attribute and, according to the summed probability, classifies the expression attribute as not smiling, smiling, or laughing; it likewise adds the wearing attribute probability to the prediction probability of the wearing attribute and, according to the summed probability, classifies the wearing attribute as not wearing glasses, wearing ordinary glasses, or wearing sunglasses.
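The probability fusion in claim 15 — element-wise addition of the network's attribute probabilities and the random forest's prediction probabilities, then choosing the class with the largest sum — can be sketched as follows; the function name and the example values are illustrative assumptions.

```python
def fuse_and_classify(net_probs, forest_probs, labels):
    """Add the deep network's attribute probabilities to the random
    forest's prediction probabilities element-wise and return the label
    with the largest summed probability, as claimed for both the
    expression and the wearing attribute."""
    summed = [n + f for n, f in zip(net_probs, forest_probs)]
    return labels[summed.index(max(summed))], summed

label, summed = fuse_and_classify(
    [0.2, 0.5, 0.3],  # network expression probabilities (stub)
    [0.1, 0.3, 0.6],  # random-forest prediction probabilities (stub)
    ["not smiling", "smiling", "laughing"],
)
# The sums are roughly [0.3, 0.8, 0.9], so the fused label is "laughing".
```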
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711498120.9A CN108596011A (en) | 2017-12-29 | 2017-12-29 | A kind of face character recognition methods and device based on combined depth network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711498120.9A CN108596011A (en) | 2017-12-29 | 2017-12-29 | A kind of face character recognition methods and device based on combined depth network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108596011A true CN108596011A (en) | 2018-09-28 |
Family
ID=63633110
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711498120.9A Pending CN108596011A (en) | 2017-12-29 | 2017-12-29 | A kind of face character recognition methods and device based on combined depth network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108596011A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824054A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded depth neural network-based face attribute recognition method |
CN105631398A (en) * | 2014-11-24 | 2016-06-01 | 三星电子株式会社 | Method and apparatus for recognizing object, and method and apparatus for training recognizer |
CN105760834A (en) * | 2016-02-14 | 2016-07-13 | 北京飞搜科技有限公司 | Face feature point locating method |
CN105809178A (en) * | 2014-12-31 | 2016-07-27 | 中国科学院深圳先进技术研究院 | Population analyzing method based on human face attribute and device |
CN105844206A (en) * | 2015-01-15 | 2016-08-10 | 北京市商汤科技开发有限公司 | Identity authentication method and identity authentication device |
CN106156702A (en) * | 2015-04-01 | 2016-11-23 | 北京市商汤科技开发有限公司 | Identity identifying method and equipment |
CN106203395A (en) * | 2016-07-26 | 2016-12-07 | 厦门大学 | Face character recognition methods based on the study of the multitask degree of depth |
CN106909896A (en) * | 2017-02-17 | 2017-06-30 | 竹间智能科技(上海)有限公司 | Man-machine interactive system and method for work based on character personality and interpersonal relationships identification |
CN107423663A (en) * | 2017-03-23 | 2017-12-01 | 深圳市金立通信设备有限公司 | A kind of image processing method and terminal |
Non-Patent Citations (1)
Title |
---|
Wang Zhenyong (王振永): "Pattern Recognition: Algorithms and Implementation Methods" (《模式识别:算法及实现方法》), 31 October 2017 * |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111105028A (en) * | 2018-10-26 | 2020-05-05 | 杭州海康威视数字技术股份有限公司 | Neural network training method and device and sequence prediction method |
CN111105028B (en) * | 2018-10-26 | 2023-10-24 | 杭州海康威视数字技术股份有限公司 | Training method, training device and sequence prediction method for neural network |
CN109492711A (en) * | 2018-12-07 | 2019-03-19 | 杭州电子科技大学 | Malignant mela noma and non-malignant melanotic nevus classification method based on deep learning |
CN109492711B (en) * | 2018-12-07 | 2020-08-25 | 杭州电子科技大学 | Malignant melanoma and non-malignant melanoma classification system based on deep learning |
CN109711356A (en) * | 2018-12-28 | 2019-05-03 | 广州海昇教育科技有限责任公司 | A kind of expression recognition method and system |
CN109711356B (en) * | 2018-12-28 | 2023-11-10 | 广州海昇教育科技有限责任公司 | Expression recognition method and system |
WO2020134858A1 (en) * | 2018-12-29 | 2020-07-02 | 北京市商汤科技开发有限公司 | Facial attribute recognition method and apparatus, electronic device, and storage medium |
CN111382642A (en) * | 2018-12-29 | 2020-07-07 | 北京市商汤科技开发有限公司 | Face attribute recognition method and device, electronic equipment and storage medium |
CN109803090A (en) * | 2019-01-25 | 2019-05-24 | 睿魔智能科技(深圳)有限公司 | Unmanned shooting automatic zooming method and system, unmanned cameras and storage medium |
CN110427795A (en) * | 2019-01-28 | 2019-11-08 | 厦门瑞为信息技术有限公司 | A kind of property analysis method based on head photo, system and computer equipment |
CN109919081A (en) * | 2019-03-04 | 2019-06-21 | 司法鉴定科学研究院 | A kind of automation auxiliary portrait signature identification method |
CN109902635A (en) * | 2019-03-04 | 2019-06-18 | 司法鉴定科学研究院 | A kind of portrait signature identification method based on example graph |
CN110069994A (en) * | 2019-03-18 | 2019-07-30 | 中国科学院自动化研究所 | Face character identifying system, method based on face multizone |
CN111723613A (en) * | 2019-03-20 | 2020-09-29 | 广州慧睿思通信息科技有限公司 | Face image data processing method, device, equipment and storage medium |
CN111723612A (en) * | 2019-03-20 | 2020-09-29 | 北京市商汤科技开发有限公司 | Face recognition and face recognition network training method and device, and storage medium |
CN110008922A (en) * | 2019-04-12 | 2019-07-12 | 腾讯科技(深圳)有限公司 | Image processing method, unit, medium for terminal device |
CN110009059A (en) * | 2019-04-16 | 2019-07-12 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating model |
CN110009059B (en) * | 2019-04-16 | 2022-03-29 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a model |
CN110378306A (en) * | 2019-07-25 | 2019-10-25 | 厦门美图之家科技有限公司 | Age prediction technique, device and image processing equipment |
CN110414483A (en) * | 2019-08-13 | 2019-11-05 | 山东浪潮人工智能研究院有限公司 | A kind of face identification method and system based on deep neural network and random forest |
CN112395908A (en) * | 2019-08-13 | 2021-02-23 | 天津大学青岛海洋技术研究院 | Living body face detection algorithm based on three-dimensional convolutional neural network |
CN110472611A (en) * | 2019-08-21 | 2019-11-19 | 图谱未来(南京)人工智能研究院有限公司 | Method, apparatus, electronic equipment and the readable storage medium storing program for executing of character attribute identification |
CN110532965B (en) * | 2019-08-30 | 2022-07-26 | 京东方科技集团股份有限公司 | Age identification method, storage medium and electronic device |
CN110532965A (en) * | 2019-08-30 | 2019-12-03 | 京东方科技集团股份有限公司 | Age recognition methods, storage medium and electronic equipment |
US11361587B2 (en) | 2019-08-30 | 2022-06-14 | Boe Technology Group Co., Ltd. | Age recognition method, storage medium and electronic device |
CN110532970A (en) * | 2019-09-02 | 2019-12-03 | 厦门瑞为信息技术有限公司 | Age-sex's property analysis method, system, equipment and the medium of face 2D image |
CN110532970B (en) * | 2019-09-02 | 2022-06-24 | 厦门瑞为信息技术有限公司 | Age and gender attribute analysis method, system, equipment and medium for 2D images of human faces |
CN112733570A (en) * | 2019-10-14 | 2021-04-30 | 北京眼神智能科技有限公司 | Glasses detection method and device, electronic equipment and storage medium |
CN110909609A (en) * | 2019-10-26 | 2020-03-24 | 湖北讯獒信息工程有限公司 | Expression recognition method based on artificial intelligence |
CN112825115A (en) * | 2019-11-20 | 2021-05-21 | 北京眼神智能科技有限公司 | Monocular image-based glasses detection method and device, storage medium and equipment |
CN111144557A (en) * | 2019-12-31 | 2020-05-12 | 中国电子科技集团公司信息科学研究院 | Action strategy method based on cascade mode |
CN111209874A (en) * | 2020-01-09 | 2020-05-29 | 北京百目科技有限公司 | Method for analyzing and identifying wearing attribute of human head |
CN111523367B (en) * | 2020-01-22 | 2022-07-22 | 湖北科技学院 | Intelligent facial expression recognition method and system based on facial attribute analysis |
CN111523367A (en) * | 2020-01-22 | 2020-08-11 | 湖北科技学院 | Intelligent facial expression recognition method and system based on facial attribute analysis |
CN111401198A (en) * | 2020-03-10 | 2020-07-10 | 广东九联科技股份有限公司 | Audience emotion recognition method, device and system |
CN111401198B (en) * | 2020-03-10 | 2024-04-23 | 广东九联科技股份有限公司 | Audience emotion recognition method, device and system |
CN111444787A (en) * | 2020-03-12 | 2020-07-24 | 江西赣鄱云新型智慧城市技术研究有限公司 | Fully intelligent facial expression recognition method and system with gender constraint |
CN111444787B (en) * | 2020-03-12 | 2023-04-07 | 江西赣鄱云新型智慧城市技术研究有限公司 | Fully intelligent facial expression recognition method and system with gender constraint |
CN111428628A (en) * | 2020-03-23 | 2020-07-17 | 北京每日优鲜电子商务有限公司 | Face detection method, device, equipment and storage medium |
CN111582336B (en) * | 2020-04-23 | 2023-11-03 | 海信集团有限公司 | Device and method for identifying garbage types based on images |
CN111582336A (en) * | 2020-04-23 | 2020-08-25 | 海信集团有限公司 | Image-based garbage type identification device and method |
CN112163462A (en) * | 2020-09-08 | 2021-01-01 | 北京数美时代科技有限公司 | Face-based juvenile recognition method and device and computer equipment |
CN113792718A (en) * | 2021-11-18 | 2021-12-14 | 北京的卢深视科技有限公司 | Method for positioning face area in depth map, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108596011A (en) | A kind of face character recognition methods and device based on combined depth network | |
Yuan et al. | Fingerprint liveness detection using an improved CNN with image scale equalization | |
CN104143079B (en) | The method and system of face character identification | |
CN107016370B (en) | A kind of partial occlusion face identification method based on data enhancing | |
CN105718873B (en) | Stream of people's analysis method based on binocular vision | |
CN108961675A (en) | Fall detection method based on convolutional neural networks | |
US20100316263A1 (en) | Iris and ocular recognition system using trace transforms | |
CN106529499A (en) | Fourier descriptor and gait energy image fusion feature-based gait identification method | |
CN107423696A (en) | Face identification method and system | |
CN110532970A (en) | Age-sex's property analysis method, system, equipment and the medium of face 2D image | |
CN106570447B (en) | Based on the matched human face photo sunglasses automatic removal method of grey level histogram | |
CN108268814A (en) | A kind of face identification method and device based on the fusion of global and local feature Fuzzy | |
CN106599785A (en) | Method and device for building human body 3D feature identity information database | |
CN105912991A (en) | Behavior identification method based on 3D point cloud and key bone nodes | |
CN106529377A (en) | Age estimating method, age estimating device and age estimating system based on image | |
CN107025444A (en) | Piecemeal collaboration represents that embedded nuclear sparse expression blocks face identification method and device | |
Paul et al. | Extraction of facial feature points using cumulative histogram | |
CN109063643A (en) | A kind of facial expression pain degree recognition methods under the hidden conditional for facial information part | |
Wan et al. | Addressing location uncertainties in GPS‐based activity monitoring: A methodological framework | |
Volokitin et al. | Predicting when saliency maps are accurate and eye fixations consistent | |
CN107977618A (en) | A kind of face alignment method based on Cascaded Double-layer neutral net | |
Pukhrambam et al. | A smart study on medicinal plants identification and classification using image processing techniques | |
GB2471192A (en) | Iris and Ocular Recognition using Trace Transforms | |
Janadri et al. | Multiclass classification of kirlian images using svm technique | |
CN114639153A (en) | Face recognition method with dynamic capture function |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180928 |