CN104463243B - Sex-screening method based on average face feature - Google Patents
Sex-screening method based on average face feature
- Publication number
- CN104463243B CN104463243B CN201410720504.0A CN201410720504A CN104463243B CN 104463243 B CN104463243 B CN 104463243B CN 201410720504 A CN201410720504 A CN 201410720504A CN 104463243 B CN104463243 B CN 104463243B
- Authority
- CN
- China
- Prior art keywords
- layer
- face
- convolutional neural
- neural networks
- sex
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a sex-screening method based on average face features, comprising a learning step and a detection step. The learning step includes: A. classifying a face database and computing the average face of each class of facial images; B. configuring each average face as the output layer of a convolutional neural network and each face position in the face data set of the class to which that average face belongs as the input layer, then training the convolutional neural network; C. using the output layer of the convolutional neural network as the input layer of a gender-classification layer, with the different sexes as the output layer of the gender-classification layer, and training the gender-classification layer. The detection step includes: feeding a facial image of the person to be tested into the trained convolutional neural network and outputting the sex through the gender-classification layer. By constraining the convolutional neural network to learn the average face, a learned feature representation of the average face is obtained, which effectively overcomes the limitations of existing classical hand-designed features and enables accurate sex recognition.
Description
Technical field
The present invention relates to field of image recognition, more particularly to a kind of sex-screening method based on average face feature.
Background technology
In the prior art, sex-recognition algorithms fall broadly into three classes: voice-based, gait-based, and facial-image-based.
Because face-detection technology is now fairly mature, the facial-image-based approach is comparatively simple and direct. However, in real environments, complex backgrounds, illumination, differences in camera precision, and variations in face angle greatly increase the difficulty of sex recognition, resulting in relatively low accuracy.
Furthermore, the biggest problem of facial-image-based sex recognition is that its accuracy is low and unstable. Many factors cause this: illumination in real environments is complex, and camera parameters and precision vary; face angles change widely in practice; and skin color differs greatly across ethnic groups.
These three factors significantly increase the difficulty of sex recognition in facial images. Traditional methods based on Histograms of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Gabor wavelet transforms mainly address the problems of illumination, angle, and skin color in the following two ways:
First, pre-processing the image at detection time to reduce the differences introduced by illumination, angle, and skin color;
Second, training multiple classifiers on multiple data sets.
However, because environmental differences make images vary widely, the first approach cannot eliminate their influence well. The second approach tries to absorb environment-induced complexity during training, but because the variability is large, there is no good solution for choosing the number of classifiers or for combining their results.
In addition, the features above share a common defect: both the computational steps and the parameters of the feature are fixed. For example, the feature is implemented as a function y = f(x), where x is the input image and y is the output feature. For the hand-designed features above (LBP, Gabor, HOG, etc.), the functional form of f(x) is specified manually, and the parameters of the function are also set manually rather than obtained by learning on a sample set, so errors arise easily.
Summary of the invention
The present application provides a sex-screening method based on average face features. A convolutional neural network is constrained to learn the average face, yielding a learned feature representation of the average face, which is then used to train the detection model. This effectively overcomes the limitation that classical hand-designed features cannot express facial images well under varied environments and angles, enabling accurate sex recognition.
The sex-screening method based on average face features includes a learning step and a detection step:
The learning step includes:
A. Classifying a face database and computing the average face of each class of facial images;
B. Configuring each average face as the output layer of a convolutional neural network and each face position in the face data set of the class to which that average face belongs as the input layer of the convolutional neural network, then training the convolutional neural network;
C. Using the output layer of the convolutional neural network as the input layer of a gender-classification layer, with the different sexes as the output layer of the gender-classification layer, and training the gender-classification layer;
The detection step includes:
Feeding a facial image of the person to be tested into the trained convolutional neural network and outputting the sex through the gender-classification layer.
In this way, a feature representation of the average face is learned by constraining the convolutional neural network with the average face, and the detection model is then trained on it. This effectively resolves the inability of the prior art to express all the features well, enabling accurate sex recognition.
Optionally, classifying the face database in step A includes: classifying according to the sex and skin color corresponding to each face.
This achieves a primary differentiation of faces by skin color and sex, completing the organization of the raw data.
Optionally, step B includes:
Configuring each position pixel of the average face, denoted (X1, X2, …, Xn), as the neural units of the output layer of the convolutional neural network;

Configuring each position pixel of each face of the corresponding class, denoted (O1, O2, …, On), as the neural units of the input layer of the convolutional neural network;
Training the convolutional neural network so as to minimize the difference between each position pixel of each input-layer face and the corresponding position pixel of the output-layer average face.

Wherein, the number of neural units of the output layer of the convolutional neural network matches the number of neural units of the input layer.
In this way, the convolutional and hidden layers of the convolutional neural network are learned from the face-database data, realizing the feature representation at its output.
Optionally, minimizing the difference between each position pixel of each input-layer face and each position pixel of the output-layer average face includes: minimizing the sum of the least-square errors over corresponding position pixels.
Optionally, in step C, the gender-classification layer includes a softmax classification layer.
Optionally, the step of training the gender-classification layer includes: learning with a back-propagation algorithm by minimizing a cross-entropy objective of the form

E(w) = -Σn [ tn·ln(yn) + (1-tn)·ln(1-yn) ] + (λ/2)·‖w‖²,

where w is the parameter of the softmax classification layer, tn is the true sex of the n-th sample in the face database, yn is the sex the model predicts for the n-th sample, and the last term is weight decay.
Optionally, the gender-classification layer includes support-vector-machine classification or logistic-regression classification.
Brief description of the drawings
Fig. 1 is a flow chart of the invention;
Fig. 2 is a schematic diagram of the sex-screening model based on average face features.
Detailed description of the embodiments
In the sex-screening method based on average face features according to the present invention, a convolutional neural network is constrained by the average face to learn a feature representation of the average face, which effectively overcomes the limitation that existing classical hand-designed features cannot express faces well under varied environments and angles.
As shown in Fig. 1, the present invention comprises the following steps:
S10: Classify the face data and compute the average face of each class.
In this embodiment, the face data set is classified by skin color and sex into the following six categories: white-male, white-female, black-male, black-female, yellow-male, yellow-female. The data of each category should include different illumination conditions and poses, scaled to a uniform pixel size. For the face data set under each category, the average face of that category is obtained by computing the per-position pixel mean over all face images in the category, i.e. averaging each facial image at the same position. The computed average face can be expressed as an array, e.g. (X1, X2, …, Xn), whose elements are the per-position pixel means of the average face. This step yields the average faces of the six categories.
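Step S10 can be sketched as follows; the category names, image size, and random data are purely illustrative stand-ins for a real face data set:

```python
import numpy as np

# Hypothetical stand-in for the six (skin color x sex) categories of step S10;
# each category holds several faces already scaled to one pixel size (32x32).
rng = np.random.default_rng(0)
categories = {
    "white-male": rng.random((5, 32, 32)),
    "white-female": rng.random((4, 32, 32)),
    # ...the remaining four categories would follow the same pattern
}

def average_face(faces):
    # Per-position pixel mean over all faces of one category -> (X1, ..., Xn).
    return faces.mean(axis=0)

avg_faces = {name: average_face(f) for name, f in categories.items()}
```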
S20: Train the convolutional neural network (CNN, Convolutional Neural Network) against the average faces of the classes above.

When training the CNN, the numbers of neurons in the input layer and output layer are configured first. Each average face is then used as the output layer, and each face position in the face data set of the class to which that average face belongs is used as the input layer, so that the intermediate layers of the CNN are learned (in this embodiment, "intermediate layers" refers collectively to the convolutional and hidden layers); in other words, the CNN model is trained. This is described in detail below:
In this embodiment, the number of neural units of the CNN output layer is configured to match the input layer, as follows:
As shown in Fig. 2 CNN output layers correspondence average face data, each neural unit of CNN output layers is configured to average face
Each position pixel, be as above expressed as (X1、X2... ..., Xn)。
Each neural unit of CNN input layers is configured to each with the human face data concentration under output layer average face generic
The each position pixel of face, is expressed as (O1、O2... ..., On)。
When training the CNN, the model is pre-trained with a back-propagation algorithm so that, for each sample input (each sample being one face from the face data set of the corresponding class), the sum of squared differences from the corresponding average-face data is minimized. Specifically, the squared difference for each sample is computed as E(w) = 1/2·[(O1-X1)² + … + (On-Xn)²]. Minimizing E(w), i.e. minimizing the difference between each position of the input-layer facial image and the corresponding position of the output-layer average face for every sample, achieves the learning of the convolutional and hidden layers of the CNN. The convolutional and hidden layers of the CNN all contain parameters, and the goal of this learning step is precisely to obtain the values of those parameters. This differs from classical hand-designed features, whose computational steps and parameters are set manually; in the present scheme, the features are obtained by learning on the sample set.
S30: Perform sex classification learning on the features at the CNN output layer of step S20.
A softmax classification layer is appended after the trained CNN. The output-layer neural units of the softmax classification layer are configured as the sexes, divided into two: male and female. The input-layer data of the classification layer is the output of the CNN trained above, i.e. the values obtained by passing (O1, O2, …, On) through the CNN. The softmax classification layer is trained with a back-propagation algorithm based on cross entropy, so that the sex at the output layer matches the feature representation at the input layer. The objective function of the classification layer is E(w) = -Σn [ tn·ln(yn) + (1-tn)·ln(1-yn) ] + (λ/2)·‖w‖²; the direction of learning is to find a w that makes this objective smaller and smaller. Here w is the parameter of the softmax classification layer, tn is the true sex of the n-th sample (the n-th face), yn is the sex-screening value the model outputs for the n-th sample, and the last term is weight decay, whose purpose is to prevent over-fitting.
In this step, only the parameters of the softmax classification layer are learned; the parameters of the CNN trained above no longer change. The softmax classification layer is now simply a classifier that performs gender classification from features. Alternatively, other classification algorithms such as support vector machines or logistic regression can be used to realize gender classification, which will not be described here.
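For the two-class case, step S30 reduces to training a logistic/softmax head with cross entropy plus weight decay on the frozen CNN features. A minimal sketch, where the features and labels are hypothetical stand-ins for the real CNN outputs and ground truth:

```python
import numpy as np

rng = np.random.default_rng(2)
feats = rng.standard_normal((100, 8))      # stand-in for frozen CNN output features
labels = (feats[:, 0] > 0).astype(float)   # t_n: hypothetical true sexes (1 = male)

w = np.zeros(8)                            # classification-layer parameters
b = 0.0
lam = 1e-3                                 # weight-decay strength
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    y = sigmoid(feats @ w + b)             # y_n: predicted probability of 'male'
    err = y - labels                       # gradient factor of the cross-entropy term
    w -= lr * (feats.T @ err / len(feats) + lam * w)   # + weight-decay gradient
    b -= lr * err.mean()

accuracy = float(np.mean((sigmoid(feats @ w + b) > 0.5) == (labels == 1.0)))
```

Only `w` and `b` are updated, mirroring the patent's point that the CNN parameters stay frozen while the classification layer learns; the `lam * w` term implements the weight decay used to prevent over-fitting.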
Step S40: Input a head image of the person to be tested and detect the sex.
In actual sex detection, the head image of the person to be tested is input to the input layer of the CNN, whose output layer then yields the feature representation of the image to be identified; the softmax classification layer performs sex recognition on that feature representation and outputs the recognition result.
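The detection step S40 can be sketched as one forward pass through the two frozen pieces; `cnn_forward`, the classifier weights, and the toy image below are hypothetical stand-ins for the trained components:

```python
import numpy as np

def detect_sex(face_img, cnn_forward, clf_w, clf_b):
    # Forward pass: the frozen CNN yields the feature representation, then the
    # frozen gender layer maps it to a sex label.
    feat = cnn_forward(face_img.ravel())
    score = float(feat @ clf_w + clf_b)
    return "male" if score > 0 else "female"

# Hypothetical stand-ins for the two learned pieces:
cnn_forward = lambda x: x[:4]            # pretend the CNN emits a 4-d feature
clf_w = np.array([1.0, -0.5, 0.0, 0.2])  # toy classification-layer weights
clf_b = 0.0

img = np.full((2, 2), 0.5)               # toy "head image" of the person tested
result = detect_sex(img, cnn_forward, clf_w, clf_b)
```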
In short, the foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (7)
1. A sex-screening method based on average face features, characterized by comprising a learning step and a detection step:
The learning step includes:
A. Classifying a face database and computing the average face of each class of facial images;
B. Configuring each average face as the output layer of a convolutional neural network and each face position in the face data set of the class to which that average face belongs as the input layer of the convolutional neural network, then training the convolutional neural network;

C. Using the output layer of the convolutional neural network as the input layer of a gender-classification layer, with the different sexes as the output layer of the gender-classification layer, and training the gender-classification layer;
The detection step includes:
Feeding a facial image of the person to be tested into the trained convolutional neural network and outputting the sex through the gender-classification layer.
2. The method according to claim 1, characterized in that classifying the face database in step A includes: classifying according to the sex and skin color corresponding to each face.
3. The method according to claim 1, characterized in that step B includes:
Configuring each position pixel of the average face, denoted (X1, X2, …, Xn), as the neural units of the output layer of the convolutional neural network;

Configuring each position pixel of each face of the corresponding class, denoted (O1, O2, …, On), as the neural units of the input layer of the convolutional neural network;

Training the convolutional neural network so as to minimize the difference between each position pixel of each input-layer face and the corresponding position pixel of the output-layer average face;

Wherein the number of neural units of the output layer of the convolutional neural network matches the number of neural units of the input layer of the convolutional neural network.
4. The method according to claim 3, characterized in that minimizing the difference between each position pixel of each input-layer face and each position pixel of the output-layer average face includes: minimizing the sum of the least-square errors over corresponding position pixels.
5. The method according to claim 1, characterized in that in step C the gender-classification layer includes a softmax classification layer.
6. The method according to claim 5, characterized in that the step of training the gender-classification layer in step C includes: learning with a back-propagation algorithm by minimizing a cross-entropy objective of the form

E(w) = -Σn [ tn·ln(yn) + (1-tn)·ln(1-yn) ] + (λ/2)·‖w‖²,

where w is the parameter of the softmax classification layer, tn is the true sex of the n-th sample in the face database, yn is the sex the model predicts for the n-th sample, and the last term is weight decay.
7. The method according to claim 1, characterized in that the gender-classification layer includes support-vector-machine classification or logistic-regression classification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410720504.0A CN104463243B (en) | 2014-12-01 | 2014-12-01 | Sex-screening method based on average face feature |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104463243A CN104463243A (en) | 2015-03-25 |
CN104463243B true CN104463243B (en) | 2017-09-29 |
Family
ID=52909257
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410720504.0A Active CN104463243B (en) | 2014-12-01 | 2014-12-01 | Sex-screening method based on average face feature |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104463243B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850825B (en) * | 2015-04-18 | 2018-04-27 | 中国计量学院 | A kind of facial image face value calculating method based on convolutional neural networks |
CN106469289A (en) * | 2015-08-16 | 2017-03-01 | 联芯科技有限公司 | Facial image sex-screening method and system |
CN105678381B (en) * | 2016-01-08 | 2019-03-08 | 浙江宇视科技有限公司 | A kind of Gender Classification network training method, gender classification method and relevant apparatus |
CN105825191B (en) * | 2016-03-23 | 2020-05-15 | 厦门美图之家科技有限公司 | Gender identification method and system based on face multi-attribute information and shooting terminal |
CN106127159A (en) * | 2016-06-28 | 2016-11-16 | 电子科技大学 | A kind of gender identification method based on convolutional neural networks |
CN107590460B (en) * | 2017-09-12 | 2019-05-03 | 北京达佳互联信息技术有限公司 | Face classification method, apparatus and intelligent terminal |
CN107844338B (en) * | 2017-10-31 | 2019-09-13 | Oppo广东移动通信有限公司 | Application program management-control method, device, medium and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8027521B1 (en) * | 2008-03-25 | 2011-09-27 | Videomining Corporation | Method and system for robust human gender recognition using facial feature localization |
CN103544506A (en) * | 2013-10-12 | 2014-01-29 | Tcl集团股份有限公司 | Method and device for classifying images on basis of convolutional neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011119117A1 (en) * | 2010-03-26 | 2011-09-29 | Agency For Science, Technology And Research | Facial gender recognition |
- 2014-12-01: Application CN201410720504.0A filed in China (patent CN104463243B, status active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8027521B1 (en) * | 2008-03-25 | 2011-09-27 | Videomining Corporation | Method and system for robust human gender recognition using facial feature localization |
CN103544506A (en) * | 2013-10-12 | 2014-01-29 | Tcl集团股份有限公司 | Method and device for classifying images on basis of convolutional neural network |
Non-Patent Citations (3)
Title |
---|
A gender identification method based on convolutional neural networks; Cai Shiwei et al.; Video Engineering (《电视技术》); 2014-10-02; Vol. 38, No. 19; I138-718 *
Gender classification based on face images; Lu Qingqing; China Masters' Theses Full-text Database, Information Science and Technology; 2014-07-15; No. 7; pp. 188-191 *
Research on gender classification based on face images; Zhang Ning; China Masters' Theses Full-text Database, Information Science and Technology; 2011-08-15; No. 08; I138-472 *
Also Published As
Publication number | Publication date |
---|---|
CN104463243A (en) | 2015-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104463243B (en) | Sex-screening method based on average face feature | |
CN104408470B (en) | The sex-screening method learnt in advance based on average face | |
Wang et al. | Interpret neural networks by identifying critical data routing paths | |
CN106650806B (en) | A kind of cooperating type depth net model methodology for pedestrian detection | |
CN106127164B (en) | Pedestrian detection method and device based on conspicuousness detection and convolutional neural networks | |
CN107133616B (en) | Segmentation-free character positioning and identifying method based on deep learning | |
CN109583322B (en) | Face recognition deep network training method and system | |
CN104281853B (en) | A kind of Activity recognition method based on 3D convolutional neural networks | |
CN104866829B (en) | A kind of across age face verification method based on feature learning | |
CN109190665A (en) | A kind of general image classification method and device based on semi-supervised generation confrontation network | |
CN110532900A (en) | Facial expression recognizing method based on U-Net and LS-CNN | |
CN109101938B (en) | Multi-label age estimation method based on convolutional neural network | |
CN106408030B (en) | SAR image classification method based on middle layer semantic attribute and convolutional neural networks | |
CN104992142A (en) | Pedestrian recognition method based on combination of depth learning and property learning | |
CN106529442A (en) | Pedestrian identification method and apparatus | |
CN104484658A (en) | Face gender recognition method and device based on multi-channel convolution neural network | |
CN103971106B (en) | Various visual angles facial image gender identification method and device | |
CN107871107A (en) | Face authentication method and device | |
CN110321870B (en) | Palm vein identification method based on LSTM | |
CN104834941A (en) | Offline handwriting recognition method of sparse autoencoder based on computer input | |
CN108596274A (en) | Image classification method based on convolutional neural networks | |
CN113761259A (en) | Image processing method and device and computer equipment | |
Ramya et al. | Leaf disease detection and classification using neural networks | |
CN109344845A (en) | A kind of feature matching method based on Triplet deep neural network structure | |
CN109034281A (en) | The Chinese handwritten body based on convolutional neural networks is accelerated to know method for distinguishing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP02 | Change in the address of a patent holder |
Address after: Room 101-105, floor 1, Chuangda building, No. 9, Qinghua East Road, Haidian District, Beijing 100083 (Dongsheng District) Patentee after: Thunder Software Technology Co., Ltd. Address before: 100191 Beijing Haidian District Lung Cheung Road No. 1 Tai Xiang 4 storey commercial building Patentee before: Thunder Software Technology Co., Ltd. |
|
CP02 | Change in the address of a patent holder |