CN106295506A - An age recognition method based on ensemble convolutional neural networks - Google Patents
An age recognition method based on ensemble convolutional neural networks
- Publication number
- CN106295506A CN106295506A CN201610592214.1A CN201610592214A CN106295506A CN 106295506 A CN106295506 A CN 106295506A CN 201610592214 A CN201610592214 A CN 201610592214A CN 106295506 A CN106295506 A CN 106295506A
- Authority
- CN
- China
- Prior art keywords
- layer
- convolution kernel
- convolutional layer
- convolution
- neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/178—Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an age recognition method based on ensemble convolutional neural networks. The steps are as follows: S1, obtain training subsets from an age recognition training database and expand them to obtain expanded training subsets; select M convolutional neural network classifiers trained on the expanded training subsets as base classifiers. S2, obtain a facial image to be tested. S3, at test time, input the facial image into each of the M base classifiers obtained in step S1, then fuse the age categories output by the M base classifiers to obtain one final age category. The method achieves high age recognition accuracy, reduces the dependence of facial age feature extraction on human expertise, can estimate the age of diverse populations, and has the advantage of wide applicability.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an age recognition method based on ensemble convolutional neural networks.
Background technology
Facial age recognition is a computer vision technique that extracts age features from a captured facial image and, using image processing and analysis techniques, determines the age category of the face in the image.
Age recognition has wide applications in both academic research and business. In adult entertainment venues such as bars, Internet cafes, or private clubs, an age recognition system can deny entry to minors under 18 years of age. Vending machines that sell cigarettes and alcohol can determine a customer's age category in real time through a built-in camera and refuse to sell tobacco or alcohol products to minors under 18. In public computer services, such as digital reading rooms, an automatic age recognition system can prevent minors from browsing websites with adult content and X-rated films. In some electronic customer management systems, the customer's facial image is captured at login and the age is recognized automatically, so that the consumption preferences and shopping characteristics of different age groups can be collected, analyzed, and mined without disturbing the customer, and personalized products or services can then be offered to each age group based on the analysis results. In human-computer interaction, interfaces and access control tailored to the characteristics of each age group can be provided.
Many age recognition methods exist today. An age recognition system is generally divided into two parts: feature extraction from the facial image, and selection of an age classification algorithm. Classification relies on machine learning algorithms; notably effective ones include support vector machines (SVM), neural networks, K-nearest neighbors, Gaussian mixture models (GMM), random forests, and ensemble learning. Researchers at home and abroad mostly adopt these classification algorithms when tackling age recognition, but such algorithms depend heavily on the features extracted from the facial image. Current feature extraction methods are hand-engineered, such as active appearance models and facial measurement models; redundant or irrelevant features are then removed by a feature selection algorithm to obtain an optimal or near-optimal feature subset. The purpose of this step is partly to improve recognition accuracy and partly to reduce feature dimensionality and thereby speed up model training. However, this process relies heavily on the experience of human experts and requires repeated experiments. It is not only labor-intensive, but also makes it difficult to find an optimal facial age feature representation, which limits the performance of facial age recognition.
Summary of the invention
It is an object of the present invention to overcome the shortcomings and deficiencies of the prior art by providing an age recognition method, based on ensemble convolutional neural networks, with high recognition accuracy.
The purpose of the present invention is achieved through the following technical solution: an age recognition method based on ensemble convolutional neural networks, characterized by the following steps:
S1: obtain training subsets from an age recognition training database and expand them to obtain expanded training subsets; select M convolutional neural network classifiers trained on the expanded training subsets as base classifiers.
S2: obtain the facial image to be tested.
S3: at test time, input the facial image into each of the M base classifiers obtained in step S1, then fuse the age categories output by the M base classifiers to obtain one final age category.
Preferably, the base classifiers in step S1 are obtained as follows:
S11: divide the age recognition training database into a training set and a validation set, where the database contains facial images and the age category corresponding to each facial image;
S12: randomly sample the training set N times to obtain N training subsets;
S13: automatically expand the N training subsets obtained in step S12 using image transformation methods, obtaining N expanded training subsets;
S14: randomly generate N convolutional neural network models, then train each of the N models on the corresponding expanded training subset from step S13, obtaining N convolutional neural network classifiers;
S15: compute the recognition accuracy of the N convolutional neural network classifiers on the validation set of the age recognition training database;
S16: select the M convolutional neural network classifiers with the highest recognition accuracy as base classifiers.
Further, the N rounds of random sampling of the training set in step S12 are random sampling with replacement.
Further, the image transformation methods in step S13 include applying image rotation, RGB channel perturbation, and additive Gaussian noise to the facial images in the training subsets.
Further, functions from the Python imaging library are used to rotate the facial images in the training subsets.
Preferably, M is 6; that is, 6 convolutional neural network classifiers are selected in step S1 as base classifiers.
Further, of the 6 base classifiers, the first base classifier, the second base classifier, and the third base classifier are obtained by training convolutional neural network models with four convolutional layers, and the fourth base classifier, the fifth base classifier, and the sixth base classifier are obtained by training models with three convolutional layers.
Further, in the four-convolutional-layer model trained to obtain the first base classifier, the layers between the input layer and the output layer are, in order: the first convolutional layer conv11, the first down-sampling layer pool11, the second convolutional layer conv12, the second down-sampling layer pool12, the third convolutional layer conv13, the fourth convolutional layer conv14, the third down-sampling layer pool15, the first fully connected layer fc16, and the second fully connected layer fc17. Conv11 has 96 convolution kernels of size 9*9; conv12 has 256 kernels of size 7*7; conv13 has 256 kernels of size 5*5; conv14 has 256 kernels of size 3*3.
In the four-convolutional-layer model trained to obtain the second base classifier, the layers between the input layer and the output layer are, in order: the first convolutional layer conv21, the first down-sampling layer pool21, the second convolutional layer conv22, the second down-sampling layer pool22, the third convolutional layer conv23, the fourth convolutional layer conv24, the third down-sampling layer pool25, the first fully connected layer fc26, and the second fully connected layer fc27. Conv21 has 128 convolution kernels of size 9*9; conv22 has 256 kernels of size 7*7; conv23 has 256 kernels of size 3*3; conv24 has 384 kernels of size 3*3.
In the four-convolutional-layer model trained to obtain the third base classifier, the layers between the input layer and the output layer are, in order: the first convolutional layer conv31, the first down-sampling layer pool31, the second convolutional layer conv32, the second down-sampling layer pool32, the third convolutional layer conv33, the fourth convolutional layer conv34, the third down-sampling layer pool35, the first fully connected layer fc36, and the second fully connected layer fc37. Conv31 has 96 convolution kernels of size 7*7; conv32 has 256 kernels of size 5*5; conv33 has 512 kernels of size 5*5; conv34 has 384 kernels of size 3*3.
In the three-convolutional-layer model trained to obtain the fourth base classifier, the layers between the input layer and the output layer are, in order: the first convolutional layer conv41, the first down-sampling layer pool41, the second convolutional layer conv42, the second down-sampling layer pool42, the third convolutional layer conv43, the third down-sampling layer pool45, the first fully connected layer fc46, and the second fully connected layer fc47. Conv41 has 96 convolution kernels of size 9*9; conv42 has 256 kernels of size 7*7; conv43 has 256 kernels of size 5*5.
In the three-convolutional-layer model trained to obtain the fifth base classifier, the layers between the input layer and the output layer are, in order: the first convolutional layer conv51, the first down-sampling layer pool51, the second convolutional layer conv52, the second down-sampling layer pool52, the third convolutional layer conv53, the third down-sampling layer pool55, the first fully connected layer fc56, and the second fully connected layer fc57. Conv51 has 128 convolution kernels of size 9*9; conv52 has 256 kernels of size 7*7; conv53 has 384 kernels of size 5*5.
In the three-convolutional-layer model trained to obtain the sixth base classifier, the layers between the input layer and the output layer are, in order: the first convolutional layer conv61, the first down-sampling layer pool61, the second convolutional layer conv62, the second down-sampling layer pool62, the third convolutional layer conv63, the third down-sampling layer pool65, the first fully connected layer fc66, and the second fully connected layer fc67. Conv61 has 96 convolution kernels of size 7*7; conv62 has 256 kernels of size 5*5; conv63 has 384 kernels of size 3*3.
Preferably, the classifier in each convolutional neural network classifier is a SoftMax classifier.
Preferably, in step S3 a simple majority voting fusion method is used to fuse the age categories output by the M base classifiers and obtain one final age category.
Compared with the prior art, the present invention has the following advantages and effects:
(1) The method first selects, as base classifiers, M convolutional neural network classifiers trained on expanded training subsets from the age recognition training database; it then inputs the facial image to be tested into the M base classifiers, and finally fuses the age categories output by the M base classifiers to obtain one final age category. The method achieves high age recognition accuracy, reduces the dependence of facial age feature extraction on human expertise, can estimate the age of diverse populations, and is widely applicable. In addition, the training subsets used to train the convolutional neural network classifiers are expanded training subsets; the expansion effectively increases the number of samples in each subset, allows the convolutional neural network models to be trained sufficiently, and further improves the age recognition accuracy of the classifiers.
(2) The method randomly generates N convolutional neural network models and trains them on the expanded training subsets from the age recognition training database to obtain the corresponding classifiers; it then uses the validation set of the database to measure the age recognition accuracy of each classifier, and finally selects the M most accurate classifiers as base classifiers, which greatly improves the method's age recognition accuracy.
(3) The selected base classifiers can be trained from convolutional neural network models with different structures; by integrating the recognition results of multiple structurally diverse models, the method obtains better facial age recognition performance and further improves recognition accuracy.
Brief description of the drawings
Fig. 1 is a block diagram of base classifier generation in the method of the invention.
Fig. 2 is a structural block diagram of the convolutional neural network model with four convolutional layers.
Fig. 3 is a structural block diagram of the convolutional neural network model with three convolutional layers.
Fig. 4 is a schematic diagram of the age recognition stage in the method of the invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the embodiment and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
This embodiment discloses an age recognition method based on ensemble convolutional neural networks, with the following steps:
S1: obtain training subsets from an age recognition training database and expand them to obtain expanded training subsets; select M convolutional neural network classifiers trained on the expanded training subsets as base classifiers. In this step, the base classifiers are obtained as follows:
S11: divide the age recognition training database into a training set and a validation set, where the database contains facial images and the age category corresponding to each facial image.
S12: randomly sample the training set, with replacement, N times to obtain N training subsets, where N may range from 5 to 500.
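The bootstrap step S12 can be sketched as follows. This is a minimal illustration, assuming the training set is a list of (image, age category) pairs; the patent does not specify the subset size, so it is taken equal to the training set size here:

```python
import random

def bootstrap_subsets(train_set, n_subsets, subset_size=None, seed=0):
    """Draw n_subsets random subsets from train_set with replacement (step S12)."""
    rng = random.Random(seed)
    size = subset_size or len(train_set)
    return [[train_set[rng.randrange(len(train_set))] for _ in range(size)]
            for _ in range(n_subsets)]

# Example: N = 5 subsets drawn from a toy training set of 10 labelled samples.
train_set = [("face_%d.jpg" % i, i % 7) for i in range(10)]
subsets = bootstrap_subsets(train_set, n_subsets=5)
```

Because sampling is with replacement, a subset may contain duplicates of some samples and omit others, which is what makes the N trained models differ even before the architectures are varied.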
S13: automatically expand the N training subsets obtained in step S12 using image transformation methods, obtaining N expanded training subsets, denoted expanded training subset 1, expanded training subset 2, ..., expanded training subset N.
S14: randomly generate N convolutional neural network models B_CNN_1, ..., B_CNN_N, then train each model on the corresponding expanded training subset from step S13, obtaining N convolutional neural network classifiers CNN1, ..., CNNN, as shown in Fig. 1. In this embodiment, the classifier in each convolutional neural network classifier is a SoftMax classifier.
S15: compute the recognition accuracy of the N classifiers on the validation set of the age recognition training database.
S16: select the M convolutional neural network classifiers with the highest recognition accuracy as base classifiers.
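Steps S15-S16 reduce to ranking the N trained classifiers by validation accuracy and keeping the top M. A sketch, with hypothetical accuracy figures (the classifier objects are represented by their names):

```python
def select_base_classifiers(classifiers, val_accuracies, m):
    """Keep the M classifiers with the highest validation accuracy (steps S15-S16)."""
    ranked = sorted(zip(classifiers, val_accuracies), key=lambda p: p[1], reverse=True)
    return [clf for clf, _ in ranked[:m]]

# Hypothetical validation accuracies for N = 8 trained classifiers; M = 6 are kept.
names = ["CNN%d" % i for i in range(1, 9)]
accs = [0.41, 0.55, 0.48, 0.60, 0.39, 0.52, 0.44, 0.57]
base = select_base_classifiers(names, accs, m=6)
```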
S2: obtain the facial image to be tested.
S3: at test time, input the facial image into each of the M base classifiers obtained in step S1, then fuse the age categories output by the M base classifiers to obtain one final age category. In this step, a simple majority voting fusion method is used.
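The fusion in step S3 is a plain majority vote over the M predicted categories. A sketch follows; the tie-break rule (the earliest classifier's category wins) is an assumption, since the patent does not address ties:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the age category predicted by the most base classifiers (step S3)."""
    # Counter.most_common breaks ties by first insertion order, i.e. by the
    # earliest classifier in the list -- an assumed tie-break rule.
    return Counter(predictions).most_common(1)[0][0]

# Six base classifiers vote on the age category of one test face.
votes = ["25-32", "25-32", "38-43", "25-32", "8-12", "25-32"]
final_category = majority_vote(votes)
```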
The image transformation methods used in step S13 of this embodiment include image rotation, RGB channel perturbation, and additive Gaussian noise applied to the facial images in the training subsets.
Image rotation: rotating a given input facial image around some point by a certain angle, clockwise or counterclockwise. This embodiment uses functions from the Python Imaging Library (PIL) to rotate the facial images in the training set.
RGB channel perturbation: an image in the RGB model is represented by the three primary colors R, G, and B; any natural color can be composed from the three primaries in different proportions. By changing the value of each RGB channel, different derivative images of the original can be produced; here the RGB values of the original image are randomly perturbed.
Additive Gaussian noise: noise in an image refers to impurities that interfere with it; adding Gaussian noise to an image produces a new image sample.
In this embodiment, the transformations applied to each facial image in a training subset are: rotate clockwise by 5°, rotate counterclockwise by 5°, perturb each of the three RGB channels separately, and add Gaussian noise. Each image thus yields 6 additional images, expanding the training subset to 7 times its original size.
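The 1-to-7 expansion can be sketched on a numpy image array. The patent uses PIL's rotation; to stay dependency-free this sketch substitutes a nearest-neighbour rotation, and the perturbation and noise magnitudes are assumptions not given in the patent:

```python
import numpy as np

def rotate(img, degrees):
    """Nearest-neighbour rotation about the image centre (a stand-in for the
    PIL Image.rotate call the embodiment uses)."""
    h, w = img.shape[:2]
    theta = np.deg2rad(degrees)
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Inverse-map each output pixel back into the source image.
    sx = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    sy = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return img[sy, sx]

def expand(img, rng):
    """One image -> 7 images: original, +/-5 degree rotations, one perturbation
    per RGB channel, and additive Gaussian noise (step S13)."""
    out = [img, rotate(img, 5), rotate(img, -5)]
    for c in range(3):                                  # perturb R, G, B separately
        jittered = img.astype(float).copy()
        jittered[..., c] += rng.uniform(-10, 10)        # assumed magnitude
        out.append(np.clip(jittered, 0, 255).astype(np.uint8))
    noisy = img + rng.normal(0, 8, img.shape)           # assumed noise sigma
    out.append(np.clip(noisy, 0, 255).astype(np.uint8))
    return out

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
variants = expand(face, rng)
```

Applied to every image of a training subset, this yields the stated sevenfold expansion.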
In step S1 of this embodiment, M is 6; that is, 6 convolutional neural network classifiers are selected as base classifiers.
Of the 6 base classifiers in this embodiment, the first base classifier CNN1, the second base classifier CNN2, and the third base classifier CNN3 are obtained by training convolutional neural network models with four convolutional layers, and the fourth base classifier CNN4, the fifth base classifier CNN5, and the sixth base classifier CNN6 are obtained by training models with three convolutional layers. As shown in Fig. 2, the layers of the four-convolutional-layer model between the input layer and the output layer are, in order: the first convolutional layer conv1, the first down-sampling layer pool1, the second convolutional layer conv2, the second down-sampling layer pool2, the third convolutional layer conv3, the fourth convolutional layer conv4, the third down-sampling layer pool5, the first fully connected layer fc6, and the second fully connected layer fc7. As shown in Fig. 3, the layers of the three-convolutional-layer model between the input layer and the output layer are, in order: conv1, pool1, conv2, pool2, conv3, pool5, fc6, and fc7.
In this embodiment, in the four-convolutional-layer model B_CNN_1 trained to obtain the first base classifier, the layers between the input layer and the output layer are, in order: conv11, pool11, conv12, pool12, conv13, conv14, pool15, fc16, and fc17. Conv11 has 96 convolution kernels of size 9*9; conv12 has 256 kernels of size 7*7; conv13 has 256 kernels of size 5*5; conv14 has 256 kernels of size 3*3.
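The patent fixes only the kernel counts and sizes; the input resolution, convolution strides, pooling sizes, and padding are not stated. Under the assumed settings of a 227*227 input, stride-1 "valid" convolutions, and 2*2 stride-2 pooling, the feature-map sizes through B_CNN_1 can be checked with the usual formula out = floor((in - k) / stride) + 1:

```python
def out_size(n, k, stride=1):
    """Spatial size after a 'valid' convolution or pooling layer."""
    return (n - k) // stride + 1

# Assumed settings: 227x227 input, stride-1 convolutions, 2x2 stride-2 pooling.
n = 227
trace = []
for name, k, s in [("conv11", 9, 1), ("pool11", 2, 2),
                   ("conv12", 7, 1), ("pool12", 2, 2),
                   ("conv13", 5, 1), ("conv14", 3, 1), ("pool15", 2, 2)]:
    n = out_size(n, k, s)
    trace.append((name, n))
# trace walks 227 -> 219 -> 109 -> 103 -> 51 -> 47 -> 45 -> 22 under these assumptions.
```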
In the four-convolutional-layer model B_CNN_2 trained to obtain the second base classifier, the layers between the input layer and the output layer are, in order: conv21, pool21, conv22, pool22, conv23, conv24, pool25, fc26, and fc27. Conv21 has 128 convolution kernels of size 9*9; conv22 has 256 kernels of size 7*7; conv23 has 256 kernels of size 3*3; conv24 has 384 kernels of size 3*3.
In the four-convolutional-layer model B_CNN_3 trained to obtain the third base classifier, the layers between the input layer and the output layer are, in order: conv31, pool31, conv32, pool32, conv33, conv34, pool35, fc36, and fc37. Conv31 has 96 convolution kernels of size 7*7; conv32 has 256 kernels of size 5*5; conv33 has 512 kernels of size 5*5; conv34 has 384 kernels of size 3*3.
In the three-convolutional-layer model B_CNN_4 trained to obtain the fourth base classifier, the layers between the input layer and the output layer are, in order: conv41, pool41, conv42, pool42, conv43, pool45, fc46, and fc47. Conv41 has 96 convolution kernels of size 9*9; conv42 has 256 kernels of size 7*7; conv43 has 256 kernels of size 5*5.
In the three-convolutional-layer model B_CNN_5 trained to obtain the fifth base classifier, the layers between the input layer and the output layer are, in order: conv51, pool51, conv52, pool52, conv53, pool55, fc56, and fc57. Conv51 has 128 convolution kernels of size 9*9; conv52 has 256 kernels of size 7*7; conv53 has 384 kernels of size 5*5.
In the three-convolutional-layer model B_CNN_6 trained to obtain the sixth base classifier, the layers between the input layer and the output layer are, in order: conv61, pool61, conv62, pool62, conv63, pool65, fc66, and fc67. Conv61 has 96 convolution kernels of size 7*7; conv62 has 256 kernels of size 5*5; conv63 has 384 kernels of size 3*3.
The convolutional layer weights of the models in this embodiment are randomly initialized from a Gaussian distribution with mean 0 and variance 0.01; the random drop rate of each fully connected Dropout layer is set to 0.5; the training batch size is set to 128, the momentum to 0.9, the initial learning rate to 0.01, and the number of iterations to 50000.
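The initialization and training settings above can be written down directly; a variance of 0.01 corresponds to a standard deviation of 0.1, which is how it is read in this sketch:

```python
import numpy as np

TRAIN_CONFIG = {            # training settings as stated in the embodiment
    "dropout": 0.5,
    "batch_size": 128,
    "momentum": 0.9,
    "base_lr": 0.01,
    "iterations": 50000,
}

def init_conv_weights(n_kernels, channels, k, rng):
    """Gaussian init with mean 0 and variance 0.01 (i.e. std = 0.1)."""
    return rng.normal(0.0, np.sqrt(0.01), size=(n_kernels, channels, k, k))

rng = np.random.default_rng(0)
w_conv11 = init_conv_weights(96, 3, 9, rng)   # conv11: 96 kernels of size 9*9
```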
As shown in Fig. 4, in step S3 of this embodiment, the facial image to be tested is input into each of the first base classifier CNN1, the second base classifier CNN2, the third base classifier CNN3, the fourth base classifier CNN4, the fifth base classifier CNN5, and the sixth base classifier CNN6. Simple majority voting is then used to fuse the age categories output by these 6 base classifiers: the recognition results are tallied, and the age category with the most votes is taken as the final age category of the input facial image. The ensemble of convolutional neural networks in this embodiment thus performs recognition with a set of classifiers built from multiple structurally diverse convolutional neural networks, in order to obtain better age recognition performance.
Table 1 below compares, with the data sets Adience and Gallagher used as the age recognition training database, the recognition accuracy of the age recognition method of this embodiment (ensemble + image transformation + CNNs) against several prior art age recognition methods. Accuracy is measured by the commonly used WA metric:
WA = (number of correctly recognized samples) / (total number of test samples)
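The WA figure of merit is plain classification accuracy over all test samples, and can be computed as follows:

```python
def weighted_accuracy(predicted, actual):
    """WA = (correctly recognized samples) / (total test samples)."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Toy example: 4 of 5 test faces classified into the right age category.
pred = ["0-2", "4-6", "8-12", "25-32", "38-43"]
gold = ["0-2", "4-6", "8-12", "25-32", "48-53"]
wa = weighted_accuracy(pred, gold)
```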
Table 1
The Adience data set was created by Eran Eidinger et al. for studying age and gender classification of facial images under non-laboratory conditions. Its images are all real photographs taken in real-life situations, affected by factors such as varying illumination, poses, backgrounds and expressions, which makes the face age recognition problem under production conditions more meaningful to study. The Adience data set divides age into 8 categories in total: 0-2, 4-6, 8-12, 15-20, 25-32, 38-43, 48-53 and >60 years. The Gallagher data set consists of photographs collected from Flickr.com by Andrew C. Gallagher et al., all taken in real-life situations. Likewise, these pictures are affected by varying illumination, poses, backgrounds, expressions and other factors, and the data set is currently widely used in research on image-based age recognition under non-laboratory conditions. The Gallagher data set divides age into 7 categories in total: 0-2, 3-7, 8-12, 13-19, 20-36, 37-65 and >66 years. As Table 1 shows, the age recognition method of this embodiment achieves the highest accuracy compared with the prior-art age recognition methods.
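For reference, mapping a numeric age to one of the Gallagher categories can be written with the standard `bisect` module (an illustrative helper, not part of the patent; the Adience bins work the same way with different boundaries, and assigning exact boundary ages such as 66 to the upper bin is an assumption, since the ranges leave small gaps):

```python
import bisect

# Lower bound of each Gallagher category after the first:
# 0-2 | 3-7 | 8-12 | 13-19 | 20-36 | 37-65 | >=66
GALLAGHER_BOUNDS = [3, 8, 13, 20, 37, 66]

def gallagher_category(age):
    """Return the 0-based index of the age category containing `age`."""
    return bisect.bisect_right(GALLAGHER_BOUNDS, age)

gallagher_category(5)   # -> 1 (3-7 years)
gallagher_category(70)  # -> 6 (>=66 years)
```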
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited to it; any change, modification, substitution, combination or simplification made without departing from the spirit and principles of the present invention shall be an equivalent substitution and fall within the protection scope of the present invention.
Claims (10)
1. An age recognition method based on an ensemble of convolutional neural networks, characterized in that the steps are as follows:
S1. obtaining training subsets from an age recognition training database and expanding them to obtain expanded training subsets; selecting M convolutional neural network classifiers trained on the expanded training subsets as base classifiers;
S2. obtaining a facial image to be tested;
S3. at test time, inputting the facial image to be tested into each of the M base classifiers obtained in step S1, then fusing the age categories output by the M base classifiers to obtain a final age category.
2. The age recognition method based on an ensemble of convolutional neural networks according to claim 1, characterized in that in step S1 the base classifiers are obtained as follows:
S11. dividing the age recognition training database into a training set and a validation set, wherein the age recognition training database comprises facial images and the age category corresponding to each facial image;
S12. randomly sampling the training set N times to obtain N training subsets;
S13. automatically expanding the N training subsets obtained in step S12 using image transformation methods to obtain N expanded training subsets;
S14. randomly generating N convolutional neural network models, then training the N convolutional neural network models respectively with the N expanded training subsets obtained in step S13 to obtain N convolutional neural network classifiers;
S15. calculating the recognition accuracy of the N convolutional neural network classifiers on the validation set of the age recognition training database;
S16. selecting the M convolutional neural network classifiers ranked top M by recognition accuracy as the base classifiers.
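Steps S11-S16 amount to a bagging-style pipeline: sample N subsets with replacement, train N classifiers, and keep the M most accurate on the validation set. A schematic sketch (CNN training itself is stubbed out; `accuracy` is a hypothetical stand-in for the evaluation in step S15):

```python
import random

def bootstrap_subsets(training_set, n):
    """S12: N random samplings of the training set,
    with replacement (see claim 3)."""
    size = len(training_set)
    return [random.choices(training_set, k=size) for _ in range(n)]

def select_base_classifiers(classifiers, validation_set, accuracy, m):
    """S15-S16: rank the N trained classifiers by validation
    accuracy and keep the top M as base classifiers."""
    ranked = sorted(classifiers,
                    key=lambda c: accuracy(c, validation_set),
                    reverse=True)
    return ranked[:m]
```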
3. The age recognition method based on an ensemble of convolutional neural networks according to claim 2, characterized in that the N random samplings of the training set in step S12 are random samplings with replacement.
4. The age recognition method based on an ensemble of convolutional neural networks according to claim 2, characterized in that the image transformation methods in step S13 include applying image rotation, image RGB channel perturbation and additive Gaussian noise to the facial images of the training subsets.
5. The age recognition method based on an ensemble of convolutional neural networks according to claim 4, characterized in that functions of a Python image processing library are used to apply the image rotation transformation to the facial images in the training subsets.
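The transformations of claims 4-5 (rotation, RGB channel perturbation, additive Gaussian noise) might be sketched as follows. This is an illustrative pure-Python version operating on a flat list of RGB pixel tuples; in practice a Python image library (e.g. PIL's `Image.rotate`) would handle the rotation, as claim 5 suggests. The perturbation magnitudes are assumptions, not values from the patent:

```python
import random

def perturb_rgb(pixels, max_shift=10):
    """RGB channel perturbation: add one random offset per channel,
    the same offset for every pixel, clamped to 0-255."""
    shifts = [random.randint(-max_shift, max_shift) for _ in range(3)]
    return [tuple(min(255, max(0, c + s)) for c, s in zip(px, shifts))
            for px in pixels]

def add_gaussian_noise(pixels, sigma=5.0):
    """Additive Gaussian noise per channel, clamped to 0-255."""
    return [tuple(min(255, max(0, round(c + random.gauss(0.0, sigma))))
                  for c in px)
            for px in pixels]
```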
6. The age recognition method based on an ensemble of convolutional neural networks according to claim 2, characterized in that M is 6, i.e. 6 convolutional neural network classifiers are selected as base classifiers in step S1, so that there are 6 base classifiers.
7. The age recognition method based on an ensemble of convolutional neural networks according to claim 6, characterized in that the 6 base classifiers are, respectively, the first base classifier, the second base classifier and the third base classifier, each obtained by training a convolutional neural network model with four convolutional layers, and the fourth base classifier, the fifth base classifier and the sixth base classifier, each obtained by training a convolutional neural network model with three convolutional layers.
8. The age recognition method based on an ensemble of convolutional neural networks according to claim 7, characterized in that, from the input layer to the output layer, the four-convolutional-layer convolutional neural network model trained to obtain the first base classifier consists, in order, of a first convolutional layer conv11, a first down-sampling layer pool11, a second convolutional layer conv12, a second down-sampling layer pool12, a third convolutional layer conv13, a fourth convolutional layer conv14, a third down-sampling layer pool15, a first fully connected layer fc16 and a second fully connected layer fc17; wherein the first convolutional layer conv11 has 96 convolution kernels of size 9*9, the second convolutional layer conv12 has 256 convolution kernels of size 7*7, the third convolutional layer conv13 has 256 convolution kernels of size 5*5, and the fourth convolutional layer conv14 has 256 convolution kernels of size 3*3;
from the input layer to the output layer, the four-convolutional-layer convolutional neural network model trained to obtain the second base classifier consists, in order, of a first convolutional layer conv21, a first down-sampling layer pool21, a second convolutional layer conv22, a second down-sampling layer pool22, a third convolutional layer conv23, a fourth convolutional layer conv24, a third down-sampling layer pool25, a first fully connected layer fc26 and a second fully connected layer fc27; wherein the first convolutional layer conv21 has 128 convolution kernels of size 9*9, the second convolutional layer conv22 has 256 convolution kernels of size 7*7, the third convolutional layer conv23 has 256 convolution kernels of size 3*3, and the fourth convolutional layer conv24 has 384 convolution kernels of size 3*3;
from the input layer to the output layer, the four-convolutional-layer convolutional neural network model trained to obtain the third base classifier consists, in order, of a first convolutional layer conv31, a first down-sampling layer pool31, a second convolutional layer conv32, a second down-sampling layer pool32, a third convolutional layer conv33, a fourth convolutional layer conv34, a third down-sampling layer pool35, a first fully connected layer fc36 and a second fully connected layer fc37; wherein the first convolutional layer conv31 has 96 convolution kernels of size 7*7, the second convolutional layer conv32 has 256 convolution kernels of size 5*5, the third convolutional layer conv33 has 512 convolution kernels of size 5*5, and the fourth convolutional layer conv34 has 384 convolution kernels of size 3*3;
from the input layer to the output layer, the three-convolutional-layer convolutional neural network model trained to obtain the fourth base classifier consists, in order, of a first convolutional layer conv41, a first down-sampling layer pool41, a second convolutional layer conv42, a second down-sampling layer pool42, a third convolutional layer conv43, a third down-sampling layer pool45, a first fully connected layer fc46 and a second fully connected layer fc47; wherein the first convolutional layer conv41 has 96 convolution kernels of size 9*9, the second convolutional layer conv42 has 256 convolution kernels of size 7*7, and the third convolutional layer conv43 has 256 convolution kernels of size 5*5;
from the input layer to the output layer, the three-convolutional-layer convolutional neural network model trained to obtain the fifth base classifier consists, in order, of a first convolutional layer conv51, a first down-sampling layer pool51, a second convolutional layer conv52, a second down-sampling layer pool52, a third convolutional layer conv53, a third down-sampling layer pool55, a first fully connected layer fc56 and a second fully connected layer fc57; wherein the first convolutional layer conv51 has 128 convolution kernels of size 9*9, the second convolutional layer conv52 has 256 convolution kernels of size 7*7, and the third convolutional layer conv53 has 384 convolution kernels of size 5*5;
from the input layer to the output layer, the three-convolutional-layer convolutional neural network model trained to obtain the sixth base classifier consists, in order, of a first convolutional layer conv61, a first down-sampling layer pool61, a second convolutional layer conv62, a second down-sampling layer pool62, a third convolutional layer conv63, a third down-sampling layer pool65, a first fully connected layer fc66 and a second fully connected layer fc67; wherein the first convolutional layer conv61 has 96 convolution kernels of size 7*7, the second convolutional layer conv62 has 256 convolution kernels of size 5*5, and the third convolutional layer conv63 has 384 convolution kernels of size 3*3.
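From the kernel counts and sizes in claim 8, the convolutional-layer weight counts of each network can be tallied directly. A sketch for the first base classifier CNN1, assuming a 3-channel RGB input (the input channel count is not stated in the claim) and ignoring biases:

```python
# (kernel count, kernel size) for conv11..conv14 of the first base classifier
CNN1_CONV = [(96, 9), (256, 7), (256, 5), (256, 3)]

def conv_weight_counts(layers, in_channels=3):
    """Weights per layer = kernels * in_channels * k * k (biases ignored)."""
    counts = []
    for kernels, k in layers:
        counts.append(kernels * in_channels * k * k)
        in_channels = kernels  # next layer sees this layer's output channels
    return counts

conv_weight_counts(CNN1_CONV)
# conv11: 96*3*9*9 = 23328; conv12: 256*96*7*7 = 1204224; ...
```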
9. The age recognition method based on an ensemble of convolutional neural networks according to claim 1, characterized in that the classifier within each convolutional neural network classifier is a SoftMax classifier.
10. The age recognition method based on an ensemble of convolutional neural networks according to claim 1, characterized in that in step S3 a simple majority-voting fusion method is used to fuse the age categories output by the M base classifiers to obtain a final age category.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610592214.1A CN106295506A (en) | 2016-07-25 | 2016-07-25 | A kind of age recognition methods based on integrated convolutional neural networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610592214.1A CN106295506A (en) | 2016-07-25 | 2016-07-25 | A kind of age recognition methods based on integrated convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106295506A true CN106295506A (en) | 2017-01-04 |
Family
ID=57652472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610592214.1A Pending CN106295506A (en) | 2016-07-25 | 2016-07-25 | A kind of age recognition methods based on integrated convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106295506A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106951867A (en) * | 2017-03-22 | 2017-07-14 | 成都擎天树科技有限公司 | Face identification method, device, system and equipment based on convolutional neural networks |
CN106980830A (en) * | 2017-03-17 | 2017-07-25 | 中国人民解放军国防科学技术大学 | One kind is based on depth convolutional network from affiliation recognition methods and device |
CN107169454A (en) * | 2017-05-16 | 2017-09-15 | 中国科学院深圳先进技术研究院 | A kind of facial image age estimation method, device and its terminal device |
CN107437099A (en) * | 2017-08-03 | 2017-12-05 | 哈尔滨工业大学 | A kind of specific dress ornament image recognition and detection method based on machine learning |
CN107545245A (en) * | 2017-08-14 | 2018-01-05 | 中国科学院半导体研究所 | A kind of age estimation method and equipment |
CN107622261A (en) * | 2017-11-03 | 2018-01-23 | 北方工业大学 | Face age estimation method and device based on deep learning |
CN107704816A (en) * | 2017-09-27 | 2018-02-16 | 珠海格力电器股份有限公司 | The boiling method and device of food |
CN107729078A (en) * | 2017-09-30 | 2018-02-23 | 广东欧珀移动通信有限公司 | Background application management-control method, device, storage medium and electronic equipment |
CN108021863A (en) * | 2017-11-01 | 2018-05-11 | 平安科技(深圳)有限公司 | Electronic device, the character classification by age method based on image and storage medium |
CN108109152A (en) * | 2018-01-03 | 2018-06-01 | 深圳北航新兴产业技术研究院 | Medical Images Classification and dividing method and device |
CN108537026A (en) * | 2018-03-30 | 2018-09-14 | 百度在线网络技术(北京)有限公司 | application control method, device and server |
CN108596042A (en) * | 2018-03-29 | 2018-09-28 | 青岛海尔智能技术研发有限公司 | Enabling control method and system |
CN109063750A (en) * | 2018-07-17 | 2018-12-21 | 西安电子科技大学 | SAR target classification method based on CNN and SVM decision fusion |
CN109726703A (en) * | 2019-01-11 | 2019-05-07 | 浙江工业大学 | A kind of facial image age recognition methods based on improvement integrated study strategy |
CN110956190A (en) * | 2018-09-27 | 2020-04-03 | 深圳云天励飞技术有限公司 | Image recognition method and device, computer device and computer readable storage medium |
CN111242235A (en) * | 2020-01-19 | 2020-06-05 | 中国科学院计算技术研究所厦门数据智能研究院 | Similar characteristic test data set generation method |
CN111680664A (en) * | 2020-06-22 | 2020-09-18 | 南方电网科学研究院有限责任公司 | Face image age identification method, device and equipment |
CN112070535A (en) * | 2020-09-03 | 2020-12-11 | 常州微亿智造科技有限公司 | Electric vehicle price prediction method and device |
CN112446310A (en) * | 2020-11-19 | 2021-03-05 | 杭州趣链科技有限公司 | Age identification system, method and device based on block chain |
CN112656431A (en) * | 2020-12-15 | 2021-04-16 | 中国科学院深圳先进技术研究院 | Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium |
CN115205637A (en) * | 2022-09-19 | 2022-10-18 | 山东世纪矿山机电有限公司 | Intelligent identification method for mine car materials |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101099675A (en) * | 2007-07-26 | 2008-01-09 | 上海交通大学 | Method for detecting human face with weak sorter composite coefficient |
CN103632168A (en) * | 2013-12-09 | 2014-03-12 | 天津工业大学 | Classifier integration method for machine learning |
CN105354565A (en) * | 2015-12-23 | 2016-02-24 | 北京市商汤科技开发有限公司 | Full convolution network based facial feature positioning and distinguishing method and system |
CN105426963A (en) * | 2015-12-01 | 2016-03-23 | 北京天诚盛业科技有限公司 | Convolutional neural network Training method and apparatus for human face identification and application |
- 2016
- 2016-07-25 CN CN201610592214.1A patent/CN106295506A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101099675A (en) * | 2007-07-26 | 2008-01-09 | 上海交通大学 | Method for detecting human face with weak sorter composite coefficient |
CN103632168A (en) * | 2013-12-09 | 2014-03-12 | 天津工业大学 | Classifier integration method for machine learning |
CN105426963A (en) * | 2015-12-01 | 2016-03-23 | 北京天诚盛业科技有限公司 | Convolutional neural network Training method and apparatus for human face identification and application |
CN105354565A (en) * | 2015-12-23 | 2016-02-24 | 北京市商汤科技开发有限公司 | Full convolution network based facial feature positioning and distinguishing method and system |
Non-Patent Citations (3)
Title |
---|
GIL LEVI et al.: "Age and Gender Classification using Convolutional Neural Networks", 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) |
ZHOU Zhihua et al.: "Neural Network Ensembles", Chinese Journal of Computers (《计算机学报》) |
GUO Hongling et al.: "A Selective Ensemble Method of Multiple Classifiers", Computer Engineering and Applications (《计算机工程与应用》) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106980830A (en) * | 2017-03-17 | 2017-07-25 | 中国人民解放军国防科学技术大学 | One kind is based on depth convolutional network from affiliation recognition methods and device |
CN106951867A (en) * | 2017-03-22 | 2017-07-14 | 成都擎天树科技有限公司 | Face identification method, device, system and equipment based on convolutional neural networks |
CN106951867B (en) * | 2017-03-22 | 2019-08-23 | 成都擎天树科技有限公司 | Face identification method, device, system and equipment based on convolutional neural networks |
CN107169454A (en) * | 2017-05-16 | 2017-09-15 | 中国科学院深圳先进技术研究院 | A kind of facial image age estimation method, device and its terminal device |
CN107169454B (en) * | 2017-05-16 | 2021-01-01 | 中国科学院深圳先进技术研究院 | Face image age estimation method and device and terminal equipment thereof |
CN107437099A (en) * | 2017-08-03 | 2017-12-05 | 哈尔滨工业大学 | A kind of specific dress ornament image recognition and detection method based on machine learning |
CN107545245A (en) * | 2017-08-14 | 2018-01-05 | 中国科学院半导体研究所 | A kind of age estimation method and equipment |
CN107704816A (en) * | 2017-09-27 | 2018-02-16 | 珠海格力电器股份有限公司 | The boiling method and device of food |
CN107729078A (en) * | 2017-09-30 | 2018-02-23 | 广东欧珀移动通信有限公司 | Background application management-control method, device, storage medium and electronic equipment |
WO2019062411A1 (en) * | 2017-09-30 | 2019-04-04 | Oppo广东移动通信有限公司 | Method for managing and controlling background application program, storage medium, and electronic device |
CN107729078B (en) * | 2017-09-30 | 2019-12-03 | Oppo广东移动通信有限公司 | Background application management-control method, device, storage medium and electronic equipment |
CN108021863A (en) * | 2017-11-01 | 2018-05-11 | 平安科技(深圳)有限公司 | Electronic device, the character classification by age method based on image and storage medium |
CN107622261A (en) * | 2017-11-03 | 2018-01-23 | 北方工业大学 | Face age estimation method and device based on deep learning |
CN108109152A (en) * | 2018-01-03 | 2018-06-01 | 深圳北航新兴产业技术研究院 | Medical Images Classification and dividing method and device |
CN108596042A (en) * | 2018-03-29 | 2018-09-28 | 青岛海尔智能技术研发有限公司 | Enabling control method and system |
CN108537026A (en) * | 2018-03-30 | 2018-09-14 | 百度在线网络技术(北京)有限公司 | application control method, device and server |
CN109063750A (en) * | 2018-07-17 | 2018-12-21 | 西安电子科技大学 | SAR target classification method based on CNN and SVM decision fusion |
CN109063750B (en) * | 2018-07-17 | 2022-05-13 | 西安电子科技大学 | SAR target classification method based on CNN and SVM decision fusion |
CN110956190A (en) * | 2018-09-27 | 2020-04-03 | 深圳云天励飞技术有限公司 | Image recognition method and device, computer device and computer readable storage medium |
CN109726703A (en) * | 2019-01-11 | 2019-05-07 | 浙江工业大学 | A kind of facial image age recognition methods based on improvement integrated study strategy |
CN111242235A (en) * | 2020-01-19 | 2020-06-05 | 中国科学院计算技术研究所厦门数据智能研究院 | Similar characteristic test data set generation method |
CN111242235B (en) * | 2020-01-19 | 2023-04-07 | 中科(厦门)数据智能研究院 | Similar characteristic test data set generation method |
CN111680664A (en) * | 2020-06-22 | 2020-09-18 | 南方电网科学研究院有限责任公司 | Face image age identification method, device and equipment |
CN112070535A (en) * | 2020-09-03 | 2020-12-11 | 常州微亿智造科技有限公司 | Electric vehicle price prediction method and device |
CN112446310A (en) * | 2020-11-19 | 2021-03-05 | 杭州趣链科技有限公司 | Age identification system, method and device based on block chain |
CN112656431A (en) * | 2020-12-15 | 2021-04-16 | 中国科学院深圳先进技术研究院 | Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium |
CN115205637A (en) * | 2022-09-19 | 2022-10-18 | 山东世纪矿山机电有限公司 | Intelligent identification method for mine car materials |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106295506A (en) | A kind of age recognition methods based on integrated convolutional neural networks | |
CN109948425B (en) | Pedestrian searching method and device for structure-aware self-attention and online instance aggregation matching | |
CN108171209A (en) | A kind of face age estimation method that metric learning is carried out based on convolutional neural networks | |
CN101271469B (en) | Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method | |
CN109299268A (en) | A kind of text emotion analysis method based on dual channel model | |
Yi et al. | House style recognition using deep convolutional neural network | |
CN108875708A (en) | Behavior analysis method, device, equipment, system and storage medium based on video | |
CN106778496A (en) | Biopsy method and device | |
CN107945153A (en) | A kind of road surface crack detection method based on deep learning | |
CN107451607A (en) | A kind of personal identification method of the typical character based on deep learning | |
CN104504362A (en) | Face detection method based on convolutional neural network | |
CN109902573A (en) | Multiple-camera towards video monitoring under mine is without mark pedestrian's recognition methods again | |
CN110263822B (en) | Image emotion analysis method based on multi-task learning mode | |
Li et al. | Sign language recognition based on computer vision | |
CN110222780A (en) | Object detecting method, device, equipment and storage medium | |
CN109325516A (en) | A kind of integrated learning approach and device towards image classification | |
CN105956570B (en) | Smiling face's recognition methods based on lip feature and deep learning | |
CN109801225A (en) | Face reticulate pattern stain minimizing technology based on the full convolutional neural networks of multitask | |
CN107301376A (en) | A kind of pedestrian detection method stimulated based on deep learning multilayer | |
Tiwari | Supervised learning: From theory to applications | |
CN107506792A (en) | A kind of semi-supervised notable method for checking object | |
CN104598920A (en) | Scene classification method based on Gist characteristics and extreme learning machine | |
CN110414433A (en) | Image processing method, device, storage medium and computer equipment | |
Wang et al. | Quantifying legibility of indoor spaces using Deep Convolutional Neural Networks: Case studies in train stations | |
CN115966010A (en) | Expression recognition method based on attention and multi-scale feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170104 |
RJ01 | Rejection of invention patent application after publication |