CN106295521B - Gender recognition method, apparatus, and computing device based on a multi-output convolutional neural network - Google Patents
Gender recognition method, apparatus, and computing device based on a multi-output convolutional neural network
- Publication number
- CN106295521B CN106295521B CN201610609766.9A CN201610609766A CN106295521B CN 106295521 B CN106295521 B CN 106295521B CN 201610609766 A CN201610609766 A CN 201610609766A CN 106295521 B CN106295521 B CN 106295521B
- Authority
- CN
- China
- Prior art keywords
- convolutional neural networks
- gender
- face
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a gender recognition method, apparatus, and computing device based on a multi-output convolutional neural network. The method comprises: obtaining face image data, comprising face images and face genders, from an image database; training a first convolutional neural network on the face image data, the first convolutional neural network comprising, connected in sequence, a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a first fully connected layer, and a second fully connected layer; adding a third fully connected layer and a fourth fully connected layer to the trained first convolutional neural network to generate a second convolutional neural network; training the second convolutional neural network on the face image data; inputting a face image to be recognized into the trained second convolutional neural network to obtain a first gender output from the second fully connected layer and a second gender output from the fourth fully connected layer; and determining the gender of the face image to be recognized from the first gender output and the second gender output.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a gender recognition method, apparatus, and computing device based on a multi-output convolutional neural network.
Background technique
As one of the important biometric features, the human face carries a wealth of information in a face image, such as gender, age, and ethnicity. As research into face images in image processing has deepened, particularly in gender recognition, face gender recognition methods based on convolutional neural networks (CNN: Convolutional Neural Network) have gradually developed alongside traditional hand-crafted feature extraction methods such as PCA and LBP.
However, in existing methods that use convolutional neural networks for gender recognition, an uneven data distribution during training causes the prediction to be biased toward the class with more samples. In a two-class problem such as gender, if female samples predominate, some males will be predicted as female, and vice versa. For this data-imbalance problem, the commonly used data-augmentation approach generates additional samples for the minority class through fixed similarity transformations to compensate for the uneven distribution. However, because the minority-class data remain overly homogeneous, satisfactory results cannot be obtained in complex situations.
Summary of the invention
To this end, the present invention provides a gender recognition scheme based on a multi-output convolutional neural network, in an effort to solve, or at least alleviate, the problems described above.
According to one aspect of the present invention, a gender recognition method based on a multi-output convolutional neural network is provided, suitable for execution in a computing device. The method comprises the following steps. First, face image data are obtained from an image database; the face image data comprise face images and face genders, each face image is kept horizontally frontal and conforms to a preset size, and each face gender is either male or female. A first convolutional neural network is trained on the face image data; the first convolutional neural network comprises, connected in sequence, a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a first fully connected layer, and a second fully connected layer. A third fully connected layer and a fourth fully connected layer are added to the trained first convolutional neural network to generate a second convolutional neural network, where the third fully connected layer is identical to the trained first fully connected layer and is connected to the second down-sampling layer, and the fourth fully connected layer is identical to the trained second fully connected layer and is connected to the third fully connected layer. The second convolutional neural network is trained on the face image data. A face image to be recognized is input into the trained second convolutional neural network for gender recognition, yielding a first gender output from the second fully connected layer and a second gender output from the fourth fully connected layer. The gender of the face image to be recognized is determined from the first gender output and the second gender output.
Optionally, in the gender recognition method based on a multi-output convolutional neural network according to the present invention, training the first convolutional neural network on the face image data comprises: training the first convolutional neural network with the face image as the input of the first convolutional layer and the face gender as the output of the second fully connected layer.
Optionally, in the gender recognition method based on a multi-output convolutional neural network according to the present invention, training the second convolutional neural network on the face image data comprises: classifying the face gender to obtain a first gender type and a second gender type, where the first gender type is either male or non-male and the second gender type is either female or non-female; and training the second convolutional neural network with the face image as the input of the first convolutional layer, the first gender type as the output of the second fully connected layer, and the second gender type as the output of the fourth fully connected layer.
Optionally, the gender recognition method based on a multi-output convolutional neural network according to the present invention further comprises preprocessing an image to be recognized to obtain the face image to be recognized.
Optionally, in the gender recognition method based on a multi-output convolutional neural network according to the present invention, preprocessing the image to be recognized to obtain the face image to be recognized comprises: performing face detection on the image to be recognized to obtain face position information; cropping the face out of the image to be recognized using the face position information and transforming it to the preset size; computing, from face keypoint information, a transformation matrix for rotating the face in the plane; and rotating the face image at the preset size to a horizontal frontal pose using the transformation matrix, to obtain the face image to be recognized.
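As an illustrative sketch (not part of the claimed embodiment), the in-plane rotation step above can be expressed as follows. The patent does not specify which keypoints are used; this sketch assumes the two eye centers, and the function names `alignment_matrix` and `apply` are invented for illustration:

```python
import math

def alignment_matrix(left_eye, right_eye):
    """Build a 2x3 affine matrix that rotates the face so the eye line
    is horizontal, rotating about the midpoint between the eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)                 # in-plane tilt of the eye line
    c, s = math.cos(-angle), math.sin(-angle)  # rotate by -angle to level it
    cx = (left_eye[0] + right_eye[0]) / 2.0
    cy = (left_eye[1] + right_eye[1]) / 2.0
    # [R | t], with t chosen so the eye midpoint stays fixed
    return [[c, -s, cx - c * cx + s * cy],
            [s,  c, cy - s * cx - c * cy]]

def apply(m, p):
    """Apply a 2x3 affine matrix to a 2D point."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

In practice the same matrix would be passed to an image-warping routine to rotate every pixel of the cropped face.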
Optionally, in the gender recognition method based on a multi-output convolutional neural network according to the present invention, the first gender output comprises an initial male probability and a non-male probability, and the second gender output comprises an initial female probability and a non-female probability.
Optionally, in the gender recognition method based on a multi-output convolutional neural network according to the present invention, determining the gender of the face image to be recognized from the first gender output and the second gender output comprises: taking the sum of the initial male probability and the non-female probability as the male probability; taking the sum of the initial female probability and the non-male probability as the female probability; if the male probability is greater than the female probability, judging the gender of the face image to be recognized to be male; and if the male probability is less than the female probability, judging the gender of the face image to be recognized to be female.
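The combination rule above can be sketched directly; the function name and tuple layout are assumptions for illustration, not part of the claims:

```python
def decide_gender(first_output, second_output):
    """Combine the two branch outputs into a final decision.

    first_output:  (p_male, p_not_male)     from the second fully connected layer
    second_output: (p_female, p_not_female) from the fourth fully connected layer
    """
    p_male, p_not_male = first_output
    p_female, p_not_female = second_output
    male_score = p_male + p_not_female    # evidence for "male"
    female_score = p_female + p_not_male  # evidence for "female"
    return "male" if male_score > female_score else "female"
```

Note how each branch's "negative" class votes for the other gender, so both sub-classifiers contribute to each final score.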
According to another aspect of the present invention, a gender recognition apparatus based on a multi-output convolutional neural network is provided, suitable for residing in a computing device. The apparatus comprises an obtaining module, a first training module, a generation module, a second training module, a recognition module, and a judgment module. The obtaining module is adapted to obtain face image data from an image database, the face image data comprising face images and face genders, each face image being kept horizontally frontal and conforming to a preset size, and each face gender being either male or female. The first training module is adapted to train a first convolutional neural network on the face image data, the first convolutional neural network comprising, connected in sequence, a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a first fully connected layer, and a second fully connected layer. The generation module is adapted to add a third fully connected layer and a fourth fully connected layer to the trained first convolutional neural network to generate a second convolutional neural network, where the third fully connected layer is identical to the trained first fully connected layer and is connected to the second down-sampling layer, and the fourth fully connected layer is identical to the trained second fully connected layer and is connected to the third fully connected layer. The second training module is adapted to train the second convolutional neural network on the face image data. The recognition module is adapted to input a face image to be recognized into the trained second convolutional neural network for gender recognition, obtaining a first gender output from the second fully connected layer and a second gender output from the fourth fully connected layer. The judgment module is adapted to determine the gender of the face image to be recognized from the first gender output and the second gender output.
Optionally, in the gender recognition apparatus based on a multi-output convolutional neural network according to the present invention, the first training module is further adapted to train the first convolutional neural network with the face image as the input of the first convolutional layer and the face gender as the output of the second fully connected layer.
Optionally, in the gender recognition apparatus based on a multi-output convolutional neural network according to the present invention, the second training module is further adapted to: classify the face gender to obtain a first gender type and a second gender type, the first gender type being either male or non-male and the second gender type being either female or non-female; and train the second convolutional neural network with the face image as the input of the first convolutional layer, the first gender type as the output of the second fully connected layer, and the second gender type as the output of the fourth fully connected layer.
Optionally, the gender recognition apparatus based on a multi-output convolutional neural network according to the present invention further comprises a preprocessing module adapted to preprocess an image to be recognized to obtain the face image to be recognized.
Optionally, in the gender recognition apparatus based on a multi-output convolutional neural network according to the present invention, the preprocessing module is further adapted to: perform face detection on the image to be recognized to obtain face position information; crop the face out of the image to be recognized using the face position information and transform it to the preset size; compute, from face keypoint information, a transformation matrix for rotating the face in the plane; and rotate the face image at the preset size to a horizontal frontal pose using the transformation matrix, to obtain the face image to be recognized.
Optionally, in the gender recognition apparatus based on a multi-output convolutional neural network according to the present invention, the first gender output comprises an initial male probability and a non-male probability, and the second gender output comprises an initial female probability and a non-female probability.
Optionally, in the gender recognition apparatus based on a multi-output convolutional neural network according to the present invention, the judgment module is further adapted to: take the sum of the initial male probability and the non-female probability as the male probability; take the sum of the initial female probability and the non-male probability as the female probability; judge the gender of the face image to be recognized to be male when the male probability is greater than the female probability; and judge it to be female when the male probability is less than the female probability.
According to yet another aspect of the present invention, a computing device is also provided, comprising the gender recognition apparatus based on a multi-output convolutional neural network according to the present invention.
According to the technical solution for gender recognition based on a multi-output convolutional neural network of the present invention, a first convolutional neural network is first trained on face image data, a second convolutional neural network is then generated from the trained first convolutional neural network and trained on the face image data, and finally a face image to be recognized is input into the trained second convolutional neural network for gender recognition. In this technical solution, the second convolutional neural network is generated by modifying the fully connected part of the trained first convolutional neural network, turning one classification problem into two sub-classification problems, which are then trained on the face image data. The final trained second convolutional neural network is an enhanced gender multi-output convolutional neural network model, thereby avoiding the lower recognition accuracy caused by imbalanced gender training data in the face image data.
Detailed description of the invention
To achieve the foregoing and related objects, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features, and advantages of the present disclosure will become more apparent by reading the following detailed description in conjunction with the accompanying drawings. Throughout the disclosure, identical reference numerals generally refer to identical components or elements.
Fig. 1 shows a schematic diagram of a computing device 100 according to an embodiment of the present invention;
Fig. 2 shows a flow chart of a gender recognition method 200 based on a multi-output convolutional neural network according to an embodiment of the present invention;
Fig. 3 shows a structural schematic diagram of a first convolutional neural network according to an embodiment of the present invention;
Fig. 4 shows a structural schematic diagram of a second convolutional neural network according to an embodiment of the present invention; and
Fig. 5 shows a schematic diagram of a gender recognition apparatus 300 based on a multi-output convolutional neural network according to an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided to facilitate a more thorough understanding of the present disclosure and to fully convey its scope to those skilled in the art.
Fig. 1 is a block diagram of an example computing device 100. In a basic configuration 102, the computing device 100 typically comprises a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between a processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level-one cache 110 and a level-two cache 112, a processor core 114, and registers 116. An example processor core 114 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 118 may be used together with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, the system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM or flash memory), or any combination thereof. The system memory 106 may include an operating system 120, one or more applications 122, and data 124. In some embodiments, the applications 122 may be arranged to operate with the data 124 on the operating system.
The computing device 100 further includes storage devices 132 and a storage interface bus 134. The storage devices 132 include removable storage 136 and non-removable storage 138, both connected to the storage interface bus 134. The computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (for example, output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via a bus/interface controller 130. The example output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which may be configured to facilitate communication with various external devices, such as a display or loudspeakers, via one or more A/V ports 152. The example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication, via one or more I/O ports 158, with external devices such as input devices (for example, a keyboard, mouse, pen, voice input device, or touch input device) or other peripherals (such as printers or scanners). The example communication devices 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. A communication medium may typically be embodied as computer-readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or another transmission mechanism, and may include any information delivery medium. A "modulated data signal" may be a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. As non-limiting examples, communication media may include wired media, such as a wired network or a dedicated-line network, and various wireless media, such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable medium as used herein may include both storage media and communication media.
The computing device 100 may be implemented as part of a small-sized portable (or mobile) electronic device, such as a cellular phone, a personal digital assistant (PDA), a personal media player device, a wireless web-browsing device, a personal head-mounted device, an application-specific device, or a hybrid device that includes any of the above functions. The computing device 100 may also be implemented as a personal computer including desktop and notebook computer configurations. In some embodiments, the computing device 100 is configured to execute the gender recognition method based on a multi-output convolutional neural network according to the present invention, and the applications 122 include the gender recognition apparatus 300 based on a multi-output convolutional neural network according to the present invention.
Fig. 2 shows a flow chart of a gender recognition method 200 based on a multi-output convolutional neural network according to an embodiment of the present invention. The gender recognition method 200 based on a multi-output convolutional neural network is suitable for execution in a computing device (such as the computing device 100 shown in Fig. 1).
As shown in Fig. 2, the method 200 starts at step S210. In step S210, face image data are obtained from an image database; the face image data comprise face images and face genders, each face image is kept horizontally frontal and conforms to a preset size, and each face gender is either male or female. In the present embodiment, the face images included in the face image data in the image database are images after rotation processing, i.e., each face image has been rotated to a horizontal frontal pose in advance; each face image is a three-channel color image, and the preset size is 60px × 60px.
Then, the method proceeds to step S220. In step S220, a first convolutional neural network is trained on the face image data; the first convolutional neural network comprises, connected in sequence, a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a first fully connected layer, and a second fully connected layer. The first convolutional neural network is trained with the face image as the input of the first convolutional layer and the face gender as the output of the second fully connected layer. Fig. 3 shows a structural schematic diagram of the first convolutional neural network according to an embodiment of the present invention. As shown in Fig. 3, in the first convolutional neural network, the first convolutional layer is the input end, followed in sequence by the first down-sampling layer, the second convolutional layer, the second down-sampling layer, the first fully connected layer, and the second fully connected layer, where the second fully connected layer is the output end. In the present embodiment, the description takes a face image data item A in the image database as an example. The face image data item A includes a face image A1 and a face gender; the face gender corresponding to the face image A1 is male.
In the first convolutional neural network, first, A1 is input to the first convolutional layer. A1 is a three-channel color image of size 60px × 60px. The first convolutional layer has 20 convolution kernels, and each kernel has 3 × 6 × 6 parameters, which is equivalent to performing convolution on each of the 3 channels with a kernel of size 6 × 6 at a stride of 1. After the convolution of the first convolutional layer, according to (60 - 6)/1 + 1 = 55, the resulting image size is 55px × 55px; that is, 20 feature maps of size 55px × 55px are obtained. In practice, after the first convolutional layer completes the convolution, its output also needs to be adjusted by an activation function, to prevent the next layer's output from being a mere linear combination of the previous layer's output and thus unable to approximate arbitrary functions. The ReLU (Rectified Linear Unit) function is used as the activation function, which further alleviates overfitting; its expression is f(x) = max(0, wx + b), where wx + b is a conventional linear function. The data then enter the first down-sampling layer. Down-sampling, also known as pooling, uses the principle of local correlation in images to sub-sample the image, reducing the amount of data to process while retaining useful information. Here, pooling uses overlapping max pooling: the 55px × 55px feature map is divided into blocks of size 3 × 3 with a stride of 2, and the maximum value of each block is taken as the pixel value of the pooled image. According to (55 - 3)/2 + 1 = 27, the pooled feature map has a size of 27px × 27px, so 20 feature maps of 27px × 27px are obtained after the first down-sampling layer. Between the first down-sampling layer and the second convolutional layer, a local response normalization layer may be set to improve the generalization ability of the network, referred to here as the first local response normalization layer. Local response normalization can be divided into two cases: the first normalizes corresponding pixels across feature maps; the second locally normalizes each pixel within a feature map. The second kind of normalization is selected in the present embodiment, i.e., spatial extension is performed on a given pixel in a feature map: based on the pixel's position, it is normalized over the surrounding pixels within a 5px × 5px area, and the normalized result is used to update the value of that pixel.
Next, the 20 feature maps of 27px × 27px after local normalization enter the second convolutional layer. Since the three channels were combined during convolution in the first convolutional layer, the input to the second convolutional layer is 20 single-channel images of 27px × 27px. The second convolutional layer has 48 convolution kernels, and each kernel has 6 × 6 parameters, which is equivalent to performing convolution with one kernel of size 6 × 6 at a stride of 1. After the convolution of the second convolutional layer, according to (27 - 6)/1 + 1 = 22, the resulting image size is 22px × 22px; since the second convolutional layer also performs a weighted combination of the 20 feature maps of 27px × 27px during convolution, 48 feature maps of 22px × 22px are finally obtained. After the output of the second convolutional layer is activated by the ReLU function, the data enter the second down-sampling layer. Following the overlapping max pooling principle, the 22px × 22px feature map is divided into blocks of size 2 × 2 with a stride of 2, and the maximum value of each block is taken as the pixel value of the pooled image. According to (22 - 2)/2 + 1 = 11, the pooled feature map has a size of 11px × 11px, so 48 feature maps of 11px × 11px are obtained after the second down-sampling layer. Between the second down-sampling layer and the first fully connected layer, a local response normalization layer may be set, referred to here as the second local response normalization layer: spatial extension is performed on a given pixel in the obtained feature map, which, based on its position, is normalized over the surrounding pixels within a 5px × 5px area, and the normalized result is used to update the value of that pixel.
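The size arithmetic used throughout the embodiment follows a single formula for valid (no-padding) convolution and pooling. As a minimal sketch (function name invented for illustration):

```python
def conv_or_pool_out(size, kernel, stride):
    """Output spatial size of a valid (no-padding) convolution or pooling
    step: (W - K) / S + 1, as used throughout the embodiment."""
    return (size - kernel) // stride + 1

# Trace the embodiment's spatial sizes from the 60x60 input:
s1 = conv_or_pool_out(60, 6, 1)   # conv1, 6x6 kernel, stride 1 -> 55
s2 = conv_or_pool_out(s1, 3, 2)   # pool1, 3x3 blocks, stride 2 -> 27
s3 = conv_or_pool_out(s2, 6, 1)   # conv2, 6x6 kernel, stride 1 -> 22
s4 = conv_or_pool_out(s3, 2, 2)   # pool2, 2x2 blocks, stride 2 -> 11
```

The trace reproduces the sequence 55 → 27 → 22 → 11 derived step by step above.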
Then the data enter the first fully connected layer, whose number of neurons is chosen as 512, so the output of the first fully connected layer consists of 512 feature maps of size 1px × 1px. In practice, these 512 feature maps are usually first activated by the ReLU function and then subjected to dropout processing. Dropout can be understood as model averaging: during forward propagation in training, the activation value of a neuron is suppressed with a certain probability p, i.e. set to 0 with probability p. For example, the first fully connected layer has 512 neurons; if the dropout ratio is chosen as 0.4, then after dropout about 205 of the neuron values in this layer are set to 0. This is equivalent to alleviating over-fitting by preventing the co-adaptation of certain features, avoiding the situation where one neuron only appears together with another. The feature maps after dropout are input into the second fully connected layer. Since gender identification is a binary classification problem, the number of neurons of the second fully connected layer is 2, so the final output of the second fully connected layer is also 2 values, corresponding to the probabilities of male and female respectively. Based on the expected result that the face gender corresponding to the input facial image A1 is male, the output of the second fully connected layer is adjusted, and back-propagation with error minimization is used to adjust each parameter in the first convolutional neural network. After training with a large amount of face image data, the trained first convolutional neural network is obtained.
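The dropout step described above can be sketched in a few lines of NumPy. The layer width 512 and the rate 0.4 are the values chosen in this embodiment; the function below is an illustrative approximation of training-time dropout, not the exact implementation used:

```python
import numpy as np

def dropout(activations, p=0.4, rng=None):
    """Zero each activation independently with probability p (training time)."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(activations.shape) >= p   # keep with probability 1 - p
    return activations * mask

acts = np.ones(512)                  # 512 neurons of the first fully connected layer
dropped = dropout(acts, p=0.4)
n_zero = int((dropped == 0).sum())   # on average about 512 * 0.4 = 205 are zeroed
```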
In step S230, a third fully connected layer and a fourth fully connected layer are added to the trained first convolutional neural network to generate the second convolutional neural network, wherein the third fully connected layer is identical to the trained first fully connected layer and is connected to the second down-sampling layer, and the fourth fully connected layer is identical to the trained second fully connected layer and is connected to the third fully connected layer. Fig. 4 shows a structural schematic diagram of the second convolutional neural network according to an embodiment of the invention. Compared with Fig. 3, Fig. 4 adds, after the second down-sampling layer, the sequentially connected third and fourth fully connected layers; the third fully connected layer is identical to the trained first fully connected layer, and the fourth fully connected layer is identical to the trained second fully connected layer, thereby forming a first branch whose output is the second fully connected layer and a second branch whose output is the fourth fully connected layer.
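The two-branch structure of Fig. 4 can be sketched schematically: one shared feature vector (the flattened output of the second down-sampling layer, 48 maps of 11px × 11px) feeds two structurally identical fully connected heads, each ending in a 2-way softmax. The random weights below are illustrative placeholders, not the trained parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def head(features, w1, w2):
    """One branch: fully connected (512, ReLU) -> fully connected (2) -> softmax."""
    hidden = np.maximum(w1 @ features, 0)
    return softmax(w2 @ hidden)

rng = np.random.default_rng(42)
features = rng.random(48 * 11 * 11)   # shared trunk output, flattened

w1a = rng.standard_normal((512, features.size)) * 0.01
w2a = rng.standard_normal((2, 512)) * 0.01
w1b = rng.standard_normal((512, features.size)) * 0.01
w2b = rng.standard_normal((2, 512)) * 0.01

branch1 = head(features, w1a, w2a)    # male / non-male probabilities
branch2 = head(features, w1b, w2b)    # female / non-female probabilities
```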
In step S240, the second convolutional neural network is trained according to the face image data. The face gender is classified into a first gender type and a second gender type, where the first gender type is one of male and non-male, and the second gender type is one of female and non-female. The facial image is used as the input of the first convolutional layer in the second convolutional neural network, the first gender type as the output of the second fully connected layer in the second convolutional neural network, and the second gender type as the output of the fourth fully connected layer in the second convolutional neural network, and the second convolutional neural network is trained accordingly. In this embodiment, the process of training the first branch and the second branch in the second convolutional neural network is similar to the training of the first convolutional neural network in step S220, and is not repeated here.
After the training of the second convolutional neural network is completed, a facial image to be identified can be input into the second convolutional neural network for gender identification. Before that, the image to be identified needs to be pre-processed to obtain the facial image to be identified. First, face detection is performed on the image to be identified to obtain face location information. Using the face location information, the face in the image to be identified is cut out and converted to the preset size. A transformation matrix for planar rotation of the face is calculated according to face key point information, and the facial image at the preset size is rotated to a horizontal frontal orientation using the transformation matrix to obtain the facial image to be identified. In this embodiment, face detection is performed first: a region is determined, the image is scanned, feature extraction is carried out at each scanned position, and classification is then applied to judge whether that position contains a face. For an image to be identified that contains a face region, the face is cut out and converted to a size of 60px × 60px. Since the above face location generally refers to the facial features and outer contour, the face rotation value needs to be obtained from the line determined by two fixed points. The pupils are selected as the fixed points: an angle is calculated between the line connecting the two pupils and the horizontal line of the facial image, and a rotation matrix is obtained from this angle by affine transformation. After the rotation matrix is applied to the image, the face can be rotated so that the line between the pupils is parallel to the horizontal line of the image, thereby obtaining a horizontal, frontal facial image to be identified. According to the line connecting the two pupils in the face, the angle between this line and the horizontal is calculated to obtain the rotation angle AngleValue. The rotation matrix can be computed with the getRotationMatrix2D function in OpenCV, and the face rotation can be performed with warpAffine; the specific functions are as follows:
RotateMatrix = cv2.getRotationMatrix2D(center=(Img.shape[1]/2, Img.shape[0]/2), angle=90, scale=1)
RotImg = cv2.warpAffine(Img, RotateMatrix, (Img.shape[0]*2, Img.shape[1]*2))
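The rotation angle AngleValue mentioned above is the angle between the pupil line and the horizontal, which would be passed to getRotationMatrix2D in place of the fixed angle=90 of the snippet. A minimal sketch of that angle calculation, using hypothetical pupil coordinates in a 60px × 60px face image (image y-axis pointing down):

```python
import math

def pupil_angle(left_pupil, right_pupil):
    """Angle in degrees between the line joining the two pupils and the horizontal."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical pupil positions: right pupil 6px higher than the left one
angle = pupil_angle((18, 28), (42, 22))
```

Rotating the image by this angle brings the pupil line parallel to the image's horizontal line, as described above.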
In step S250, the facial image to be identified is input into the trained second convolutional neural network for gender identification, obtaining the first gender output from the second fully connected layer and the second gender output from the fourth fully connected layer. The first gender output includes an initial male probability and a non-male probability, and the second gender output includes an initial female probability and a non-female probability. Taking the facial image B1 to be identified as an example, whose corresponding gender is female: after image B1 is input into the trained second convolutional neural network, the output of the second fully connected layer in the first branch gives an initial male probability of 0.2873 and a non-male probability of 0.7127, and the output of the fourth fully connected layer in the second branch gives an initial female probability of 0.8075 and a non-female probability of 0.1925.
Finally, in step S260, the gender of the facial image to be identified is judged according to the first gender output and the second gender output. The sum of the initial male probability and the non-female probability is taken as the male probability, and the sum of the initial female probability and the non-male probability is taken as the female probability. If the male probability is greater than the female probability, the gender of the facial image to be identified is judged to be male; if the male probability is less than the female probability, it is judged to be female. From step S250, the initial male probability is 0.2873 and the non-female probability is 0.1925, so the male probability is 0.4798; the initial female probability is 0.8075 and the non-male probability is 0.7127, so the female probability is 1.5202. Since the male probability is less than the female probability, the gender of the facial image B1 to be identified is judged to be female, which is consistent with the ground truth.
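The combination rule of step S260 can be written as a small helper. The probabilities are the ones from the B1 example above; the function name is illustrative:

```python
def judge_gender(init_male, non_male, init_female, non_female):
    """Combine the two branch outputs of step S250 into a final decision."""
    male_prob = init_male + non_female
    female_prob = init_female + non_male
    gender = "male" if male_prob > female_prob else "female"
    return gender, male_prob, female_prob

gender, male_prob, female_prob = judge_gender(0.2873, 0.7127, 0.8075, 0.1925)
# male_prob = 0.4798, female_prob = 1.5202, so gender is "female"
```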
Fig. 5 shows a schematic diagram of a gender identification device 300 based on a multi-output convolutional neural network according to an embodiment of the invention. The device includes: an acquisition module 310, a first training module 320, a generation module 330, a second training module 340, an identification module 350 and a judgment module 360. The device further includes a preprocessing module (not shown), located before the identification module 350.
The acquisition module 310 is adapted to obtain face image data from an image database. The face image data includes a facial image and a face gender; the facial image is kept horizontal and frontal and meets a preset size, and the face gender is one of male and female. In this embodiment, the facial images contained in the face image data in the image database have been rotated in advance to a horizontal frontal orientation; each facial image is a three-channel color image, and the preset size is 60px × 60px.
The first training module 320 is adapted to train the first convolutional neural network according to the face image data. The first convolutional neural network includes a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a first fully connected layer and a second fully connected layer, connected in sequence. The first training module 320 is further adapted to train the first convolutional neural network using the facial image as the input of the first convolutional layer in the first convolutional neural network and the face gender as the output of the second fully connected layer in the first convolutional neural network. In this embodiment, a face image data item A in the image database is taken as an example. Face image data A includes facial image A1 and a face gender; the face gender corresponding to facial image A1 is male.
In the first convolutional neural network, A1 is first input into the first convolutional layer. A1 is a three-channel color image with a size of 60px × 60px. The first convolutional layer has 20 convolution kernels, each with 3 × 6 × 6 parameters and a stride of 1. After convolution by the first convolutional layer, 20 feature maps of size 55px × 55px are obtained. After activation by the ReLU function, the data enter the first down-sampling layer; down-sampling is also known as pooling. Here, overlapping max pooling is used: the 55px × 55px feature maps are divided into blocks of size 3 × 3 with a stride of 2, and the maximum value of each block is taken as the pixel value of the pooled image. After the first down-sampling layer, 20 feature maps of 27px × 27px are obtained. A first local response normalization layer may be arranged between the first down-sampling layer and the second convolutional layer: for a given pixel in the obtained feature maps, the pixels in a 5px × 5px region around that pixel position are normalized, and the normalized result is used to update the value of that pixel.
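The spatial (within-channel) local response normalization described here can be sketched as follows: each pixel is divided by a term computed from the 5px × 5px neighbourhood around it. The constants k, alpha and beta are illustrative AlexNet-style defaults, since the embodiment does not specify them:

```python
import numpy as np

def local_response_norm(fmap, size=5, k=2.0, alpha=1e-4, beta=0.75):
    """Within-channel LRN: normalize each pixel by its size x size neighbourhood."""
    h, w = fmap.shape
    pad = size // 2
    padded = np.pad(fmap, pad)                     # zero-pad the borders
    out = np.empty_like(fmap)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            out[i, j] = fmap[i, j] / (k + alpha * np.square(window).sum()) ** beta
    return out

fmap = np.random.default_rng(1).random((27, 27))   # one 27px x 27px feature map
normed = local_response_norm(fmap)
```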
Next, the 20 feature maps of 27px × 27px after local response normalization enter the second convolutional layer. The second convolutional layer has 48 convolution kernels, each with 6 × 6 parameters and a stride of 1, so after the processing of the second convolutional layer, 48 feature maps of 22px × 22px are obtained. After the output of the second convolutional layer is activated by the ReLU function, the data enter the second down-sampling layer. Following the overlapping max pooling principle, the 22px × 22px feature maps are divided into blocks of size 2 × 2 with a stride of 2, yielding 48 feature maps of 11px × 11px. A second local response normalization layer may be arranged between the second down-sampling layer and the first fully connected layer to improve the generalization ability of the network.
Then the data enter the first fully connected layer, whose number of neurons is chosen as 512, so its output consists of 512 feature maps of size 1px × 1px. In practice, these 512 feature maps are usually first activated by the ReLU function and then subjected to dropout processing. The feature maps after dropout are input into the second fully connected layer, whose number of neurons is 2; its output is also 2 values, corresponding to the probabilities of male and female respectively. Based on the expected result that the face gender corresponding to the input facial image A1 is male, the output of the second fully connected layer is adjusted, and back-propagation with error minimization is used to adjust each parameter in the first convolutional neural network. After training with a large amount of face image data, the trained first convolutional neural network is obtained.
The generation module 330 is adapted to add a third fully connected layer and a fourth fully connected layer to the trained first convolutional neural network to generate the second convolutional neural network, wherein the third fully connected layer is identical to the trained first fully connected layer and is connected to the second down-sampling layer, and the fourth fully connected layer is identical to the trained second fully connected layer and is connected to the third fully connected layer. Thus, a first branch whose output is the second fully connected layer and a second branch whose output is the fourth fully connected layer are formed in the second convolutional neural network.
The second training module 340 is adapted to train the second convolutional neural network according to the face image data, and is further adapted to classify the face gender into a first gender type and a second gender type, where the first gender type is one of male and non-male, and the second gender type is one of female and non-female. It trains the second convolutional neural network using the facial image as the input of the first convolutional layer in the second convolutional neural network, the first gender type as the output of the second fully connected layer, and the second gender type as the output of the fourth fully connected layer. In this embodiment, the process by which the second training module 340 trains the first branch and the second branch in the second convolutional neural network is similar to the training of the first convolutional neural network by the first training module 320, and is not repeated here.
The preprocessing module is adapted to pre-process the image to be identified to obtain the facial image to be identified, and is further adapted to: perform face detection on the image to be identified to obtain face location information; cut out the face in the image to be identified according to the face location information and convert it to the preset size; calculate a transformation matrix for planar rotation of the face according to face key point information; and rotate the facial image at the preset size to a horizontal frontal orientation using the transformation matrix to obtain the facial image to be identified. In this embodiment, face detection is performed first: a region is determined, the image is scanned, feature extraction is carried out at each scanned position, and classification is then applied to judge whether that position contains a face. For an image to be identified that contains a face region, the face is cut out and converted to a size of 60px × 60px. Since the above face location generally refers to the facial features and outer contour, the face rotation value needs to be obtained from the line determined by two fixed points. The pupils are selected as the fixed points: an angle is calculated between the line connecting the two pupils and the horizontal line of the facial image, and a rotation matrix is obtained from this angle by affine transformation. After the rotation matrix is applied to the image, the face can be rotated so that the line between the pupils is parallel to the horizontal line of the image, thereby obtaining a horizontal, frontal facial image to be identified.
The identification module 350 is adapted to input the facial image to be identified into the trained second convolutional neural network for gender identification, obtaining the first gender output from the second fully connected layer and the second gender output from the fourth fully connected layer. The first gender output includes an initial male probability and a non-male probability, and the second gender output includes an initial female probability and a non-female probability. In this embodiment, taking the facial image B1 to be identified as an example, whose corresponding gender is female: after image B1 is input into the trained second convolutional neural network, the output of the second fully connected layer in the first branch gives an initial male probability of 0.2873 and a non-male probability of 0.7127, and the output of the fourth fully connected layer in the second branch gives an initial female probability of 0.8075 and a non-female probability of 0.1925.
The judgment module 360 is adapted to judge the gender of the facial image to be identified according to the first gender output and the second gender output, and is further adapted to: take the sum of the initial male probability and the non-female probability as the male probability; take the sum of the initial female probability and the non-male probability as the female probability; when the male probability is greater than the female probability, judge the gender of the facial image to be identified to be male; and when the male probability is less than the female probability, judge it to be female. In this embodiment, the initial male probability is 0.2873 and the non-female probability is 0.1925, so the male probability is 0.4798; the initial female probability is 0.8075 and the non-male probability is 0.7127, so the female probability is 1.5202. Since the male probability is less than the female probability, the gender of the facial image B1 to be identified is judged to be female, which is consistent with the ground truth.
The specific steps and embodiments of gender identification based on a multi-output convolutional neural network are disclosed in detail in the description based on Figs. 2-4 and are not repeated here.
In existing methods that use convolutional neural networks for gender classification, uneven data distribution during training causes the prediction results to lean toward the class with more samples, which is especially evident in a binary classification problem such as gender. In the technical solution of gender identification based on a multi-output convolutional neural network according to embodiments of the present invention, the first convolutional neural network is first trained with face image data, the second convolutional neural network is then generated from the trained first convolutional neural network and trained with the face image data, and finally the facial image to be identified is input into the trained second convolutional neural network for gender identification. In the above technical solution, the second convolutional neural network is generated by modifying the fully connected part of the trained first convolutional neural network, turning one classification problem into two sub-class classification problems, which are then trained with the face image data. The final trained second convolutional neural network is an enhanced model of a gender multi-output convolutional neural network, thereby avoiding the problem of low recognition accuracy caused by unbalanced gender training data in the face image data.
In the description provided here, numerous specific details are set forth. It is to be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units or components of the devices in the examples disclosed herein may be arranged in the devices as described in the embodiments, or alternatively located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or divided into multiple sub-modules.
Those skilled in the art will understand that the modules in the devices of the embodiments may be adaptively changed and arranged in one or more devices different from the embodiments. The modules, units or components in the embodiments may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other devices performing the function. Thus, a processor with the necessary instructions for implementing such a method or method element forms a device for implementing the method or method element. Furthermore, an element of a device embodiment described herein is an example of a device for carrying out the function performed by the element for the purpose of implementing the invention.
As used herein, unless otherwise specified, the use of the ordinal numbers "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be envisioned within the scope of the invention thus described. Additionally, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.
Claims (15)
1. A gender identification method based on a multi-output convolutional neural network, adapted to be executed in a computing device, the method comprising:
obtaining face image data from an image database, the face image data comprising a facial image and a face gender, the facial image being kept horizontal and frontal and meeting a preset size, and the face gender being one of male and female;
training a first convolutional neural network according to the face image data, the first convolutional neural network comprising a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a first fully connected layer and a second fully connected layer connected in sequence;
adding a third fully connected layer and a fourth fully connected layer to the trained first convolutional neural network to generate a second convolutional neural network, wherein the third fully connected layer is identical to the trained first fully connected layer and is connected to the second down-sampling layer, and the fourth fully connected layer is identical to the trained second fully connected layer and is connected to the third fully connected layer;
training the second convolutional neural network according to the face image data;
inputting a facial image to be identified into the trained second convolutional neural network for gender identification, obtaining a first gender output from the second fully connected layer and a second gender output from the fourth fully connected layer; and
judging the gender of the facial image to be identified according to the first gender output and the second gender output.
2. The method of claim 1, wherein training the first convolutional neural network according to the face image data comprises:
training the first convolutional neural network using the facial image as the input of the first convolutional layer in the first convolutional neural network and the face gender as the output of the second fully connected layer in the first convolutional neural network.
3. The method of claim 1, wherein training the second convolutional neural network according to the face image data comprises:
classifying the face gender to obtain a first gender type and a second gender type, the first gender type being one of male and non-male, and the second gender type being one of female and non-female; and
training the second convolutional neural network using the facial image as the input of the first convolutional layer in the second convolutional neural network, the first gender type as the output of the second fully connected layer in the second convolutional neural network, and the second gender type as the output of the fourth fully connected layer in the second convolutional neural network.
4. The method of claim 1, further comprising pre-processing an image to be identified to obtain the facial image to be identified.
5. The method of claim 4, wherein pre-processing the image to be identified to obtain the facial image to be identified comprises:
performing face detection on the image to be identified to obtain face location information;
cutting out the face in the image to be identified according to the face location information and converting it to the preset size;
calculating a transformation matrix for planar rotation of the face according to face key point information; and
rotating the facial image at the preset size to a horizontal frontal orientation using the transformation matrix to obtain the facial image to be identified.
6. The method of claim 3, wherein the first gender output comprises an initial male probability and a non-male probability, and the second gender output comprises an initial female probability and a non-female probability.
7. The method of claim 6, wherein judging the gender of the facial image to be identified according to the first gender output and the second gender output comprises:
taking the sum of the initial male probability and the non-female probability as a male probability;
taking the sum of the initial female probability and the non-male probability as a female probability;
if the male probability is greater than the female probability, judging the gender of the facial image to be identified to be male; and
if the male probability is less than the female probability, judging the gender of the facial image to be identified to be female.
8. A gender identification device based on a multi-output convolutional neural network, adapted to reside in a computing device, the device comprising:
an acquisition module, adapted to obtain face image data from an image database, the face image data comprising a facial image and a face gender, the facial image being kept horizontal and frontal and meeting a preset size, and the face gender being one of male and female;
a first training module, adapted to train a first convolutional neural network according to the face image data, the first convolutional neural network comprising a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a first fully connected layer and a second fully connected layer connected in sequence;
a generation module, adapted to add a third fully connected layer and a fourth fully connected layer to the trained first convolutional neural network to generate a second convolutional neural network, wherein the third fully connected layer is identical to the trained first fully connected layer and is connected to the second down-sampling layer, and the fourth fully connected layer is identical to the trained second fully connected layer and is connected to the third fully connected layer;
a second training module, adapted to train the second convolutional neural network according to the face image data;
an identification module, adapted to input a facial image to be identified into the trained second convolutional neural network for gender identification, obtaining a first gender output from the second fully connected layer and a second gender output from the fourth fully connected layer; and
a judgment module, adapted to judge the gender of the facial image to be identified according to the first gender output and the second gender output.
9. The device of claim 8, wherein the first training module is further adapted to:
train the first convolutional neural network using the facial image as the input of the first convolutional layer in the first convolutional neural network and the face gender as the output of the second fully connected layer in the first convolutional neural network.
10. The device of claim 8, wherein the second training module is further adapted to:
classify the face gender to obtain a first gender type and a second gender type, the first gender type being one of male and non-male, and the second gender type being one of female and non-female; and
train the second convolutional neural network using the facial image as the input of the first convolutional layer in the second convolutional neural network, the first gender type as the output of the second fully connected layer in the second convolutional neural network, and the second gender type as the output of the fourth fully connected layer in the second convolutional neural network.
11. The device of claim 9, further comprising a preprocessing module adapted to pre-process an image to be identified to obtain the facial image to be identified.
12. The device of claim 11, wherein the preprocessing module is further adapted to:
perform face detection on the image to be recognized to obtain face location information;
according to the face location information, crop the face from the image to be recognized and transform it to a preset size;
calculate, according to face key point information, a transformation matrix for rotating the face in the plane;
rotate the facial image at the preset size to an upright frontal orientation using the transformation matrix, so as to obtain the facial image to be identified.
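The plane-rotation step of claim 12 can be sketched as follows. The claim does not specify which key points are used; this sketch assumes the two eye centres, and the function names are illustrative rather than taken from the patent.

```python
import math

def eye_rotation_matrix(left_eye, right_eye):
    """Return a 2x2 plane-rotation matrix that maps the line through
    the two eye key points to horizontal (key points assumed to be eyes)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)            # tilt of the eye line
    c, s = math.cos(-angle), math.sin(-angle)
    return [[c, -s], [s, c]]              # rotate by -angle to level the eyes

def apply(matrix, point):
    """Apply a 2x2 matrix to a 2D point."""
    x, y = point
    return (matrix[0][0] * x + matrix[0][1] * y,
            matrix[1][0] * x + matrix[1][1] * y)
```

Applying this matrix to every pixel coordinate (or, in practice, warping the image with it) yields the upright frontal face fed to the network.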
13. The device of claim 10, wherein the first gender output includes an initial male probability and a non-male probability, and the second gender output includes an initial female probability and a non-female probability.
14. The device of claim 13, wherein the judgment module is further adapted to:
take the sum of the initial male probability and the non-female probability as the male probability;
take the sum of the initial female probability and the non-male probability as the female probability;
when the male probability is greater than the female probability, judge the gender of the facial image to be identified to be male;
when the male probability is less than the female probability, judge the gender of the facial image to be identified to be female.
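The fusion rule of claims 13-14 can be written out directly. The probability names follow the claims; the function name and the tie-breaking behaviour (the claims leave the equal case unspecified) are assumptions.

```python
def judge_gender(first_output, second_output):
    """Combine the two heads' outputs as in claim 14.
    first_output  = (initial_male_prob, non_male_prob)
    second_output = (initial_female_prob, non_female_prob)"""
    initial_male, non_male = first_output
    initial_female, non_female = second_output
    # Each head "votes": a high non-female probability supports male, etc.
    male_prob = initial_male + non_female
    female_prob = initial_female + non_male
    # The claims do not define the tie case; here a tie falls to "female".
    return "male" if male_prob > female_prob else "female"
```

Because each sum draws on both heads, a confident answer from either head can outweigh an uncertain one from the other.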
15. A computing device, comprising the gender identification device based on a multi-output convolutional neural network according to any one of claims 8-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610609766.9A CN106295521B (en) | 2016-07-29 | 2016-07-29 | Gender identification method, device and computing device based on a multi-output convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106295521A CN106295521A (en) | 2017-01-04 |
CN106295521B true CN106295521B (en) | 2019-06-04 |
Family
ID=57662828
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610609766.9A Active CN106295521B (en) | 2016-07-29 | 2016-07-29 | Gender identification method, device and computing device based on a multi-output convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106295521B (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106845440B (en) * | 2017-02-13 | 2020-04-10 | 山东万腾电子科技有限公司 | Augmented reality image processing method and system |
CN106874964B (en) * | 2017-03-30 | 2023-11-03 | 李文谦 | Foot-type image size automatic prediction method and prediction device based on improved convolutional neural network |
CN106982359B (en) * | 2017-04-26 | 2019-11-05 | 深圳先进技术研究院 | Binocular video monitoring method and system and computer readable storage medium |
US11282695B2 (en) | 2017-09-26 | 2022-03-22 | Samsung Electronics Co., Ltd. | Systems and methods for wafer map analysis |
CN109583277B (en) * | 2017-09-29 | 2021-04-20 | 大连恒锐科技股份有限公司 | Gender determination method of barefoot footprint based on CNN |
CN108182389B (en) * | 2017-12-14 | 2021-07-30 | 华南师范大学 | User data processing method based on big data and deep learning and robot system |
CN108073910B (en) * | 2017-12-29 | 2021-05-07 | 百度在线网络技术(北京)有限公司 | Method and device for generating human face features |
CN108647594B (en) * | 2018-04-26 | 2022-06-10 | 北京小米移动软件有限公司 | Information processing method and device |
GB2574891B (en) * | 2018-06-22 | 2021-05-12 | Advanced Risc Mach Ltd | Data processing |
CN109522872A (en) * | 2018-12-04 | 2019-03-26 | 西安电子科技大学 | Face recognition method, device, computer equipment and storage medium |
CN109934149B (en) * | 2019-03-06 | 2022-08-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for outputting information |
CN110443286B (en) * | 2019-07-18 | 2024-06-04 | 广州方硅信息技术有限公司 | Training method of neural network model, image recognition method and device |
CN110503160B (en) * | 2019-08-28 | 2022-03-25 | 北京达佳互联信息技术有限公司 | Image recognition method and device, electronic equipment and storage medium |
CN110826525B (en) * | 2019-11-18 | 2023-05-26 | 天津高创安邦技术有限公司 | Face recognition method and system |
CN112164227B (en) * | 2020-08-26 | 2022-06-28 | 深圳奇迹智慧网络有限公司 | Parking violation vehicle warning method and device, computer equipment and storage medium |
CN111951267A (en) * | 2020-09-08 | 2020-11-17 | 南方科技大学 | Gender judgment method, device, server and storage medium based on neural network |
CN112488499A (en) * | 2020-11-28 | 2021-03-12 | 广东电网有限责任公司 | Judicial risk early warning method, device and terminal based on supplier scale |
CN112434965A (en) * | 2020-12-04 | 2021-03-02 | 广东电力信息科技有限公司 | Expert label generation method, device and terminal based on word frequency |
CN113723243B (en) * | 2021-08-20 | 2024-05-17 | 南京华图信息技术有限公司 | Face recognition method of thermal infrared image of wearing mask and application |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8582807B2 (en) * | 2010-03-15 | 2013-11-12 | Nec Laboratories America, Inc. | Systems and methods for determining personal characteristics |
CN103824054A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Face attribute recognition method based on cascaded deep neural networks |
CN104966104A (en) * | 2015-06-30 | 2015-10-07 | 孙建德 | Three-dimensional convolutional neural network based video classifying method |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
CN105069413A (en) * | 2015-07-27 | 2015-11-18 | 电子科技大学 | Human body posture recognition method based on a deep convolutional neural network |
CN105095833A (en) * | 2014-05-08 | 2015-11-25 | 中国科学院声学研究所 | Network constructing method for human face identification, identification method and system |
CN105678381A (en) * | 2016-01-08 | 2016-06-15 | 浙江宇视科技有限公司 | Gender classification network training method, gender classification method and related device |
Non-Patent Citations (3)
Title |
---|
"Face Gender Recognition Based on Convolutional Neural Networks"; Wang Jimin et al.; Modern Electronics Technique; 2015-04-01; Vol. 38, No. 7; pp. 81-84 |
"Face Recognition Method Based on Convolutional Neural Networks"; Chen Yaodan et al.; Journal of Northeast Normal University (Natural Science Edition); 2016-06-20; Vol. 48, No. 2; pp. 70-76 |
"Gender Classification Model Based on Cross-Connected Convolutional Neural Networks"; Zhang Ting et al.; Acta Automatica Sinica; 2016-06-15; Vol. 42, No. 6; pp. 858-865 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106295521B (en) | Gender identification method, device and computing device based on a multi-output convolutional neural network | |
WO2021139471A1 (en) | Health status test method and device, and computer storage medium | |
CN105138993B (en) | Establish the method and device of human face recognition model | |
CN110032271B (en) | Contrast adjusting device and method, virtual reality equipment and storage medium | |
CN107977707B (en) | Method and computing equipment for resisting distillation neural network model | |
CN111160269A (en) | Face key point detection method and device | |
CN104103033B (en) | View synthesis method | |
CN106203399B (en) | Image processing method, device and computing device |
CN110263819A (en) | Object detection method and device for shellfish images |
JP2021502627A (en) | Image processing system and processing method using deep neural network | |
WO2020199611A1 (en) | Liveness detection method and apparatus, electronic device, and storage medium | |
CN110414574A (en) | Object detection method, computing device and storage medium |
CN107808147A (en) | Face confidence evaluation method based on real-time facial landmark tracking |
CN110084313A (en) | A method of generating object detection model | |
KR20190099914A (en) | Electronic apparatus, method for processing image thereof and computer-readable recording medium | |
CN109829448A (en) | Face identification method, device and storage medium | |
CN110929805B (en) | Training method, target detection method and device for neural network, circuit and medium | |
CN109886341A (en) | Training method for generating a face detection model |
CN106295591A (en) | Gender identification method and device based on facial images |
CN109346159A (en) | Case image classification method, device, computer equipment and storage medium | |
CN108648163A (en) | Facial image enhancement method and computing device |
CN111936990A (en) | Method and device for waking up screen | |
CN109978063A (en) | Method for generating an alignment model of a target object |
CN112419326B (en) | Image segmentation data processing method, device, equipment and storage medium | |
CN110210319A (en) | Computer equipment, tongue body photo constitution identification device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||