CN104346607B - Face identification method based on convolutional neural networks - Google Patents

Face identification method based on convolutional neural networks Download PDF

Info

Publication number
CN104346607B
Authority
CN
China
Prior art keywords
layer
input
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410620574.9A
Other languages
Chinese (zh)
Other versions
CN104346607A (en)
Inventor
胡静 (Hu Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN201410620574.9A priority Critical patent/CN104346607B/en
Publication of CN104346607A publication Critical patent/CN104346607A/en
Application granted granted Critical
Publication of CN104346607B publication Critical patent/CN104346607B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a face identification method based on convolutional neural networks, comprising: performing the necessary early-stage preprocessing on a facial image to obtain an ideal facial image; feeding the ideal facial image as the input of the convolutional neural network into U_0, the output of U_0 entering U_G, and the output of U_G serving as the input of U_S1; the S neurons of U_S1, trained with supervision, extracting the edge components of different orientations in the input image as the first feature extraction and outputting them as the input of U_C1; the output of U_C1 serving as the input of U_S2, which completes the second feature extraction and serves as the input of U_C2; the output of U_C2 serving as the input of U_S3, which completes the third feature extraction and serves as the input of U_C3; the output of U_C3 serving as the input of U_S4, which obtains the weights, thresholds and number of neuron planes of each layer through supervised competitive learning and serves as the input of U_C4; and U_C4, as the output layer of the network, outputting the final pattern recognition result of the network, determined by the maximum output of U_S4. The present invention can improve the recognition rate of faces in complex scenes.

Description

Face identification method based on convolutional neural networks
Technical field
The present invention relates to a face identification method based on convolutional neural networks.
Background technology
Face recognition technology uses a computer to analyse facial images, extract effective feature information, and identify a person's identity. It first determines whether a face is present in an image; if so, it further determines the position and size of each face. Based on this information, it then extracts the latent pattern features of each face and compares them with the faces in a known face database, thereby identifying the class of each face. Determining whether a face exists in an image is face detection; comparing the extracted features with a known face database is face recognition.
Researchers have achieved substantial results in face detection and face recognition in recent years, with great improvements in both detection and recognition performance. A large number of face detection algorithms have been proposed, which can broadly be divided into three classes: (1) methods based on skin-colour features, (2) methods based on knowledge models, and (3) methods based on statistical theory. Among them, the artificial neural network (ANN) approach embeds the statistical properties of patterns into a network's structure and parameters by training the network; for patterns such as faces, which are complex and hard to describe explicitly, ANN-based methods have a unique advantage. Rowley used a two-layer ANN to detect faces in multiple poses: the first layer estimates the pose of the face in the input image window, and the second layer consists of three face detectors for frontal, half-profile and profile faces respectively. An input image first passes through the pose estimator; after the corresponding preprocessing, the image is fed to the three face detectors of the second layer, which finally determine the position and pose of the face.
Face recognition methods fall broadly into the following classes: (1) methods based on geometric features; (2) elastic-model matching methods; (3) neural network methods; and (4) methods based on linear and nonlinear subspaces. Many existing algorithms achieve good recognition on facial images from simple scenes. In video surveillance, however, video images are affected by illumination, pose, noise, and variation in faces and expressions, so that even today's high-performance face recognition algorithms cannot reach ideal results under such conditions.
The difficulty of face recognition is reflected in the following aspects:
(1) The imaging angle of the camera, i.e. the pose, strongly affects most face recognition algorithms, particularly those based on simple geometric features. Owing to pose, the similarity between two facial images of the same person may be no greater than that between two images of different people.
(2) Changes in illumination alter the grey-level information of a facial image, which strongly affects recognition algorithms based on grey-level features.
(3) Changes in expression also degrade recognition performance.
(4) A facial image may further be affected by factors such as age, occlusion and image scale, which degrade the performance of face recognition algorithms to varying degrees.
Convolutional neural networks are a recently developed and widely noticed efficient recognition method. In the 1960s, Hubel and Wiesel, studying the neurons responsible for local sensitivity and orientation selectivity in the cat visual cortex, found that a particular network structure can effectively reduce the complexity of feedback neural networks; this led to the convolutional neural network (CNN). CNNs have since become a research hotspot in many scientific fields, particularly pattern classification, because the network avoids complex early-stage preprocessing of images and can take the original image directly as input, and so they have been widely applied. The neocognitron proposed by K. Fukushima in 1980 was the first implementation of a convolutional network. Since then, more researchers have improved the network. A representative result is the "improved cognitron" proposed by Alexander and Taylor, which combines the advantages of various improvements and avoids time-consuming error back-propagation.
In general, the basic structure of a CNN comprises two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, from which it extracts a local feature; once the feature is extracted, its positional relationship to other features is also fixed. The second is the feature mapping layer: each computational layer of the network consists of multiple feature maps, each of which is a plane in which all neurons share equal weights. The feature mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, giving the feature maps shift invariance. Moreover, because the neurons on one mapping plane share weights, the number of free parameters of the network is reduced. Each convolutional layer in a CNN is followed by a computational layer for local averaging and second extraction; this distinctive two-stage feature extraction structure reduces the feature resolution.
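The layer pairing described above — convolution with shared weights, a sigmoid-like activation, then local averaging that halves the resolution — can be sketched in plain NumPy. This is an illustrative sketch only, not code from the patent; the image, kernel and pooling size are arbitrary assumptions:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid 2-D cross-correlation with one shared kernel (weight sharing)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def avg_pool2(x):
    """2x2 local averaging: the 'computation layer' that halves the resolution."""
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]          # drop odd trailing row/column
    return 0.25 * (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2])

img = np.arange(36, dtype=float).reshape(6, 6)   # stand-in for a grey-level image
k = np.ones((3, 3)) / 9.0                        # one shared 3x3 kernel
fmap = np.tanh(conv2d_valid(img, k))             # sigmoid-like activation
pooled = avg_pool2(fmap)
print(fmap.shape, pooled.shape)                  # (4, 4) (2, 2)
```

Note how the single `k` serves every output position — that is the weight sharing which reduces the number of free parameters.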
CNNs are mainly used to recognise two-dimensional patterns that are invariant to displacement, scaling and other forms of distortion. Because a CNN's feature detection layers learn from training data, explicit feature extraction is avoided when a CNN is used; features are learned implicitly from the training data. Furthermore, because the neurons on the same feature map share identical weights, the network can learn in parallel, which is a major advantage of convolutional networks over networks in which the neurons are fully interconnected. With their special structure of locally shared weights, CNNs are uniquely suited to speech recognition and image processing; their layout is closer to real biological neural networks, weight sharing reduces the complexity of the network, and in particular images with multi-dimensional input vectors can be fed directly into the network, avoiding the complexity of data reconstruction during feature extraction and classification.
The principle by which a convolutional neural network performs face recognition is as follows:
As shown in Fig. 1, a complete automatic face recognition system for complex scenes mainly comprises several modules: scene image acquisition and preprocessing, face detection and localisation, facial feature extraction, and face recognition.
In Fig. 1, the scene image acquisition and preprocessing module processes the dynamically acquired images to overcome noise interference and improve the recognition result; it mainly performs image enhancement to filter out noise, correct uneven illumination and enhance contrast, so that complex scene images become distinguishable. The face detection and localisation module automatically finds the position of the face to be recognised in the dynamically acquired image; common methods include face localisation algorithms based on skin-colour models, statistical models and feature models. Facial feature extraction is the step that follows face localisation; common methods include feature extraction based on Euclidean distance, on the KL transform, on SVD and on ICA. The last module is the face recognition module, which identifies each facial image; common methods fall into two major classes, still-image recognition and dynamic-image recognition. Within these classes, the artificial neural network (ANN) approach embeds the statistical properties of patterns into the network structure and parameters by training the network, which gives it a unique advantage for patterns such as faces that are complex and hard to describe explicitly. The present invention uses exactly such a neural network recognition method.
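The module chain of Fig. 1 can be sketched as four functions. All four bodies below are hypothetical stand-ins (the text names candidate algorithms but does not fix them): a real system would substitute a skin-colour or statistical-model detector and a KL/SVD/ICA feature extractor for the placeholders.

```python
import numpy as np

def preprocess(frame):
    """Image-enhancement stand-in: rescale grey levels to [0, 1]."""
    f = frame.astype(float)
    rng = f.max() - f.min()
    return (f - f.min()) / rng if rng > 0 else np.zeros_like(f)

def detect_faces(image):
    """Detector stand-in returning (x, y, w, h) boxes; a real system would use
    a skin-colour, statistical or feature-model localisation algorithm."""
    return [(0, 0, image.shape[1], image.shape[0])]   # whole frame as one 'face'

def extract_features(face):
    """Feature-extraction stand-in (mean/std); KL transform, SVD or ICA in the text."""
    return np.array([face.mean(), face.std()])

def recognize(features, gallery):
    """Nearest-neighbour match against a known-face gallery of feature vectors."""
    names = list(gallery)
    dists = [np.linalg.norm(features - gallery[n]) for n in names]
    return names[int(np.argmin(dists))]

gallery = {"person_a": np.array([0.5, 0.3]), "person_b": np.array([0.1, 0.9])}
frame = np.arange(16.0).reshape(4, 4)
img = preprocess(frame)
x, y, w, h = detect_faces(img)[0]
feats = extract_features(img[y:y + h, x:x + w])
print(recognize(feats, gallery))   # → person_a
```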
According to whether feature extraction is performed, neural-network face recognition systems can be divided into two major classes: recognition systems with a feature extraction part and recognition systems without one. The former is actually a combination of conventional methods with neural network techniques; it can make full use of human experience to obtain pattern features and of the neural network's classification ability to recognise faces, but the extracted features must reflect the whole face to achieve a high recognition rate. The latter omits feature extraction and uses the whole facial image directly as the input of the neural network. Although this increases the complexity of the network structure to some extent, the noise immunity and recognition rate of the network are both much improved over the former. The CNN used in the present invention belongs to this second class of neural network.
Summary of the invention
It is an object of the invention to provide a face identification method based on convolutional neural networks that can improve the face recognition rate and noise immunity.
To solve the above problems, the present invention provides a face identification method based on convolutional neural networks, comprising:
Step 1: perform the necessary early-stage preprocessing on a facial image to obtain an ideal facial image;
Step 2: feed the ideal facial image as the input of the convolutional neural network into the input layer U_0; the output of U_0 enters the difference-extraction layer U_G, and the output of the U_G layer serves as the input of U_S1, the first layer of the feature extraction layer S;
Step 3: the S neurons of U_S1, trained with supervision, extract the edge components of different orientations in the input image as the first feature extraction, and their output serves as the input of U_C1, the first layer of the feature mapping layer C, wherein the feature mapping layer C is a neural layer composed of complex neurons, its input connections are fixed and cannot change, each feature map is a plane, and the weights of all neurons in a plane are equal;
Step 4: the output of U_C1 serves as the input of U_S2, the second layer of the feature extraction layer S; U_S2 completes the second feature extraction and serves as the input of U_C2, the second layer of the feature mapping layer C;
Step 5: the output of U_C2 serves as the input of U_S3, the third layer of the feature extraction layer S; U_S3 completes the third feature extraction and serves as the input of U_C3, the third layer of the feature mapping layer C;
Step 6: the output of U_C3 serves as the input of U_S4, the fourth layer of the feature extraction layer S; U_S4 obtains the weights, thresholds and number of neuron planes of each layer through supervised competitive learning and serves as the input of U_C4, the fourth layer of the feature mapping layer C;
Step 7: U_C4 serves as the output layer of the network and outputs the final pattern recognition result of the network, determined by the maximum output of U_S4.
Further, in the above method, the early-stage preprocessing includes preprocessing for localisation and segmentation.
Further, in the above method, the output of the difference-extraction layer U_G in step 2 is given by:

$$u_G(n,k)=\max\left\{(-1)^k\sum_{|v|<A_G}a_G(v)\cdot u_0(n+v),\;0\right\},\quad(k=1,2)$$

In this formula, u_0 denotes the output of the previous layer, n denotes the input neuron, the sum over v runs over the neurons of the designated region, and a_G(v) is the connection strength of the neuron. The difference-extraction layer U_G has 2 neuron planes: k = 2 denotes the centre-enhancing plane and k = 1 the centre-suppressing plane; A_G is the radius of v. All input connections of each neuron of the U_G layer must satisfy the constraint $\sum_{|v|<A_G}a_G(v)=0$.
Further, in the above method, in steps 3 to 6 the response function of the S neurons of each layer in the feature extraction layer S is given by:

$$u_{Sl}(n,k)=\frac{\theta_l}{1-\theta_l}\,\max\left\{\frac{1+\sum_{\kappa=1}^{K_{Cl-1}}\sum_{|v|<A_{Sl}}a_{Sl}(v,\kappa,k)\cdot u_{Cl-1}(n+v,\kappa)}{1+\theta_l\cdot v_l(n)}-1,\;0\right\}$$

In this formula, a_{Sl}(v, κ, k) (≥ 0) is the connection function from the neuron u_{Cl-1}(n+v, κ) of the preceding feature mapping layer C to the S neurons of this layer; the input connections of all neurons in the same neuron plane are identical; θ_l is the threshold of the S neurons of layer l; A_{Sl} is the radius of v. When l = 1, u_{Cl-1}(n, κ) is u_G(n, k), and K_{Cl-1} = 2.
Further, in the above method, in steps 3 to 6, except for layer U_C4, the response function of the C neurons of the other three feature mapping layers U_C1, U_C2 and U_C3 is given by:

$$u_{Cl}(n,k)=\frac{\max\left\{\sum_{|v|<A_{Cl}}a_{Cl}(v)\cdot u_{Sl}(n+v,k),\;0\right\}}{1+\max\left\{\sum_{|v|<A_{Cl}}a_{Cl}(v)\cdot u_{Sl}(n+v,k),\;0\right\}}$$

In this formula, a_{Cl}(v) is the connection strength of the C-layer input.
Compared with the prior art, the invention has the following advantages:
(1) The layer-by-layer feature extraction structure gives the network a high tolerance to distortion of the input samples during recognition;
(2) By avoiding an explicit feature extraction process, the convolutional neural network implicitly learns from the training samples the features that contribute most to spanning the training sample space, giving it a higher recognition rate and better noise immunity than conventional networks;
(3) Combining different types of neurons and learning rules further improves the recognition ability of the network;
(4) By learning from facial images under ideal preprocessing conditions, the weight parameters of each layer of the network system are optimised, greatly improving the face recognition rate in complex scenes. Test results show that the method clearly outperforms traditional recognition methods such as structural methods and template matching.
Brief description of the drawings
Fig. 1 is a block diagram of an existing automatic face recognition method;
Fig. 2 is the neural network structure of one embodiment of the invention.
Embodiment
To make the above objects, features and advantages of the present invention easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 2 the present invention provides a kind of face identification method based on convolutional neural networks, including:
Step S1, necessary pretreatment early stage is carried out to facial image, obtains preferable facial image;It is specifically, described Pretreatment early stage includes positioning, the pretreatment split etc.;
Step S2: feed the ideal facial image as the input of the convolutional neural network into the input layer U_0; the output of U_0 enters the difference-extraction layer U_G, and the output of the U_G layer serves as the input of U_S1, the first layer of the feature extraction layer S. Specifically, in Fig. 2 the feature extraction layer S is a neural layer composed of simple neurons; it performs the feature extraction, and its input connections are variable and are continuously corrected during learning. The output of the difference-extraction layer U_G is given by formula (1):

$$u_G(n,k)=\max\left\{(-1)^k\sum_{|v|<A_G}a_G(v)\cdot u_0(n+v),\;0\right\},\quad(k=1,2)\tag{1}$$

In formula (1), u_0 denotes the output of the previous layer, n denotes the input neuron, the sum over v runs over the neurons of the designated region, and a_G(v) is the connection strength of the neuron. The difference-extraction layer U_G has 2 neuron planes: k = 2 denotes the centre-enhancing plane and k = 1 the centre-suppressing plane; A_G is the radius of v. All input connections of each neuron of the U_G layer must also satisfy the constraint $\sum_{|v|<A_G}a_G(v)=0$ for the difference extraction to take effect;
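Formula (1) describes a rectified centre-surround filter with two output planes. A minimal NumPy sketch follows, assuming a 3×3 on-centre kernel whose weights sum to zero (the stated constraint); the kernel values and test image are illustrative, not taken from the patent:

```python
import numpy as np

def u_G(u0, a_G, k):
    """Difference-extraction layer of formula (1):
    u_G(n, k) = max{ (-1)^k * sum_{|v|<A_G} a_G(v) * u0(n+v), 0 }.
    k = 2 gives the centre-enhancing plane, k = 1 the centre-suppressing plane."""
    r = a_G.shape[0] // 2                  # receptive-field radius A_G
    H, W = u0.shape
    padded = np.pad(u0, r)                 # zero padding outside the image
    out = np.zeros_like(u0)
    for i in range(H):
        for j in range(W):
            s = np.sum(a_G * padded[i:i + 2 * r + 1, j:j + 2 * r + 1])
            out[i, j] = max(((-1) ** k) * s, 0.0)   # rectification
    return out

# illustrative on-centre kernel whose weights sum to 0 (the constraint)
a = -np.ones((3, 3)) / 8.0
a[1, 1] = 1.0
img = np.zeros((5, 5)); img[2, 2] = 1.0    # a single bright pixel
on = u_G(img, a, k=2)
off = u_G(img, a, k=1)
print(on[2, 2], off[2, 2])                 # → 1.0 0.0
```

A bright spot excites only the on-centre (k = 2) plane at its own position, while the rectification keeps both planes non-negative.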
Step S3: the S neurons of U_S1, trained with supervision, extract the edge components of different orientations in the input image as the first feature extraction, and their output serves as the input of U_C1, the first layer of the feature mapping layer C. The feature mapping layer C is a neural layer composed of complex neurons; its input connections are fixed and cannot change, reflecting an approximate tolerance to shifts of the stimulus position within the receptive field; each feature map is a plane, and the weights of all neurons in a plane are equal. Specifically, the response function of the S neurons of each layer in the feature extraction layer S is given by formula (2):

$$u_{Sl}(n,k)=\frac{\theta_l}{1-\theta_l}\,\max\left\{\frac{1+\sum_{\kappa=1}^{K_{Cl-1}}\sum_{|v|<A_{Sl}}a_{Sl}(v,\kappa,k)\cdot u_{Cl-1}(n+v,\kappa)}{1+\theta_l\cdot v_l(n)}-1,\;0\right\}\tag{2}$$

In formula (2), a_{Sl}(v, κ, k) (≥ 0) is the connection function from the neuron u_{Cl-1}(n+v, κ) of the preceding feature mapping layer C to the S neurons of this layer; the input connections of all neurons in the same neuron plane are identical; θ_l is the threshold of the S neurons of layer l; A_{Sl} is the radius of v. When l = 1, u_{Cl-1}(n, κ) is u_G(n, k), and K_{Cl-1} = 2;
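The scalar core of formula (2) — excitation divided by a threshold-weighted inhibitory term, then rectified — can be sketched as follows. Here `excite` stands for the double sum of weighted inputs and `inhibit` for the inhibitory term v_l(n); both names are hypothetical simplifications of the formula's terms:

```python
def s_response(excite, inhibit, theta):
    """Rectified S-neuron output of formula (2), scalar form:
    u_S = theta/(1-theta) * max((1 + excite)/(1 + theta*inhibit) - 1, 0).
    The neuron fires only when the excitation exceeds theta times the
    inhibition, so theta acts as a selectivity threshold."""
    ratio = (1.0 + excite) / (1.0 + theta * inhibit)
    return theta / (1.0 - theta) * max(ratio - 1.0, 0.0)

# at excite == theta * inhibit the response is exactly zero (the threshold)
print(s_response(0.5, 1.0, 0.5))        # → 0.0
print(s_response(1.0, 1.0, 0.5) > 0)    # → True  (above threshold, fires)
```

Raising θ_l therefore makes the S neurons more selective: more of the excitation must come from the preferred pattern before the output becomes positive.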
Step S4: the output of U_C1 serves as the input of U_S2, the second layer of the feature extraction layer S; U_S2 completes the second feature extraction and serves as the input of U_C2, the second layer of the feature mapping layer C;
Step S5: the output of U_C2 serves as the input of U_S3, the third layer of the feature extraction layer S; U_S3 completes the third feature extraction and serves as the input of U_C3, the third layer of the feature mapping layer C;
Step S6: the output of U_C3 serves as the input of U_S4, the fourth layer of the feature extraction layer S; U_S4 obtains the weights, thresholds and number of neuron planes of each layer through supervised competitive learning and serves as the input of U_C4, the fourth layer of the feature mapping layer C;
Step S7: U_C4, the output layer of the network, is the recognition layer; it outputs the final pattern recognition result of the network, determined by the maximum output of U_S4. Specifically, the last layer of the feature mapping layer C is the recognition layer of the network and gives the result of pattern recognition. Through learning, the network can automatically recognise the input pattern, unaffected by distortion, scaling or displacement of the input image.
Preferably, except for layer U_C4, the response function of the C neurons of the other three feature mapping layers U_C1, U_C2 and U_C3 is given by formula (3):

$$u_{Cl}(n,k)=\frac{\max\left\{\sum_{|v|<A_{Cl}}a_{Cl}(v)\cdot u_{Sl}(n+v,k),\;0\right\}}{1+\max\left\{\sum_{|v|<A_{Cl}}a_{Cl}(v)\cdot u_{Sl}(n+v,k),\;0\right\}}\tag{3}$$

In formula (3), a_{Cl}(v) is the connection strength of the C-layer input.
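Formula (3) is a saturating squash x/(1+x) applied to the rectified weighted sum of S-layer inputs; a short sketch (the 3-element fixed weights and inputs are illustrative assumptions, not values from the patent):

```python
import numpy as np

def c_response(a_C, u_S):
    """C-neuron of formula (3): s = sum(a_C * u_S);
    output = max(s, 0) / (1 + max(s, 0)).
    Bounded in [0, 1), monotone in s, and exactly 0 for non-positive sums."""
    s = max(float(np.dot(a_C, u_S)), 0.0)
    return s / (1.0 + s)

w = np.array([0.25, 0.5, 0.25])                      # illustrative fixed weights
print(c_response(w, np.array([0.0, 2.0, 0.0])))      # → 0.5  (s = 1)
print(c_response(w, np.array([-1.0, 0.0, -1.0])))    # → 0.0  (rectified)
```

The saturation keeps the C-layer activation bounded however strong the S-layer response, which is what lets the feature maps tolerate position shifts without runaway values.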
As can be seen from Fig. 2, the network consists of the input layer U_0, the difference-extraction layer U_G, four S layers and four C layers; the main flow is: U_0 → U_G → U_S1 → U_C1 → U_S2 → U_C2 → U_S3 → U_C3 → U_S4 → U_C4. The difference-extraction layer U_G corresponds to the centre cells in the retina and is formed of two parts: a plane of neurons with centre-enhancing receptive fields and a plane of centre-suppressing neurons. The output of the U_G layer serves as the input of U_S1, the first layer of the feature extraction layer S. The S neurons in the U_S1 layer, trained with supervision, extract the edge components of different orientations in the input image, and their output serves as the input of U_C1. The neurons of U_S2 and U_S3, the second and third layers of the feature extraction layer S, are self-organising neurons trained by unsupervised competitive learning; the U_S4 layer is trained by supervised competitive learning so as to correctly recognise all samples; the U_C4 layer is the output layer, i.e. the recognition layer, of the network and shows the final pattern recognition result.
By learning from facial image samples, the present invention optimises the weight parameters of every layer of the neural network, thereby greatly improving the recognition rate of faces in complex scenes; the repeated feature extraction structure gives the network strong noise immunity. Complex scenes here mainly refer to scenes in which the facial image is affected by factors such as illumination, expression and pose. The method has good fault tolerance, parallel processing ability and self-learning ability; it can handle problems in which the environmental information is complex, the background knowledge is unclear and the inference rules are indefinite; it allows samples to have considerable defects and distortions; it runs fast, adapts well and achieves high resolution; and it can be applied to fields such as pattern recognition, anomaly detection and image processing.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the identical or similar parts the embodiments may be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in the embodiment, its description is relatively simple, and the relevant points can be found in the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in the embodiments herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their function. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this invention.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass these changes and modifications.

Claims (5)

  1. A face identification method based on convolutional neural networks, characterised by comprising:
    Step 1: performing early-stage preprocessing on a facial image to obtain an ideal facial image;
    Step 2: feeding the ideal facial image as the input of the convolutional neural network into the input layer U_0, the output of U_0 entering the difference-extraction layer U_G, and the output of the U_G layer serving as the input of U_S1, the first layer of the feature extraction layer S;
    Step 3: the S neurons of U_S1, trained with supervision, extracting the edge components of different orientations in the input image as the first feature extraction and outputting them as the input of U_C1, the first layer of the feature mapping layer C, wherein the feature mapping layer C is a neural layer composed of complex neurons, its input connections are fixed and cannot change, each feature map is a plane, and the weights of all neurons in a plane are equal;
    Step 4: the output of U_C1 serving as the input of U_S2, the second layer of the feature extraction layer S, U_S2 completing the second feature extraction and serving as the input of U_C2, the second layer of the feature mapping layer C;
    Step 5: the output of U_C2 serving as the input of U_S3, the third layer of the feature extraction layer S, U_S3 completing the third feature extraction and serving as the input of U_C3, the third layer of the feature mapping layer C;
    Step 6: the output of U_C3 serving as the input of U_S4, the fourth layer of the feature extraction layer S, U_S4 obtaining the weights, thresholds and number of neuron planes of each layer through supervised competitive learning and serving as the input of U_C4, the fourth layer of the feature mapping layer C;
    Step 7: U_C4 serving as the output layer of the network and outputting the final pattern recognition result of the network, determined by the maximum output of U_S4.
  2. The face identification method based on convolutional neural networks of claim 1, characterised in that the early-stage preprocessing includes preprocessing for localisation and segmentation.
  3. The face identification method based on convolutional neural networks of claim 1, characterised in that the output of the difference-extraction layer U_G in step 2 is given by:

    $$u_G(n,k)=\max\left\{(-1)^k\sum_{|v|<A_G}a_G(v)\cdot u_0(n+v),\;0\right\},\quad(k=1,2)$$

    In this formula, u_0 denotes the output of the previous layer, n denotes the input neuron, the sum over v runs over the neurons of the designated region, and a_G(v) is the connection strength of the neuron. The difference-extraction layer U_G has 2 neuron planes: k = 2 denotes the centre-enhancing plane and k = 1 the centre-suppressing plane; A_G is the radius of v, and all input connections of each neuron of the U_G layer must satisfy the constraint $\sum_{|v|<A_G}a_G(v)=0$.
  4. 4. the face identification method based on convolutional neural networks as claimed in claim 1, it is characterised in that step 3 to six In, the receptance function of each layer of S neuron is shown below in feature extraction layer S:
    <mrow> <msub> <mi>u</mi> <mrow> <mi>s</mi> <mi>l</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>,</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <msub> <mi>&amp;theta;</mi> <mi>l</mi> </msub> <mrow> <mn>1</mn> <mo>-</mo> <msub> <mi>&amp;theta;</mi> <mi>l</mi> </msub> </mrow> </mfrac> <mi>m</mi> <mi>a</mi> <mi>x</mi> <mo>{</mo> <mfrac> <mrow> <mn>1</mn> <mo>+</mo> <munderover> <mi>&amp;Sigma;</mi> <mrow> <mi>&amp;kappa;</mi> <mo>=</mo> <mn>1</mn> </mrow> <msub> <mi>K</mi> <mrow> <mi>C</mi> <mi>l</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> </munderover> <munder> <mi>&amp;Sigma;</mi> <mrow> <mrow> <mo>|</mo> <mi>v</mi> <mo>|</mo> </mrow> <mo>&lt;</mo> <msub> <mi>A</mi> <mrow> <mi>S</mi> <mi>l</mi> </mrow> </msub> </mrow> </munder> <msub> <mi>a</mi> <mrow> <mi>S</mi> <mi>l</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>v</mi> <mo>,</mo> <mi>&amp;kappa;</mi> <mo>,</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>&amp;CenterDot;</mo> <msub> <mi>u</mi> <mrow> <mi>C</mi> <mi>l</mi> <mo>-</mo> <mn>1</mn> </mrow> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>+</mo> <mi>v</mi> <mo>,</mo> <mi>&amp;kappa;</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mn>1</mn> <mo>+</mo> <msub> <mi>&amp;theta;</mi> <mi>l</mi> </msub> <mo>&amp;CenterDot;</mo> <msub> <mi>v</mi> <mi>l</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>-</mo> <mn>1</mn> <mo>&amp;rsqb;</mo> <mo>,</mo> <mn>0</mn> <mo>}</mo> </mrow>
    In the formula, aSl(v, κ, k) (≥ 0) is the connection function from the C neurons uCl-1(n+v, κ) of the preceding feature-mapping layer to the S neurons of this layer; the input connections of all neurons within the same neuron plane are identical. θl is the threshold of the S neurons in layer l, and ASl is the radius of v. When l = 1, uCl-1(n, κ) is uG(n, k), and in that case KCl-1 = 2.
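As a non-authoritative sketch, the S-neuron response function above can be written in NumPy as follows; the array layout (one plane per κ), the zero padding at the border, and the name `s_response` are assumptions of this sketch rather than details from the claim:

```python
import numpy as np

def s_response(u_prev, a_s, v_inh, theta):
    """Illustrative sketch of the S-neuron response function of claim 4.

    u_prev : array (K, H, W), outputs u_{Cl-1}(n+v, kappa) of the
             preceding layer, one plane per kappa.
    a_s    : array (K, kh, kw), excitatory weights a_{Sl}(v, kappa, k)
             for one output plane k (shared across the whole plane).
    v_inh  : array (H, W), inhibitory input v_l(n).
    theta  : threshold theta_l, assumed to lie in (0, 1).
    """
    K, kh, kw = a_s.shape
    _, H, W = u_prev.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(u_prev, ((0, 0), (ph, ph), (pw, pw)))
    excite = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Double sum over kappa and over the region |v| < A_{Sl}.
            excite[i, j] = np.sum(a_s * padded[:, i:i + kh, j:j + kw])
    # (1 + excitation) / (1 + theta * inhibition) - 1, rectified at 0.
    ratio = (1.0 + excite) / (1.0 + theta * v_inh)
    return (theta / (1.0 - theta)) * np.maximum(ratio - 1.0, 0.0)
```

When the inhibitory input v_l(n) is zero, the response reduces to θl/(1−θl) times the rectified excitation, matching the max{…, 0} form of the formula.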
  5. The face identification method based on convolutional neural networks as claimed in claim 1, characterized in that, in steps 3 to 6, for the feature-mapping layers C other than the UC4 layer, namely the three layers UC1, UC2 and UC3, the response function of the C neurons is given by the following formula:
    $$u_{Cl}(n,k)=\frac{\max\left\{\sum_{|v|<A_{Cl}}a_{Cl}(v)\cdot u_{Sl}(n+v,k),\;0\right\}}{1+\max\left\{\sum_{|v|<A_{Cl}}a_{Cl}(v)\cdot u_{Sl}(n+v,k),\;0\right\}}$$
    In the formula, aCl(v) is the input connection strength of the C layer.
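The C-neuron formula above is a weighted sum of the corresponding S plane passed through the saturating function x → max(x, 0)/(1 + max(x, 0)), which keeps every C output in [0, 1). A minimal NumPy sketch, with the zero padding and the name `c_response` assumed for illustration:

```python
import numpy as np

def c_response(u_s, a_c):
    """Illustrative sketch of the C-neuron response of claim 5
    (layers U_C1 to U_C3).

    u_s : array (H, W), the S plane u_{Sl}(., k) feeding this C plane.
    a_c : array (kh, kw), input weights a_{Cl}(v) over |v| < A_{Cl}.
    """
    kh, kw = a_c.shape
    H, W = u_s.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(u_s, ((ph, ph), (pw, pw)))
    x = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Weighted sum over the region |v| < A_{Cl}.
            x[i, j] = np.sum(a_c * padded[i:i + kh, j:j + kw])
    pos = np.maximum(x, 0.0)
    return pos / (1.0 + pos)  # saturating output, always below 1
```

The saturation means a C neuron signals only the presence of a feature in its receptive field, largely independent of its exact strength, which is what gives the feature-mapping layers their tolerance to deformation.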
CN201410620574.9A 2014-11-06 2014-11-06 Face identification method based on convolutional neural networks Expired - Fee Related CN104346607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410620574.9A CN104346607B (en) 2014-11-06 2014-11-06 Face identification method based on convolutional neural networks

Publications (2)

Publication Number Publication Date
CN104346607A CN104346607A (en) 2015-02-11
CN104346607B true CN104346607B (en) 2017-12-22

Family

ID=52502181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410620574.9A Expired - Fee Related CN104346607B (en) 2014-11-06 2014-11-06 Face identification method based on convolutional neural networks

Country Status (1)

Country Link
CN (1) CN104346607B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2718222C1 (en) * 2016-07-29 2020-03-31 Общество С Ограниченной Ответственностью "Нтех Лаб" Face recognition using artificial neural network

Families Citing this family (29)

Publication number Priority date Publication date Assignee Title
CN104778448B (en) * 2015-03-24 2017-12-15 孙建德 A kind of face identification method based on structure adaptive convolutional neural networks
CN104794501B (en) * 2015-05-14 2021-01-05 清华大学 Pattern recognition method and device
CN106203619B (en) * 2015-05-29 2022-09-13 三星电子株式会社 Data optimized neural network traversal
CN104992167B (en) * 2015-07-28 2018-09-11 中国科学院自动化研究所 A kind of method for detecting human face and device based on convolutional neural networks
CN105426875A (en) * 2015-12-18 2016-03-23 武汉科技大学 Face identification method and attendance system based on deep convolution neural network
CN105631406B (en) * 2015-12-18 2020-07-10 北京小米移动软件有限公司 Image recognition processing method and device
GB2545661A (en) 2015-12-21 2017-06-28 Nokia Technologies Oy A method for analysing media content
CN105975931B (en) * 2016-05-04 2019-06-14 浙江大学 Convolutional neural network face recognition method based on multi-scale pooling
US10303977B2 (en) 2016-06-28 2019-05-28 Conduent Business Services, Llc System and method for expanding and training convolutional neural networks for large size input images
CN106295566B (en) * 2016-08-10 2019-07-09 北京小米移动软件有限公司 Facial expression recognizing method and device
CN107368182B (en) * 2016-08-19 2020-02-18 北京市商汤科技开发有限公司 Gesture detection network training, gesture detection and gesture control method and device
CN106407369A (en) * 2016-09-09 2017-02-15 华南理工大学 Photo management method and system based on deep learning face recognition
FR3059804B1 (en) * 2016-12-07 2019-08-02 Idemia Identity And Security IMAGE PROCESSING SYSTEM
CN106650919A (en) * 2016-12-23 2017-05-10 国家电网公司信息通信分公司 Information system fault diagnosis method and device based on convolutional neural network
CN107066934A (en) * 2017-01-23 2017-08-18 华东交通大学 Tumor stomach cell image recognition decision maker, method and tumor stomach section identification decision equipment
CN107423696A (en) * 2017-07-13 2017-12-01 重庆凯泽科技股份有限公司 Face identification method and system
CN107633229A (en) * 2017-09-21 2018-01-26 北京智芯原动科技有限公司 Method for detecting human face and device based on convolutional neural networks
CN107545571A (en) * 2017-09-22 2018-01-05 深圳天琴医疗科技有限公司 A kind of image detecting method and device
CN111738431B (en) 2017-12-11 2024-03-05 中科寒武纪科技股份有限公司 Neural network computing device and method
US10834365B2 (en) 2018-02-08 2020-11-10 Nortek Security & Control Llc Audio-visual monitoring using a virtual assistant
US11295139B2 (en) 2018-02-19 2022-04-05 Intellivision Technologies Corp. Human presence detection in edge devices
US11615623B2 (en) 2018-02-19 2023-03-28 Nortek Security & Control Llc Object detection in edge devices for barrier operation and parcel delivery
US10978050B2 (en) 2018-02-20 2021-04-13 Intellivision Technologies Corp. Audio type detection
US11735018B2 (en) 2018-03-11 2023-08-22 Intellivision Technologies Corp. Security system with face recognition
KR20210028185A (en) * 2018-06-29 2021-03-11 렌치 잉크. Human posture analysis system and method
CN109242043A (en) * 2018-09-29 2019-01-18 北京京东金融科技控股有限公司 Method and apparatus for generating information prediction model
CN113569796A (en) * 2018-11-16 2021-10-29 北京市商汤科技开发有限公司 Key point detection method and device, electronic equipment and storage medium
CN109871464B (en) * 2019-01-17 2020-12-25 东南大学 Video recommendation method and device based on UCL semantic indexing
CN110210399A (en) * 2019-05-31 2019-09-06 广东世纪晟科技有限公司 A kind of face identification method based on uncertain quantization probability convolutional neural networks

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102831396A (en) * 2012-07-23 2012-12-19 常州蓝城信息科技有限公司 Computer face recognition method
CN103646244A (en) * 2013-12-16 2014-03-19 北京天诚盛业科技有限公司 Methods and devices for face characteristic extraction and authentication
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9355303B2 (en) * 2011-12-04 2016-05-31 King Saud University Face recognition using multilayered discriminant analysis

Similar Documents

Publication Publication Date Title
CN104346607B (en) Face identification method based on convolutional neural networks
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN108615010A (en) Facial expression recognition method based on fusion of parallel convolutional neural network feature maps
CN107766850A (en) Based on the face identification method for combining face character information
CN106326874A (en) Method and device for recognizing iris in human eye images
Tivive et al. A gender recognition system using shunting inhibitory convolutional neural networks
CN106295694A (en) A kind of face identification method of iteration weight set of constraints rarefaction representation classification
CN105956570B (en) Smiling face recognition method based on lip features and deep learning
Shen et al. Learning high-level concepts by training a deep network on eye fixations
CN105893916A (en) New method for detection of face pretreatment, feature extraction and dimensionality reduction description
Vadlapati et al. Facial recognition using the OpenCV Libraries of Python for the pictures of human faces wearing face masks during the COVID-19 pandemic
CN106682653A (en) KNLDA-based RBF neural network face recognition method
CN105893941B (en) A kind of facial expression recognizing method based on area image
Wang et al. Research on face recognition technology based on PCA and SVM
CN116343284A (en) Attention mechanism-based multi-feature outdoor environment emotion recognition method
CN110210399A (en) A kind of face identification method based on uncertain quantization probability convolutional neural networks
Joshi et al. Deep learning based approach for malaria detection in blood cell images
Li et al. Automatic recognition of smiling and neutral facial expressions
Xia et al. A multi-scale multi-attention network for dynamic facial expression recognition
Shen et al. Facial expression recognition based on bidirectional gated recurrent units within deep residual network
Reale et al. Facial action unit analysis through 3d point cloud neural networks
CN109800854A (en) A kind of Hydrophobicity of Composite Insulator grade determination method based on probabilistic neural network
CN108256569A (en) A kind of object identifying method under complex background and the computer technology used
Zuobin et al. Effective feature fusion for pattern classification based on intra-class and extra-class discriminative correlation analysis
Niu et al. An indoor pool drowning risk detection method based on improved YOLOv4

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171222

Termination date: 20201106