CN104346607A - Face recognition method based on convolutional neural network - Google Patents
- Publication number
- CN104346607A CN104346607A CN201410620574.9A CN201410620574A CN104346607A CN 104346607 A CN104346607 A CN 104346607A CN 201410620574 A CN201410620574 A CN 201410620574A CN 104346607 A CN104346607 A CN 104346607A
- Authority
- CN
- China
- Prior art keywords
- layer
- input
- output
- neuron
- feature extraction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Databases & Information Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a face recognition method based on a convolutional neural network, comprising the following steps: performing the necessary early-stage preprocessing on a face image to obtain an ideal face image; taking the ideal face image as the input of the convolutional neural network and feeding it into the input layer U0, whose output enters the difference extraction layer UG, whose output in turn serves as the input of US1; through supervised training, the S neurons of US1 extract edge components in different directions from the input image as the first feature extraction and output to the input of UC1; the output of UC1 serves as the input of US2, which completes the second feature extraction and feeds UC2; the output of UC2 serves as the input of US3, which completes the third feature extraction and feeds UC3; the output of UC3 serves as the input of US4, which obtains the weights, thresholds and number of neuron planes of each layer through supervised competitive learning and feeds UC4; UC4 serves as the output layer of the network and outputs the final pattern recognition result, determined by the maximum output of US4. The method improves the recognition rate of faces in complex scenes.
Description
Technical field
The present invention relates to a face recognition method based on a convolutional neural network.
Background technology
Face recognition technology uses a computer to analyze a face image, extract effective feature information, and identify a person. It first judges whether a face is present in the image; if so, it further determines the position and size of each face. It then extracts the latent pattern features of each face from this information and compares them against a known face database, thereby identifying the class of each face. Determining whether a face exists in an image is face detection; comparing the extracted features of an image against a known face database is face recognition.
Researchers have achieved a great deal in face detection and recognition in recent years, and both detection and recognition performance have improved substantially. The many face detection algorithms that have been proposed fall roughly into three classes: (1) methods based on skin-color features, (2) methods based on knowledge models, and (3) methods based on statistical theory. Among the latter, artificial neural network (ANN) methods embed the statistical properties of a pattern in the network structure and parameters through training; for a pattern as complex and hard to describe explicitly as the human face, ANN-based methods have a unique advantage. Rowley used a two-layer ANN to detect faces in multiple poses: the first layer estimates the pose of the face in the input image window, and the second layer consists of three face detectors for frontal, half-profile, and profile faces respectively. An input image first passes through the pose estimator, undergoes the corresponding preprocessing, and is then fed to the three second-layer detectors, which finally determine the position and pose of the face.
Face recognition methods fall broadly into the following classes: (1) methods based on geometric features; (2) elastic-model matching methods; (3) neural network methods; (4) methods based on linear and nonlinear subspaces. Many existing algorithms recognize faces well in simple scenes, but in video surveillance, where images are affected by illumination, orientation, noise, and varying poses and expressions, even current high-performance face recognition algorithms cannot reach ideal recognition results.
The difficulty of face recognition shows in several aspects:
(1) The imaging angle of the camera, i.e. the pose, strongly affects most face recognition algorithms, especially those based on simple geometric features. Because of pose, two face images of the same person may be less similar than two images of different people.
(2) Changes in illumination alter the gray-level information of a face image, which strongly affects recognizers based on gray-level features.
(3) Changes in expression also degrade recognition performance.
(4) A face image may further be affected by age, occlusion, and image scale, each of which degrades the performance of face recognition algorithms to a varying degree.
The convolutional neural network is a recently developed and now widely studied efficient recognition method. In the 1960s, while studying neurons in cat visual cortex that are sensitive to local regions and orientations, Hubel and Wiesel found that this unique network structure can effectively reduce the complexity of feedback neural networks, which led to the convolutional neural network (CNN). CNNs have since become a research focus in many scientific fields, particularly pattern classification, because the network avoids complex early-stage image preprocessing and can take the original image directly as input, and has therefore found wide application. The neocognitron proposed by K. Fukushima in 1980 was the first implemented convolutional neural network. Many researchers subsequently improved on it; a representative result is the "improved cognitron" of Alexander and Taylor, which combines the advantages of several improvements while avoiding time-consuming error backpropagation.
The basic structure of a CNN generally comprises two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, from which it extracts local features; once a local feature has been extracted, its positional relationship to the other features is fixed as well. The second is the feature mapping layer: each computational layer of the network consists of multiple feature maps, each feature map is a plane, and all neurons in a plane share equal weights. The feature mapping structure uses the sigmoid function, whose influence kernel is small, as the activation function of the convolutional network, giving the feature maps shift invariance. Because the neurons in a map share weights, the number of free network parameters is reduced. Each convolutional layer is followed by a computational layer for local averaging and secondary extraction; this characteristic two-stage feature extraction structure reduces the feature resolution.
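As a rough illustration of the two-stage structure just described (shared-weight feature extraction followed by local averaging), the following Python/NumPy sketch convolves an image with one shared kernel, applies a sigmoid activation, and average-pools the result. The kernel values, image size, and pooling size are arbitrary assumptions for illustration, not part of the patented method.

```python
import numpy as np

def conv2d_shared(image, kernel):
    """Feature extraction: every neuron in the feature map applies
    the same (shared) kernel to its local receptive field."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def avg_pool(fmap, size=2):
    """Subsampling layer: local averaging reduces feature resolution."""
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    return fmap[:H2*size, :W2*size].reshape(H2, size, W2, size).mean(axis=(1, 3))

img = np.random.rand(8, 8)
edge_kernel = np.array([[1., 0., -1.],
                        [2., 0., -2.],
                        [1., 0., -1.]])   # arbitrary edge filter for illustration
fmap = 1.0 / (1.0 + np.exp(-conv2d_shared(img, edge_kernel)))  # sigmoid activation
pooled = avg_pool(fmap)
print(pooled.shape)  # (3, 3)
```

Note how weight sharing appears directly in the code: the same `edge_kernel` is applied at every position, so the feature map has only nine free parameters regardless of image size.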
CNNs are mainly used to recognize two-dimensional patterns invariant to displacement, scaling, and other forms of distortion. Because the feature detection layers of a CNN learn from training data, explicit feature extraction is avoided when using a CNN; features are learned implicitly from the training data. Moreover, because the neurons in a feature map share weights, the network can learn in parallel, which is a major advantage of convolutional networks over networks in which neurons are fully interconnected. With its special structure of locally shared weights, the CNN has unique advantages in speech recognition and image processing: its layout is closer to real biological neural networks, weight sharing reduces network complexity, and in particular the fact that a multi-dimensional input image can be fed directly into the network avoids the complexity of data reconstruction during feature extraction and classification.
The following explains the principle by which a convolutional neural network performs face recognition:
As shown in Fig. 1, a complete automatic face recognition system for complex scenes mainly comprises the modules of scene image acquisition and preprocessing, face detection and localization, facial feature extraction, and face recognition.
In Fig. 1, the scene image acquisition and preprocessing module processes the dynamically acquired image to suppress noise and improve recognition, mainly through image enhancement: filtering noise, correcting uneven illumination, and strengthening contrast so that complex-scene images become discriminable. The face detection and localization module automatically finds, in the dynamically acquired image, the position of the face to be recognized; common methods include localization algorithms based on skin-color models, on statistical models, and on feature models. Facial feature extraction is the work that follows face localization; common methods include feature extraction based on Euclidean distance, on the KL transform, on SVD, and on ICA. The last module is face recognition, which identifies each face image; its methods fall into two broad classes, still-image recognition and dynamic-image recognition. Within both classes, artificial neural network (ANN) methods embed the statistical properties of a pattern into the network structure and parameters through training, and for a pattern as complex and hard to describe explicitly as the human face they have a unique advantage. The present invention adopts a neural network recognition method.
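The four modules above can be sketched as a minimal pipeline. Every function below is a hypothetical stand-in (normalization for preprocessing, whole-image "detection", row means as features, nearest-neighbour classification); a real system would substitute the algorithms named in the text.

```python
import numpy as np

def preprocess(frame):
    # stand-in for illumination correction / contrast enhancement
    f = frame.astype(float)
    return (f - f.mean()) / (f.std() + 1e-8)

def detect_faces(img):
    # stand-in for face detection: pretend the whole image is one face region
    return [img]

def extract_features(face):
    # stand-in for KL/SVD/ICA-style feature extraction: row means
    return face.mean(axis=1)

def classify(features, gallery):
    # nearest neighbour against a known-face gallery (Euclidean distance)
    dists = [np.linalg.norm(features - g) for g in gallery]
    return int(np.argmin(dists))

frame = np.random.rand(16, 16)
gallery = [np.random.rand(16) for _ in range(3)]   # three known "faces"
faces = detect_faces(preprocess(frame))
ids = [classify(extract_features(f), gallery) for f in faces]
print(ids)
```

The point of the sketch is the module boundaries: each stage can be swapped for any of the methods listed in the text without changing the overall flow.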
Depending on whether feature extraction is performed, neural network face recognition systems fall into two classes: systems with a feature extraction stage and systems without one. The former is in effect a combination of classical methods and neural network techniques; it can make full use of human experience to obtain pattern features and of the neural network's classification ability to recognize faces, but the extracted features must reflect the whole face to reach a high recognition rate. The latter omits feature extraction and uses the whole face image directly as the input of the neural network. Although this increases the complexity of the network structure to some extent, it improves the network's interference resistance and recognition rate considerably over the former. The CNN adopted in the present invention belongs to this second class.
Summary of the invention
The object of the present invention is to provide a face recognition method based on a convolutional neural network that can improve the face recognition rate and interference resistance.
To this end, the invention provides a face recognition method based on a convolutional neural network, comprising:
Step 1: perform the necessary early-stage preprocessing on the face image to obtain an ideal face image;
Step 2: take the ideal face image as the input of the convolutional neural network and feed it into the input layer U0; the output of U0 enters the difference extraction layer UG, and the output of UG serves as the input of US1, the first layer of the feature extraction layers S;
Step 3: through supervised training, the S neurons of US1 extract edge components in different directions from the input image as the first feature extraction, and output to the input of UC1, the first layer of the feature mapping layers C, wherein the feature mapping layers C are neural layers composed of complex neurons, their input connections are fixed and cannot be modified, each feature map is a plane, and all neurons in a plane have equal weights;
Step 4: the output of UC1 serves as the input of US2, the second feature extraction layer; US2 completes the second feature extraction and feeds UC2, the second feature mapping layer;
Step 5: the output of UC2 serves as the input of US3; US3 completes the third feature extraction and feeds UC3;
Step 6: the output of UC3 serves as the input of US4; through supervised competitive learning, US4 obtains the weights, thresholds and number of neuron planes of each layer and feeds UC4;
Step 7: UC4 serves as the output layer of the network and outputs the final pattern recognition result, determined by the maximum output of US4.
Further, in the above method, the early-stage preprocessing comprises localization and segmentation.
Further, in the above method, the output of the difference extraction layer UG in Step 2 is given by:

u_G(n, k) = φ[ Σ_{|v| ≤ A_G} a_G(v, k) · u_0(n + v) ],  φ[x] = max(x, 0)

where u_0 is the output of the previous layer, n denotes the input neuron, v denotes the designated area and the summation over v runs over the neurons of that area, a_G(ξ) is the strength of the neuron connection, and A_G is the radius of v. UG has two neuron planes: k = 2 corresponds to an excitatory (on-center) unit and k = 1 to an inhibitory (off-center) unit. All input connections of every neuron of the UG layer must satisfy the zero-sum constraint Σ_v a_G(v, k) = 0.
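The difference extraction layer described above can be sketched as a zero-sum center-surround filter with an on-center (k = 2) and an off-center (k = 1) plane, assuming a neocognitron-style contrast-extraction form. The difference-of-Gaussians kernel and the radius and sigma values below are illustrative assumptions:

```python
import numpy as np

def phi(x):
    return np.maximum(x, 0.0)  # half-wave rectification

def make_dog_kernel(radius=2, sigma_c=0.6, sigma_s=1.2):
    """Zero-sum difference-of-Gaussians kernel a_G(v): because the weights
    sum to zero, the layer responds only to local gray-level differences."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    center = np.exp(-(xx**2 + yy**2) / (2 * sigma_c**2))
    surround = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    return center / center.sum() - surround / surround.sum()  # sums to ~0

def u_G(image, k_plane):
    """k_plane=2: on-center plane; k_plane=1: off-center plane (sign flipped)."""
    kern = make_dog_kernel()
    sign = 1.0 if k_plane == 2 else -1.0
    r = kern.shape[0] // 2
    H, W = image.shape
    out = np.zeros((H - 2*r, W - 2*r))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i+2*r+1, j:j+2*r+1]
            out[i, j] = phi(sign * np.sum(kern * patch))
    return out

img = np.random.rand(10, 10)
on = u_G(img, k_plane=2)
flat = u_G(np.ones((10, 10)), k_plane=2)  # uniform input -> (near) zero output
print(np.allclose(flat, 0.0, atol=1e-9))  # True
```

The last line demonstrates the effect of the zero-sum constraint: a uniform region produces no response, so only edges and local contrasts pass through.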
Further, in the above method, in Steps 3 to 6 the response function of the S neurons in each layer of the feature extraction layers S is given by:

u_Sl(n, k) = φ[ Σ_{κ=1}^{K_{Cl-1}} Σ_{|v| ≤ A_Sl} a_Sl(v, κ, k) · u_{Cl-1}(n + v, κ) − θ_l ],  φ[x] = max(x, 0)

where a_Sl(v, κ, k) (≥ 0) is the connection from the neuron u_{Cl-1}(n + v, κ) of the previous feature mapping layer C to this S neuron, the input connections of all neurons in the same neuron plane are identical, θ_l is the threshold of the l-th S layer, and A_Sl is the radius of v. For l = 1, u_{Cl-1}(n, κ) is u_G(n, k) and K_{Cl-1} = 2.
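A minimal sketch of the S-neuron response described above, under the assumption that it takes the standard neocognitron form φ[Σ a·u − θ] with non-negative shared weights; the weight values here are random placeholders rather than supervised-trained values:

```python
import numpy as np

def phi(x):
    return np.maximum(x, 0.0)

def s_response(prev_c, a, theta):
    """Response of one S-cell plane.
    prev_c: previous C layer, shape (K, H, W) -- K input planes
    a:      non-negative shared weights, shape (K, r, r)
    theta:  threshold of this S layer
    """
    assert (a >= 0).all()  # a_Sl(v, kappa, k) >= 0
    K, r, _ = a.shape
    _, H, W = prev_c.shape
    out = np.zeros((H - r + 1, W - r + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # same shared weights a at every position n
            s = np.sum(a * prev_c[:, i:i+r, j:j+r])
            out[i, j] = phi(s - theta)
    return out

prev_c = np.random.rand(2, 8, 8)   # for l = 1: the two planes of U_G (K = 2)
a = np.random.rand(2, 3, 3)        # placeholder for supervised-trained weights
resp = s_response(prev_c, a, theta=1.0)
print(resp.shape)  # (6, 6)
```

Because `a` is shared across positions, all neurons of one S plane detect the same feature (e.g. one edge orientation) wherever it appears in the input.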
Further, in the above method, in Steps 3 to 6 the response function of the C neurons of the feature mapping layers UC1, UC2 and UC3 (all layers of C except UC4) is given by:

u_Cl(n, k) = ψ[ Σ_{|v| ≤ A_Cl} a_Cl(v) · u_Sl(n + v, k) ],  ψ[x] = φ[x] / (1 + φ[x])

where a_Cl(v) is the input connection of the C layer and A_Cl is the radius of v.
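The C-neuron response described above (fixed, unmodifiable connections averaging over the S plane below, which gives tolerance to small positional shifts) can be sketched as follows; the saturating nonlinearity ψ[x] = φ[x]/(1+φ[x]) and the fixed uniform weights are both illustrative assumptions:

```python
import numpy as np

def psi(x):
    """Saturating output nonlinearity assumed for C-cells in this sketch."""
    p = np.maximum(x, 0.0)
    return p / (1.0 + p)

def c_response(prev_s, a_c):
    """C-cell plane: fixed averaging over the S plane below, so a feature
    detected anywhere in the window still excites the same C neuron."""
    r = a_c.shape[0]
    H, W = prev_s.shape
    out = np.zeros((H - r + 1, W - r + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = psi(np.sum(a_c * prev_s[i:i+r, j:j+r]))
    return out

prev_s = np.random.rand(6, 6)
a_c = np.full((3, 3), 1.0 / 9.0)  # fixed uniform weights (an assumption)
out = c_response(prev_s, a_c)
print(out.shape)  # (4, 4)
```

Unlike the S weights, `a_c` is never modified during learning; the positional tolerance comes entirely from this fixed blurring.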
Compared with the prior art, the present invention has the following advantages:
(1) the layer-by-layer feature extraction structure gives the network a high tolerance to distortions of the input samples during recognition;
(2) by avoiding an explicit feature extraction step, the convolutional neural network implicitly learns from the training samples the features that contribute most to spanning the training sample space, and therefore achieves a higher recognition rate and stronger interference resistance than traditional networks;
(3) combining different neurons and learning rules further improves the recognition ability of the network;
(4) by learning from face images under ideal preprocessing conditions, the weight parameters of each layer of the network are optimized, greatly raising the face recognition rate in complex scenes. Experimental results show that the method clearly outperforms traditional recognition methods such as structural methods and template matching.
Accompanying drawing explanation
Fig. 1 is a block diagram of an existing automatic face recognition method;
Fig. 2 is the network structure of the neural network of one embodiment of the invention.
Embodiment
To make the above objects, features and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in Fig. 2, the invention provides a face recognition method based on a convolutional neural network, comprising:
Step S1: perform the necessary early-stage preprocessing on the face image to obtain an ideal face image; specifically, the preprocessing comprises localization, segmentation, and the like;
Step S2: take the ideal face image as the input of the convolutional neural network and feed it into the input layer U0; the output of U0 enters the difference extraction layer UG, and the output of UG serves as the input of US1, the first layer of the feature extraction layers S. In Fig. 2, the feature extraction layers S are neural layers composed of simple neurons that perform feature extraction; their input connections are variable and are continuously corrected during learning. Specifically, the output of the difference extraction layer UG is given by formula (1):

u_G(n, k) = φ[ Σ_{|v| ≤ A_G} a_G(v, k) · u_0(n + v) ],  φ[x] = max(x, 0)   (1)

where u_0 is the output of the previous layer, n denotes the input neuron, v denotes the designated area and the summation over v runs over the neurons of that area, a_G(ξ) is the strength of the neuron connection, and A_G is the radius of v. UG has two neuron planes: k = 2 corresponds to an excitatory (on-center) unit and k = 1 to an inhibitory (off-center) unit. All input connections of every neuron of the UG layer must also satisfy the zero-sum constraint Σ_v a_G(v, k) = 0, which is what produces the difference-extraction effect;
Step S3: through supervised training, the S neurons of US1 extract edge components in different directions from the input image as the first feature extraction, and output to the input of UC1, the first layer of the feature mapping layers C. The feature mapping layers C are neural layers composed of complex neurons; their input connections are fixed and cannot be modified, so that their receptive fields tolerate approximate changes in the position of the excitation; each feature map is a plane, and all neurons in a plane have equal weights. Specifically, in the feature extraction layers S the response function of the S neurons of each layer is given by formula (2):

u_Sl(n, k) = φ[ Σ_{κ=1}^{K_{Cl-1}} Σ_{|v| ≤ A_Sl} a_Sl(v, κ, k) · u_{Cl-1}(n + v, κ) − θ_l ],  φ[x] = max(x, 0)   (2)

where a_Sl(v, κ, k) (≥ 0) is the connection from the neuron u_{Cl-1}(n + v, κ) of the previous feature mapping layer C to this S neuron, the input connections of all neurons in the same neuron plane are identical, θ_l is the threshold of the l-th S layer, and A_Sl is the radius of v. For l = 1, u_{Cl-1}(n, κ) is u_G(n, k) and K_{Cl-1} = 2;
Step S4: the output of UC1 serves as the input of US2, the second feature extraction layer; US2 completes the second feature extraction and feeds UC2, the second feature mapping layer;
Step S5: the output of UC2 serves as the input of US3; US3 completes the third feature extraction and feeds UC3;
Step S6: the output of UC3 serves as the input of US4; through supervised competitive learning, US4 obtains the weights, thresholds and number of neuron planes of each layer and feeds UC4;
Step S7: UC4 serves as the output layer and recognition layer of the network, and outputs the final pattern recognition result, determined by the maximum output of US4. Specifically, the last layer of the feature mapping layers C is the recognition layer and gives the pattern recognition result. After learning, the network can recognize input patterns automatically, unaffected by distortion, scaling or displacement of the input image.
Preferably, the response function of the C neurons of the feature mapping layers UC1, UC2 and UC3 (all layers of C except UC4) is given by formula (3):

u_Cl(n, k) = ψ[ Σ_{|v| ≤ A_Cl} a_Cl(v) · u_Sl(n + v, k) ],  ψ[x] = φ[x] / (1 + φ[x])   (3)

where a_Cl(v) is the input connection of the C layer and A_Cl is the radius of v.
As can be seen from Fig. 2, the network consists of the input layer U0, the difference extraction layer UG, four S layers and four C layers; the main flow is: U0 → UG → US1 → UC1 → US2 → UC2 → US3 → UC3 → US4 → UC4. The difference extraction layer UG corresponds to the center cells of the retina and consists of two parts, a plane of on-center (center-enhancing) neurons and a plane of off-center (center-suppressing) neurons; the output of UG serves as the input of US1, the first feature extraction layer. The S neurons of US1 extract edge components in different directions from the input image through supervised training, and their output feeds UC1. The neurons of the second and third feature extraction layers US2 and US3 are self-organizing neurons trained by unsupervised competitive learning; US4 is trained by supervised competitive learning to recognize all samples correctly; UC4 is the output layer and recognition layer of the network and displays the final pattern recognition result.
By learning from samples of face images, the present invention optimizes the weight parameters of every layer of the neural network and thereby greatly improves the recognition rate of faces in complex scenes; the repeated feature extraction structure gives the network strong interference resistance. Here, a complex scene mainly refers to one in which the face image is affected by illumination, expression and pose. The network has good fault tolerance, parallel processing ability and self-learning ability; it can handle problems with complex environmental information, unclear background knowledge and indefinite inference rules, and it tolerates samples with considerable defects and distortions. It runs fast, adapts well, offers high resolution, and can be applied to fields such as pattern recognition, anomaly detection and image processing.
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and for the identical or similar parts the embodiments can be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively simple; for relevant details, see the description of the method.
Skilled practitioners will further appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above in terms of their functions. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled practitioners may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.
Claims (5)
1. A face recognition method based on a convolutional neural network, characterized by comprising:
Step 1: performing the necessary early-stage preprocessing on the face image to obtain an ideal face image;
Step 2: taking the ideal face image as the input of the convolutional neural network and feeding it into the input layer U0; the output of U0 enters the difference extraction layer UG, and the output of UG serves as the input of US1, the first layer of the feature extraction layers S;
Step 3: through supervised training, the S neurons of US1 extract edge components in different directions from the input image as the first feature extraction and output to the input of UC1, the first layer of the feature mapping layers C, wherein the feature mapping layers C are neural layers composed of complex neurons, their input connections are fixed and cannot be modified, each feature map is a plane, and all neurons in a plane have equal weights;
Step 4: the output of UC1 serves as the input of US2, the second feature extraction layer; US2 completes the second feature extraction and feeds UC2, the second feature mapping layer;
Step 5: the output of UC2 serves as the input of US3; US3 completes the third feature extraction and feeds UC3;
Step 6: the output of UC3 serves as the input of US4; through supervised competitive learning, US4 obtains the weights, thresholds and number of neuron planes of each layer and feeds UC4;
Step 7: UC4 serves as the output layer of the network and outputs the final pattern recognition result, determined by the maximum output of US4.
2. The face recognition method based on a convolutional neural network of claim 1, characterized in that the early-stage preprocessing comprises localization and segmentation.
3. The face recognition method based on a convolutional neural network of claim 1, characterized in that the output of the difference extraction layer UG in Step 2 is given by:

u_G(n, k) = φ[ Σ_{|v| ≤ A_G} a_G(v, k) · u_0(n + v) ],  φ[x] = max(x, 0)

where u_0 is the output of the previous layer, n denotes the input neuron, v denotes the designated area and the summation over v runs over the neurons of that area, a_G(ξ) is the strength of the neuron connection, and A_G is the radius of v; UG has two neuron planes, with k = 2 corresponding to an excitatory (on-center) unit and k = 1 to an inhibitory (off-center) unit; all input connections of every neuron of the UG layer must satisfy the zero-sum constraint Σ_v a_G(v, k) = 0.
4. The face recognition method based on a convolutional neural network of claim 1, characterized in that in Steps 3 to 6 the response function of the S neurons in each layer of the feature extraction layers S is given by:

u_Sl(n, k) = φ[ Σ_{κ=1}^{K_{Cl-1}} Σ_{|v| ≤ A_Sl} a_Sl(v, κ, k) · u_{Cl-1}(n + v, κ) − θ_l ],  φ[x] = max(x, 0)

where a_Sl(v, κ, k) (≥ 0) is the connection from the neuron u_{Cl-1}(n + v, κ) of the previous feature mapping layer C to this S neuron, the input connections of all neurons in the same neuron plane are identical, θ_l is the threshold of the l-th S layer, and A_Sl is the radius of v; for l = 1, u_{Cl-1}(n, κ) is u_G(n, k) and K_{Cl-1} = 2.
5. The face recognition method based on a convolutional neural network of claim 1, characterized in that in Steps 3 to 6 the response function of the C neurons of the feature mapping layers UC1, UC2 and UC3 (all layers of C except UC4) is given by:

u_Cl(n, k) = ψ[ Σ_{|v| ≤ A_Cl} a_Cl(v) · u_Sl(n + v, k) ],  ψ[x] = φ[x] / (1 + φ[x])

where a_Cl(v) is the input connection of the C layer and A_Cl is the radius of v.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410620574.9A CN104346607B (en) | 2014-11-06 | 2014-11-06 | Face identification method based on convolutional neural networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410620574.9A CN104346607B (en) | 2014-11-06 | 2014-11-06 | Face identification method based on convolutional neural networks |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104346607A true CN104346607A (en) | 2015-02-11 |
CN104346607B CN104346607B (en) | 2017-12-22 |
Family
ID=52502181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410620574.9A Expired - Fee Related CN104346607B (en) | 2014-11-06 | 2014-11-06 | Face identification method based on convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104346607B (en) |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104778448A (en) * | 2015-03-24 | 2015-07-15 | 孙建德 | Structure adaptive CNN (Convolutional Neural Network)-based face recognition method |
CN104794501A (en) * | 2015-05-14 | 2015-07-22 | 清华大学 | Mode identification method and device |
CN104992167A (en) * | 2015-07-28 | 2015-10-21 | 中国科学院自动化研究所 | Convolution neural network based face detection method and apparatus |
CN105426875A (en) * | 2015-12-18 | 2016-03-23 | 武汉科技大学 | Face identification method and attendance system based on deep convolution neural network |
CN105631406A (en) * | 2015-12-18 | 2016-06-01 | 小米科技有限责任公司 | Method and device for recognizing and processing image |
CN105975931A (en) * | 2016-05-04 | 2016-09-28 | 浙江大学 | Convolutional neural network face recognition method based on multi-scale pooling |
CN106203619A (en) * | 2015-05-29 | 2016-12-07 | 三星电子株式会社 | Data-optimized neutral net traversal |
CN106295566A (en) * | 2016-08-10 | 2017-01-04 | 北京小米移动软件有限公司 | Facial expression recognizing method and device |
CN106407369A (en) * | 2016-09-09 | 2017-02-15 | 华南理工大学 | Photo management method and system based on deep learning face recognition |
CN106650919A (en) * | 2016-12-23 | 2017-05-10 | 国家电网公司信息通信分公司 | Information system fault diagnosis method and device based on convolutional neural network |
GB2545661A (en) * | 2015-12-21 | 2017-06-28 | Nokia Technologies Oy | A method for analysing media content |
CN107066934A (en) * | 2017-01-23 | 2017-08-18 | 华东交通大学 | Gastric tumor cell image recognition device and method, and gastric tumor slice recognition device |
CN107368182A (en) * | 2016-08-19 | 2017-11-21 | 北京市商汤科技开发有限公司 | Gesture detection network training, gesture detection, and gesture control method and device |
CN107423696A (en) * | 2017-07-13 | 2017-12-01 | 重庆凯泽科技股份有限公司 | Face recognition method and system |
CN107545571A (en) * | 2017-09-22 | 2018-01-05 | 深圳天琴医疗科技有限公司 | Image detection method and device |
CN107633229A (en) * | 2017-09-21 | 2018-01-26 | 北京智芯原动科技有限公司 | Face detection method and device based on convolutional neural network |
WO2018021942A3 (en) * | 2016-07-29 | 2018-03-01 | Общество С Ограниченной Ответственностью "Нтех Лаб" | Facial recognition using an artificial neural network |
CN108257095A (en) * | 2016-12-07 | 2018-07-06 | 法国艾德米亚身份与安全公司 | For handling the system of image |
CN109242043A (en) * | 2018-09-29 | 2019-01-18 | 北京京东金融科技控股有限公司 | Method and apparatus for generating information prediction model |
US10303977B2 (en) | 2016-06-28 | 2019-05-28 | Conduent Business Services, Llc | System and method for expanding and training convolutional neural networks for large size input images |
CN109871464A (en) * | 2019-01-17 | 2019-06-11 | 东南大学 | Video recommendation method and device based on UCL semantic indexing |
CN109902812A (en) * | 2017-12-11 | 2019-06-18 | 北京中科寒武纪科技有限公司 | Board card and neural network operation method |
CN110210399A (en) * | 2019-05-31 | 2019-09-06 | 广东世纪晟科技有限公司 | Face recognition method based on uncertainty-quantified probabilistic convolutional neural network |
WO2020000096A1 (en) * | 2018-06-29 | 2020-01-02 | Wrnch Inc. | Human pose analysis system and method |
KR20200065033A (en) * | 2018-11-16 | 2020-06-08 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Key point detection method and apparatus, electronic device and storage medium |
US10834365B2 (en) | 2018-02-08 | 2020-11-10 | Nortek Security & Control Llc | Audio-visual monitoring using a virtual assistant |
US10978050B2 (en) | 2018-02-20 | 2021-04-13 | Intellivision Technologies Corp. | Audio type detection |
US11295139B2 (en) | 2018-02-19 | 2022-04-05 | Intellivision Technologies Corp. | Human presence detection in edge devices |
US11615623B2 (en) | 2018-02-19 | 2023-03-28 | Nortek Security & Control Llc | Object detection in edge devices for barrier operation and parcel delivery |
US11735018B2 (en) | 2018-03-11 | 2023-08-22 | Intellivision Technologies Corp. | Security system with face recognition |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831396A (en) * | 2012-07-23 | 2012-12-19 | 常州蓝城信息科技有限公司 | Computer face recognition method |
US20130142399A1 (en) * | 2011-12-04 | 2013-06-06 | King Saud University | Face recognition using multilayered discriminant analysis |
CN103646244A (en) * | 2013-12-16 | 2014-03-19 | 北京天诚盛业科技有限公司 | Methods and devices for face characteristic extraction and authentication |
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
- 2014-11-06: application CN201410620574.9A granted as patent CN104346607B (status: not active, Expired - Fee Related)
Cited By (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104778448A (en) * | 2015-03-24 | 2015-07-15 | 孙建德 | Structure adaptive CNN (Convolutional Neural Network)-based face recognition method |
CN104778448B (en) * | 2015-03-24 | 2017-12-15 | 孙建德 | Face recognition method based on structure-adaptive convolutional neural network |
CN104794501A (en) * | 2015-05-14 | 2015-07-22 | 清华大学 | Mode identification method and device |
CN104794501B (en) * | 2015-05-14 | 2021-01-05 | 清华大学 | Pattern recognition method and device |
CN106203619B (en) * | 2015-05-29 | 2022-09-13 | 三星电子株式会社 | Data optimized neural network traversal |
CN106203619A (en) * | 2015-05-29 | 2016-12-07 | 三星电子株式会社 | Data-optimized neural network traversal |
CN104992167A (en) * | 2015-07-28 | 2015-10-21 | 中国科学院自动化研究所 | Convolution neural network based face detection method and apparatus |
CN104992167B (en) * | 2015-07-28 | 2018-09-11 | 中国科学院自动化研究所 | Face detection method and device based on convolutional neural network |
CN105631406A (en) * | 2015-12-18 | 2016-06-01 | 小米科技有限责任公司 | Method and device for recognizing and processing image |
CN105426875A (en) * | 2015-12-18 | 2016-03-23 | 武汉科技大学 | Face recognition method and attendance system based on deep convolutional neural network |
GB2545661A (en) * | 2015-12-21 | 2017-06-28 | Nokia Technologies Oy | A method for analysing media content |
US10242289B2 (en) | 2015-12-21 | 2019-03-26 | Nokia Technologies Oy | Method for analysing media content |
CN105975931B (en) * | 2016-05-04 | 2019-06-14 | 浙江大学 | Convolutional neural network face recognition method based on multi-scale pooling |
CN105975931A (en) * | 2016-05-04 | 2016-09-28 | 浙江大学 | Convolutional neural network face recognition method based on multi-scale pooling |
US11017267B2 (en) | 2016-06-28 | 2021-05-25 | Conduent Business Services, Llc | System and method for expanding and training convolutional neural networks for large size input images |
US10303977B2 (en) | 2016-06-28 | 2019-05-28 | Conduent Business Services, Llc | System and method for expanding and training convolutional neural networks for large size input images |
US10083347B2 (en) | 2016-07-29 | 2018-09-25 | NTech lab LLC | Face identification using artificial neural network |
WO2018021942A3 (en) * | 2016-07-29 | 2018-03-01 | Общество С Ограниченной Ответственностью "Нтех Лаб" | Facial recognition using an artificial neural network |
CN106295566A (en) * | 2016-08-10 | 2017-01-04 | 北京小米移动软件有限公司 | Facial expression recognition method and device |
CN106295566B (en) * | 2016-08-10 | 2019-07-09 | 北京小米移动软件有限公司 | Facial expression recognition method and device |
CN107368182A (en) * | 2016-08-19 | 2017-11-21 | 北京市商汤科技开发有限公司 | Gesture detection network training, gesture detection, and gesture control method and device |
CN106407369A (en) * | 2016-09-09 | 2017-02-15 | 华南理工大学 | Photo management method and system based on deep learning face recognition |
CN108257095B (en) * | 2016-12-07 | 2023-11-28 | 法国艾德米亚身份与安全公司 | System for processing images |
CN108257095A (en) * | 2016-12-07 | 2018-07-06 | 法国艾德米亚身份与安全公司 | System for processing images |
CN106650919A (en) * | 2016-12-23 | 2017-05-10 | 国家电网公司信息通信分公司 | Information system fault diagnosis method and device based on convolutional neural network |
CN107066934A (en) * | 2017-01-23 | 2017-08-18 | 华东交通大学 | Gastric tumor cell image recognition device and method, and gastric tumor slice recognition device |
CN107423696A (en) * | 2017-07-13 | 2017-12-01 | 重庆凯泽科技股份有限公司 | Face recognition method and system |
CN107633229A (en) * | 2017-09-21 | 2018-01-26 | 北京智芯原动科技有限公司 | Face detection method and device based on convolutional neural network |
CN107545571A (en) * | 2017-09-22 | 2018-01-05 | 深圳天琴医疗科技有限公司 | Image detection method and device |
WO2019114649A1 (en) * | 2017-12-11 | 2019-06-20 | 北京中科寒武纪科技有限公司 | Neural network calculation apparatus and method |
US11803735B2 (en) | 2017-12-11 | 2023-10-31 | Cambricon Technologies Corporation Limited | Neural network calculation apparatus and method |
US11657258B2 (en) | 2017-12-11 | 2023-05-23 | Cambricon Technologies Corporation Limited | Neural network calculation apparatus and method |
CN109902812B (en) * | 2017-12-11 | 2020-10-09 | 中科寒武纪科技股份有限公司 | Board card and neural network operation method |
CN109902812A (en) * | 2017-12-11 | 2019-06-18 | 北京中科寒武纪科技有限公司 | Board card and neural network operation method |
US12099918B2 (en) | 2017-12-11 | 2024-09-24 | Cambricon Technologies Corporation Limited | Neural network calculation apparatus and method |
US12099917B2 (en) | 2017-12-11 | 2024-09-24 | Cambricon Technologies Corporation Limited | Neural network calculation apparatus and method |
TWI791569B (en) * | 2017-12-11 | 2023-02-11 | 大陸商中科寒武紀科技股份有限公司 | Device and method for neural network operation |
US10834365B2 (en) | 2018-02-08 | 2020-11-10 | Nortek Security & Control Llc | Audio-visual monitoring using a virtual assistant |
US11615623B2 (en) | 2018-02-19 | 2023-03-28 | Nortek Security & Control Llc | Object detection in edge devices for barrier operation and parcel delivery |
US11295139B2 (en) | 2018-02-19 | 2022-04-05 | Intellivision Technologies Corp. | Human presence detection in edge devices |
US10978050B2 (en) | 2018-02-20 | 2021-04-13 | Intellivision Technologies Corp. | Audio type detection |
US11735018B2 (en) | 2018-03-11 | 2023-08-22 | Intellivision Technologies Corp. | Security system with face recognition |
WO2020000096A1 (en) * | 2018-06-29 | 2020-01-02 | Wrnch Inc. | Human pose analysis system and method |
CN109242043A (en) * | 2018-09-29 | 2019-01-18 | 北京京东金融科技控股有限公司 | Method and apparatus for generating information prediction model |
KR102394354B1 (en) * | 2018-11-16 | 2022-05-04 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Key point detection method and apparatus, electronic device and storage medium |
KR20200065033A (en) * | 2018-11-16 | 2020-06-08 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Key point detection method and apparatus, electronic device and storage medium |
CN109871464B (en) * | 2019-01-17 | 2020-12-25 | 东南大学 | Video recommendation method and device based on UCL semantic indexing |
CN109871464A (en) * | 2019-01-17 | 2019-06-11 | 东南大学 | Video recommendation method and device based on UCL semantic indexing |
CN110210399A (en) * | 2019-05-31 | 2019-09-06 | 广东世纪晟科技有限公司 | Face recognition method based on uncertainty-quantified probabilistic convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN104346607B (en) | 2017-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104346607A (en) | Face recognition method based on convolutional neural network | |
Bodapati et al. | Feature extraction and classification using deep convolutional neural networks | |
CN110532900B (en) | Facial expression recognition method based on U-Net and LS-CNN | |
CN109902590B (en) | Pedestrian re-identification method based on deep multi-view feature distance learning |
CN103824054B (en) | Face attribute recognition method based on cascaded deep neural network |
CN112395442B (en) | Automatic identification and content filtering method for popular pictures on mobile internet | |
CN106909938B (en) | View-independent behavior recognition method based on deep learning network |
CN112766355B (en) | Electroencephalogram signal emotion recognition method under label noise | |
WO2021056974A1 (en) | Vein recognition method and apparatus, device, and storage medium | |
CN116645716B (en) | Expression recognition method based on local features and global features | |
KR102128158B1 (en) | Emotion recognition apparatus and method based on spatiotemporal attention | |
CN110135244B (en) | Expression recognition method based on brain-computer collaborative intelligence | |
Shen et al. | Learning high-level concepts by training a deep network on eye fixations | |
Benkaddour | CNN based features extraction for age estimation and gender classification | |
CN110210399A (en) | Face recognition method based on uncertainty-quantified probabilistic convolutional neural network |
CN116343284A (en) | Attention mechanism-based multi-feature outdoor environment emotion recognition method | |
CN112115796A (en) | Attention mechanism-based three-dimensional convolution micro-expression recognition algorithm | |
CN110880010A (en) | Visual SLAM closed loop detection algorithm based on convolutional neural network | |
CN105956570A (en) | Smiling face recognition method based on lip features and deep learning |
CN105893941B (en) | Facial expression recognition method based on regional images |
CN106682653A (en) | KNLDA-based RBF neural network face recognition method | |
CN114596605A (en) | Expression recognition method with multi-feature fusion | |
Garg et al. | Facial expression recognition & classification using hybridization of ICA, GA, and neural network for human-computer interaction | |
Su et al. | Nesterov accelerated gradient descent-based convolution neural network with dropout for facial expression recognition | |
CN116884067B (en) | Micro-expression recognition method based on improved implicit semantic data enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2017-12-22; Termination date: 2020-11-06 |