CN103778414A - Real-time face recognition method based on deep neural network - Google Patents

Real-time face recognition method based on deep neural network

Info

Publication number
CN103778414A
CN103778414A CN201410023333.6A
Authority
CN
China
Prior art keywords
network
face
layer
image
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410023333.6A
Other languages
Chinese (zh)
Inventor
罗志增
邢健飞
席旭刚
高云园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201410023333.6A priority Critical patent/CN103778414A/en
Publication of CN103778414A publication Critical patent/CN103778414A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a real-time face recognition method based on a deep neural network and neighbourhood components analysis. First, a multi-layer neural network is trained on a large-scale face database with good diversity, in which every layer except the last is non-linear and the final layer is linear. The resulting network is then further trained on a mixed face database using the supervised neighbourhood components analysis method, deepening the network's understanding of face images so as to shorten the distance between face images of the same individual and increase the distance between face images of different individuals. Finally, for the actual recognition stage, the invention introduces the concept of a "search radius", which shortens recognition time while guaranteeing the recognition rate, achieving real-time face recognition. The method achieves both a high recognition rate and a fast recognition speed, making it well suited to real-time face recognition tasks.

Description

Real-time face recognition method based on deep neural network
Technical field
The invention belongs to the field of pattern recognition and relates to a face recognition method, in particular one capable of performing face recognition tasks in real time.
Background technology
As a form of biometric identification, face recognition, by virtue of being contactless, offering a good user experience, and achieving steadily rising recognition rates, has enormous market potential and research value. Face recognition is a type of image recognition; a central difficulty of image recognition is giving machines the ability to perceive the implicit information contained in images. As a feature-extraction method that extracts deep information from data, deep neural networks offer clear inspiration for image-based face recognition technology.
At present, deep neural networks have achieved multiple breakthroughs in pattern recognition: Microsoft has applied deep neural network techniques to speech recognition, reaching the highest speech recognition rates to date; Baidu has applied the deep convolutional networks proposed by Professor Yann LeCun of New York University to applications such as Baidu image search, and has established the Baidu Institute of Deep Learning; recent reports suggest that deep learning systems built by Google, "learning" from large-scale databases stored on its servers, can approach a degree of independent reasoning.
However, few researchers have applied deep neural networks to the face recognition field. The applicant believes there are two reasons. On one hand, the traditional deep-neural-network-plus-softmax framework requires multiple images to build a model for each individual, making it poorly suited to tasks such as face recognition in which the number of classes is uncertain. On the other hand, the image dimensionality required for face recognition is relatively large; generally, images of at least 30 × 30 pixels are needed for promising results, which increases the difficulty of training a deep network model.
Summary of the invention
To explore the effectiveness of deep neural networks on face recognition tasks, the present invention proposes a real-time face recognition method based on a deep neural network. First, a multi-layer neural network is trained on a large-scale face database with good diversity, in which every layer except the last is non-linear and the final layer is linear. The resulting network is then further trained on a mixed face database using the supervised neighbourhood components analysis method, deepening the network's understanding of face images so as to shorten the distance between face images of the same individual and increase the distance between face images of different individuals. Finally, for the actual recognition stage, the invention proposes the concept of a "search radius", which shortens the time required for recognition while guaranteeing the recognition rate, achieving real-time face recognition.
To achieve the above objectives, the method of the invention mainly comprises the following steps:
Step (1). Obtain the network training data. Specifically: choose a face database with good diversity as the large-scale face database. In addition, select a subset of images from several face databases and combine them into a mixed face database, applying illumination normalization to the images with strong illumination variation to reduce lighting effects. Use the Viola-Jones face detector to detect and crop the face regions in the images of both the large-scale and mixed face databases. Stretch the pixel values of each image, row by row or column by column, into a column vector, and assemble the vectors into a matrix whose number of rows equals the pixel count and whose number of columns equals the number of images; divide the matrix by 255 so that the data lie in the range 0-1.
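The data-preparation part of step (1) can be sketched as follows (a minimal sketch; `build_data_matrix`, the toy 8 × 8 crops, and the use of NumPy are illustrative assumptions, not taken from the patent, and the Viola-Jones detection step is only referenced in a comment):

```python
import numpy as np

def build_data_matrix(images):
    """Stack grayscale face crops into a (pixels x num_images) matrix in [0, 1].

    `images` is a list of equally sized 2-D uint8 arrays (face crops, e.g.
    produced by a Viola-Jones detector such as OpenCV's Haar cascade).
    """
    cols = [img.astype(np.float64).reshape(-1) for img in images]  # flatten row by row
    X = np.stack(cols, axis=1)      # one column per image
    return X / 255.0                # scale pixel values into the 0-1 range

# illustrative 8x8 "face crops"
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, (8, 8), dtype=np.uint8) for _ in range(5)]
X = build_data_matrix(faces)
print(X.shape)   # (64, 5): 64 pixels per image, 5 images
```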
Step (2). On the basis of the face images in the large-scale face database obtained in step (1), train the deep neural network in an unsupervised manner. The specific algorithm is as follows.
The present invention uses the Autoencoder deep neural network as its framework and trains the deep neural network by reconstructing the face images in the large-scale face database. Each layer is first trained with the back-propagation algorithm, and finally the overall performance of the network is adjusted, also with back-propagation. The concrete steps of the algorithm are as follows:
1) Initialize the parameters of each layer: the weight penalty factor, weight scale, weight values, biases, batch size, and so on. The weight penalty factor keeps the trained network weights from becoming too large and causing over-fitting. Suppose the initial weight scale is W_s; then:
$W_s = \sqrt{6}/\sqrt{v+h+1}$  (1)
In formula (1), v is the number of visible-layer nodes and h is the number of hidden-layer nodes. From this:
$W = 2W_s(\mathrm{rand}(h, v) - 0.5)$  (2)
$W' = 2W_s(\mathrm{rand}(v, h) - 0.5)$  (3)
In formulas (2) and (3), W and W′ are the initial weights of the visible and hidden layers respectively, and rand(m, n) is a function that generates an m × n matrix of uniform random numbers in (0, 1).
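Formulas (1)-(3) can be sketched directly (an illustrative sketch; the function name, zero-initialized biases, and the 4096/256 layer sizes are assumptions borrowed from the embodiment later in the text):

```python
import numpy as np

def init_layer_weights(v, h, rng=None):
    """Initialize one autoencoder layer following formulas (1)-(3).

    v: number of visible-layer nodes, h: number of hidden-layer nodes.
    Returns encoder weights W (h x v), decoder weights W' (v x h),
    and zero bias vectors for each direction.
    """
    rng = rng or np.random.default_rng()
    w_s = np.sqrt(6) / np.sqrt(v + h + 1)           # formula (1)
    W = 2 * w_s * (rng.random((h, v)) - 0.5)        # formula (2)
    W_prime = 2 * w_s * (rng.random((v, h)) - 0.5)  # formula (3)
    b, b_prime = np.zeros(h), np.zeros(v)
    return W, W_prime, b, b_prime

W, W_prime, b, b_prime = init_layer_weights(v=4096, h=256)
print(W.shape, W_prime.shape)   # (256, 4096) (4096, 256)
```

The scale in formula (1) keeps every initial weight inside the interval (-W_s, W_s), which is the standard way to avoid saturating sigmoid units at the start of training.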
2) Determine the loss function of the network. The goal of the Autoencoder is to update the network so as to strengthen its ability to reconstruct the raw data; the loss function of the network is:
$$J(W,b) = \Big[\frac{1}{m}\sum_{i=1}^{m} J(W,b;x^{(i)},y^{(i)})\Big] + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \big(W_{ji}^{(l)}\big)^2 = \Big[\frac{1}{m}\sum_{i=1}^{m} \tfrac{1}{2}\big\|r_{W,b}(x^{(i)}) - y^{(i)}\big\|^2\Big] + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \big(W_{ji}^{(l)}\big)^2 \quad (4)$$
In formula (4), m is the data dimension, i.e. the number of visible-layer nodes; r is the data reconstructed by the network; W and b are the weights and biases; y is the actual input; λ is the weight penalty parameter, whose purpose is to keep the weights from becoming too large and causing over-fitting; n_l is the number of layers; and s_l, s_{l+1} are the numbers of input-layer and output-layer nodes.
3) Compute the partial derivatives of each layer's loss function with respect to the weights and biases. For convenience of calculation, the present invention uses intermediate variables a, δ₁, δ₂. For a linear layer:
$a = Wy + b$  (5)
$\delta_2 = -(y - r_{W,b}(x))$  (6)
$\delta_1 = (W')^{T}\delta_2$  (7)
For a non-linear layer, assuming the activation function is the sigmoid:
$a = 1/(1 + e^{-(Wy+b)})$  (8)
$\delta_2 = -(y - r_{W,b}(x)) \cdot r_{W,b}(x) \cdot (1 - r_{W,b}(x))$  (9)
$\delta_1 = ((W')^{T}\delta_2) \cdot a \cdot (1 - a)$  (10)
(the products in (9) and (10) are taken element-wise)
The partial derivatives of the objective function with respect to the visible-layer and hidden-layer weights and biases are:
$\dfrac{\partial J}{\partial W} = \dfrac{1}{m}\,\delta_1 y^{T} + \lambda\,\mathrm{sum}(W)$  (11)
$\dfrac{\partial J}{\partial b} = \dfrac{1}{m}\,\mathrm{sum}(\delta_1, 2)$  (12)
$\dfrac{\partial J}{\partial W'} = \dfrac{1}{m}\,\delta_2 a^{T} + \lambda\,\mathrm{sum}(W')$  (13)
$\dfrac{\partial J}{\partial b'} = \dfrac{1}{m}\,\mathrm{sum}(\delta_2, 2)$  (14)
In formulas (11)-(14), sum(x) and sum(x, 2) are the functions that compute the sum of all elements of x and the row sums of x, respectively. Once the partial derivatives are obtained, gradient descent is used to update the weights and biases.
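One gradient-descent update in the spirit of formulas (4)-(14) can be sketched for a single sigmoid-encoder / linear-decoder layer (a simplified sketch: the function name, the sign conventions with the negation of (6) folded into the update, the λW form of the penalty gradient, and the toy data are all assumptions, not the patent's exact procedure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_step(X, W, b, Wp, bp, lam=1e-4, lr=0.1):
    """One gradient-descent step for a one-hidden-layer autoencoder.

    Sigmoid encoder, linear decoder, squared reconstruction error plus a
    weight penalty. X is (pixels x m); W, b, Wp, bp are updated in place.
    """
    m = X.shape[1]
    A = sigmoid(W @ X + b[:, None])        # hidden activations, cf. formula (8)
    R = Wp @ A + bp[:, None]               # linear reconstruction
    d2 = R - X                             # output delta, cf. formula (6)
    d1 = (Wp.T @ d2) * A * (1 - A)         # hidden delta, cf. formula (10)
    W  -= lr * (d1 @ X.T / m + lam * W)    # cf. formulas (11)-(14)
    b  -= lr * d1.sum(axis=1) / m
    Wp -= lr * (d2 @ A.T / m + lam * Wp)
    bp -= lr * d2.sum(axis=1) / m
    return 0.5 * np.mean(np.sum(d2 ** 2, axis=0))   # mean reconstruction loss

rng = np.random.default_rng(1)
X = rng.random((16, 20))                   # 20 toy "images" of 16 pixels
W, Wp = 0.1 * rng.standard_normal((8, 16)), 0.1 * rng.standard_normal((16, 8))
b, bp = np.zeros(8), np.zeros(16)
losses = [autoencoder_step(X, W, b, Wp, bp) for _ in range(200)]
print(losses[-1] < losses[0])              # reconstruction error decreases
```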
4) After each layer's weights have been trained, treat the whole network as a single unit and use gradient descent to update the weights of every layer, further improving the network's ability to reconstruct the data.
Step (3). Using the mixed face database as input data, perform supervised training on the deep neural network obtained in step (2), improving the network's ability to recognize faces.
Here the present invention adopts the method of neighbourhood components analysis (NCA) to shorten the distance between face images of the same individual and increase the distance between face images of different individuals, thereby improving face recognition accuracy.
Neighbourhood components analysis is a distance-metric learning method. It defines the distance between a single data point and the remaining data in a new transformed space using the squared Euclidean distance, as follows:
$$p_{ij} = \begin{cases} \dfrac{\exp(-\|Ax_i - Ax_j\|^2)}{\sum_{k \neq i}\exp(-\|Ax_i - Ax_k\|^2)} & j \neq i \\ 0 & j = i \end{cases} \quad (15)$$
In formula (15), x_i, x_j, x_k are the data corresponding to the i-th, j-th and k-th face images, and A is the mapping into the new space, represented in the present invention by a specific layer of the neural network. The loss function f is defined by:
$$f = \sum_i \sum_{j \in C_i} p_{ij} = \sum_i p_i \quad (16)$$
In formula (16), C_i is the set of face images belonging to individual i. As in step (2), the present invention computes the partial derivative of the loss function f with respect to each layer of the network and uses gradient descent to iteratively optimize the network model, improving the network's ability to understand differences between faces.
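Formulas (15)-(16) can be sketched with a plain linear map standing in for the network layer (a hedged illustration: `nca_objective`, the identity map A, and the two toy clusters are assumptions, not from the patent):

```python
import numpy as np

def nca_objective(A, X, labels):
    """NCA objective f = sum_i sum_{j in C_i} p_ij, per formulas (15)-(16).

    A: linear map (d' x d) standing in for a network layer,
    X: data (n x d), labels: class label per row.
    """
    Y = X @ A.T                                           # map into the new space
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    K = np.exp(-d2)
    np.fill_diagonal(K, 0.0)                              # p_ii = 0
    P = K / K.sum(axis=1, keepdims=True)                  # formula (15)
    same = labels[:, None] == labels[None, :]
    return (P * same).sum()                               # formula (16)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (5, 4)), rng.normal(3, 0.1, (5, 4))])
labels = np.array([0] * 5 + [1] * 5)
f = nca_objective(np.eye(4), X, labels)
print(0.0 < f <= len(X))   # f sums class-mate probabilities, so at most n
```

With two well-separated clusters, nearly all probability mass falls on class-mates and f approaches n; maximizing f by gradient ascent pulls same-individual images together and pushes different individuals apart.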
Step (4). Through the above steps, a deep network is obtained that both reconstructs face images well and understands them deeply. For the concrete recognition process, the present invention proposes the concept of a "search radius", with the following steps:
For the multidimensional data obtained by mapping a face, compute a binary representation y_ab with the following formula:
$$y_{ab} = \begin{cases} 1 & x_{ab} \geq m_b \\ 0 & x_{ab} < m_b \end{cases} \quad (17)$$
In formula (17), x_ab is the b-th component of the a-th face image, and m_b is the median of the b-th component over all face images. This formula guarantees a uniform distribution of 0s and 1s in every component of the image representation. The binary representation y_ab can also be regarded as a hash feature characterizing the image.
In the actual recognition process, the binary representation of the image to be identified is computed from its deep-network representation, and the Hamming distances between this representation and those of all images in the face database are computed. According to the chosen search radius R, the R images with the smallest Hamming distance are used as secondary comparison images; the Euclidean distance between each of these and the image to be identified is then computed, and the smallest Euclidean distance obtained is compared with a threshold. If it is less than or equal to the threshold, the face is identified as that image (the threshold is determined by the equal error rate, EER).
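The binarization of formula (17) and the search-radius matching can be sketched as follows (an illustrative sketch: the function names, the random gallery, and R = 10 are assumptions, and the EER-based acceptance threshold is omitted for brevity):

```python
import numpy as np

def binary_codes(features):
    """Binarize per formula (17): 1 where a component is >= its median."""
    medians = np.median(features, axis=0)
    return (features >= medians).astype(np.uint8)

def identify(query, gallery, R):
    """Search-radius recognition: Hamming prefilter, then Euclidean rerank.

    query: (d,) feature vector; gallery: (n, d) feature matrix. Returns the
    gallery index with the smallest Euclidean distance among the R
    Hamming-nearest candidates.
    """
    codes = binary_codes(gallery)
    qcode = (query >= np.median(gallery, axis=0)).astype(np.uint8)
    hamming = (codes != qcode).sum(axis=1)
    candidates = np.argsort(hamming)[:R]           # R Hamming-nearest images
    d = np.linalg.norm(gallery[candidates] - query, axis=1)
    return candidates[np.argmin(d)]                # secondary Euclidean contrast

rng = np.random.default_rng(3)
gallery = rng.random((50, 64))
query = gallery[17] + rng.normal(0, 0.01, 64)      # noisy copy of image 17
print(identify(query, gallery, R=10))              # expect 17
```

Because Hamming distances on binary codes are far cheaper than Euclidean distances on the raw features, only R of the n gallery images ever reach the expensive comparison, which is the source of the speed-up the patent claims.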
Compared with many existing face recognition methods, the present invention has the following features:
Deep neural networks can extract deep features from images, which suits face recognition, a task demanding strong image-understanding ability. The training process of the invention includes both an unsupervised stage and a supervised stage: the unsupervised stage mainly trains the network's ability to reconstruct the data, while the supervised stage trains its ability to understand faces. The large-scale face database used in the unsupervised stage ensures that the network generalizes to faces beyond the training data, and in the supervised stage each individual has multiple images spanning multiple poses, expressions, and illumination conditions, which helps the network learn why face images of different individuals differ and why face images of the same individual are alike. As a result, the recognition rate of the invention remains at a high level.
For the concrete recognition process, while guaranteeing the recognition rate, the proposed "search radius" concept bypasses direct comparison between the image to be identified and database images whose Hamming distance to it is large. This avoids computing Euclidean distances against every image in the library and greatly reduces the time required for recognition.
Brief description of the drawings
Fig. 1 is a flowchart of the implementation of the invention;
Fig. 2 shows part of the face images in the large-scale face database used in the implementation;
Fig. 3 shows part of the face images in the mixed face database used in the implementation;
Fig. 4 shows the first-layer network weights obtained by training in the implementation;
Fig. 5 shows the FAR and FRR curves of the implementation;
Fig. 6 shows the recognition time of the implementation as a function of the "search radius";
Fig. 7 shows the recognition accuracy of the implementation as a function of the "search radius".
Detailed description of the embodiments
The embodiments of the invention are described in detail below with reference to the accompanying drawings. This embodiment is implemented on the premise of the technical solution of the invention; a detailed implementation mode and concrete operating procedures are given.
As shown in Figure 1, the present embodiment comprises the steps:
Step 1. Obtain the network training data. Specifically: use the diverse LFW face database as the training database for the unsupervised stage (see Fig. 2), and combine subsets of the CMU-PIE, Georgia Tech, Caltech Faces and VidTIMIT face databases into a mixed face database (containing 2311 images; each individual has multiple images under multiple conditions; see Fig. 3) as the training data for the supervised stage, applying illumination normalization to the images with strong illumination variation to reduce lighting effects. Use the Viola-Jones face detector to detect and crop the face regions in the images of LFW and the mixed face database. Stretch the pixel values of each image, row by row or column by column, into a column vector, and assemble the vectors into a matrix whose number of rows equals the pixel count and whose number of columns equals the number of images; divide the matrix by 255 so that the data lie in the range 0-1.
Step 2. On the basis of the LFW face images obtained in step 1, train a three-layer deep neural network in an unsupervised manner. Two of the layers are non-linear with sigmoid activation, and the final layer is linear; the network structure is 4096-256-128-64. The specific algorithm is as follows.
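The stated 4096-256-128-64 structure can be sketched as a forward pass (an illustrative sketch; the weight shapes, small random initialization, and function names are assumptions consistent with the structure described, not the trained network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward pass through the 4096-256-128-64 network described above.

    The two hidden layers are sigmoid; the final (tail) layer is linear.
    """
    h = x
    for W, b in list(zip(weights, biases))[:-1]:
        h = sigmoid(W @ h + b)              # non-linear layers
    W, b = weights[-1], biases[-1]
    return W @ h + b                        # linear tail layer

sizes = [4096, 256, 128, 64]
rng = np.random.default_rng(4)
weights = [0.01 * rng.standard_normal((o, i)) for i, o in zip(sizes, sizes[1:])]
biases = [np.zeros(o) for o in sizes[1:]]
x = rng.random(4096)                        # one flattened 64x64 face image
y = forward(x, weights, biases)
print(y.shape)                              # (64,): the 64-D face representation
```

The 64-dimensional output is the representation later binarized by formula (17) for Hamming-distance search.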
(1) Initialize the parameters of each layer.
(2) Determine the loss function of the network: the least mean-square error between the data reconstructed by the network and the input data, plus a penalty term on the network weights to prevent over-fitting.
(3) Compute the partial derivatives of the loss function with respect to the network's weights and biases, then progressively update the weights; here the descent step uses the L-BFGS method.
Step 3. Use the neighbourhood components method to further train the resulting network on 3/4 of the mixed face database (approximately 2310 images); again, the L-BFGS method is used to progressively update the weights.
Step 4. Use the remaining 1/4 of the mixed face database, about 770 images, as test images. During testing, the Hamming distances between the binary representations of the images are compared first; according to the size of the "search radius" R, the R images with the smallest Hamming distance are chosen as candidates for a secondary comparison, which uses Euclidean distance, and the image with the smallest distance is taken as the target image. In this recognition process, the implementation uses different values of the "search radius" to probe the validity of the "search radius" concept proposed by the invention.
From the network weights obtained in this implementation (Fig. 4) it can be seen that the facial contours are fairly clear, showing that the trained deep neural network reconstructs face images well.
As seen from Fig. 5, the intersection (EER) of the false rejection rate FRR and the false acceptance rate FAR of this implementation is low, indicating that the recognition rate of the algorithm is high, its robustness strong, and its adaptability good, so that it generalizes easily to general face recognition tasks.
As seen from Fig. 6, the time required for recognition grows linearly as the "search radius" increases. From Fig. 7, when the "search radius" reaches about 100, the recognition accuracy reaches a steady state, stabilizing at about 94%. When the "search radius" is 100, the total recognition time for all 770 images is 1.5833 s, an average of 2.056 ms per test face image, which meets the requirements of a real-time face recognition system.

Claims (1)

1. A real-time face recognition method based on a deep neural network, characterized in that the method comprises the following steps:
Step (1). Obtain the network training data. Specifically: choose a face database with good diversity as the large-scale face database; in addition, select a subset of images from several face databases and combine them into a mixed face database, applying illumination normalization to the images with strong illumination variation to reduce lighting effects; use the Viola-Jones face detector to detect and crop the face regions in the images of both the large-scale and mixed face databases; stretch the pixel values of each image, row by row or column by column, into a column vector, and assemble the vectors into a matrix whose number of rows equals the pixel count and whose number of columns equals the number of images; divide the matrix by 255 so that the data lie in the range 0-1;
Step (2). On the basis of the face images in the large-scale face database obtained in step (1), train the deep neural network in an unsupervised manner; the specific algorithm is as follows;
1) Initialize the parameters of each layer: the weight penalty factor, weight scale, weight values, biases, batch size, and so on; the weight penalty factor keeps the trained network weights from becoming too large and causing over-fitting; suppose the initial weight scale is W_s, then:
$W_s = \sqrt{6}/\sqrt{v+h+1}$  (1)
In formula (1), v is the number of visible-layer nodes and h is the number of hidden-layer nodes; from this:
$W = 2W_s(\mathrm{rand}(h, v) - 0.5)$  (2)
$W' = 2W_s(\mathrm{rand}(v, h) - 0.5)$  (3)
In formulas (2) and (3), W and W′ are the initial weights of the visible and hidden layers respectively, and rand(m, n) is a function that generates an m × n matrix of uniform random numbers in (0, 1);
2) Determine the loss function of the network; the goal of the Autoencoder is to update the network so as to strengthen its ability to reconstruct the raw data; the loss function of the network is:
$$J(W,b) = \Big[\frac{1}{m}\sum_{i=1}^{m} J(W,b;x^{(i)},y^{(i)})\Big] + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \big(W_{ji}^{(l)}\big)^2 = \Big[\frac{1}{m}\sum_{i=1}^{m} \tfrac{1}{2}\big\|r_{W,b}(x^{(i)}) - y^{(i)}\big\|^2\Big] + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \big(W_{ji}^{(l)}\big)^2 \quad (4)$$
In formula (4), m is the data dimension, i.e. the number of visible-layer nodes; r is the data reconstructed by the network; W and b are the weights and biases; y is the actual input; λ is the weight penalty parameter, whose purpose is to keep the weights from becoming too large and causing over-fitting; n_l is the number of layers; and s_l, s_{l+1} are the numbers of input-layer and output-layer nodes;
3) Compute the partial derivatives of each layer's loss function with respect to the weights and biases; for convenience of calculation, use intermediate variables a, δ₁, δ₂; for a linear layer:
$a = Wy + b$  (5)
$\delta_2 = -(y - r_{W,b}(x))$  (6)
$\delta_1 = (W')^{T}\delta_2$  (7)
For a non-linear layer, assuming the activation function is the sigmoid:
$a = 1/(1 + e^{-(Wy+b)})$  (8)
$\delta_2 = -(y - r_{W,b}(x)) \cdot r_{W,b}(x) \cdot (1 - r_{W,b}(x))$  (9)
$\delta_1 = ((W')^{T}\delta_2) \cdot a \cdot (1 - a)$  (10)
(the products in (9) and (10) are taken element-wise)
The partial derivatives of the objective function with respect to the visible-layer and hidden-layer weights and biases are:
$\dfrac{\partial J}{\partial W} = \dfrac{1}{m}\,\delta_1 y^{T} + \lambda\,\mathrm{sum}(W)$  (11)
$\dfrac{\partial J}{\partial b} = \dfrac{1}{m}\,\mathrm{sum}(\delta_1, 2)$  (12)
$\dfrac{\partial J}{\partial W'} = \dfrac{1}{m}\,\delta_2 a^{T} + \lambda\,\mathrm{sum}(W')$  (13)
$\dfrac{\partial J}{\partial b'} = \dfrac{1}{m}\,\mathrm{sum}(\delta_2, 2)$  (14)
In formulas (11)-(14), sum(x) and sum(x, 2) are the functions that compute the sum of all elements of x and the row sums of x, respectively; once the partial derivatives are obtained, gradient descent is used to update the weights and biases;
4) After each layer's weights have been trained, treat the whole network as a single unit and use gradient descent to update the weights of every layer, further improving the network's ability to reconstruct the data;
Step (3). Using the mixed face database as input data, perform supervised training on the deep neural network obtained in step (2), improving the network's ability to recognize faces;
Here the method of neighbourhood components analysis is adopted to shorten the distance between face images of the same individual and increase the distance between face images of different individuals, thereby improving face recognition accuracy;
Neighbourhood components analysis is a distance-metric learning method; it defines the distance between a single data point and the remaining data in a new transformed space using the squared Euclidean distance, as follows:
$$p_{ij} = \begin{cases} \dfrac{\exp(-\|Ax_i - Ax_j\|^2)}{\sum_{k \neq i}\exp(-\|Ax_i - Ax_k\|^2)} & j \neq i \\ 0 & j = i \end{cases} \quad (15)$$
In formula (15), x_i, x_j, x_k are the data corresponding to the i-th, j-th and k-th face images, and A is the mapping into the new space, represented by a specific layer of the neural network; the loss function f is defined by:
$$f = \sum_i \sum_{j \in C_i} p_{ij} = \sum_i p_i \quad (16)$$
In formula (16), C_i is the set of face images belonging to individual i; by computing the partial derivative of the loss function f with respect to each layer of the network and using gradient descent to iteratively optimize the network model, the network's ability to understand differences between faces is improved;
Step (4). For the multidimensional data obtained by mapping a face, compute a binary representation y_ab with the following formula:
$$y_{ab} = \begin{cases} 1 & x_{ab} \geq m_b \\ 0 & x_{ab} < m_b \end{cases} \quad (17)$$
In formula (17), x_ab is the b-th component of the a-th face image, and m_b is the median of the b-th component over all face images; this formula guarantees a uniform distribution of 0s and 1s in every component of the image representation;
In the recognition process, the binary representation of the image to be identified is computed from its deep-network representation, and the Hamming distances between this representation and those of all images in the face database are computed; according to the chosen search radius R, the R images with the smallest Hamming distance are used as secondary comparison images; the Euclidean distance between each of these and the image to be identified is then computed, and the smallest Euclidean distance obtained is compared with a threshold; if it is less than or equal to the threshold, the face is identified as that image.
CN201410023333.6A 2014-01-17 2014-01-17 Real-time face recognition method based on deep neural network Pending CN103778414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410023333.6A CN103778414A (en) 2014-01-17 2014-01-17 Real-time face recognition method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410023333.6A CN103778414A (en) 2014-01-17 2014-01-17 Real-time face recognition method based on deep neural network

Publications (1)

Publication Number Publication Date
CN103778414A true CN103778414A (en) 2014-05-07

Family

ID=50570628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410023333.6A Pending CN103778414A (en) 2014-01-17 2014-01-17 Real-time face recognition method based on deep neural network

Country Status (1)

Country Link
CN (1) CN103778414A (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156464A (en) * 2014-08-20 2014-11-19 中国科学院重庆绿色智能技术研究院 Micro-video retrieval method and device based on micro-video feature database
CN104318215A (en) * 2014-10-27 2015-01-28 中国科学院自动化研究所 Cross view angle face recognition method based on domain robustness convolution feature learning
CN104537684A (en) * 2014-06-17 2015-04-22 浙江立元通信技术股份有限公司 Real-time moving object extraction method in static scene
CN104573681A (en) * 2015-02-11 2015-04-29 成都果豆数字娱乐有限公司 Face recognition method
CN104598872A (en) * 2014-12-23 2015-05-06 安科智慧城市技术(中国)有限公司 Face comparison method, apparatus and face recognition method, system
CN105095833A (en) * 2014-05-08 2015-11-25 中国科学院声学研究所 Network constructing method for human face identification, identification method and system
CN105117611A (en) * 2015-09-23 2015-12-02 北京科技大学 Determining method and system for traditional Chinese medicine tongue diagnosis model based on convolution neural networks
CN105160336A (en) * 2015-10-21 2015-12-16 云南大学 Sigmoid function based face recognition method
CN105184367A (en) * 2014-06-09 2015-12-23 讯飞智元信息科技有限公司 Model parameter training method and system for depth neural network
CN105469041A (en) * 2015-11-19 2016-04-06 上海交通大学 Facial point detection system based on multi-task regularization and layer-by-layer supervision neural networ
CN105512725A (en) * 2015-12-14 2016-04-20 杭州朗和科技有限公司 Neural network training method and equipment
CN105590094A (en) * 2015-12-11 2016-05-18 小米科技有限责任公司 Method and device for determining number of human bodies
CN105760833A (en) * 2016-02-14 2016-07-13 北京飞搜科技有限公司 Face feature recognition method
CN106372581A (en) * 2016-08-25 2017-02-01 中国传媒大学 Method for constructing and training human face identification feature extraction network
CN106408037A (en) * 2015-07-30 2017-02-15 阿里巴巴集团控股有限公司 Image recognition method and apparatus
CN106548159A (en) * 2016-11-08 2017-03-29 中国科学院自动化研究所 Reticulate pattern facial image recognition method and device based on full convolutional neural networks
CN107292899A (en) * 2017-05-05 2017-10-24 浙江大学 A kind of Corner Feature extracting method for two dimensional laser scanning instrument
CN107533665A (en) * 2015-04-28 2018-01-02 高通股份有限公司 Top-down information is included in deep neural network via bias term
CN107609519A (en) * 2017-09-15 2018-01-19 维沃移动通信有限公司 The localization method and device of a kind of human face characteristic point
CN107690657A (en) * 2015-08-07 2018-02-13 谷歌有限责任公司 Trade company is found according to image
CN108154165A (en) * 2017-11-20 2018-06-12 华南师范大学 Love and marriage object matching data processing method, device, computer equipment and storage medium based on big data and deep learning
US10049307B2 (en) 2016-04-04 2018-08-14 International Business Machines Corporation Visual object recognition
CN108858201A (en) * 2018-08-15 2018-11-23 深圳市烽焌信息科技有限公司 It is a kind of for nursing the robot and storage medium of children
CN109145837A (en) * 2018-08-28 2019-01-04 厦门理工学院 Face emotion identification method, device, terminal device and storage medium
CN109359608A (en) * 2018-10-25 2019-02-19 电子科技大学 A kind of face identification method based on deep learning model
CN109793491A (en) * 2018-12-29 2019-05-24 维沃移动通信有限公司 A kind of colour blindness detection method and terminal device
CN110309692A (en) * 2018-03-27 2019-10-08 杭州海康威视数字技术股份有限公司 Face identification method, apparatus and system, model training method and device
CN110874571A (en) * 2015-01-19 2020-03-10 阿里巴巴集团控股有限公司 Training method and device of face recognition model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039221B1 (en) * 1999-04-09 2006-05-02 Tumey David M Facial image verification utilizing smart-card with integrated video camera
US8209080B2 (en) * 2009-04-27 2012-06-26 Toyota Motor Engineering & Manufacturing North America, Inc. System for determining most probable cause of a problem in a plant
CN102902966A (en) * 2012-10-12 2013-01-30 大连理工大学 Super-resolution face recognition method based on deep belief networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039221B1 (en) * 1999-04-09 2006-05-02 Tumey David M Facial image verification utilizing smart-card with integrated video camera
US8209080B2 (en) * 2009-04-27 2012-06-26 Toyota Motor Engineering & Manufacturing North America, Inc. System for determining most probable cause of a problem in a plant
CN102902966A (en) * 2012-10-12 2013-01-30 大连理工大学 Super-resolution face recognition method based on deep belief networks

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
GOLDBERGER J et al.: "Neighbourhood components analysis", Advances in Neural Information Processing Systems *
HINTON GE et al.: "Reducing the dimensionality of data with neural networks", Science *
QUOC V. LE et al.: "Building High-level Features Using Large Scale Unsupervised Learning", IEEE International Conference on Acoustics, Speech & Signal Processing *
YING ZHANG et al.: "Occlusion-Robust Face Recognition Using Iterative Stacked Denoising Autoencoder", ICONIP *
LIN Miaozhen: "Research on Face Recognition Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *
XING Jianfei et al.: "Real-time Face Recognition Based on Deep Neural Network", Journal of Hangzhou Dianzi University *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095833A (en) * 2014-05-08 2015-11-25 中国科学院声学研究所 Network construction method for face recognition, recognition method and system
CN105184367A (en) * 2014-06-09 2015-12-23 讯飞智元信息科技有限公司 Model parameter training method and system for deep neural network
CN105184367B (en) * 2014-06-09 2018-08-14 讯飞智元信息科技有限公司 Model parameter training method and system for deep neural network
CN104537684A (en) * 2014-06-17 2015-04-22 浙江立元通信技术股份有限公司 Real-time moving object extraction method in static scene
CN104156464A (en) * 2014-08-20 2014-11-19 中国科学院重庆绿色智能技术研究院 Micro-video retrieval method and device based on micro-video feature database
CN104156464B (en) * 2014-08-20 2018-04-27 中国科学院重庆绿色智能技术研究院 Micro-video retrieval method and device based on micro-video feature database
CN104318215A (en) * 2014-10-27 2015-01-28 中国科学院自动化研究所 Cross-view face recognition method based on domain-robust convolutional feature learning
CN104318215B (en) * 2014-10-27 2017-09-19 中国科学院自动化研究所 Cross-view face recognition method based on domain-robust convolutional feature learning
CN104598872B (en) * 2014-12-23 2018-07-06 深圳市君利信达科技有限公司 Face comparison method and device, and face recognition method and system
CN104598872A (en) * 2014-12-23 2015-05-06 安科智慧城市技术(中国)有限公司 Face comparison method and device, and face recognition method and system
CN110874571A (en) * 2015-01-19 2020-03-10 阿里巴巴集团控股有限公司 Training method and device of face recognition model
CN110874571B (en) * 2015-01-19 2023-05-05 创新先进技术有限公司 Training method and device of face recognition model
CN104573681A (en) * 2015-02-11 2015-04-29 成都果豆数字娱乐有限公司 Face recognition method
CN107533665A (en) * 2015-04-28 2018-01-02 高通股份有限公司 Incorporating top-down information in deep neural networks via bias terms
CN106408037A (en) * 2015-07-30 2017-02-15 阿里巴巴集团控股有限公司 Image recognition method and apparatus
CN106408037B (en) * 2015-07-30 2020-02-18 阿里巴巴集团控股有限公司 Image recognition method and apparatus
CN107690657B (en) * 2015-08-07 2019-10-22 谷歌有限责任公司 Discovering merchants from images
CN107690657A (en) * 2015-08-07 2018-02-13 谷歌有限责任公司 Discovering merchants from images
CN105117611B (en) * 2015-09-23 2018-06-12 北京科技大学 Method and system for determining a traditional Chinese medicine tongue diagnosis model based on convolutional neural networks
CN105117611A (en) * 2015-09-23 2015-12-02 北京科技大学 Method and system for determining a traditional Chinese medicine tongue diagnosis model based on convolutional neural networks
CN105160336A (en) * 2015-10-21 2015-12-16 云南大学 Face recognition method based on Sigmoid function
CN105160336B (en) * 2015-10-21 2018-06-15 云南大学 Face recognition method based on Sigmoid function
CN105469041A (en) * 2015-11-19 2016-04-06 上海交通大学 Facial point detection system based on multi-task regularization and layer-by-layer supervision neural network
CN105469041B (en) * 2015-11-19 2019-05-24 上海交通大学 Facial point detection system based on multi-task regularization and layer-by-layer supervision neural network
CN105590094A (en) * 2015-12-11 2016-05-18 小米科技有限责任公司 Method and device for determining number of human bodies
CN105590094B (en) * 2015-12-11 2019-03-01 小米科技有限责任公司 Method and device for determining number of human bodies
CN105512725A (en) * 2015-12-14 2016-04-20 杭州朗和科技有限公司 Neural network training method and device
CN105760833A (en) * 2016-02-14 2016-07-13 北京飞搜科技有限公司 Face feature recognition method
US10049307B2 (en) 2016-04-04 2018-08-14 International Business Machines Corporation Visual object recognition
CN106372581A (en) * 2016-08-25 2017-02-01 中国传媒大学 Method for constructing and training a face recognition feature extraction network
CN106548159A (en) * 2016-11-08 2017-03-29 中国科学院自动化研究所 Mesh-patterned face image recognition method and device based on fully convolutional neural networks
CN107292899A (en) * 2017-05-05 2017-10-24 浙江大学 Corner feature extraction method for two-dimensional laser scanners
CN107609519B (en) * 2017-09-15 2019-01-22 维沃移动通信有限公司 Facial feature point localization method and device
CN107609519A (en) * 2017-09-15 2018-01-19 维沃移动通信有限公司 Facial feature point localization method and device
CN108154165A (en) * 2017-11-20 2018-06-12 华南师范大学 Marriage and love object matching data processing method and device based on big data and deep learning, computer equipment and storage medium
CN108154165B (en) * 2017-11-20 2021-12-07 华南师范大学 Marriage and love object matching data processing method and device based on big data and deep learning, computer equipment and storage medium
CN110309692A (en) * 2018-03-27 2019-10-08 杭州海康威视数字技术股份有限公司 Face recognition method, apparatus and system, and model training method and device
CN108858201A (en) * 2018-08-15 2018-11-23 深圳市烽焌信息科技有限公司 Robot and storage medium for child care
CN109145837A (en) * 2018-08-28 2019-01-04 厦门理工学院 Facial emotion recognition method, device, terminal device and storage medium
CN109359608A (en) * 2018-10-25 2019-02-19 电子科技大学 Face recognition method based on deep learning model
CN109359608B (en) * 2018-10-25 2021-10-19 电子科技大学 Face recognition method based on deep learning model
CN109793491A (en) * 2018-12-29 2019-05-24 维沃移动通信有限公司 Color blindness detection method and terminal device

Similar Documents

Publication Publication Date Title
CN103778414A (en) Real-time face recognition method based on deep neural network
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
Tao et al. An object detection system based on YOLO in traffic scene
CN114398961B Visual question-answering method and model based on multi-modal deep feature fusion
Cai et al. Facial expression recognition method based on sparse batch normalization CNN
CN106651915B Target tracking method based on multi-scale representation with convolutional neural networks
CN107016406A Pest and disease image generation method based on generative adversarial networks
CN107316015A High-accuracy facial expression recognition method based on deep spatio-temporal features
CN107122809A Neural network feature learning method based on image auto-encoding
CN106203395A Face attribute recognition method based on multi-task deep learning
CN106295186A Method and system for aided disease diagnosis based on intelligent inference
CN105787557A Design method of deep neural network structure for intelligent computer recognition
CN109299701A Face age estimation method based on GAN-augmented multi-ethnic feature collaborative selection
CN103824054A Cascaded deep neural network-based face attribute recognition method
CN107102727A Dynamic gesture learning and recognition method based on ELM neural networks
CN103810506A Method for identifying strokes of handwritten Chinese characters
CN110321862B Pedestrian re-identification method based on compact triplet loss
CN107992895A Boosting support vector machine learning method
CN109558902A Fast target detection method
CN106909938A View-independent activity recognition method based on deep learning networks
CN107203740A Face age estimation method based on deep learning
CN106570521A Multi-language scene text recognition method and system
CN104408470A Gender detection method based on average-face pre-learning
CN116226629B Multi-model feature selection method and system based on feature contribution
CN107451594A Multi-view gait classification method based on multiple regression

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140507