CN105678249A - Face identification method aiming at registered face and to-be-identified face image quality difference - Google Patents

Face identification method aiming at registered face and to-be-identified face image quality difference

Info

Publication number
CN105678249A
CN105678249A
Authority
CN
China
Prior art keywords
quality
face
low
facial image
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511031057.9A
Other languages
Chinese (zh)
Other versions
CN105678249B (en)
Inventor
高盛华
汤旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201511031057.9A priority Critical patent/CN105678249B/en
Publication of CN105678249A publication Critical patent/CN105678249A/en
Application granted granted Critical
Publication of CN105678249B publication Critical patent/CN105678249B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a face identification method that addresses the quality difference between registered face images and face images to be identified. The method comprises the following steps: (1) constructing an N_H-layer high-quality convolutional neural sub-network and an N_L-layer low-quality convolutional neural sub-network; (2) learning the parameters θ_H, W_H, θ_L and W_L of the two sub-networks from a sample set, where the sample set contains multiple data pairs, each consisting of one high-quality face image and one low-quality face image; (3) registering the high-quality face images in the face identification system and, after a low-quality face image is captured in real time, comparing it one by one against the registered high-quality face images. The invention thereby provides a face identification method that is more efficient and more robust.

Description

Face identification method for the quality difference between registered face images and face images to be identified
Technical field
The present invention relates to a face identification method.
Background technology
Existing face recognition methods are based on the assumption that the registered face images and the face images to be identified are of the same quality. In practical applications such as video surveillance, however, there is usually a large quality gap between the two: the face images registered in the database system are of high quality (frontal pose, high resolution, good illumination), while the face images to be identified, captured from surveillance video, are of low quality (profile pose, low resolution, dark illumination, blur, etc.). Simply resizing the registered face and the face to be identified to the same size causes severe information loss and thus degrades recognition accuracy.
Summary of the invention
It is an object of the invention to provide a more efficient and more robust face identification method.
To achieve the above object, the technical solution of the invention is a face identification method for the case where the quality of the registered face images differs from that of the face images to be identified, characterized in that different neural networks are trained to extract features from images of different quality, and the distance between the features of two images of different quality is computed by metric learning. The algorithm comprises the following steps:
Step 1: construct an N_H-layer high-quality convolutional neural sub-network with parameters θ_H, W_H to be learned, and an N_L-layer low-quality convolutional neural sub-network with parameters θ_L, W_L to be learned, where N_H > N_L. The objective function J is defined as

$$\arg\min_{\{\theta_H,\theta_L,W_H,W_L\}} J = \frac{1}{2}\sum_i g\!\left(\mu - l_i\left(\tau - d^2(I_i^H, I_i^L)\right)\right) + \frac{\lambda}{2}\left(\sum_{n=1}^{N_H}\|M_H^n\|_F^2 + \sum_{n=1}^{N_L}\|M_L^n\|_F^2\right) + \frac{\gamma}{2}\left(\|W_H\|_F^2 + \|W_L\|_F^2\right)$$

where g(·) denotes the logistic loss function; τ denotes the margin position; μ denotes the minimum distance to the margin; l_i labels whether the two faces of the i-th pair, consisting of a high-quality face image $I_i^H$ and a low-quality face image $I_i^L$, belong to the same person: if they do, l_i = 1, otherwise l_i = −1; $d^2(I_i^H, I_i^L) = \|W_H f_{\theta_H}(I_i^H) - W_L f_{\theta_L}(I_i^L)\|_2^2$ denotes the distance between the two faces of the i-th pair; $f_{\theta_H}(I_i^H)$ and $f_{\theta_L}(I_i^L)$ denote the outputs of the high-quality and low-quality convolutional neural sub-networks before feature alignment; $M_H^n$ and $M_L^n$ denote the n-th-layer filters of the high-quality and low-quality sub-networks; λ denotes the regularization coefficient of the convolutional sub-networks; γ denotes the regularization coefficient of the fully connected layers used for feature alignment; and $\|\cdot\|_F$ denotes the Frobenius norm;
Step 2: learn the parameters θ_H, W_H, θ_L, W_L of the high-quality and low-quality convolutional neural sub-networks from a sample set, where the sample set contains multiple data pairs, each pair consisting of one high-quality face image and one low-quality face image;
Step 3: register the high-quality face images in the face identification system; after a low-quality face image is captured in real time, compare it one by one against the registered high-quality face images. Comparing the i-th registered high-quality face image $I_i^H$ with a low-quality face image $I_i^L$ comprises:
Step 3.1: feed the high-quality face image $I_i^H$ and the low-quality face image $I_i^L$ into the trained high-quality and low-quality convolutional neural sub-networks respectively, obtaining the aligned outputs $W_H f_{\theta_H}(I_i^H)$ and $W_L f_{\theta_L}(I_i^L)$;
Step 3.2: compute the distance $d^2(I_i^H, I_i^L) = \|W_H f_{\theta_H}(I_i^H) - W_L f_{\theta_L}(I_i^L)\|_2^2$ between the two face images; if $d^2(I_i^H, I_i^L) \le \tau - \mu$, the face images $I_i^H$ and $I_i^L$ belong to the same person; otherwise they belong to different people.
Preferably, the high-quality convolutional neural sub-network has 8 layers: the first layer is the high-quality image input layer, the last layer is the high-quality fully connected layer, and from the input layer to the fully connected layer the intermediate layers are, in order: convolutional layer 1, max-pooling layer 1, convolutional layer 2, max-pooling layer 2, convolutional layer 3, max-pooling layer 3.
Preferably, the low-quality convolutional neural sub-network has 6 layers: the first layer is the low-quality image input layer, the last layer is the low-quality fully connected layer, and from the input layer to the fully connected layer the intermediate layers are, in order: convolutional layer 1, max-pooling layer 1, convolutional layer 2, max-pooling layer 2.
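The two preferred layer stacks above can be written down as a small sketch. The layer names are illustrative labels (the patent specifies only the layer order, not filter counts or strides), chosen to make the depth relation N_H > N_L explicit:

```python
# Layer order of the two sub-networks, as listed in the preferred
# embodiment: input -> (conv, max-pool) blocks -> fully connected layer.
HIGH_QUALITY_NET = ["input", "conv1", "maxpool1", "conv2", "maxpool2",
                    "conv3", "maxpool3", "fc"]   # 8 layers
LOW_QUALITY_NET = ["input", "conv1", "maxpool1", "conv2", "maxpool2",
                   "fc"]                          # 6 layers

N_H = len(HIGH_QUALITY_NET)
N_L = len(LOW_QUALITY_NET)
# The deeper network handles the larger, high-quality input (N_H > N_L).
assert N_H == 8 and N_L == 6 and N_H > N_L
```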
The present invention is compared with other methods on two common data sets, COX and PaSC. The recognition rate serves as the criterion for measuring the robustness and efficiency of the algorithm, and performance is evaluated with ROC curves.
The table below contrasts the accuracy of the present invention with the best existing techniques under the different experimental settings of the two data sets (the table reports the face verification accuracy of different methods on the PaSC and COX data sets).
Note: SRDML denotes the algorithm provided by the invention.
Brief description of the drawings
Fig. 1 is the network structure of the high-quality convolutional neural sub-network;
Fig. 2 is the network structure of the low-quality convolutional neural sub-network;
Fig. 3 is a schematic diagram of feature alignment;
Fig. 4 compares the ROC curves of the present method and the best existing techniques on the COX data set;
Fig. 5A is a robustness analysis of the invention under salt-and-pepper noise;
Fig. 5B is a robustness analysis of the invention under Gaussian noise;
Fig. 5C is a robustness analysis of the invention under occlusion;
Fig. 6 illustrates the efficiency of the algorithm of the invention at different resolutions;
Fig. 7A and Fig. 7B show results of the face distance metric of the present method on part of the PaSC data set: Fig. 7A is the feature distribution of the high-quality images and Fig. 7B that of the low-quality images; subject1 to subject5 in the figures denote different individuals.
Detailed description of the embodiments
To make the present invention clearer, a preferred embodiment is described in detail below with reference to the accompanying drawings.
The problem the invention addresses is: given a pair of face images, decide whether the two images belong to the same person. In detail, the i-th face pair in the data set is denoted $(I_i^H, I_i^L)$, the high-quality and low-quality face image respectively. A method based on deep neural networks can learn more discriminative feature representations $f_{\theta_H}(I_i^H)$ and $f_{\theta_L}(I_i^L)$; the mappings W_H and W_L then yield the similarity metric $d^2(I_i^H, I_i^L) = \|W_H f_{\theta_H}(I_i^H) - W_L f_{\theta_L}(I_i^L)\|_2^2$. The goal of the algorithm is the following: when the pair belongs to the same person, d² ≤ τ − μ; when the pair belongs to different people, d² ≥ τ + μ.
The invention provides a face identification method for the quality difference between registered faces and faces to be identified, comprising the following steps:
Step 1: construct an N_H-layer high-quality convolutional neural sub-network with parameters θ_H, W_H to be learned, and an N_L-layer low-quality convolutional neural sub-network with parameters θ_L, W_L to be learned, N_H > N_L.
For the high-quality sub-network the input image is larger, so its network is deeper; for the low-quality sub-network the input image is smaller, so its network is slightly shallower. As shown in Figs. 1 and 2, in this embodiment the high-quality convolutional neural sub-network has 8 layers: the first layer is the high-quality image input layer, the last layer is the high-quality fully connected layer, and between them are, in order: convolutional layer 1, max-pooling layer 1, convolutional layer 2, max-pooling layer 2, convolutional layer 3, max-pooling layer 3.
The low-quality convolutional neural sub-network has 6 layers: the first layer is the low-quality image input layer, the last layer is the low-quality fully connected layer, and between them are, in order: convolutional layer 1, max-pooling layer 1, convolutional layer 2, max-pooling layer 2.
In Figs. 1 and 2, the outermost numbers indicate the image size, and the number at the lower-left corner of the outer ring indicates the corresponding number of channels; for example, the image of the first-layer high-quality image input layer in Fig. 1 is 134×107 with 3 channels. The input of the high-quality sub-network is thus larger, and that of the low-quality sub-network smaller; the input sizes of the two sub-networks are determined by the median size of all training samples. The number in the middle box of each layer indicates the size of that layer's filter, used for the convolution operation; the number attached to each convolutional layer's box indicates the size of the feature map after convolution and the corresponding number of channels. The convolution operation strengthens the original signal features and reduces noise. For example, the filter applied to the first-layer high-quality input of Fig. 1 is 9×9. As Figs. 1 and 2 show, after a max-pooling layer the pooled features have fewer dimensions, which also improves the result (less prone to overfitting). The number on the cuboid of the last layer, the fully connected layer, indicates the corresponding number of neurons; the operation of the fully connected layer amounts to the feature-alignment process.
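The feature-map sizes sketched above follow from the usual convolution and pooling arithmetic. The 134×107 input and the 9×9 filter come from the text; stride 1, no padding, and 2×2 pooling are assumptions, since the patent does not state them:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output size of a convolution along one spatial dimension."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride=None):
    """Output size of a (max-)pooling layer; stride defaults to the kernel size."""
    stride = stride or kernel
    return (size - kernel) // stride + 1

# 134x107 input through a 9x9 convolution (stride 1, no padding, assumed):
h, w = conv_out(134, 9), conv_out(107, 9)   # 126 x 99
# followed by 2x2 max pooling (assumed), halving each dimension:
h, w = pool_out(h, 2), pool_out(w, 2)       # 63 x 49
```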
In this embodiment, the feature-alignment process is shown in Fig. 3: among the neurons at the bottom, the lighter-colored ones are neurons related to the eye features of a face, and the darker-colored ones are neurons related to another facial feature. Experiments show that the features of a given face are related only to part of the neurons, and it is this part that is in the activated state. The method of the invention exploits exactly this principle: through W_H and W_L, the extracted features undergo a feature-alignment operation according to the objective function; that is, feature alignment maps $f_{\theta_H}(I_i^H)$ and $f_{\theta_L}(I_i^L)$ into the same space through W_H and W_L.
Since the image quality of registered faces is far higher than that of faces to be identified, the invention trains two different sub-networks to extract the features of registered faces (high-quality face images) and of faces to be identified (low-quality face images). For the high-quality sub-network the input image is larger and the network deeper; for the low-quality sub-network the input image is smaller and the network slightly shallower. After the parameters θ_H, θ_L of the two convolutional sub-networks are learned and the features obtained, feature alignment is performed in the last layer, the fully connected layer; the parameters W_H, W_L of the fully connected layers must also be learned.
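Feature alignment, as described, is a pair of linear maps into a common space followed by a Euclidean comparison. A pure-Python sketch with toy dimensions (f_H, f_L, W_H, W_L below are illustrative stand-ins for the learned features and projection matrices, not values from the patent):

```python
def matvec(W, x):
    """Multiply matrix W (given as a list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def aligned_distance_sq(W_H, f_H, W_L, f_L):
    """d^2 = ||W_H f_H - W_L f_L||_2^2 after mapping both feature
    vectors into the same common space."""
    a = matvec(W_H, f_H)  # aligned high-quality feature
    b = matvec(W_L, f_L)  # aligned low-quality feature
    return sum((u - v) ** 2 for u, v in zip(a, b))

# Toy example: a 3-dim high-quality feature and a 2-dim low-quality
# feature, both projected into a 2-dim common space.
W_H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
W_L = [[1.0, 0.0], [0.0, 1.0]]
d_sq = aligned_distance_sq(W_H, [0.2, 0.4, 0.9], W_L, [0.2, 0.4])
```

With these toy projections the two features coincide in the common space, so d_sq is 0; dissimilar features would yield a positive distance.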
After the high-quality and low-quality convolutional neural sub-networks are built, the method of the invention defines a more efficient and more discriminative distance-metric objective function. The distance between $I_i^H$ and $I_i^L$ can be expressed as $d^2(I_i^H, I_i^L) = \|W_H f_{\theta_H}(I_i^H) - W_L f_{\theta_L}(I_i^L)\|_2^2$, where θ_H and θ_L are the parameters learned by the convolutional neural networks taking high-quality and low-quality images as input, respectively. The invention uses l_i to label whether $(I_i^H, I_i^L)$ show the face of the same person: if they do, l_i = 1, otherwise l_i = −1. To make the face features extracted by the learned convolutional neural networks more discriminative, the invention imposes the following constraint on the distance between faces, where τ denotes the margin position and μ the minimum distance to the margin: if $I_i^H$ and $I_i^L$ belong to the same person, their distance should satisfy d² ≤ τ − μ; otherwise, d² ≥ τ + μ. This guarantees that the distance between faces of different people is at least 2μ larger than the distance between faces of the same person, which aids recognition. The invention therefore defines the following objective function J:

$$\arg\min_{\{\theta_H,\theta_L,W_H,W_L\}} J = \frac{1}{2}\sum_i g\!\left(\mu - l_i\left(\tau - d^2(I_i^H, I_i^L)\right)\right) + \frac{\lambda}{2}\left(\sum_{n=1}^{N_H}\|M_H^n\|_F^2 + \sum_{n=1}^{N_L}\|M_L^n\|_F^2\right) + \frac{\gamma}{2}\left(\|W_H\|_F^2 + \|W_L\|_F^2\right)$$

where $g(z) = \frac{1}{\beta}\log(1 + e^{\beta z})$ is the logistic loss function, with β the parameter controlling the steepness of the regression; $M_H^n$ and $M_L^n$ denote the n-th-layer filters of the high-quality and low-quality sub-networks; λ denotes the regularization coefficient of the convolutional sub-networks; γ denotes the regularization coefficient of the fully connected layers used for feature alignment; and $\|\cdot\|_F$ denotes the Frobenius norm.
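Assuming g is the generalized logistic loss g(z) = (1/β)·log(1 + e^{βz}) — the patent names the loss and the steepness parameter β, but the formula itself is garbled in this copy — the objective J can be sketched directly from its three terms. The pair distances d² are taken as precomputed inputs here, so the convolutional sub-networks themselves are out of scope:

```python
import math

def logistic_loss(z, beta=1.0):
    """Generalized logistic loss g(z) = (1/beta) * log(1 + exp(beta * z))."""
    return math.log(1.0 + math.exp(beta * z)) / beta

def frob_sq(M):
    """Squared Frobenius norm of a matrix given as a list of rows."""
    return sum(x * x for row in M for x in row)

def objective_J(pairs, tau, mu, lam, gamma, filters_H, filters_L, W_H, W_L, beta=1.0):
    """pairs: list of (d_sq, label) with d_sq = d^2(I_H, I_L) and label in {+1, -1}.
    Returns the data term plus the two regularization terms of J."""
    data = 0.5 * sum(logistic_loss(mu - l * (tau - d_sq), beta) for d_sq, l in pairs)
    reg_filters = 0.5 * lam * (sum(frob_sq(M) for M in filters_H)
                               + sum(frob_sq(M) for M in filters_L))
    reg_align = 0.5 * gamma * (frob_sq(W_H) + frob_sq(W_L))
    return data + reg_filters + reg_align
```

A genuine positive pair with d² well below τ − μ contributes almost nothing to the data term, while a pair that violates its margin is penalized roughly linearly, which is what pushes the learned metric toward the 2μ separation described above.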
Step 2: learn the parameters θ_H, W_H, θ_L, W_L of the high-quality and low-quality convolutional neural sub-networks from a sample set, where the sample set contains multiple data pairs, each pair consisting of one high-quality face image and one low-quality face image.
Step 3: register the high-quality face images in the face identification system; after a low-quality face image is captured in real time, compare it one by one against the registered high-quality face images. Comparing the i-th registered high-quality face image $I_i^H$ with a low-quality face image $I_i^L$ comprises:
Step 3.1: feed the high-quality face image $I_i^H$ and the low-quality face image $I_i^L$ into the trained high-quality and low-quality convolutional neural sub-networks respectively, obtaining the aligned outputs $W_H f_{\theta_H}(I_i^H)$ and $W_L f_{\theta_L}(I_i^L)$;
Step 3.2: compute the distance $d^2(I_i^H, I_i^L) = \|W_H f_{\theta_H}(I_i^H) - W_L f_{\theta_L}(I_i^L)\|_2^2$ between the two face images; if $d^2(I_i^H, I_i^L) \le \tau - \mu$, the face images $I_i^H$ and $I_i^L$ belong to the same person; otherwise they belong to different people.
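The verification rule in step 3.2 accepts a pair only if d² falls on the same-person side of the margin; everything else, including distances inside the margin band, is rejected. A minimal sketch (the values of τ and μ below are illustrative, not taken from the patent):

```python
def same_person(d_sq, tau, mu):
    """Step-3.2 decision: same person iff d^2 <= tau - mu.
    Distances inside the margin band (tau - mu, tau + mu) are treated as
    'different person', matching the conservative rule in the text."""
    return d_sq <= tau - mu

# Illustrative thresholds (assumed, for demonstration only):
tau, mu = 2.0, 0.5
assert same_person(1.2, tau, mu)        # well inside the same-person region
assert not same_person(1.8, tau, mu)    # inside the margin band: rejected
assert not same_person(3.0, tau, mu)    # clearly a different person
```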
Fig. 4 compares the ROC curves of the present method and the best existing techniques on the COX data set. Figs. 5A to 5C analyze the robustness of the algorithm provided by the invention under different kinds of noise. Fig. 6 gives an example of the efficiency of the algorithm at different resolutions. SRDML denotes the algorithm provided by the invention.
Figs. 7A and 7B show results of the face distance metric of the present method on part of the PaSC data set. Each color in the figures represents one person. For each person, 100 pictures were selected: 50 high-quality face images and 50 low-quality face images. The features extracted from the images of different quality by the high-quality and low-quality convolutional neural sub-networks, respectively, are reduced by PCA and displayed in a two-dimensional feature space. The example shows that the algorithm achieves a good clustering effect on the face-image point clouds of different quality.
The robust face recognition method for the quality difference between faces registered in the system and faces to be identified is implemented under the deep learning framework Caffe.

Claims (3)

1. A face identification method for the case where the quality of registered face images differs from that of face images to be identified, characterized in that different neural networks are trained to extract features from images of different quality, and the distance between the features of two images of different quality is computed by metric learning, the algorithm comprising the following steps:
Step 1: construct an N_H-layer high-quality convolutional neural sub-network with parameters θ_H, W_H to be learned, and an N_L-layer low-quality convolutional neural sub-network with parameters θ_L, W_L to be learned, where N_H > N_L, the objective function J being defined as

$$\arg\min_{\{\theta_H,\theta_L,W_H,W_L\}} J = \frac{1}{2}\sum_i g\!\left(\mu - l_i\left(\tau - d^2(I_i^H, I_i^L)\right)\right) + \frac{\lambda}{2}\left(\sum_{n=1}^{N_H}\|M_H^n\|_F^2 + \sum_{n=1}^{N_L}\|M_L^n\|_F^2\right) + \frac{\gamma}{2}\left(\|W_H\|_F^2 + \|W_L\|_F^2\right)$$

where g(·) denotes the logistic loss function; τ denotes the margin position; μ denotes the minimum distance to the margin; l_i labels whether the two faces of the i-th pair, consisting of a high-quality face image $I_i^H$ and a low-quality face image $I_i^L$, belong to the same person: if they do, l_i = 1, otherwise l_i = −1; $d^2(I_i^H, I_i^L) = \|W_H f_{\theta_H}(I_i^H) - W_L f_{\theta_L}(I_i^L)\|_2^2$ denotes the distance between the i-th face images $I_i^H$ and $I_i^L$; $f_{\theta_H}(I_i^H)$ and $f_{\theta_L}(I_i^L)$ denote the unaligned outputs of the high-quality and low-quality convolutional neural sub-networks; $M_H^n$ and $M_L^n$ denote the n-th-layer filters of the high-quality and low-quality sub-networks; λ denotes the regularization coefficient of the convolutional sub-networks; γ denotes the regularization coefficient of the fully connected layers used for feature alignment; and $\|\cdot\|_F$ denotes the Frobenius norm;
Step 2: learn the parameters θ_H, W_H, θ_L, W_L of the high-quality and low-quality convolutional neural sub-networks from a sample set, the sample set containing multiple data pairs, each pair consisting of one high-quality face image and one low-quality face image;
Step 3: register the high-quality face images in the face identification system and, after a low-quality face image is captured in real time, compare it one by one against the registered high-quality face images, wherein comparing the i-th registered high-quality face image $I_i^H$ with a low-quality face image $I_i^L$ comprises:
Step 3.1: feeding the high-quality face image $I_i^H$ and the low-quality face image $I_i^L$ into the trained high-quality and low-quality convolutional neural sub-networks respectively, to obtain the aligned outputs $W_H f_{\theta_H}(I_i^H)$ and $W_L f_{\theta_L}(I_i^L)$;
Step 3.2: computing the distance $d^2(I_i^H, I_i^L) = \|W_H f_{\theta_H}(I_i^H) - W_L f_{\theta_L}(I_i^L)\|_2^2$ between the face images $I_i^H$ and $I_i^L$; if $d^2(I_i^H, I_i^L) \le \tau - \mu$, the face images $I_i^H$ and $I_i^L$ belong to the same person; otherwise they belong to different people.
2. The face identification method for registered face images and face images to be identified of differing quality according to claim 1, characterized in that the high-quality convolutional neural sub-network has 8 layers: the first layer is the high-quality image input layer, the last layer is the high-quality fully connected layer, and from the input layer to the fully connected layer the intermediate layers are, in order: convolutional layer 1, max-pooling layer 1, convolutional layer 2, max-pooling layer 2, convolutional layer 3, max-pooling layer 3.
3. The face identification method for registered face images and face images to be identified of differing quality according to claim 1, characterized in that the low-quality convolutional neural sub-network has 6 layers: the first layer is the low-quality image input layer, the last layer is the low-quality fully connected layer, and from the input layer to the fully connected layer the intermediate layers are, in order: convolutional layer 1, max-pooling layer 1, convolutional layer 2, max-pooling layer 2.
CN201511031057.9A 2015-12-31 2015-12-31 Face identification method for registered face images and face images to be identified of differing quality Active CN105678249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511031057.9A CN105678249B (en) 2015-12-31 2015-12-31 Face identification method for registered face images and face images to be identified of differing quality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511031057.9A CN105678249B (en) 2015-12-31 2015-12-31 Face identification method for registered face images and face images to be identified of differing quality

Publications (2)

Publication Number Publication Date
CN105678249A true CN105678249A (en) 2016-06-15
CN105678249B CN105678249B (en) 2019-05-07

Family

ID=56298415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511031057.9A Active CN105678249B (en) 2015-12-31 2015-12-31 For the registered face face identification method different with face picture quality to be identified

Country Status (1)

Country Link
CN (1) CN105678249B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106169075A (en) * 2016-07-11 2016-11-30 北京小米移动软件有限公司 Auth method and device
CN106910176A (en) * 2017-03-02 2017-06-30 中科视拓(北京)科技有限公司 A kind of facial image based on deep learning removes occlusion method
CN107341463A (en) * 2017-06-28 2017-11-10 北京飞搜科技有限公司 A kind of face characteristic recognition methods of combination image quality analysis and metric learning
CN108269254A (en) * 2018-01-17 2018-07-10 百度在线网络技术(北京)有限公司 Image quality measure method and apparatus
CN108509961A (en) * 2017-02-27 2018-09-07 北京旷视科技有限公司 Image processing method and device
CN111435431A (en) * 2019-01-15 2020-07-21 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7236615B2 (en) * 2004-04-21 2007-06-26 Nec Laboratories America, Inc. Synergistic face detection and pose estimation with energy-based models
CN104866900A (en) * 2015-01-29 2015-08-26 北京工业大学 Deconvolution neural network training method
CN105205479A (en) * 2015-10-28 2015-12-30 小米科技有限责任公司 Human face value evaluation method, device and terminal device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7236615B2 (en) * 2004-04-21 2007-06-26 Nec Laboratories America, Inc. Synergistic face detection and pose estimation with energy-based models
CN104866900A (en) * 2015-01-29 2015-08-26 北京工业大学 Deconvolution neural network training method
CN105205479A (en) * 2015-10-28 2015-12-30 小米科技有限责任公司 Human face value evaluation method, device and terminal device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106169075A (en) * 2016-07-11 2016-11-30 北京小米移动软件有限公司 Auth method and device
CN108509961A (en) * 2017-02-27 2018-09-07 北京旷视科技有限公司 Image processing method and device
CN106910176A (en) * 2017-03-02 2017-06-30 中科视拓(北京)科技有限公司 A kind of facial image based on deep learning removes occlusion method
CN106910176B (en) * 2017-03-02 2019-09-13 中科视拓(北京)科技有限公司 A kind of facial image based on deep learning removes occlusion method
CN107341463A (en) * 2017-06-28 2017-11-10 北京飞搜科技有限公司 A kind of face characteristic recognition methods of combination image quality analysis and metric learning
CN107341463B (en) * 2017-06-28 2020-06-05 苏州飞搜科技有限公司 Face feature recognition method combining image quality analysis and metric learning
CN108269254A (en) * 2018-01-17 2018-07-10 百度在线网络技术(北京)有限公司 Image quality measure method and apparatus
CN111435431A (en) * 2019-01-15 2020-07-21 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN105678249B (en) 2019-05-07

Similar Documents

Publication Publication Date Title
CN105678249A (en) Face identification method aiming at registered face and to-be-identified face image quality difference
Li et al. Infrared and visible image fusion using a deep learning framework
CN107633513B (en) 3D image quality measuring method based on deep learning
CN108537743B (en) Face image enhancement method based on generation countermeasure network
US10262190B2 (en) Method, system, and computer program product for recognizing face
CN112001868B (en) Infrared and visible light image fusion method and system based on generation of antagonism network
EP2905722B1 (en) Method and apparatus for detecting salient region of image
CN104008370B (en) A kind of video face identification method
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
CN104008538B (en) Based on single image super-resolution method
CN107563328A (en) A kind of face identification method and system based under complex environment
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN109376637A (en) Passenger number statistical system based on video monitoring image processing
CN104915676A (en) Deep-level feature learning and watershed-based synthetic aperture radar (SAR) image classification method
CN105389797A (en) Unmanned aerial vehicle video small-object detecting method based on super-resolution reconstruction
CN104021394A (en) Insulator image recognition method based on Adaboost algorithm
CN104616280B (en) Method for registering images based on maximum stable extremal region and phase equalization
Bouchaffra et al. Structural hidden Markov models for biometrics: Fusion of face and fingerprint
Premaratne et al. Image matching using moment invariants
CN103426158A (en) Method for detecting two-time-phase remote sensing image change
CN103714326A (en) One-sample face identification method
CN104966054A (en) Weak and small object detection method in visible image of unmanned plane
CN109685772B (en) No-reference stereo image quality evaluation method based on registration distortion representation
CN106529395A (en) Signature image recognition method based on deep brief network and k-means clustering
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant