CN111696090B - Method for evaluating quality of face image in unconstrained environment - Google Patents

Method for evaluating quality of face image in unconstrained environment

Info

Publication number
CN111696090B
CN111696090B (application CN202010510923.7A)
Authority
CN
China
Prior art keywords
face
layer
face image
image
convolutional
Prior art date
Legal status
Active
Application number
CN202010510923.7A
Other languages
Chinese (zh)
Other versions
CN111696090A (en)
Inventor
李波
黄鸣镝
李孟
刘民岷
Current Assignee
University of Electronic Science and Technology of China
Guangdong Electronic Information Engineering Research Institute of UESTC
Original Assignee
University of Electronic Science and Technology of China
Guangdong Electronic Information Engineering Research Institute of UESTC
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China and Guangdong Electronic Information Engineering Research Institute of UESTC
Priority to CN202010510923.7A
Publication of CN111696090A
Application granted
Publication of CN111696090B
Status: Active

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06T 3/608 — Geometric image transformations; rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
    • G06V 40/161 — Recognition of human faces; detection; localisation; normalisation
    • G06V 40/172 — Recognition of human faces; classification, e.g. identification
    • G06T 2207/20081 — Indexing scheme for image analysis; training; learning
    • G06T 2207/20084 — Indexing scheme for image analysis; artificial neural networks [ANN]
    • G06T 2207/30168 — Indexing scheme for image analysis; image quality inspection
    • G06T 2207/30201 — Indexing scheme for image analysis; subject of image: face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to the technical field of face recognition, and in particular to a method for evaluating face image quality in an unconstrained environment. The invention calibrates face quality scores with two face recognition algorithms, so the calibrated result is more accurate and comprehensive. The similarities computed by the two algorithms are combined with weights, each weight being the algorithm's actual recognition rate as a proportion of the sum of the two recognition rates, so the weighting reflects real recognition performance. Finally, face features extracted by a convolutional neural network are used to regress the face quality score, improving both the accuracy and the real-time performance of face quality evaluation.

Description

Method for evaluating quality of face image in unconstrained environment
Technical Field
The invention relates to the technical field of face recognition, and in particular to a method for evaluating face image quality in an unconstrained environment.
Background
With the rapid development of face recognition technology, face recognition is now widely applied in the field of public security. Its actual recognition performance depends to a great extent on the quality of the acquired face images; high-quality face images can effectively improve recognition accuracy.
However, in real image acquisition environments, captured face images often suffer from motion blur and noise introduced by the capture hardware. Face images acquired in unconstrained environments may additionally exhibit occlusion, pose deviation, and similar conditions, all of which pose challenges to face recognition.
Existing face image quality evaluation methods fall into two categories. The first is based on multi-factor fusion: several factors that influence face image quality are weighted and fused to obtain a quality score. With this approach it is difficult to account for all relevant factors, the weight assigned to each factor has no established standard, and the overall computation is complex and time-consuming. The second is machine-learning-based evaluation, which extracts features from the image and trains a quality evaluation model on those features. Convolutional neural networks (CNNs) have been highly successful in the field of image processing, and in recent years CNNs have also been applied to image quality evaluation with good results.
Disclosure of Invention
In view of the above problems and deficiencies, the invention provides a method for evaluating the quality of a face image in an unconstrained environment more accurately and objectively.
A method for evaluating the quality of a face image in an unconstrained environment comprises the following steps:
step 1: and carrying out face detection on the image containing the face under the unconstrained scene to be evaluated and intercepting the face part.
Step 2: and (3) performing face correction treatment on the face intercepted in the step (1), and then performing face sample size enhancement treatment.
Step 2.1: face correction automatically locates the facial key points of the cropped face, then applies an affine transformation based on the located key points, rotating the face until the two eyes lie on the same horizontal line.
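A minimal sketch of this eye-leveling rotation with OpenCV follows; the landmark detector is assumed to be external, and align_face, left_eye, and right_eye are illustrative names rather than the patent's own code:

```python
import cv2
import numpy as np

def align_face(face_img, left_eye, right_eye):
    """Rotate a cropped face so both eyes lie on one horizontal line.

    left_eye / right_eye: (x, y) key points from any landmark detector.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))            # tilt of the eye line
    center = ((left_eye[0] + right_eye[0]) / 2.0,     # rotate about the
              (left_eye[1] + right_eye[1]) / 2.0)     # midpoint between eyes
    M = cv2.getRotationMatrix2D(center, angle, 1.0)   # 2x3 affine matrix
    h, w = face_img.shape[:2]
    return cv2.warpAffine(face_img, M, (w, h))
```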
Step 2.2: apply sample-augmentation processing to all corrected faces. The processing includes mirroring, translation, illumination changes, occlusion, and motion blurring of the face images, and the augmented face image sample count exceeds ten thousand. Finally, the processed face images are resized uniformly to the input size required by the subsequent CNN.
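The five augmentation operations might be sketched in OpenCV as below; the shift distances, brightness gain, occlusion box, and blur-kernel length are arbitrary illustrative values that the patent does not specify:

```python
import cv2
import numpy as np

def augment(face):
    """Return one variant per augmentation named in step 2.2."""
    out = {"mirror": cv2.flip(face, 1)}                    # horizontal mirror
    h, w = face.shape[:2]
    M = np.float32([[1, 0, 10], [0, 1, 5]])                # 10 px right, 5 px down
    out["translate"] = cv2.warpAffine(face, M, (w, h))
    out["illumination"] = cv2.convertScaleAbs(face, alpha=1.3, beta=20)
    occluded = face.copy()
    occluded[h // 3:h // 2, w // 4:3 * w // 4] = 0         # block out the eye band
    out["occlusion"] = occluded
    kernel = np.zeros((9, 9), np.float32)
    kernel[4, :] = 1.0 / 9                                 # horizontal motion blur
    out["motion_blur"] = cv2.filter2D(face, -1, kernel)
    return out
```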
Step 3: calibrate the face images processed in step 2, assigning each face image a quality score.
The quality score is calibrated by computing the similarity between the features of the face image to be evaluated and the features of a reference face image: both images are passed through the same face recognition algorithm, the similarity between the extracted features is computed, and the resulting similarity value is the quality score assigned to the face image being calibrated.
Two face recognition algorithms are used to compute, separately, the similarity between the face image to be calibrated and the reference face image, and the two similarities are then combined with weights. Each weight is determined by the algorithm's actual recognition rate as a proportion of the sum of the two recognition rates, and the weighted sum gives the final quality score of the face image; tying the weights to actual recognition rates makes the calibrated quality score more accurate and reasonable.
Because the two selected recognition algorithms achieve different recognition rates in practical applications, the face similarities they compute are weighted by each algorithm's share of the sum of the two recognition rates, yielding the calibrated face quality score as follows:
SCORE_{1,2} = m_1 × SCORE_1 + m_2 × SCORE_2
m_1 + m_2 = 1
where SCORE_1 is the face quality score computed by the first recognition algorithm, SCORE_2 is the face quality score computed by the second recognition algorithm, and m_1 and m_2 are the weights assigned to the first and second recognition algorithms respectively, with m_1 + m_2 = 1.
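A direct transcription of this weighting rule (a sketch; the recognition-rate arguments are whatever the two chosen algorithms achieve in practice):

```python
def fuse_scores(score1, score2, rate1, rate2):
    """Weight the two per-algorithm similarities by each algorithm's share
    of the summed recognition rates, per the SCORE_{1,2} formula above."""
    m1 = rate1 / (rate1 + rate2)    # weight of algorithm 1
    m2 = rate2 / (rate1 + rate2)    # weight of algorithm 2; m1 + m2 == 1
    return m1 * score1 + m2 * score2
```

For example, hypothetical recognition rates of 0.96 and 0.64 would give weights m_1 = 0.96/1.6 = 0.6 and m_2 = 0.64/1.6 = 0.4, matching the 0.6/0.4 split used in the embodiment described later.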
Step 4: design a convolutional neural network to extract the face image features; it comprises an input layer, convolutional layers, pooling layers, a fully connected layer, and a regression layer.
The input layer receives three-channel RGB color pictures; each convolutional layer and the fully connected layer are activated with the ReLU activation function, and the pooling layers use max pooling.
The regression layer adopts support vector regression (SVR): the features extracted by the convolutional neural network are fed in to train the SVR, which is then used to output the quality score of the face image under test automatically.
In summary, since no dedicated face database currently exists for face image quality evaluation, the invention expands the face image sample set by applying mirroring, translation, illumination changes, occlusion, and motion blurring to the face images, thereby simulating the various influences on face image quality under unconstrained conditions. Two face recognition algorithms are used to calibrate the face image quality scores, making the calibration more accurate and comprehensive. The similarities computed by the two algorithms are combined with weights determined by each algorithm's actual recognition rate as a proportion of the sum of the two recognition rates, so the weighting reflects real recognition performance. Finally, the face features extracted by the convolutional neural network are used to regress the face image quality score, improving both the accuracy and the real-time performance of face image quality evaluation.
Drawings
FIG. 1 is a flow chart of a method for evaluating the quality of a face image in an unconstrained environment according to the invention;
FIG. 2 is a flowchart illustrating a method for preprocessing a face image after detecting the face image according to an embodiment;
FIG. 3 is a diagram illustrating the calculation of similarity between faces based on a face recognition algorithm according to an embodiment;
FIG. 4 is a schematic structural diagram of the face image quality evaluation model according to an embodiment.
Detailed Description
To make the technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described more fully below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention.
The invention applies mainly to the field of face image quality evaluation. In unconstrained scenes such as airports and other public places, the quality of face images captured from video is easily affected by the surrounding environment: pose variation, uneven lighting, facial occlusion, image blur, and similar factors degrade the acquired face images and ultimately reduce the accuracy of face recognition.
As shown in fig. 1, the method for evaluating the quality of a face image in an unconstrained environment provided by the present invention includes:
step 1: and carrying out face detection on the acquired picture to be evaluated and intercepting a face part.
Step 2: after face correction of the face obtained in step 1, expand the sample set of corrected face images so that the number of samples exceeds ten thousand.
As shown in fig. 2, a schematic flow diagram of face detection and face preprocessing is provided, which specifically includes the following steps:
First, a picture containing a face is acquired in an unconstrained scene. Face detection is performed with the face detector bundled with OpenCV, and the face region is cropped out. Key points are then located on the face, and face correction is performed using the key-point positions. Next, the corrected face undergoes mirror flipping, translation, illumination processing, occlusion processing, and motion blurring to further expand the face image sample set. Finally, all images are restored to three-channel images whose size is normalized to 224×224×3.
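A minimal sketch of this detection-and-cropping step, reading "the face detector bundled with OpenCV" as the stock Haar cascade (the patent names no specific model; scene.jpg is a placeholder input):

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("scene.jpg")                        # hypothetical input picture
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
crops = [cv2.resize(img[y:y + h, x:x + w], (224, 224))   # 224x224x3 for the CNN
         for (x, y, w, h) in faces]
```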
Step 3: calibrate all the face images obtained in step 2, assigning each face a quality score.
Furthermore, when analyzing the similarity between two face feature vectors in step 3, cosine similarity is used. Cosine similarity measures the difference between two individuals by the cosine of the angle between the two vectors in a vector space; compared with distance metrics, it emphasizes the difference of the two vectors in direction rather than in distance or magnitude. The formula is as follows:
cos θ = (x · y) / (‖x‖ ‖y‖)
where x is the feature vector of the face to be calibrated and y is the standard (reference) face feature vector.
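In code, this similarity is a single NumPy expression (a sketch; x and y are the two feature vectors compared above):

```python
import numpy as np

def cosine_similarity(x, y):
    """Cosine of the angle between the calibrated and standard feature vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```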
Further, the two face recognition algorithms are a face recognition algorithm based on LBP (local binary pattern) features and a face recognition algorithm based on the convolutional neural network VGG16, respectively.
LBP can extract the texture characteristics of the local area of the image, and the core idea is to perform experimental characterization by comparing the brightness of the central pixel and the brightness of the surrounding pixels of a certain area in the image. The original LBP operator is defined in a window of 3 x 3, the central pixel of the window is used as a threshold value, the gray values of 8 adjacent pixels are compared with the central pixel, if the values of the surrounding pixels are larger than the value of the central pixel, the position of the pixel point is marked as 1, and if not, the position is 0. Thus, 8 points in the 3 × 3 neighborhood can generate 8-bit binary numbers (usually converted into decimal numbers, i.e. LBP codes, 256 types in total) by comparison, that is, the LBP value of the pixel point in the center of the window is obtained, and the LBP value is used to reflect the texture information of the region. The formula for the original LBP is as follows:
LBP(x_c, y_c) = Σ_{p=0}^{7} s(g_p − g_c) · 2^p,  where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,
g_p is the value of a surrounding pixel, and g_c is the central pixel value.
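A sketch of the original 3×3 LBP operator as just described; for a face descriptor one would typically also histogram the resulting codes over image blocks, a step omitted here:

```python
import numpy as np

def lbp_image(gray):
    """Per-pixel LBP codes (0-255) for a single-channel image."""
    g = gray.astype(np.int32)
    h, w = g.shape
    centre = g[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), np.uint8)
    # the 8 neighbour offsets, ordered clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(np.uint8) << p   # s(g_p - g_c) * 2^p
    return out
```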
VGG is a classic deep learning network for face recognition. The VGG16 network consists of 13 convolutional layers, 3 fully connected layers, and 5 pooling layers, and the features extracted by the last fully connected layer of VGG16 are used as the features of the face image.
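A sketch of feature extraction through the last fully connected layer. torchvision's ImageNet-trained VGG16 stands in here for the face-recognition VGG16 the text refers to; in practice a face-trained variant (e.g. VGG-Face) would be loaded instead:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
# vgg.classifier ends with the last fully connected layer, whose output
# is taken as the face feature vector per the description above
feature_net = torch.nn.Sequential(vgg.features, vgg.avgpool,
                                  torch.nn.Flatten(), vgg.classifier)

preprocess = T.Compose([
    T.ToTensor(), T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def vgg16_features(pil_img):
    with torch.no_grad():
        return feature_net(preprocess(pil_img).unsqueeze(0)).squeeze(0)
```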
Because the recognition rates of the two recognition algorithms differ in practical applications, the face similarities they compute are weighted by each algorithm's share of the sum of the two recognition rates, yielding the calibrated face quality score as follows:
SCORE_{L,V} = m_L × SCORE_L + m_V × SCORE_V
m_L + m_V = 1
where SCORE_L is the face image quality score computed by the LBP recognition algorithm, SCORE_V is the face image quality score computed by the VGG16 recognition network, and m_L and m_V are the weights assigned to the LBP and VGG algorithms respectively, with m_L + m_V = 1.
As shown in FIG. 3, the face image quality score calibration method specifically proceeds as follows:
VGG-model feature extraction and LBP feature extraction are applied simultaneously to the reference face picture and the face picture to be calibrated. One branch computes the similarity between the VGG-extracted features of the two pictures, and the other computes the similarity between their LBP features; in both branches the similarity between the two face feature vectors is expressed as cosine similarity, which measures the difference between two individuals by the cosine of the angle between the vectors in the vector space. The two similarities are the two quality scores computed by the two face recognition algorithms: the VGG-model score is multiplied by a weight of 0.6, the LBP-feature score by a weight of 0.4, and the weighted sum gives the quality score of the face image to be calibrated. All face images are processed in the same way to obtain a quality score for each, with scores ranging from 0 to 1.
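As an illustrative calculation (the similarity values here are invented, not taken from the embodiment): if the VGG-feature cosine similarity between a face and its reference is 0.85 and the LBP-feature similarity is 0.70, the calibrated quality score is 0.6 × 0.85 + 0.4 × 0.70 = 0.51 + 0.28 = 0.79.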
Step 4: the designed convolutional neural network extracts the face image features. The network comprises four convolutional layers, two pooling layers, and one fully connected layer, and the face features it extracts are used for regression prediction.
As shown in FIG. 4, the convolutional neural network designed in this embodiment for face feature extraction contains four convolutional layers, two pooling layers, and one fully connected layer, with max pooling used in the pooling layers. The first convolutional layer C1 uses 3×3 kernels with stride 1, no padding, and 36 filters. The second convolutional layer C2 also uses 3×3 kernels with stride 1, no padding, and 36 filters. The third convolutional layer C3 uses 3×3 kernels with stride 1, no padding, and 48 filters. The fourth layer is a max pooling layer S4 with a 2×2 window, 64 feature maps, and stride 4. The fifth layer is a convolutional layer C5 with 3×3 kernels, stride 1, no padding, and 96 filters. The sixth layer is a max pooling layer S6 with a 2×2 window, 128 feature maps, and stride 2. The last layer is the fully connected layer F7, with 120 neurons in total. A ReLU activation function follows each convolutional layer and the fully connected layer.
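A sketch of the FIG. 4 network in PyTorch. Layer sizes follow the text, except that the "64" and "128" quoted for the pooling layers cannot be filter counts (pooling adds no filters), so the sketch keeps the channel counts of the preceding convolutional layers; with a 224×224 input, the feature entering F7 flattens to 96 × 26 × 26:

```python
import torch.nn as nn

class QualityFeatureNet(nn.Module):
    """Face-feature extractor per the layer list above."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 36, 3, stride=1), nn.ReLU(),   # C1: 3x3, 36 filters
            nn.Conv2d(36, 36, 3, stride=1), nn.ReLU(),  # C2: 3x3, 36 filters
            nn.Conv2d(36, 48, 3, stride=1), nn.ReLU(),  # C3: 3x3, 48 filters
            nn.MaxPool2d(2, stride=4),                  # S4: 2x2 window, stride 4
            nn.Conv2d(48, 96, 3, stride=1), nn.ReLU(),  # C5: 3x3, 96 filters
            nn.MaxPool2d(2, stride=2),                  # S6: 2x2 window, stride 2
            nn.Flatten(),
            nn.Linear(96 * 26 * 26, 120), nn.ReLU(),    # F7: 120 neurons
        )

    def forward(self, x):       # x: (N, 3, 224, 224) RGB batch
        return self.body(x)     # (N, 120) features for the SVR stage
```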
After the convolutional neural network extracts the features, they are passed to the regression layer. The regression layer adopts support vector regression (SVR): the CNN-extracted features are fed in to train the SVR, which then outputs the face image quality score automatically.
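A sketch of the SVR stage with scikit-learn; the feature/score files and the kernel, C, and epsilon settings are assumptions, since the patent specifies only that an SVR is trained on the CNN features and the calibrated scores:

```python
import numpy as np
from sklearn.svm import SVR

X = np.load("features.npy")    # (n_samples, 120) CNN features, placeholder file
y = np.load("scores.npy")      # calibrated quality scores in [0, 1], placeholder

svr = SVR(kernel="rbf", C=1.0, epsilon=0.01)   # hyper-parameters assumed
svr.fit(X, y)

quality = svr.predict(X[:1])   # predicted quality score for one face
```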

Claims (2)

1. A method for evaluating the quality of a face image in an unconstrained environment, comprising the following specific steps:
step 1: performing face detection on an image containing a face in an unconstrained scene to be evaluated and cropping out the face region;
step 2: performing face correction on the face cropped in step 1, then augmenting the face sample set;
step 2.1: the face correction automatically locates the facial key points of the cropped face, then applies an affine transformation based on the located key points, rotating the face until the two eyes lie on the same horizontal line;
step 2.2: applying sample-augmentation processing to all corrected faces; the processing includes mirroring, translation, illumination changes, occlusion, and motion blurring of the face images, and the augmented face image sample count exceeds ten thousand; finally, all processed face images are restored to three-channel images of the size 224×224×3 required as input by the subsequent CNN;
step 3: calibrating the face images processed in step 2, assigning each face image a quality score;
the quality score of a face image is calibrated by computing the similarity between the features of the face image to be evaluated and the features of a reference face image, namely, both images are passed through the same face recognition algorithm and the similarity between the extracted features is computed, the resulting similarity value being the quality score of the face image to be evaluated;
two face recognition algorithms are adopted to compute, separately, the similarity between the face image to be calibrated and the standard face image, and the similarities computed by the two algorithms are then combined with weights; each weight is determined by the algorithm's actual recognition rate as a proportion of the sum of the two recognition rates, and the weighted sum gives the final quality score of the face image;
SCORE_{1,2} = m_1 × SCORE_1 + m_2 × SCORE_2
m_1 + m_2 = 1
where SCORE_1 is the face quality score computed by the first recognition algorithm, SCORE_2 is the face quality score computed by the second recognition algorithm, and m_1 and m_2 are the weights assigned to the first and second recognition algorithms respectively, with m_1 + m_2 = 1;
the two face recognition algorithms are, respectively, a first face recognition algorithm based on local binary pattern (LBP) features and a second face recognition algorithm based on the convolutional neural network VGG16;
the original LBP operator is defined on a 3×3 window: the window's central pixel serves as a threshold, and the gray values of its 8 neighboring pixels are compared with it; if a surrounding pixel's value is not less than the central value, that position is marked 1, otherwise 0; comparing the 8 points in the 3×3 neighborhood yields an 8-bit binary number, which is converted to a decimal number, i.e. the LBP code, giving the LBP value of the window's central pixel, used to reflect the texture information of the region; the formula of the original LBP is as follows:
LBP(x_c, y_c) = Σ_{p=0}^{7} s(g_p − g_c) · 2^p,  where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,
g_p is the value of a surrounding pixel, and g_c is the central pixel value;
the convolutional neural network VGG16 consists of 13 convolutional layers, 3 fully connected layers, and 5 pooling layers, and the features extracted by the last fully connected layer of VGG16 are used as the features of the face image;
step 4: designing a convolutional neural network for evaluating the quality of the face image, the network comprising an input layer, convolutional layers, pooling layers, a fully connected layer, and a regression layer;
specifically, the convolutional neural network comprises four convolutional layers, two pooling layers, and one fully connected layer, with max pooling used in the pooling layers; the first convolutional layer C1 uses 3×3 kernels with stride 1, no padding, and 36 filters; the second convolutional layer C2 also uses 3×3 kernels with stride 1, no padding, and 36 filters; the third convolutional layer C3 uses 3×3 kernels with stride 1, no padding, and 48 filters; the fourth layer is a max pooling layer S4 with a 2×2 window, 64 feature maps, and stride 4; the fifth layer is a convolutional layer C5 with 3×3 kernels, stride 1, no padding, and 96 filters; the sixth layer is a max pooling layer S6 with a 2×2 window, 128 feature maps, and stride 2; the last layer is the fully connected layer F7 with 120 neurons in total; a ReLU activation function follows each convolutional layer and the fully connected layer;
the input layer receives three-channel RGB color pictures, each convolutional layer and the fully connected layer are activated with the ReLU activation function, and the pooling layers use max pooling;
and the regression layer adopts support vector regression (SVR): the features extracted by the convolutional neural network are fed in to train the SVR, which is used to output the quality score of the face image under test automatically.
2. The method for evaluating the quality of a face image in an unconstrained environment as claimed in claim 1, wherein: in step 3, when analyzing the similarity between two face feature vectors, cosine similarity is used to represent it; the cosine of the angle between the two vectors in the vector space measures the difference between the two individuals, with the formula:
cos θ = (x · y) / (‖x‖ ‖y‖)
where x is the feature vector of the face to be calibrated and y is the standard face feature vector.
CN202010510923.7A 2020-06-08 2020-06-08 Method for evaluating quality of face image in unconstrained environment Active CN111696090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010510923.7A CN111696090B (en) 2020-06-08 2020-06-08 Method for evaluating quality of face image in unconstrained environment

Publications (2)

Publication Number Publication Date
CN111696090A CN111696090A (en) 2020-09-22
CN111696090B 2022-07-29

Family

ID=72479709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010510923.7A Active CN111696090B (en) 2020-06-08 2020-06-08 Method for evaluating quality of face image in unconstrained environment

Country Status (1)

Country Link
CN (1) CN111696090B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139462A (en) * 2021-04-23 2021-07-20 杭州魔点科技有限公司 Unsupervised face image quality evaluation method, electronic device and storage medium
CN113505720A (en) * 2021-07-22 2021-10-15 浙江大华技术股份有限公司 Image processing method and device, storage medium and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360183A (en) * 2018-08-20 2019-02-19 中国电子进出口有限公司 A kind of quality of human face image appraisal procedure and system based on convolutional neural networks
CN111126240A (en) * 2019-12-19 2020-05-08 西安工程大学 Three-channel feature fusion face recognition method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574381B (en) * 2014-12-25 2017-09-29 南京邮电大学 A kind of full reference image quality appraisement method based on local binary patterns
US10726244B2 (en) * 2016-12-07 2020-07-28 Samsung Electronics Co., Ltd. Method and apparatus detecting a target
CN110458792B (en) * 2018-05-04 2022-02-08 北京眼神科技有限公司 Method and device for evaluating quality of face image
CN109308692B (en) * 2018-07-30 2022-05-17 西北大学 OCT image quality evaluation method based on improved Resnet and SVR mixed model
CN109784358B (en) * 2018-11-23 2023-07-11 南京航空航天大学 No-reference image quality evaluation method integrating artificial features and depth features
CN110046652A (en) * 2019-03-18 2019-07-23 深圳神目信息技术有限公司 Face method for evaluating quality, device, terminal and readable medium
CN110636278A (en) * 2019-06-27 2019-12-31 天津大学 Stereo image quality evaluation method based on sparse binocular fusion convolutional neural network
CN111127387B (en) * 2019-07-11 2024-02-09 宁夏大学 Quality evaluation method for reference-free image
CN110427888A (en) * 2019-08-05 2019-11-08 北京深醒科技有限公司 A kind of face method for evaluating quality based on feature clustering
CN110619628B (en) * 2019-09-09 2023-05-09 博云视觉(北京)科技有限公司 Face image quality assessment method
CN111160284A (en) * 2019-12-31 2020-05-15 苏州纳智天地智能科技有限公司 Method, system, equipment and storage medium for evaluating quality of face photo

Also Published As

Publication number Publication date
CN111696090A (en) 2020-09-22

Similar Documents

Publication Publication Date Title
CN109684924B (en) Face living body detection method and device
CN111401384B (en) Transformer equipment defect image matching method
CN110197229B (en) Training method and device of image processing model and storage medium
Shen et al. Hybrid no-reference natural image quality assessment of noisy, blurry, JPEG2000, and JPEG images
CN110502986A (en) Identify character positions method, apparatus, computer equipment and storage medium in image
CN111783748B (en) Face recognition method and device, electronic equipment and storage medium
CN111696090B (en) Method for evaluating quality of face image in unconstrained environment
WO2018035794A1 (en) System and method for measuring image resolution value
CN111738211B (en) PTZ camera moving object detection and recognition method based on dynamic background compensation and deep learning
CN110705353A (en) Method and device for identifying face to be shielded based on attention mechanism
CN116681636B (en) Light infrared and visible light image fusion method based on convolutional neural network
Fang et al. Laser stripe image denoising using convolutional autoencoder
CN112084952B (en) Video point location tracking method based on self-supervision training
CN115731597A (en) Automatic segmentation and restoration management platform and method for mask image of face mask
CN115375991A (en) Strong/weak illumination and fog environment self-adaptive target detection method
Bhandari et al. Image enhancement and object recognition for night vision surveillance
CN111881924B (en) Dark-light vehicle illumination identification method combining illumination invariance and short-exposure illumination enhancement
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
CN115862121B (en) Face quick matching method based on multimedia resource library
CN117133041A (en) Three-dimensional reconstruction network face recognition method, system, equipment and medium based on deep learning
KR20180092453A Face recognition method using convolutional neural network and stereo image
JP2023082065A Method of discriminating an object in an image having biometric characteristics of a user to verify the user's ID by separating the portion of the image with the biometric characteristic from other portions
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
CN111046861B (en) Method for identifying infrared image, method for constructing identification model and application
WO2022120532A1 (en) Presentation attack detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant