CN115661894A - Face image quality filtering method - Google Patents

Face image quality filtering method

Info

Publication number
CN115661894A
CN115661894A
Authority
CN
China
Prior art keywords
face
image
detection
recheckable
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211256212.7A
Other languages
Chinese (zh)
Inventor
陈炜 (Chen Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Jieyu Computer Technology Co ltd
Original Assignee
Fujian Jieyu Computer Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Jieyu Computer Technology Co ltd filed Critical Fujian Jieyu Computer Technology Co ltd
Priority to CN202211256212.7A
Publication of CN115661894A
Legal status: Pending

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face image quality filtering method. Blur detection is performed on the face image with an edge detection algorithm; the brightness of the face image is distinguished more intuitively using the HSV color space; and the degree of facial occlusion, the degree of expression exaggeration, and the degree of angular deflection are detected with a convolutional neural network and support vector machine models. Multi-dimensional face quality indexes are combined to classify quality defects of the face image, and the images to be detected are filtered according to the images required, so that high accuracy can be obtained from a small amount of data.

Description

Face image quality filtering method
Technical Field
The invention relates to the field of image processing technology, in particular to a face image quality filtering method.
Background
In the prior art, comprehensive image quality detection generally filters images by jointly evaluating the sharpness, brightness, face occlusion, face size, and face angle of the detected image. For example, CN110188627B, "A face image filtering method and device", discloses a method comprising the following steps: inputting an image to be detected into a pre-trained image prediction model and determining the attributes of the image through the model; judging whether the image meets a preset threshold condition according to the determined attributes, and if so, determining that the image to be detected is suitable for application in the face scene. The attributes of the image to be detected include any one or more of: the shooting angle of the face image, the size of the face image, image blur, illumination intensity, and whether the face is occluded. This can improve the accuracy and efficiency of face image filtering. However, detecting multiple attributes with a convolutional neural network model requires a large amount of data; moreover, the face can rotate in three directions and has several facial-feature positions, so the judgment accuracy obtained from one convolutional neural network and a single threshold is insufficient.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a face image quality filtering method that obtains more accurate detection results from a smaller amount of data.
The technical scheme adopted by the invention to solve this technical problem is as follows: a face image quality filtering method, specifically comprising the following steps:
acquiring a face image to be detected;
performing blur detection on the face image to be detected to obtain a blur detection result;
performing brightness detection on the face image to be detected to obtain a brightness detection result;
taking a face image to be detected with a qualified blur detection result or a qualified brightness detection result as a recheckable image;
performing occlusion detection on the recheckable image: inputting the face region image of the recheckable image into an occlusion detection model to obtain an occlusion detection result, and judging from the detection result whether the face in the recheckable image is occluded;
performing expression detection on the recheckable image: obtaining the face key points in the recheckable image, selecting expression feature points from the face key points and substituting them into an expression detection model to obtain an expression detection result, and judging from the detection result whether the face in the recheckable image shows an exaggerated expression;
performing angle detection on the recheckable image: obtaining the face key points in the recheckable image, selecting angle feature points from the face key points and substituting them into an angle detection model to obtain an angle detection result, and judging from the detection result whether the face angle in the recheckable image deflects too much;
classifying the images to be detected by their different quality problems according to the results of the blur, brightness, occlusion, expression, and angle detection, and filtering the images to be detected according to the images required.
Preferably, the blur detection specifically comprises:
extracting the horizontal and vertical gradients of the image to be detected with a Sobel operator and calculating the blur degree of the image to be detected; performing a Gaussian blur operation on the image to be detected, extracting the gradients with the Sobel operator again, and recalculating the blur degree; taking the difference of the blur degrees obtained by the two detections: if the difference is smaller than a threshold T_blur, the image is blurred; if the difference is greater than the threshold T_blur, the image is clear.
Preferably, the brightness detection specifically comprises:
converting the image to be detected that has passed blur detection from an RGB image into an HSV image and separating the V-channel data; calculating the mean of the V-channel data: if it is greater than a threshold T_global_A or smaller than a threshold T_global_B, the global brightness is unqualified; if the mean of the V channel is smaller than T_global_A and greater than T_global_B, calculating local means of the V-channel data with a sliding-window algorithm: if a local mean is greater than a threshold T_local_A or smaller than a threshold T_local_B, the local brightness is unqualified.
Preferably, the occlusion detection model is specifically obtained as follows:
collecting a face image data set, and labeling the face coordinate data according to the facial features of each face picture in the data set as [(x, y)_left_eye, (x, y)_right_eye, (x, y)_nose_tip, (x, y)_left_mouth_corner, (x, y)_right_mouth_corner]; if the facial features of a face picture are not occluded, the face coordinates are labeled with the actual coordinates of the features, and if the facial features of a face picture are occluded, the face coordinates are labeled with fixed-value coordinates; dividing the labeled data set into a training set, a test set, and a verification set; building a feature extraction network and mapping the training set samples into a deep feature space to obtain the face image feature values; during training, the occlusion detection model back-propagates to update the weight parameters in the network layers, using as the loss function the mean squared error between the predicted and true values of the facial-feature coordinates in the feature values; setting the hyper-parameters of the occlusion detection model, configuring the training set samples to train the occlusion detection model, testing the accuracy of the occlusion detection model on the verification set samples during training, and improving the accuracy of the model by adjusting the hyper-parameters; after the optimal occlusion detection model has been tuned, testing the accuracy of the occlusion detection model on the test set samples and observing whether the accuracy fluctuates noticeably, in order to verify the generalization ability of the model; and obtaining and fixing the optimal model parameters of the occlusion detection model.
Preferably, the specific occlusion detection process is as follows:
cropping the face region image from the recheckable image and passing it into the occlusion detection model; if the face is not occluded, the face coordinate result returned by the occlusion detection model consists of the actual coordinates of the facial features, and if the face is occluded, the returned face coordinate result consists of the fixed-value coordinates; and judging whether the face is occluded according to whether the face coordinate result returned by the occlusion detection model contains a fixed value.
Preferably, the expression detection model is specifically obtained as follows:
collecting a face image data set; obtaining the face key point coordinates in the data set with a face detection tool; labeling the expressions of the face pictures in the data set as "exaggerated expression" and "normal expression", corresponding to label 0 and label 1 respectively; selecting the feature points of "exaggerated expression" and "normal expression" from the face key points and splicing the feature point coordinates into a feature matrix; dividing the labeled data set into a training set, a test set, and a verification set; and constructing a support vector machine and training, verifying, and testing it to obtain a fixed optimal expression detection model.
Preferably, the specific expression detection flow is as follows:
obtaining the face key points in the recheckable image with a face detection tool, passing the face key point information into the expression detection model, and judging from the result returned by the expression detection model whether the face in the recheckable image shows an exaggerated expression; if the return value is 0, the face in the recheckable image shows an exaggerated expression, and if the return value is 1, the face in the recheckable image shows a normal expression.
Preferably, the angle detection model is specifically obtained as follows:
collecting a face image data set; obtaining the face key point coordinates in the data set with a face detection tool; labeling the angles of the face pictures in the data set in a three-dimensional space coordinate format (X, Y, Z); selecting the angle feature points of the face in three-dimensional space from the face key points and splicing the feature point coordinates into a feature matrix; dividing the labeled data set into a training set, a test set, and a verification set; and constructing a support vector machine and training, verifying, and testing it to obtain a fixed optimal angle detection model.
Preferably, the specific angle detection process is as follows:
obtaining the face key points in the recheckable image with a face detection tool, passing the face key point information into the angle detection model, and judging from the result returned by the angle detection model whether the face in the recheckable image is rotated at a large angle or tilted; if the returned angle data is greater than the set value, the face angle in the recheckable image deflects too much.
Compared with the prior art, the invention has the following beneficial effects:
1. The face image quality filtering method of the invention detects five key quality indexes (face blur, brightness, expression, occlusion, and angle) in separate steps, which increases detection accuracy; each step can explicitly return the basis of its face quality judgment, providing high-quality images for subsequent face recognition and liveness detection.
2. The face image quality filtering method of the invention improves the accuracy of face occlusion, facial expression, and face angle detection by using machine learning methods. In addition, the occlusion detection crops and labels the facial features in advance of detection, so the subsequent recognition is more accurate and the amount of data required for training is reduced; the expression and angle detection use feature matrices, which require little data, run fast, and classify well, making the method more convenient.
Drawings
FIG. 1 is a flow chart of face quality detection in an embodiment of the invention;
FIG. 2 is a schematic diagram of key points of a face according to an embodiment of the present invention;
FIG. 3 is a flowchart of training a face occlusion detection model in an embodiment of the invention;
FIG. 4 is a flow chart of training a facial expression detection model in an embodiment of the present invention;
FIG. 5 is a flowchart of training a face angle detection model according to an embodiment of the present invention;
fig. 6 is a schematic view of a face angle in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides the following technical solution: a face image quality filtering method, specifically comprising the following steps:
acquiring a face image to be detected;
performing blur detection on the face image to be detected to obtain a blur detection result;
performing brightness detection on the face image to be detected to obtain a brightness detection result;
taking a face image to be detected with a qualified blur detection result or a qualified brightness detection result as a recheckable image;
performing occlusion detection on the recheckable image: inputting the face region image of the recheckable image into an occlusion detection model to obtain an occlusion detection result, and judging from the detection result whether the face in the recheckable image is occluded;
performing expression detection on the recheckable image: obtaining the face key points in the recheckable image, selecting expression feature points from the face key points and substituting them into an expression detection model to obtain an expression detection result, and judging from the detection result whether the face in the recheckable image shows an exaggerated expression;
performing angle detection on the recheckable image: obtaining the face key points in the recheckable image, selecting angle feature points from the face key points and substituting them into an angle detection model to obtain an angle detection result, and judging from the detection result whether the face angle in the recheckable image deflects too much;
classifying the images to be detected by their different quality problems according to the results of the blur, brightness, occlusion, expression, and angle detection, and filtering the images to be detected according to the images required.
Preferably, the blur detection specifically comprises:
extracting the horizontal and vertical gradients of the image to be detected with a Sobel operator and calculating the blur degree of the image to be detected; then performing a Gaussian blur operation on the image, extracting the horizontal and vertical gradients again with the Sobel operator, and recalculating the blur degree; comparing the two blur degrees: if the difference is smaller than the threshold T_blur, the image is blurred; if the difference is greater than the threshold T_blur, the image is clear.
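By way of illustration, this blur check can be sketched in Python with OpenCV as follows; the threshold value, the kernel sizes, and the use of the mean gradient magnitude as the "blur degree" are assumptions, since the text does not fix them:

import cv2
import numpy as np

def blur_degree(gray: np.ndarray) -> float:
    # Horizontal and vertical gradients extracted with the Sobel operator.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Mean gradient magnitude as a simple blur-degree score (assumption).
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def is_blurred(image_bgr: np.ndarray, t_blur: float = 5.0) -> bool:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    score_original = blur_degree(gray)
    # A sharp image loses far more gradient energy under Gaussian blur
    # than an image that is already blurry.
    score_after_blur = blur_degree(cv2.GaussianBlur(gray, (9, 9), 0))
    # A small difference between the two scores indicates a blurry image.
    return (score_original - score_after_blur) < t_blur

The intuition behind the difference operation is that blurring a sharp image removes much of its gradient energy, while blurring an already blurry image changes it little.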
Preferably, the brightness detection specifically comprises:
converting the image to be detected that has passed blur detection from an RGB image into an HSV image and separating the V-channel data; calculating the mean of the V channel: if it is greater than the threshold T_global_A or smaller than the threshold T_global_B, the global brightness is unqualified; if the mean of the V channel is smaller than T_global_A and greater than T_global_B, computing sliding-window means of the V-channel data with a 5 × 5 window: if a window mean is greater than the threshold T_local_A or smaller than the threshold T_local_B, the local brightness is unqualified.
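A sketch of this brightness check under similar assumptions; the concrete threshold values are placeholders, and only the 5 × 5 window is fixed by the text:

import cv2
import numpy as np

def brightness_check(image_bgr, t_ga=220.0, t_gb=40.0, t_la=240.0, t_lb=20.0):
    # Thresholds are illustrative placeholders; returns (global_ok, local_ok).
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.float64)   # separate the V-channel data

    # Global brightness: mean over the whole V channel.
    global_mean = v.mean()
    if global_mean > t_ga or global_mean < t_gb:
        return False, False               # global brightness unqualified

    # Local brightness: 5x5 sliding-window means over the V channel.
    local_means = cv2.boxFilter(v, ddepth=-1, ksize=(5, 5))
    local_ok = not ((local_means > t_la).any() or (local_means < t_lb).any())
    return True, local_ok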
Preferably, the specific occlusion detection process is as follows:
first cropping the face region image from the recheckable image and then passing it into the occlusion detection model; if the face is not occluded, the face coordinate result returned by the occlusion detection model consists of the actual coordinates of the facial features, and if the face is occluded, the returned coordinates are the fixed value (-1, -1); and judging whether the face is occluded according to whether the face coordinate result returned by the occlusion detection model contains -1.
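Assuming the model returns the five (x, y) pairs as a flat array, the occlusion decision reduces to a sentinel check; a minimal sketch, in which the model wrapper predict_landmarks is hypothetical:

import numpy as np

def is_occluded(face_region: np.ndarray, predict_landmarks) -> bool:
    # predict_landmarks: hypothetical callable mapping a face crop to a
    # (5, 2) array of facial-feature coordinates, returning (-1, -1) for
    # any occluded feature, as trained above.
    coords = np.asarray(predict_landmarks(face_region)).reshape(5, 2)
    # Any -1 sentinel among the returned coordinates marks an occlusion.
    return bool((coords == -1).any())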
Preferably, the specific expression detection flow is as follows:
first obtaining the face key points in the recheckable image with the Dlib face detection tool, then passing the face key point information into the expression detection model, and judging from the result returned by the expression detection model whether the face in the recheckable image shows an exaggerated expression; if the return value is 0, the face in the recheckable image shows an exaggerated expression, and if the return value is 1, the face in the recheckable image shows a normal expression.
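A sketch of the key point extraction with Dlib and the SVM call; the model file names are assumptions, and the 56-point index list is taken from the embodiment further below:

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Dlib's standard 68-point shape predictor model (downloaded separately).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# The 56 indices around the eyes, nose, and mouth used by the embodiment.
EXPR_IDX = list(range(4, 13)) + list(range(17, 27)) + list(range(31, 68))

def expression_features(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    face = detector(gray)[0]              # assume one detected face
    shape = predictor(gray, face)
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in EXPR_IDX],
                   dtype=np.float32)
    return pts.reshape(1, -1)             # 1 x (56*2) feature row

# Usage with a BGR image `image` (model trained as in the embodiment below):
# svm = cv2.ml.SVM_load("expression_svm.xml")
# _, result = svm.predict(expression_features(image))
# is_exaggerated = int(result[0, 0]) == 0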
Preferably, the specific angle detection process is as follows:
first obtaining the face key points in the recheckable image with the Dlib face detection tool, then passing the face key point information into the angle detection model, and judging from the result returned by the angle detection model whether the face angle in the recheckable image is tilted or rotated; if the return value is 0, the face angle deflection in the recheckable image is too large, and if the return value is 1, the face angle in the recheckable image is normal.
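Since the embodiment further below predicts rotation about the x, y, and z axes with three separate support vector machines, the overall decision can be sketched as follows; the model file names and the deflection threshold are assumptions:

import cv2
import numpy as np

# One SVM per rotation axis, trained as described in the embodiment below.
AXIS_MODELS = {axis: cv2.ml.SVM_load("angle_svm_%s.xml" % axis)
               for axis in ("x", "y", "z")}

def angle_ok(keypoint_row: np.ndarray, max_deflection: float = 30.0) -> bool:
    # keypoint_row: 1 x (68*2) float32 feature row of Dlib key points.
    # Returns True only if no axis deflects beyond the (assumed) set value.
    for axis, svm in AXIS_MODELS.items():
        _, angle = svm.predict(keypoint_row)
        if abs(float(angle[0, 0])) > max_deflection:
            return False
    return True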
Preferably, the occlusion detection model is specifically obtained as follows:
collecting a face image data set, and labeling the face coordinate data according to the facial features of each face picture in the data set as [(x, y)_left_eye, (x, y)_right_eye, (x, y)_nose_tip, (x, y)_left_mouth_corner, (x, y)_right_mouth_corner]; if the facial features of a face picture are not occluded, the face coordinates are labeled with the actual coordinates of the features, and if the facial features of a face picture are occluded, the face coordinates are labeled as (-1, -1); and dividing the labeled data set into a training set, a test set, and a verification set at a ratio of 8:1:1;
building a feature extraction network, comprising an input layer, hidden layers, and an output layer, and mapping the training set samples into a deep feature space to obtain the feature values of the faces in the face pictures; fig. 3 is a schematic diagram of the face occlusion detection feature extraction network structure, which is specifically as follows:
INPUT -> CONV1 -> CONV2 -> MAXPOOL -> CONV3 -> MAXPOOL -> CONV4 -> CONV5 -> DENSE -> OUTPUT
wherein INPUT is the input layer data, with an image size of 224 × 224 × 3;
CONV1 is the first convolution layer, with a kernel size of 11 × 11 × 96 and a step size of 1;
CONV2 is the second convolution layer, with a kernel size of 5 × 5 × 128 and a step size of 1;
MAXPOOL is a maximum pooling layer with a step size of 2;
CONV3 is the third convolution layer, with a kernel size of 3 × 3 × 128 and a step size of 1;
CONV4 is the fourth convolution layer, with a kernel size of 3 × 3 × 64 and a step size of 1;
CONV5 is the fifth convolution layer, with a kernel size of 3 × 3 × 64 and a step size of 1;
DENSE is a dense layer with an output size of 112 × 5;
OUTPUT is the output layer, the feature vector extracted by the convolutional neural network.
During training, the occlusion detection model back-propagates to update the weight parameters in the network layers, using as the loss function the mean squared error between the predicted and true values of the facial-feature coordinates in the feature values:
MSE = (1/n) * Σ_{i=1}^{n} [ (x̂_i - x_i)² + (ŷ_i - y_i)² ]
where i denotes the i-th channel feature, x̂_i is the predicted x-axis coordinate value, x_i is the labeled x-axis coordinate value, ŷ_i is the predicted y-axis coordinate value, and y_i is the labeled y-axis coordinate value.
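A sketch of this feature extraction network and its loss in PyTorch; the channel depths follow the listing above, while the absence of padding, the ReLU activations, and the 10-value head (five predicted (x, y) pairs) are assumptions the text does not spell out:

import torch
import torch.nn as nn

class OcclusionNet(nn.Module):
    # Feature extraction network from the embodiment; padding, activations,
    # and the 10-value head (five (x, y) landmark pairs) are assumptions.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=1),    # CONV1
            nn.ReLU(),
            nn.Conv2d(96, 128, kernel_size=5, stride=1),   # CONV2
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),         # MAXPOOL
            nn.Conv2d(128, 128, kernel_size=3, stride=1),  # CONV3
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),         # MAXPOOL
            nn.Conv2d(128, 64, kernel_size=3, stride=1),   # CONV4
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),    # CONV5
            nn.ReLU(),
        )
        # DENSE: LazyLinear infers the flattened input size on first use.
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(10))

    def forward(self, x):
        return self.head(self.features(x))

model = OcclusionNet()
criterion = nn.MSELoss()                 # MSE of predicted vs. true coords
x = torch.randn(1, 3, 224, 224)          # one 224 x 224 x 3 input image
target = torch.randn(1, 10)              # five labeled (x, y) pairs
loss = criterion(model(x), target)
loss.backward()                          # back-propagate to update weights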
Setting the hyper-parameters of the occlusion detection model, which mainly include the initial learning rate, the single-batch training data size, and the number of iterations over the training data; configuring the training set samples to train the occlusion detection model; testing the accuracy of the occlusion detection model on the verification set samples during training and improving the accuracy of the model by adjusting the hyper-parameters; after the optimal occlusion detection model has been tuned, testing the accuracy of the occlusion detection model on the test set samples and observing whether the accuracy fluctuates noticeably, in order to verify the generalization ability of the model; and obtaining and fixing the optimal model parameters of the occlusion detection model.
Preferably, the expression detection model is specifically obtained as follows:
collecting a face image data set and obtaining 56 face key point coordinates in the data set with the Dlib face detection tool, the selected points being those marked with serial numbers in FIG. 2, specifically:
[4,5,6,7,8,9,10,11,12,17,18,19,20,21,22,23,24,25,26,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67],
these being mainly the eyes, the nose, the mouth, and points near these facial features; they are selected as feature data because these regions are likely to shift from their original positions when a large expression is made. The expressions of the face pictures in the data set are labeled as "exaggerated expression" and "normal expression", corresponding to label 0 and label 1 respectively; the feature points of "exaggerated expression" and "normal expression" are selected from the face key points, and the feature point coordinates are spliced into a 56 × 2 feature matrix; the labeled data set is divided into a training set, a test set, and a verification set at a ratio of 8:1:1. A support vector machine is constructed using the ml machine-learning module in OpenCV; the type of the support vector machine is C_SVC (C-Support Vector Classification), i.e. an n-class classifier (n ≥ 2) that allows imperfect separation of classes and applies a penalty coefficient C to outliers; the kernel function is a LINEAR kernel, which performs no feature-mapping operation and classifies linearly in the original feature space, making it the fastest-running kernel option. The support vector machine is trained, verified, and tested to obtain a fixed optimal expression detection model, specifically as follows: setting the iteration-termination parameters of the support vector machine, with a maximum of 1000 iterations and a highest accuracy precision of 1e-6; when the number of iterations or the accuracy reaches the set parameter value, iteration stops and the model is kept; and the accuracy of the model is tested and determined on the test set data, observing whether it fluctuates noticeably, to verify the generalization ability of the model.
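A sketch of this SVM setup with OpenCV's ml module in Python; the training arrays here are random placeholders, while the type, kernel, and termination criteria follow the embodiment:

import cv2
import numpy as np

# Placeholder training data: N samples of 56 key points flattened to 112
# floats, with labels 0 ("exaggerated expression") or 1 ("normal expression").
train_x = np.random.rand(200, 112).astype(np.float32)
train_y = np.random.randint(0, 2, (200, 1)).astype(np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)        # C-Support Vector Classification
svm.setKernel(cv2.ml.SVM_LINEAR)     # linear kernel: no feature mapping
svm.setC(1.0)                        # penalty coefficient C (assumed value)
# Stop after 1000 iterations or once the precision reaches 1e-6.
svm.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
                     1000, 1e-6))
svm.train(train_x, cv2.ml.ROW_SAMPLE, train_y)
svm.save("expression_svm.xml")       # fixed model, loaded at detection time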
Preferably, the angle detection model is specifically obtained as follows:
collecting a face image data set and obtaining 68 face key point coordinates in the data set with the Dlib face detection tool; labeling the angles of the face pictures in the data set in a three-dimensional space coordinate format (X, Y, Z); selecting the angle feature points of the face in three-dimensional space from the face key points and splicing the feature point coordinates into a 68 × 2 feature matrix; and dividing the labeled data set into a training set, a test set, and a verification set at a ratio of 8:1:1. Support vector machines are constructed: the face rotation directions, as shown in fig. 6, are divided into rotation about the x-axis, the y-axis, and the z-axis, so three support vector machine models for the x-, y-, and z-axes must be established to predict the face rotation angles in the three directions respectively; the input data are the face key points, and the output data are the face angles. The support vector machines use the ml machine-learning module in OpenCV; the type of the support vector machine is C_SVC (C-Support Vector Classification), i.e. an n-class classifier (n ≥ 2) that allows imperfect separation of classes and applies a penalty coefficient C to outliers; the kernel function is a LINEAR kernel, which performs no feature-mapping operation and classifies linearly in the original feature space, making it the fastest-running kernel option. The support vector machines are trained, verified, and tested to obtain fixed optimal angle detection models, specifically as follows: setting the iteration-termination parameters of the support vector machines, with a maximum of 1000 iterations and a highest accuracy precision of 1e-6; when the number of iterations or the accuracy reaches the set parameter values, iteration stops and the models are kept; and the accuracy of the models is tested and determined on the test set data, observing whether it fluctuates noticeably, to verify the generalization ability of the models.
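A sketch of the three-axis training loop under the same OpenCV assumptions; treating the per-axis angles as discrete class labels is itself an assumption that follows from the embodiment's choice of the C_SVC classifier type:

import cv2
import numpy as np

# Placeholder training data: N samples of 68 key points flattened to 136
# floats; integer per-axis angle labels (C_SVC is a classifier, so the
# angles are treated as discrete classes here).
train_x = np.random.rand(300, 136).astype(np.float32)
angle_labels = {axis: np.random.randint(-90, 91, (300, 1)).astype(np.int32)
                for axis in ("x", "y", "z")}

# One model per rotation axis, as in fig. 6.
for axis, train_y in angle_labels.items():
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.setC(1.0)  # assumed penalty coefficient
    svm.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
                         1000, 1e-6))
    svm.train(train_x, cv2.ml.ROW_SAMPLE, train_y)
    svm.save("angle_svm_%s.xml" % axis)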
The above description is only an embodiment of the present invention and is not intended to limit the scope of the invention; all equivalent structural or process modifications made using the contents of this specification and the drawings, whether applied directly or indirectly in other related technical fields, are likewise included in the protection scope of the invention.

Claims (9)

1. A face image quality filtering method, characterized by comprising the following steps:
acquiring a face image to be detected;
performing blur detection on the face image to be detected to obtain a blur detection result;
performing brightness detection on the face image to be detected to obtain a brightness detection result;
taking a face image to be detected with a qualified blur detection result or a qualified brightness detection result as a recheckable image;
performing occlusion detection on the recheckable image: inputting the face region image of the recheckable image into an occlusion detection model to obtain an occlusion detection result, and judging from the detection result whether the face in the recheckable image is occluded;
performing expression detection on the recheckable image: obtaining the face key points in the recheckable image, selecting expression feature points from the face key points and substituting them into an expression detection model to obtain an expression detection result, and judging from the detection result whether the face in the recheckable image shows an exaggerated expression;
performing angle detection on the recheckable image: obtaining the face key points in the recheckable image, selecting angle feature points from the face key points and substituting them into an angle detection model to obtain an angle detection result, and judging from the detection result whether the face angle in the recheckable image deflects too much;
classifying the images to be detected by their different quality problems according to the results of the blur, brightness, occlusion, expression, and angle detection, and filtering the images to be detected according to the images required.
2. The face image quality filtering method according to claim 1, characterized in that the blur detection specifically comprises:
extracting the horizontal and vertical gradients of the image to be detected with a Sobel operator and calculating the blur degree of the image to be detected; performing a Gaussian blur operation on the image to be detected, extracting the gradients with the Sobel operator again, and recalculating the blur degree; taking the difference of the blur degrees obtained by the two detections: if the difference is smaller than a threshold T_blur, the image is blurred; if the difference is greater than the threshold T_blur, the image is clear.
3. The face image quality filtering method according to claim 1, characterized in that the brightness detection specifically comprises:
converting the image to be detected that has passed blur detection from an RGB image into an HSV image and separating the V-channel data; calculating the mean of the V-channel data: if it is greater than a threshold T_global_A or smaller than a threshold T_global_B, the global brightness is unqualified; if the mean of the V channel is smaller than T_global_A and greater than T_global_B, calculating local means of the V-channel data with a sliding-window algorithm: if a local mean is greater than a threshold T_local_A or smaller than a threshold T_local_B, the local brightness is unqualified.
4. The face image quality filtering method according to claim 1, characterized in that the occlusion detection model is specifically obtained as follows:
collecting a face image data set, and labeling the face coordinate data according to the facial features of each face picture in the data set as [(x, y)_left_eye, (x, y)_right_eye, (x, y)_nose_tip, (x, y)_left_mouth_corner, (x, y)_right_mouth_corner]; if the facial features of a face picture are not occluded, the face coordinates are labeled with the actual coordinates of the features, and if the facial features of a face picture are occluded, the face coordinates are labeled with fixed-value coordinates; dividing the labeled data set into a training set, a test set, and a verification set; building a feature extraction network and mapping the training set samples into a deep feature space to obtain the face image feature values; during training, the occlusion detection model back-propagates to update the weight parameters in the network layers, using as the loss function the mean squared error between the predicted and true values of the facial-feature coordinates in the feature values; setting the hyper-parameters of the occlusion detection model, configuring the training set samples to train the occlusion detection model, testing the accuracy of the occlusion detection model on the verification set samples during training, and improving the accuracy of the model by adjusting the hyper-parameters; after the optimal occlusion detection model has been tuned, testing the accuracy of the occlusion detection model on the test set samples and observing whether the accuracy fluctuates noticeably, in order to verify the generalization ability of the model; and obtaining and fixing the optimal model parameters of the occlusion detection model.
5. The face image quality filtering method according to claim 4, characterized in that the specific occlusion detection process is as follows:
cropping the face region image from the recheckable image and passing it into the occlusion detection model; if the face is not occluded, the face coordinate result returned by the occlusion detection model consists of the actual coordinates of the facial features, and if the face is occluded, the returned face coordinate result consists of the fixed-value coordinates; and judging whether the face is occluded according to whether the face coordinate result returned by the occlusion detection model contains a fixed value.
6. The face image quality filtering method according to claim 1, characterized in that the expression detection model is specifically obtained as follows:
collecting a face image data set; obtaining the face key point coordinates in the data set with a face detection tool; labeling the expressions of the face pictures in the data set as "exaggerated expression" and "normal expression", corresponding to label 0 and label 1 respectively; selecting the feature points of "exaggerated expression" and "normal expression" from the face key points and splicing the feature point coordinates into a feature matrix; dividing the labeled data set into a training set, a test set, and a verification set; and constructing a support vector machine and training, verifying, and testing it to obtain a fixed optimal expression detection model.
7. The face image quality filtering method according to claim 6, characterized in that the specific expression detection flow is as follows:
obtaining the face key points in the recheckable image with a face detection tool, passing the face key point information into the expression detection model, and judging from the result returned by the expression detection model whether the face in the recheckable image shows an exaggerated expression; if the return value is 0, the face in the recheckable image shows an exaggerated expression, and if the return value is 1, the face in the recheckable image shows a normal expression.
8. The face image quality filtering method according to claim 1, characterized in that the angle detection model is specifically obtained as follows:
collecting a face image data set; obtaining the face key point coordinates in the data set with a face detection tool; labeling the angles of the face pictures in the data set in a three-dimensional space coordinate format (X, Y, Z); selecting the angle feature points of the face in three-dimensional space from the face key points and splicing the feature point coordinates into a feature matrix; dividing the labeled data set into a training set, a test set, and a verification set; and constructing a support vector machine and training, verifying, and testing it to obtain a fixed optimal angle detection model.
9. The face image quality filtering method according to claim 8, characterized in that the specific angle detection process is as follows:
obtaining the face key points in the recheckable image with a face detection tool, passing the face key point information into the angle detection model, and judging from the result returned by the angle detection model whether the face in the recheckable image is rotated at a large angle or tilted; if the returned angle data is greater than the set value, the face angle in the recheckable image deflects too much.
Application CN202211256212.7A (priority and filing date 2022-10-13), Face image quality filtering method, status: Pending, publication CN115661894A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211256212.7A CN115661894A (en) 2022-10-13 2022-10-13 Face image quality filtering method

Publications (1)

Publication Number Publication Date
CN115661894A (en) 2023-01-31

Family

ID=84988223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211256212.7A Pending CN115661894A (en) 2022-10-13 2022-10-13 Face image quality filtering method

Country Status (1)

Country Link
CN (1) CN115661894A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245883A (en) * 2023-05-11 2023-06-09 南京市智慧医疗投资运营服务有限公司 Image quality detection and image correction method for bill


Similar Documents

Publication Publication Date Title
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN105139004B (en) Facial expression recognizing method based on video sequence
CN111445459B (en) Image defect detection method and system based on depth twin network
CN110675368B (en) Cell image semantic segmentation method integrating image segmentation and classification
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN113592911B (en) Apparent enhanced depth target tracking method
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
CN113449606B (en) Target object identification method and device, computer equipment and storage medium
CN112990392A (en) New material floor defect target detection system based on improved YOLOv5 algorithm
CN112101208A (en) Feature series fusion gesture recognition method and device for elderly people
CN113298809B (en) Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation
CN112508836A (en) Deep learning image enhancement method based on label frame splicing
CN115661894A (en) Face image quality filtering method
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
CN112766028A (en) Face fuzzy processing method and device, electronic equipment and storage medium
CN111626197B (en) Recognition method based on human behavior recognition network model
CN111553250B (en) Accurate facial paralysis degree evaluation method and device based on face characteristic points
CN110910497B (en) Method and system for realizing augmented reality map
CN116580121B (en) Method and system for generating 2D model by single drawing based on deep learning
CN109064497B (en) Video tracking method based on color clustering supplementary learning
CN113903074B (en) Eye attribute classification method, device and storage medium
CN108765384A (en) A kind of conspicuousness detection method of joint manifold ranking and improvement convex closure
CN111881732B (en) SVM (support vector machine) -based face quality evaluation method
CN114548250A (en) Mobile phone appearance detection method and device based on data analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination