CN115393695A - Method and device for evaluating quality of face image, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115393695A
CN115393695A (application CN202110553078.6A)
Authority
CN
China
Prior art keywords: face, evaluation result, face image, index, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110553078.6A
Other languages
Chinese (zh)
Inventor
周世峰
张美鸥
段云峰
尚晶
陶涛
刘虹
江勇
徐海勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Information Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority application: CN202110553078.6A
Publication: CN115393695A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20132 — Image cropping
    • G06T 2207/30168 — Image quality inspection
    • G06T 2207/30201 — Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a face image quality evaluation method and apparatus, an electronic device, and a storage medium. The face image quality evaluation method includes: obtaining face image index evaluation results based on a target face image, where the face image index evaluation results include at least one of a face occlusion index evaluation result, a face state index evaluation result, and a face clarity index evaluation result; and, when all of the face image index evaluation results meet the expected results, obtaining a face image quality evaluation result based on the face image index evaluation results. The method overcomes the inability of the prior art to meet high-quality face quality evaluation requirements and achieves high-quality face quality evaluation.

Description

Face image quality evaluation method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a method and a device for evaluating the quality of a face image, electronic equipment and a storage medium.
Background
Existing face quality evaluation methods include methods based on face quality evaluation standards, feature-based methods, and the like. The standard-based approach is currently the most common; the face quality evaluation standards include indexes such as face brightness, contrast, sharpness, pose, and occlusion. These methods divide into single-criterion and multi-criterion approaches: single-criterion methods select only one evaluation criterion to assess face quality, while multi-criterion methods select several evaluation criteria and fuse them by weighted averaging into a single evaluation index. Most existing algorithms select only a few indexes such as face brightness, sharpness, and pose.
Feature-based evaluation has received less research attention. It mainly extracts texture features such as HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns), and GIST descriptors, trains a classifier per feature to obtain an individual feature quality score, and then fuses the feature vectors to obtain an overall face quality score.
The existing face quality evaluation methods described above mainly have the following defects:
(1) The evaluation criteria are limited. Most existing methods consider only a few quality criteria such as sharpness, pose, and illumination, or even a single evaluation index, and therefore cannot meet the requirements of high-quality face evaluation.
(2) The fusion approach is limited. Fusing multiple evaluation indexes by weighted averaging assigns some indexes low weights, which effectively lowers the standard and cannot satisfy strict quality evaluation requirements.
Disclosure of Invention
The invention provides a face image quality evaluation method and apparatus, an electronic device, and a storage medium, to overcome the inability of the prior art to meet high-quality face quality evaluation requirements and to achieve high-quality face quality evaluation.
In a first aspect, the present invention provides a method for evaluating quality of a face image, including:
obtaining face image index evaluation results based on a target face image, where the face image index evaluation results include at least one of a face occlusion index evaluation result, a face state index evaluation result, and a face clarity index evaluation result;
and, when all of the face image index evaluation results meet the expected results, obtaining a face image quality evaluation result based on the face image index evaluation results.
In one embodiment, the obtaining a face image index evaluation result based on the target face image includes:
inputting the target face image into a face occlusion detection model to obtain the face occlusion index evaluation result;
where the face occlusion detection model is obtained by training with the target face image as a sample and a predetermined face occlusion index evaluation result corresponding to the target face image as a sample label.
In an embodiment, the training with the target face image as a sample and the predetermined face occlusion index evaluation result corresponding to the target face image as a sample label includes:
obtaining a training data set based on a first occluder to be detected and the target face image;
training, based on the training data set, to detect the first occluder occluding the target face image;
obtaining the occlusion overlap ratio between the target face image and the first occluder;
and obtaining the face occlusion index evaluation result based on the occlusion overlap ratio.
In an embodiment, the training, with the target face image as a sample and a predetermined face occlusion index evaluation result corresponding to the target face image as a sample label, includes:
inputting the target face image into a feature extraction layer to obtain a face feature image;
inputting the face feature map into a full connection layer to obtain a label classification result corresponding to the face feature map;
and obtaining the evaluation result of the face shielding index based on the label classification result.
In one embodiment, the result of evaluating the face occlusion index includes: a face watermark shielding evaluation result and/or a face real object shielding evaluation result;
the evaluation result of the human face state index comprises the following steps: a human face posture evaluation result and/or a human eye opening and closing state evaluation result;
the evaluation result of the human face clearness index comprises the following steps: at least one of a face yin-yang evaluation result, a face sharpness evaluation result, a face contrast evaluation result and a face illumination evaluation result.
In one embodiment, the obtaining a facial image index evaluation result based on the target facial image includes:
acquiring the opening and closing state of human eyes based on the target human face image;
obtaining an evaluation result of the human eye opening and closing state based on the human eye opening and closing state;
or, alternatively,
obtaining a human face three-dimensional model based on the target human face image;
obtaining a face organ angle based on the face three-dimensional model;
and obtaining the human face posture evaluation result based on the human face organ angle.
In one embodiment, further comprising:
acquiring a face image to be evaluated;
obtaining a target face surrounding frame based on the face image to be evaluated;
and cutting the face image to be evaluated based on the target face surrounding frame to obtain the target face image.
In one embodiment, the obtaining a facial image index evaluation result based on the target facial image includes:
obtaining a face image index evaluation result with a high priority based on the target face image;
and obtaining a face image index evaluation result with a low priority based on the target face image, when it is determined that the high-priority face image index evaluation result meets the expected result.
In one embodiment, the obtaining a facial image quality evaluation result based on the facial image index evaluation result includes:
and obtaining the quality evaluation result of the face image based on the index evaluation result of the face image and the target weight value of the evaluation result.
In a second aspect, the present invention provides a face image quality evaluation apparatus, including:
the index evaluation result acquisition module is used for acquiring a face image index evaluation result based on the target face image; wherein, the evaluation result of the human face image index comprises: at least one of a face occlusion index evaluation result, a face state index evaluation result and a face clarity index evaluation result;
and the quality evaluation result acquisition module is used for obtaining a face image quality evaluation result based on the face image index evaluation result under the condition that the face image index evaluation results all accord with expected results.
In a third aspect, the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for evaluating the quality of a face image according to any one of the above methods when executing the computer program.
In a fourth aspect, the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the method for evaluating the quality of a face image as described in any one of the above.
In the face image quality evaluation method provided by the invention, face image index evaluation results are obtained based on the target face image, and a face image quality evaluation result is obtained from those index evaluation results only when all of them meet the expected results. This contrasts with prior-art approaches to face quality evaluation standards, which obtain a final evaluation result by directly performing a weighted calculation over multiple index evaluation results.
The method provided by the invention obtains the face image quality evaluation result by combining the face image index evaluation result under the condition that each face image index evaluation result is determined to accord with the expected result, thereby ensuring that each face image index evaluation result meets the requirement and further improving the standard of the face image quality evaluation result.
Moreover, in the evaluation method provided by the invention, the face image index evaluation results include a face occlusion index evaluation result. Compared with prior-art evaluation methods, adding and utilizing the face occlusion index evaluation result makes it possible to satisfy high-quality face evaluation requirements and achieve high-quality face quality evaluation.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for evaluating the quality of a human face image according to the present invention;
FIG. 2 is a schematic diagram of a distribution range of normal human face illumination values provided by the present invention;
FIG. 3 is a schematic diagram of a human face sharpness detection process provided by the present invention;
FIG. 4 is a schematic flow chart of yin-yang face detection provided by the present invention;
FIG. 5 is a schematic diagram of face detection with the MTCNN (Multi-task Cascaded Convolutional Networks) algorithm provided by the present invention;
FIG. 6 is a schematic block diagram of a human face image quality evaluation apparatus provided by the present invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method, apparatus, electronic device and storage medium for evaluating the quality of a face image according to the present invention are described below with reference to fig. 1 to 7.
The invention provides a human face image quality evaluation method, as shown in figure 1, the human face image quality evaluation method comprises the following steps:
step 110, obtaining a face image index evaluation result based on the target face image; the human face image index evaluation result comprises the following steps: at least one of a face occlusion index evaluation result, a face state index evaluation result, and a face clarity index evaluation result.
In some embodiments, the evaluation result of the face occlusion index includes: and the face watermark occlusion evaluation result and/or the face real object occlusion evaluation result.
The face occlusion index evaluation result applies to cases where the face is occluded by an obstruction such as a hat. For example, when a camera photographs a person entering a target area and most of the face is covered by a hat or similar obstruction, the face cannot be recognized, so the image is discarded without further evaluation.
When the hat only partially covers the face and does not prevent face recognition, the face image can still be recognized.
Therefore, the face occlusion index evaluation result can be used to judge whether the corresponding face image is usable for face recognition to establish the identity of the person concerned.
When the face is occluded by an obstruction such as a hat, both the efficiency of face recognition and its outcome are affected; using the face occlusion index evaluation result as a reference factor therefore improves the accuracy of face recognition.
The evaluation result of the face state index comprises the following steps: human face posture evaluation results and/or human eye opening and closing state evaluation results.
The face pose evaluation result may evaluate aspects such as the deflection angle of the face: when the face is turned toward the camera at different angles, the accuracy and efficiency of face recognition are affected, and in turn the face image quality evaluation result is affected.
Similarly, when the human eyes are opened or closed, the accuracy and efficiency of the face recognition are also affected.
Therefore, the human face posture evaluation result and/or the human eye opening and closing state evaluation result can be referred to determine the human face state index evaluation result, and whether the human face image can be used for subsequent human face recognition work or not is determined according to the human face state index evaluation result.
The evaluation result of the human face clearness index comprises the following steps: at least one of a face yin-yang evaluation result, a face definition evaluation result, a face contrast evaluation result and a face illumination evaluation result.
The face yin-yang evaluation result, the face definition evaluation result, the face contrast evaluation result and the face illumination evaluation result also influence the accuracy and the efficiency of face recognition, so that the evaluation results are used as a reference factor, and the accuracy of face recognition is improved.
And 120, obtaining a face image quality evaluation result based on the face image index evaluation result under the condition that the face image index evaluation results all accord with the expected results.
It should be noted that the face image quality evaluation result determines whether the face image can be used in scenarios where identity is established through face recognition: when the result meets the expected standard, the corresponding face image can be applied in such scenarios; otherwise, it cannot.
In some embodiments, when the facial image index evaluation results all meet the expected results, obtaining a facial image quality evaluation result based on the facial image index evaluation results, including:
and under the condition that the human face image index evaluation results all accord with the expected results, obtaining a human face image quality evaluation result based on the multiple human face image index evaluation results and the weights of the multiple human face image index evaluation results.
It should be noted that the weights of the multiple facial image index evaluation results correspond to the priorities of the multiple facial image index evaluation results, and the corresponding weight of the facial image index evaluation result with the higher priority is larger.
For example, among the face shielding index evaluation result, the face state index evaluation result, and the face clarity index evaluation result, the priority of the face shielding index evaluation result is the highest, and the priority of the face clarity index evaluation result is the lowest.
The priority levels of the index evaluation results are ranked by importance: the more important an index evaluation result, the higher its priority; the less important, the lower.
Because the face occlusion index evaluation result has the largest influence on face image recognition, it has the highest priority; the face clarity index evaluation result has a relatively smaller influence on face image recognition and therefore has the lowest priority.
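The gate-then-weight scheme described above can be sketched as follows. This is a minimal illustrative sketch: the index names, weights, scores, and the rejection behavior shown are assumptions for illustration, not values taken from the patent.

```python
def evaluate_face_quality(index_results, weights):
    """Gate-then-weight evaluation sketch.

    index_results: dict of index name -> (score in [0, 1], passed: bool),
    ordered by priority (highest first). weights: dict of index name -> weight.
    """
    # Gating step: every index must meet its expected result; if any index
    # fails, reject outright instead of letting weighting mask the failure.
    for name, (_, passed) in index_results.items():
        if not passed:
            return None  # image rejected; no quality score produced
    # Weighting step: higher-priority indexes carry larger weights.
    total_w = sum(weights[name] for name in index_results)
    return sum(score * weights[name]
               for name, (score, _) in index_results.items()) / total_w

# Hypothetical example: occlusion carries the largest weight and clarity
# the smallest, mirroring the priority ordering described in the text.
results = {"occlusion": (0.9, True), "state": (0.8, True), "clarity": (0.7, True)}
weights = {"occlusion": 0.5, "state": 0.3, "clarity": 0.2}
quality = evaluate_face_quality(results, weights)
```

If any single index fails its gate, the function returns `None` regardless of the other scores, which is the behavior that distinguishes this scheme from plain weighted averaging.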
In some embodiments, obtaining a face image index evaluation result based on the target face image includes:
inputting the target face image into a face occlusion detection model to obtain the face occlusion index evaluation result.
The face occlusion detection model is obtained by training with the target face image as a sample and the predetermined face occlusion index evaluation result corresponding to the target face image as a sample label.
In some embodiments, training with the target face image as a sample and the predetermined face occlusion index evaluation result corresponding to the target face image as a sample label includes:
obtaining a training data set based on a first occluder to be detected and the target face image;
training, based on the training data set, to detect the first occluder occluding the target face image;
obtaining the occlusion overlap ratio between the target face image and the first occluder;
and obtaining the face occlusion index evaluation result based on the occlusion overlap ratio.
In some embodiments, training, with a target face image as a sample and a predetermined face occlusion index evaluation result corresponding to the target face image as a sample label, includes:
inputting the target face image into a feature extraction layer to obtain a face feature image;
inputting the face feature map into a full connection layer to obtain a label classification result corresponding to the face feature map;
and obtaining a face shielding index evaluation result based on the label classification result.
In some embodiments, obtaining a facial image index evaluation result based on the target facial image includes:
based on the target face image, the opening and closing state of human eyes is obtained;
obtaining an evaluation result of the open-close state of the human eyes based on the open-close state of the human eyes;
or, alternatively,
obtaining a human face three-dimensional model based on the target human face image;
obtaining a face organ angle based on the face three-dimensional model;
and obtaining a human face posture evaluation result based on the human face organ angle.
In some embodiments, obtaining a facial image index evaluation result based on the target facial image includes:
and obtaining a face illumination evaluation result based on the target face image.
It can be understood that the face illumination evaluation result is a result obtained by evaluating a face illumination index, and the face illumination index refers to the brightness of a face image and can be divided into three cases: normal, overexposed, underexposed.
In this embodiment, the gray-level mean is used as the face illumination index. The gray-level mean i is defined as:

i = (1 / (h · w)) · Σ_{x=1..h} Σ_{y=1..w} I(x, y)

where I(x, y) denotes the pixel value of the face grayscale image, and h and w denote the height and width of the target face image, respectively. The face illumination index ranges from 0 to 255; the larger the value, the stronger the illumination.
Illumination thresholds for a normal face are set, and the illumination index is compared against these thresholds to obtain the face illumination evaluation result.
In experiments, 600 normal, overexposed, and underexposed face images were selected; tests show that the illumination value of a normal face lies between 60 and 180, so the thresholds are set to 60 and 180. FIG. 2 shows the distribution range of normal face illumination values.
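A minimal sketch of this illumination check, assuming the target face image is available as an 8-bit grayscale NumPy array; the 60/180 thresholds are the values given above.

```python
import numpy as np

def illumination_index(gray):
    """Gray-level mean i of a face grayscale image (values 0-255)."""
    return float(gray.mean())

def illumination_label(gray, low=60, high=180):
    # 60 and 180 are the normal-face thresholds given in the text.
    i = illumination_index(gray)
    if i < low:
        return "underexposed"
    if i > high:
        return "overexposed"
    return "normal"

# Synthetic uniform crops standing in for real face images.
dark = np.full((4, 4), 30, dtype=np.uint8)
normal = np.full((4, 4), 120, dtype=np.uint8)
bright = np.full((4, 4), 220, dtype=np.uint8)
```

For a real pipeline the grayscale crop would come from the detected face bounding box rather than a synthetic array.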
In some embodiments, obtaining a facial image index evaluation result based on the target facial image includes:
and obtaining a face contrast evaluation result based on the target face image.
For example, based on the target face image, the target face image contrast is obtained, and the target face image contrast is compared with a contrast threshold to obtain a face contrast evaluation result.
It is understood that face contrast measures the difference in brightness between the brightest white and the darkest black in the light and dark regions of the face, i.e., the magnitude of the grayscale contrast. The larger the grayscale difference range, the larger the contrast; the smaller the range, the smaller the contrast.
In the invention, the root-mean-square contrast is used as the contrast evaluation index c, calculated as:

c = sqrt( (1 / (h · w)) · Σ_{x=1..h} Σ_{y=1..w} (I(x, y) − Ī)² )

where I(x, y) denotes the grayscale pixel value of the face image and Ī denotes the mean pixel value of the image. The face contrast value ranges from 0 to 125. Following the same experimental procedure used to obtain the normal-face illumination values, verification shows that the contrast threshold should be set to 25: a value greater than 25 indicates a normal face image, otherwise a low-contrast face image.
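The root-mean-square contrast check can be sketched as follows, again assuming an 8-bit grayscale NumPy array; the threshold of 25 is the value given above.

```python
import numpy as np

def rms_contrast(gray):
    """Root-mean-square contrast c of a grayscale face image."""
    g = gray.astype(np.float64)
    return float(np.sqrt(np.mean((g - g.mean()) ** 2)))

def is_normal_contrast(gray, threshold=25.0):
    # Threshold 25 is the value reported in the text.
    return rms_contrast(gray) > threshold

# Synthetic examples: a flat patch has zero contrast, a black/white
# checkerboard has maximal RMS contrast (127.5 for 8-bit values).
flat = np.full((4, 4), 100, dtype=np.uint8)
checker = np.zeros((4, 4), dtype=np.uint8)
checker[::2, ::2] = 255
checker[1::2, 1::2] = 255
```
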
In some embodiments, obtaining a facial image index evaluation result based on the target facial image includes:
and obtaining a face definition evaluation result based on the target face image.
It is understood that face sharpness is evaluated with a secondary blur (re-blur) algorithm, a commonly used image sharpness measure. The algorithm applies one pass of blur filtering to the target face image to obtain a blurred version, then compares how neighboring pixel values change between the original target face image and the blurred image, and determines the sharpness value from the magnitude of this change: the greater the variation lost to blurring relative to the original, the sharper the image; conversely, a small relative change indicates a blurry image.
The specific treatment process comprises the following steps.
S1: filtering the vertical and horizontal directions of the target face image respectively to obtain a blurred image B x And B y The following filter template of size 3 × 3 is used:
Figure BDA0003076046730000111
s2: and calculating pixel value changes in the vertical direction and the horizontal direction of the original target face image.
With S I(x) And S I(y) The method respectively represents the pixel change values of the original target face image in the vertical direction and the horizontal direction, and the specific calculation formula is as follows:
Figure BDA0003076046730000112
s3: and respectively calculating the pixel value change of the vertically and horizontally blurred face images.
With S B(x) And S B(y) The pixel value change values of the vertically and horizontally blurred face image are respectively represented by the following specific calculation formula:
Figure BDA0003076046730000113
s4: and calculating the difference value of the original target person Oldham image and the blurred image pixel value change.
By V x And V x The difference values of the pixel value changes of the original target face image and the blurred image in the vertical direction and the horizontal direction are respectively represented, and the specific calculation formula is as follows:
Figure BDA0003076046730000114
Figure BDA0003076046730000115
s5: calculating the face clarity value clarity, wherein a specific calculation formula is as follows:
Figure BDA0003076046730000121
the process of detecting the face sharpness is shown in fig. 3.
S6: and judging whether the definition value is greater than the target definition value, for example, whether the definition value is greater than 0.38, if so, determining that the corresponding face image is a normal face.
In some embodiments, obtaining a facial image index evaluation result based on the target facial image includes:
obtaining a human eye state evaluation result based on the target human face image; wherein, the human eye state evaluation result belongs to one of the human face state index evaluation results.
In the above embodiment, judging the eye state means first extracting the eye regions from the target face image and then judging whether each eye region image is in an open or closed state.
In unconstrained capture scenarios, head movement, rotation, or moving toward and away from the camera changes the size and angle of the face in the image, but the aspect ratio of the eyes varies only within a small range.
Therefore, this embodiment determines the eye state using the eye aspect ratio, where the eye height is the maximum distance between the upper and lower eyelids and the eye width is the distance between the left and right canthi. When the eyes are closed, the aspect ratio λ is smallest; normally, the eye aspect ratio λ satisfies 0 < λ < 1.
In this embodiment, a face key point location algorithm first extracts a set of face key points, for example the standard 68 points, from which the eye key points are taken, for example 12 points, 6 for each eye.
Finally, the eye aspect ratio is calculated from the eye key points. Experimental verification shows that the aspect ratio of a normally open eye concentrates near 0.35, staying below 0.4 and above 0.2; the eye-state threshold in this invention is therefore set to 0.2, and a value above 0.2 indicates the eye is normally open.
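The eye aspect ratio can be sketched with the standard two-vertical-distance EAR formulation over 6 landmarks per eye (an assumption: the patent states only a single height-to-width ratio, of which this is a close variant):

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6 (x, y) landmarks ordered as in the dlib 68-point layout:
    [outer corner, upper-1, upper-2, inner corner, lower-2, lower-1]."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical eyelid distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical eyelid distance
    h = np.linalg.norm(eye[0] - eye[3])   # corner-to-corner width
    return float((v1 + v2) / (2.0 * h))

open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.2], [4, 0.2], [6, 0], [4, -0.2], [2, -0.2]], float)
assert eye_aspect_ratio(open_eye) > 0.2    # open: above the 0.2 threshold
assert eye_aspect_ratio(closed_eye) < 0.2  # closed: below it
```

The landmark ordering above is hypothetical and must match whatever key point locator is actually used.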
In some embodiments, obtaining a facial image index evaluation result based on the target facial image includes:
and obtaining a face yin-yang evaluation result (i.e., whether one side of the face is noticeably brighter than the other) based on the target face image.
In the above embodiment, the nose-bridge key points are taken from the face key points extracted in the previous step, a straight line dividing the left and right halves of the face is fitted through them, a division mask is generated, and the left and right face regions are extracted by applying the mask to the original image.
Finally, both half-face regions are converted to the HSV (hue, saturation, value) color space. Unlike the RGB channels, the V channel of HSV directly represents image brightness, so the V channel is extracted for the judgment. Fig. 4 shows the steps of yin-yang face detection.
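A minimal sketch of the brightness comparison, assuming a simple mid-width split rather than the fitted nose-bridge line described above, and using the identity V = max(R, G, B) for the HSV value channel:

```python
import numpy as np

def yin_yang_score(bgr_face: np.ndarray) -> float:
    """Ratio of mean V-channel brightness between the darker and the brighter
    half of a face image split at mid-width; 1.0 means evenly lit."""
    v = bgr_face.max(axis=2).astype(np.float64)  # V of HSV = max over channels
    mid = v.shape[1] // 2
    left, right = v[:, :mid].mean(), v[:, mid:].mean()
    return min(left, right) / max(left, right)

even = np.full((10, 10, 3), 100, np.uint8)
uneven = even.copy()
uneven[:, 5:] = 230  # brighten the right half
assert yin_yang_score(even) == 1.0
assert yin_yang_score(uneven) < 0.5
```

A yin-yang face would then be flagged when this score falls below some threshold; the threshold itself is not given in the source.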
In some embodiments, the obtaining of the evaluation result of the face image index based on the target face image includes:
and obtaining a human face posture evaluation result based on the target human face image.
In the above embodiment, face pose estimation is based on the 68 face key points, and the pitch angle, tilt angle, and horizontal rotation angle of the face are calculated as the face pose evaluation indexes. The specific processing steps are as follows.
S1: A 3D face model (i.e., a three-dimensional model of the face) is defined; this invention defines a 3D model of 6 face key points.
The coordinates of the 6 face key points in three-dimensional space are specifically as follows:
nose tip: (0.0, 0.0, 0.0);
chin: (0.0, -330.0, -65.0);
left canthus: (-225.0, 170.0, -135.0);
right canthus: (225.0, 170.0, -135.0);
left mouth corner: (-150.0, -150.0, -125.0);
right mouth corner: (150.0, -150.0, -125.0).
S2: a rotation matrix is calculated.
The result of face pose estimation is computed, i.e., the transformation from the 3D model to the 6 face key points in the picture is determined; this transformation contains rotation and translation information. The Direct Linear Transform (DLT) method is mainly used here to solve for the rotation and translation matrices. The computed rotation matrix R is written as:
R = | r11  r12  r13 |
    | r21  r22  r23 |
    | r31  r32  r33 |
S3: The Euler angles (i.e., the face pose angles) are calculated from the rotation matrix.
Let θx, θy and θz respectively denote the pitch angle, horizontal rotation angle and tilt angle; writing rij for the entry of R in row i and column j, the specific calculation formulas are:
θx = atan2(r32, r33)
θy = atan2(-r31, sqrt(r32² + r33²))
θz = atan2(r21, r11)
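The Euler-angle extraction can be checked with a small numpy round trip; the Z-Y-X rotation order below is an assumption, since the source does not state its convention:

```python
import numpy as np

def euler_from_rotation(R: np.ndarray):
    """Recover (theta_x, theta_y, theta_z) from R = Rz @ Ry @ Rx."""
    theta_x = np.arctan2(R[2, 1], R[2, 2])
    theta_y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    theta_z = np.arctan2(R[1, 0], R[0, 0])
    return theta_x, theta_y, theta_z

def rot(theta_x, theta_y, theta_z):
    """Build R = Rz @ Ry @ Rx from the three Euler angles (radians)."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

angles = (0.3, -0.2, 0.1)  # pitch, yaw, roll in radians
assert np.allclose(euler_from_rotation(rot(*angles)), angles)
```

In a full pipeline R itself would come from solving the 3D-to-2D correspondence of the 6 key points (e.g., a PnP solver), which is not reproduced here.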
in some embodiments, the obtaining of the evaluation result of the face image index based on the target face image includes:
obtaining a face character watermark shielding detection evaluation result based on the target face image; the face character watermark shielding detection evaluation result is one of face shielding index evaluation results.
The text watermark lies on the face image; it may be used, for example, to mark the purpose of the image or the organization that captured it.
In this embodiment, a YOLO (you only look once) neural network model (e.g., YOLO v3 network model) is used to detect the position of the text watermark.
S1: Add the text watermark to be detected at a random position of each face image to construct a training data set.
S2: Train the YOLO neural network model on this data set to locate the position of the text watermark in the face image.
S3: Calculate the occlusion overlap rate between the face bounding box and the text watermark bounding box as the text-watermark occlusion index, and judge whether it is smaller than the target overlap rate threshold; an overlap rate below the threshold indicates no occlusion.
A common way to determine whether one box overlaps another is to compute the IoU (Intersection over Union), i.e., the ratio of the intersection to the union of the two target boxes.
In this invention, since the bounding box of the text watermark may be small, the directly computed IoU would also be small. The invention therefore instead computes the ratio of the overlap area between the text watermark box and the face box to the area of the text watermark box, as follows:
overlap rate = Area(A ∩ B) / Area(A)

where A denotes the text watermark box and B denotes the face box.
In this embodiment, when the overlap ratio is greater than 0.1, it is considered that the text watermark blocks the face.
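A sketch of the overlap rate Area(A ∩ B) / Area(A) for axis-aligned boxes; the (x1, y1, x2, y2) box format is an assumption:

```python
def overlap_over_watermark(a, b) -> float:
    """a, b: boxes as (x1, y1, x2, y2); returns area(a ∩ b) / area(a),
    i.e. the fraction of the watermark box a covered by the face box b."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return (ix * iy) / area_a

watermark = (0, 0, 10, 10)  # 100 px^2
face = (5, 0, 30, 30)       # overlaps the right half of the watermark
assert overlap_over_watermark(watermark, face) == 0.5
assert overlap_over_watermark(watermark, face) > 0.1  # counts as occluded
```

Unlike the symmetric IoU, this ratio stays meaningful when the watermark box is much smaller than the face box, which is exactly the case motivating it above.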
In some embodiments, the obtaining of the evaluation result of the face image index based on the target face image includes:
and obtaining a face cap occlusion evaluation result and/or a moire (reticulate) watermark occlusion evaluation result based on the target face image. The face cap occlusion evaluation result is a physical-object occlusion evaluation result, and the moire watermark occlusion evaluation result is a watermark occlusion evaluation result. Moire watermarks can be used for face image anti-counterfeiting.
Face recognition requires the completeness of the facial features, so face cap occlusion here mainly means that the cap occludes the eyes; a face wearing a cap that does not cover the eyes is judged as not occluded.
Moire watermark occlusion means the face is covered by a moire watermark. The invention designs a multi-label classification network to classify the two occlusion conditions. The class labels are one-hot encoded: a position is 1 where the corresponding occlusion is present and 0 where it is not. In addition, a normal label indicates no occlusion at all; when at least one of the cap occlusion and moire occlusion labels is 1, the normal position is 0.
The multi-label classification network uses three convolutional blocks as its feature extraction layer. The network input, a 224 × 224 × 3 face image, is reduced to a feature map by the feature extraction layer and then passed through two fully connected layers. The network output uses the sigmoid activation function, calculated as follows:
f(x) = 1 / (1 + e^(-x))
As x → -∞, f(x) → 0, and as x → +∞, f(x) → 1, so the sigmoid function normalizes each input to a probability in the range 0 to 1, turning each class into a binary decision. Training uses the cross-entropy loss function, calculated as follows:
loss = -Σ_{c=1}^{M} y_c · log(p_c)
where M denotes the number of classes, y_c is the label indicating whether the sample belongs to class c, and p_c is the predicted probability that the sample belongs to class c.
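A numpy sketch of the sigmoid output and a per-class binary cross-entropy, one plausible reading of the multi-label loss above; the label layout [cap, moire, normal] is assumed from the description:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multilabel_bce(y, p, eps=1e-12):
    """Binary cross-entropy summed over classes.
    y: 0/1 labels per class, p: sigmoid outputs in (0, 1)."""
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).sum())

# Labels for [cap occlusion, moire occlusion, normal]
y = np.array([1.0, 0.0, 0.0])         # cap occludes, so "normal" = 0
logits = np.array([4.0, -4.0, -4.0])  # a confident, correct prediction
p = sigmoid(logits)
assert (p > 0).all() and (p < 1).all()
# The correct prediction incurs a much lower loss than its inverse.
assert multilabel_bce(y, p) < multilabel_bce(y, sigmoid(-logits))
```

Treating each class as an independent binary decision is what lets the network flag cap and moire occlusion simultaneously.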
In some embodiments, the method for evaluating the quality of the face image further comprises:
acquiring a face image to be evaluated;
obtaining a target face surrounding frame based on the face image to be evaluated;
and cutting the face image to be evaluated based on the target face surrounding frame to obtain the target face image.
It can be understood that, after the face image to be evaluated is obtained, all faces in it are detected with a face detection algorithm to obtain the face surrounding frames.
The sizes of the face surrounding frames are calculated to find the largest one, the image is then cut along the largest surrounding frame, and the cut-out image is taken as the target face image.
In the above embodiment, the MTCNN (multi-task cascaded convolutional neural network) algorithm is used to detect faces. MTCNN as a whole divides into a three-stage network structure of P-Net, R-Net and O-Net, used respectively for preliminary detection, screening, and final determination of face targets; the processing flow is shown in fig. 5. After face detection, the largest face box with size larger than 90 × 90 is selected.
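The screening step (keep the largest detected face box larger than 90 × 90 and crop it) can be sketched as follows; the (x1, y1, x2, y2) box format is an assumption about the detector's output:

```python
import numpy as np

def crop_largest_face(image: np.ndarray, boxes, min_side=90):
    """boxes: (x1, y1, x2, y2) face boxes, e.g. from a face detector.
    Returns the crop of the largest box whose sides exceed min_side, else None."""
    best, best_area = None, 0
    for x1, y1, x2, y2 in boxes:
        w, h = x2 - x1, y2 - y1
        if w > min_side and h > min_side and w * h > best_area:
            best, best_area = (x1, y1, x2, y2), w * h
    if best is None:
        return None
    x1, y1, x2, y2 = best
    return image[y1:y2, x1:x2]

img = np.zeros((300, 300, 3), np.uint8)
boxes = [(0, 0, 50, 50), (10, 10, 110, 110), (0, 0, 200, 150)]
crop = crop_largest_face(img, boxes)
assert crop.shape[:2] == (150, 200)                      # largest valid box wins
assert crop_largest_face(img, [(0, 0, 50, 50)]) is None  # too small: rejected
```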
In some embodiments, obtaining a facial image index evaluation result based on the target facial image includes:
acquiring a human face image index evaluation result with high priority;
and obtaining the low-priority face image index evaluation result based on the target face image, under the condition that the high-priority face image index evaluation result obtained from the target face image meets the expected result.
Under the condition that the high-priority face image index evaluation result obtained from the target face image does not meet the expected result, the face image quality evaluation process is stopped, which saves time and improves efficiency.
It should be noted that priority here is a relative ranking among the multiple face image index evaluation results; for example, of two face image index evaluation results, one may have a high priority and the other a low priority.
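The priority short-circuit can be sketched as a sorted evaluation loop that aborts on the first failing check; the check names and the numeric priority encoding are illustrative, not from the source:

```python
def evaluate(image, checks):
    """checks: list of (name, priority, check_fn), lower number = higher
    priority. Stops at the first failing check instead of running them all."""
    results = {}
    for name, _, fn in sorted(checks, key=lambda c: c[1]):
        ok = fn(image)
        results[name] = ok
        if not ok:
            return results, False  # abort early: remaining checks are skipped
    return results, True

checks = [
    ("sharpness", 2, lambda img: True),
    ("occlusion", 1, lambda img: False),  # high priority, fails
]
results, passed = evaluate(None, checks)
assert not passed
assert "sharpness" not in results  # never evaluated, saving its cost
```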
In some embodiments, obtaining the facial image quality evaluation result based on the facial image index evaluation result includes:
and obtaining a human face image quality evaluation result based on the human face image index evaluation result and the target weight value of the evaluation result.
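A minimal sketch of the weighted combination; the index names and weight values below are purely illustrative, since the source does not fix them:

```python
def weighted_quality(scores: dict, weights: dict) -> float:
    """Weighted average of per-index scores in [0, 1]; weights need not sum to 1."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

scores = {"occlusion": 1.0, "pose": 0.8, "sharpness": 0.6}
weights = {"occlusion": 2.0, "pose": 1.0, "sharpness": 1.0}
assert abs(weighted_quality(scores, weights) - 0.85) < 1e-9
```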
In summary, the face image quality evaluation method provided by the invention obtains face image index evaluation results based on the target face image and, under the condition that all of the index evaluation results meet the expected results, obtains the face image quality evaluation result from them; unlike prior-art face quality evaluation standards, the final evaluation result is obtained by a weighted calculation over multiple index evaluation results.
Because the method combines the index evaluation results into the quality evaluation result only after determining that every index evaluation result meets the expected result, it guarantees that each face image index evaluation result meets the requirement and raises the standard of the face image quality evaluation result.
Moreover, in the evaluation method provided by the invention, the face image index evaluation results include the face occlusion index evaluation results; compared with prior-art evaluation methods, adding the occlusion index evaluation results meets the demand for high-quality face quality evaluation and realizes high-quality face quality evaluation.
In addition, the face image quality evaluation method provided by the invention has the following advantages. First, the evaluation method can be adjusted to different application scenarios by selecting and calling different standard interfaces; for example, at least one of the face occlusion index, face state index and face clarity index evaluation results can be invoked, making the method suitable for face quality evaluation in different scenarios, such as building a high-quality face recognition gallery, face image filing and storage, and ID-photo screening. Second, it is highly extensible: different face standard interfaces can be added during iterative optimization of the evaluation algorithm. Finally, it is highly accurate: for the different face quality evaluation indexes, it combines traditional algorithms with deep learning, so that whether each individual evaluation index meets the standard is computed or judged more reliably.
The following describes the facial image quality evaluation device provided by the present invention, and the facial image quality evaluation device described below and the facial image quality evaluation method described above may be referred to in correspondence with each other.
As shown in fig. 6, the face image quality evaluation device 600 includes: an index evaluation result acquisition module 610 and a quality evaluation result acquisition module 620.
The index evaluation result obtaining module 610 is configured to obtain a face image index evaluation result based on the target face image; the human face image index evaluation result comprises the following steps: at least one of a face occlusion index evaluation result, a face state index evaluation result and a face clarity index evaluation result.
In some embodiments, the evaluation result of the face occlusion index includes: and the human face watermark shielding evaluation result and/or the human face real object shielding evaluation result.
The human face state index evaluation result comprises the following steps: human face posture evaluation results and/or human eye opening and closing state evaluation results.
The evaluation result of the human face clearness index comprises the following steps: at least one of a face yin-yang evaluation result, a face definition evaluation result, a face contrast evaluation result and a face illumination evaluation result.
And the quality evaluation result acquisition module 620 is configured to obtain a face image quality evaluation result based on the face image index evaluation result when the face image index evaluation results all meet the expected result.
In some embodiments, the index evaluation result obtaining module 610 is further configured to input the target face image into the face occlusion detection model, so as to obtain a face occlusion index evaluation result.
The face occlusion detection model is obtained by training by taking a target face image as a sample and taking a face occlusion index evaluation result corresponding to a predetermined target face image as a sample label.
In some embodiments, the face occlusion detection model is trained by:
obtaining a training data set based on a first shelter to be detected and a target face image;
training a first shelter to shelter the target face image based on the training data set;
acquiring the shielding overlapping rate of the target face image and the first shielding object;
and obtaining a face occlusion index evaluation result based on the occlusion overlapping rate.
In some embodiments, the face occlusion detection model is trained by:
inputting the target face image into a feature extraction layer to obtain a face feature image;
inputting the face feature map into a full connection layer to obtain a label classification result corresponding to the face feature map;
and obtaining a face shielding index evaluation result based on the label classification result.
In some embodiments, the index evaluation result obtaining module 610 includes: the human eye state acquisition unit and the human face state evaluation unit.
The human eye state acquisition unit is used for acquiring the human eye opening and closing state based on the target human face image.
The human face state evaluation unit is used for obtaining a human eye opening and closing state evaluation result based on the human eye opening and closing state.
In some embodiments, the index evaluation result acquisition module 610 includes: the device comprises a face model acquisition unit, an angle acquisition unit and a face state evaluation unit.
The face model obtaining unit is used for obtaining a face three-dimensional model based on the target face image.
The angle acquisition unit is used for obtaining the angles of the human face organs based on the human face three-dimensional model.
The face state evaluation unit is used for obtaining a face posture evaluation result based on the face organ angle.
In some embodiments, the apparatus 600 for evaluating quality of a human face image further comprises: the system comprises an image acquisition module, a face frame acquisition module and an image cutting module.
The image acquisition module is used for acquiring a face image to be evaluated.
The face frame acquisition module is used for acquiring a target face surrounding frame based on the face image to be evaluated.
And the image cutting module is used for cutting the face image to be evaluated based on the target face surrounding frame to obtain the target face image.
In some embodiments, the index evaluation result acquisition module 610 includes: a first result obtaining unit and a second result obtaining unit.
The first result obtaining unit is used for obtaining the evaluation result of the human face image index with high priority.
The second result acquisition unit is used for obtaining a face image index evaluation result with a low priority based on the target face image under the condition that the face image index evaluation result with a high priority is determined to meet the expected result based on the target face image.
It should be noted that the priority level here is a high level among the plurality of facial image index evaluation results, for example, two facial image index evaluation results have a high priority level and another low priority level.
In some embodiments, the quality evaluation result obtaining module 620 is further configured to obtain a facial image quality evaluation result based on the facial image index evaluation result and the target weight value of the evaluation result.
The electronic device and the storage medium provided by the present invention are described below, and the electronic device and the storage medium described below and the above-described face image quality evaluation method can be referred to in correspondence with each other.
Fig. 7 illustrates a physical structure diagram of an electronic device, and as shown in fig. 7, the electronic device may include: a processor (processor) 710, a communication interface (communication interface) 720, a memory (memory) 730, and a communication bus 740, wherein the processor 710, the communication interface 720, and the memory 730 communicate with each other via the communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform a method of facial image quality assessment comprising:
step 110, obtaining a human face image index evaluation result based on the target human face image; the human face image index evaluation result comprises the following steps: at least one of a face occlusion index evaluation result, a face state index evaluation result, and a face clarity index evaluation result.
And 120, obtaining a face image quality evaluation result based on the face image index evaluation result under the condition that the face image index evaluation results all accord with the expected results.
In addition, the logic instructions in the memory 730 can be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium; the computer program comprises program instructions which, when executed by a computer, enable the computer to execute the face image quality evaluation method provided above, the method comprising:
step 110, obtaining a face image index evaluation result based on the target face image; the evaluation result of the human face image index comprises the following steps: at least one of a face occlusion index evaluation result, a face state index evaluation result, and a face clarity index evaluation result.
And 120, obtaining a face image quality evaluation result based on the face image index evaluation result under the condition that the face image index evaluation results all accord with the expected results.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the above method for evaluating the quality of a face image, the method comprising:
step 110, obtaining a human face image index evaluation result based on the target human face image; the human face image index evaluation result comprises the following steps: at least one of a face occlusion index evaluation result, a face state index evaluation result, and a face clarity index evaluation result.
And 120, obtaining a face image quality evaluation result based on the face image index evaluation result under the condition that the face image index evaluation results all accord with the expected results.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. A method for evaluating the quality of a face image is characterized by comprising the following steps:
obtaining a human face image index evaluation result based on the target human face image; the evaluation result of the human face image index comprises the following steps: at least one of a face occlusion index evaluation result, a face state index evaluation result and a face clarity index evaluation result;
and obtaining a face image quality evaluation result based on the face image index evaluation result under the condition that the face image index evaluation results all accord with expected results.
2. The method for evaluating the quality of a human face image according to claim 1, wherein the obtaining of a human face image index evaluation result based on a target human face image comprises:
inputting the target face image into a face shielding detection model to obtain a face shielding index evaluation result;
the face occlusion detection model is obtained by training by taking the target face image as a sample and taking a face occlusion index evaluation result corresponding to the predetermined target face image as a sample label.
3. The method for evaluating the quality of a facial image according to claim 2, wherein the training with the target facial image as a sample and a predetermined facial occlusion index evaluation result corresponding to the target facial image as a sample label comprises:
obtaining a training data set based on a first shelter to be detected and the target face image;
training the first shelter to shelter the target face image based on the training data set;
acquiring the shielding overlapping rate of the target face image and the first shielding object;
and obtaining the evaluation result of the human face shielding index based on the shielding overlapping rate.
4. The method for evaluating the quality of a facial image according to claim 2, wherein the training with the target facial image as a sample and a predetermined facial occlusion index evaluation result corresponding to the target facial image as a sample label comprises:
inputting the target face image into a feature extraction layer to obtain a face feature image;
inputting the face feature map into a full connection layer to obtain a label classification result corresponding to the face feature map;
and obtaining the evaluation result of the face shielding index based on the label classification result.
5. The face image quality evaluation method according to claim 1,
the face shielding index evaluation result comprises the following steps: a face watermark shielding evaluation result and/or a face real object shielding evaluation result;
the evaluation result of the human face state index comprises the following steps: a human face posture evaluation result and/or a human eye opening and closing state evaluation result;
the evaluation result of the human face clearness index comprises the following steps: at least one of a face yin-yang evaluation result, a face sharpness evaluation result, a face contrast evaluation result and a face illumination evaluation result.
6. The method for evaluating the quality of a facial image according to claim 5, wherein the obtaining of the evaluation result of the facial image index based on the target facial image comprises:
based on the target face image, the opening and closing state of human eyes is obtained;
obtaining an evaluation result of the human eye opening and closing state based on the human eye opening and closing state;
or,
obtaining a human face three-dimensional model based on the target human face image;
obtaining a face organ angle based on the face three-dimensional model;
and obtaining the human face posture evaluation result based on the human face organ angle.
7. The method for evaluating the quality of a human face image according to claim 1, wherein the obtaining of a human face image index evaluation result based on a target human face image comprises:
acquiring a human face image index evaluation result with high priority;
and obtaining a face image index evaluation result with low priority based on the target face image under the condition that the face image index evaluation result with high priority is determined to meet the expected result based on the target face image.
8. The method for evaluating the quality of the facial image according to any one of claims 1 to 7, wherein the obtaining of the evaluation result of the quality of the facial image based on the evaluation result of the index of the facial image comprises:
and obtaining the quality evaluation result of the face image based on the index evaluation result of the face image and the target weight value of the evaluation result.
9. A face image quality evaluation apparatus, characterized by comprising:
the index evaluation result acquisition module is used for acquiring a face image index evaluation result based on the target face image; wherein, the evaluation result of the human face image index comprises: at least one of a face occlusion index evaluation result, a face state index evaluation result and a face clarity index evaluation result;
and the quality evaluation result acquisition module is used for obtaining a face image quality evaluation result based on the face image index evaluation result under the condition that the face image index evaluation results all accord with expected results.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method for evaluating the quality of facial images according to any one of claims 1 to 8 when executing the program.
11. A non-transitory computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, implements the steps of the facial image quality assessment method according to any one of claims 1 to 8.
CN202110553078.6A 2021-05-20 2021-05-20 Method and device for evaluating quality of face image, electronic equipment and storage medium Pending CN115393695A (en)

Publication: CN115393695A, published 2022-11-25.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503606A (en) * 2023-06-27 2023-07-28 清华大学 Road surface wet and slippery region segmentation method and device based on sub-graph feature fusion


Similar Documents

Publication Publication Date Title
CN108446617B (en) Side face interference resistant rapid human face detection method
CN105205480B (en) Human-eye positioning method and system in a kind of complex scene
JP3552179B2 (en) Feature vector generation method for speaker recognition
US20210118144A1 (en) Image processing method, electronic device, and storage medium
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN109086723B (en) Method, device and equipment for detecting human face based on transfer learning
CN103942539B (en) A kind of oval accurate high efficiency extraction of head part and masking method for detecting human face
JP2007047965A (en) Method and device for detecting object of digital image, and program
CN113793336B (en) Method, device and equipment for detecting blood cells and readable storage medium
CN111967319B (en) Living body detection method, device, equipment and storage medium based on infrared and visible light
CN110543848B (en) Driver action recognition method and device based on three-dimensional convolutional neural network
CN106529494A (en) Human face recognition method based on multi-camera model
CN111832405A (en) Face recognition method based on HOG and depth residual error network
Hebbale et al. Real time COVID-19 facemask detection using deep learning
JP4639754B2 (en) Image processing device
CN111222433A (en) Automatic face auditing method, system, equipment and readable storage medium
Subasic et al. Face image validation system
CN116453232A (en) Face living body detection method, training method and device of face living body detection model
CN113743378B (en) Fire monitoring method and device based on video
CN110009708B (en) Color development transformation method, system and terminal based on image color segmentation
CN115393695A (en) Method and device for evaluating quality of face image, electronic equipment and storage medium
KR101343623B1 (en) adaptive color detection method, face detection method and apparatus
CN111832464A (en) Living body detection method and device based on near-infrared camera
CN115995097A (en) Deep learning-based safety helmet wearing standard judging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination