CN113569694B - Face screening method, device, equipment and storage medium


Info

Publication number
CN113569694B
Authority
CN
China
Prior art keywords
face
face image
pixel
target
image
Legal status: Active
Application number
CN202110831104.7A
Other languages
Chinese (zh)
Other versions
CN113569694A
Inventor
白刚
姜卫平
郭忠武
李国华
韩煜
王荣芳
Current Assignee
Beijing Bohui Technology Inc
Original Assignee
Beijing Bohui Technology Inc
Application filed by Beijing Bohui Technology Inc
Priority to CN202110831104.7A
Publication of CN113569694A
Application granted
Publication of CN113569694B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection


Abstract

The application provides a face screening method, apparatus, device and storage medium, applied in the technical field of visual image processing. The method performs affine transformation on the feature points of a face image according to a face detection model to obtain a target face image; calculates feature parameters of the target face image according to a face definition (sharpness) evaluation algorithm; normalizes each pixel of the target face image to obtain a new face image; for each pixel of the new face image, calculates the products of the pixel value and the pixel values of the eight neighboring pixels around the pixel's center position; trains parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the eight directions; matches the resolution and face integrity of the target face image, the result of whether it is in a frontal pose, and the training parameters of the machine learning model against face image indexes; and, if the matching meets a preset requirement, determines the target face image to be a face recognition sample.

Description

Face screening method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision image processing technologies, and in particular, to a face screening method, apparatus, device, and storage medium.
Background
With the spread of face recognition technology, face recognition is used in ever more settings. It is an active research direction in deep learning and pattern recognition and is widely applied to intelligent video surveillance, identity authentication, public security management and control, sensitive-person recognition, and similar fields. Because of this wide application, the required recognition accuracy keeps rising, especially in the field of identity authentication: whether in a face recognition access channel or face payment, the result bears on the safety of people's lives and property, and a recognition error threatens personal and even public safety. At the present stage, however, face recognition is still largely based on manually extracted features.
Disclosure of Invention
In view of this, the embodiments of the present application provide a face screening method that can accurately identify face images of different quality, solving the technical problems of the slow speed of traditional manual screening, the singleness of extracted features, and the poor robustness of face sample quality detection.
In a first aspect, an embodiment of the present application provides a face screening method, including:
carrying out affine transformation on the feature points of any face image according to the face detection model to obtain a target face image;
calculating the resolution of the target face image according to a face resolution evaluation algorithm aiming at the spatial distribution of the local features of the target face image;
according to a face integrity evaluation algorithm, calculating the face integrity of the target face image aiming at the condition that key points of the target face image are shielded;
judging whether the target facial image is in a frontal posture or not according to a facial posture evaluation algorithm;
calculating characteristic parameters in the target face image according to a face definition evaluation algorithm; the characteristic parameters of the target face image comprise pixel average values and variances;
performing normalization processing on each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, calculating the product of the pixel value and the pixel values of 8 adjacent pixels in the directions according to the central position of the pixel, and determining the characteristic parameters of the pixel according to the product, wherein the product of the pixel values in each direction respectively forms an image, and the specific calculation direction comprises the following steps: right side, left side, lower, upper, lower left, upper left, lower right, upper right;
Training parameters of the new face image in a machine learning model according to the characteristic parameters of all pixels in 8 directions; wherein the characteristic parameters of all pixels include: the shape of the pixel, the average value of the pixel darkness, the left variance of the pixel, the right variance of the pixel;
matching the resolution of the target face image, the integrity of the face, the result of whether the target face image is in a face gesture or not, and training parameters of a machine learning model according to face image indexes, wherein the face image indexes comprise: face resolution evaluation index, face integrity evaluation index, face posture evaluation index, and face definition evaluation index;
and if the matching meets the preset requirement, determining the target face image as a face recognition sample.
In some embodiments, affine transformation is performed on feature points of any face image according to a face detection model to obtain a target face image, including:
performing face detection and feature point positioning on any face image according to a face detection model to obtain coordinates of a face region and feature points, wherein the feature points comprise left eye coordinates and right eye coordinates;
and aligning the coordinates of the feature points to the specified coordinate positions in affine transformation to obtain the target face image after the face is aligned.
In some embodiments, feature parameters in the target facial image are calculated according to a facial sharpness evaluation algorithm; the feature parameters of the target face image include pixel mean and variance, including:
summing according to the pixel value of each pixel of the target face image, and taking the pixel average value of the target face image after summing;
the variance is determined from the average of the differences between each pixel of the target facial image and the average of the pixels.
In some embodiments, for each pixel of the new face image, calculating a product of the pixel value and the pixel values of 8 direction neighboring pixels according to the center position of the pixel, and determining the characteristic parameter of the pixel according to the product, including:
for each pixel of the new face image, calculating the product of the pixel value and the pixel values of a plurality of pixels adjacent to each other in 8 directions, wherein the pixel value is multiplied by the right HR, the left HL, the lower VD, the upper VU, the lower left LD, the upper left LU, the lower right RD and the upper right RU of the new face image according to the central position of the pixel;
determining the product as a characteristic parameter of a plurality of pixels whose pixel values are adjacent to 8 directions of the new face image, the characteristic parameter of the product including: shape, average, left variance, right variance.
In some embodiments, training parameters of the new face image in the machine learning model according to the feature parameters of all pixels in 8 directions includes:
inputting the characteristic parameters of all pixels in the 8 directions into a machine learning model;
calculating weight coefficients of feature vectors corresponding to the feature parameters of all pixels;
and determining the mapping scores of all the pixel feature vectors according to the weight coefficients of the feature vectors, wherein the mapping scores serve as training parameters of a machine learning model.
In some embodiments, matching the resolution of the target face image, the facial integrity, and the result of whether the target face image is in a frontal pose, and training parameters of the machine learning model according to the face image index, includes:
performing index matching on the resolution result of the target facial image and a facial resolution evaluation index;
performing index matching on the result of the face integrity and the face integrity evaluation index;
performing index matching on the front face posture result and the face posture evaluation index;
and performing index matching on the training parameters of the machine learning model and the face definition evaluation index.
In some embodiments, if the matching meets a preset requirement, determining the target face image as a face recognition sample includes:
And if the resolution of the target face image, the face integrity and frontal-pose results, and the training parameters of the machine learning model meet the matching grades of the preset requirements, determining the target face image as a face recognition sample, wherein the matching grades are upper, medium and lower respectively.
In a second aspect, embodiments of the present application provide a face screening apparatus, the apparatus including:
the acquisition module is used for carrying out affine transformation on the feature points of any face image according to the face detection model to obtain a target face image;
the first evaluation module is used for calculating the resolution of the target facial image according to a facial resolution evaluation algorithm aiming at the spatial distribution of the local features of the target facial image;
the second evaluation module is used for calculating the face integrity of the target face image according to a face integrity evaluation algorithm aiming at the situation that key points of the target face image are shielded;
the third evaluation module is used for judging whether the target facial image is in the front face posture or not according to a facial posture evaluation algorithm;
the processing module calculates characteristic parameters of the target face image according to a face definition evaluation algorithm; the characteristic parameters of the target face image comprise pixel average values and variances;
The first calculation module performs normalization processing on each pixel of the target face image to obtain a new face image;
the second calculating module calculates, for each pixel of the new face image, a product of the pixel value and pixel values of 8 adjacent pixels in directions according to a center position of the pixel, and determines a characteristic parameter of the pixel according to the product, where the product of the pixel values in each direction respectively forms an image, and specifically calculating the direction includes: right side, left side, lower, upper, lower left, upper left, lower right, upper right;
training parameters of the new face image in the machine learning model according to the characteristic parameters of all pixels in 8 directions; wherein the characteristic parameters of all pixels include: the shape of the pixel, the average value of the pixel darkness, the left variance of the pixel, the right variance of the pixel;
the index matching module matches the resolution of the target face image, the integrity of the face, the result of whether the target face image is in the face gesture or not, and training parameters of the machine learning model according to the face image indexes, wherein the face image indexes comprise: face resolution evaluation index, face integrity evaluation index, face posture evaluation index, and face definition evaluation index;
And the determining module is used for determining the target face image as a face recognition sample if the matching meets the preset requirement.
In a third aspect, embodiments of the present application provide a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the face screening method when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs steps of the face screening method.
The main beneficial effects of this application are as follows. By combining a face detection model with a machine learning model, model training is performed on the feature points of the new face image through several algorithms, and the resolution of the target face image, the face integrity and frontal-pose results, and the training parameters of the model are matched against face image indexes; if the results meet the preset requirements, the target face image is determined to be a face recognition sample, used for quality inspection of face images before they enter the sample library. The method can accurately screen face images of different quality, solving the technical problems of the slow speed of traditional manual screening, the singleness of extracted features, and the poor robustness of face sample quality detection, while meeting the requirements of high recognition precision, low application cost, and resistance to copying and misjudgment.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a flowchart of a face screening method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of obtaining a new face image according to feature parameters of a target face image according to an embodiment of the present application.
Fig. 3 shows a schematic flow chart of calculating the product of 8 adjacent pixel values in the directions according to the embodiment of the application.
Fig. 4 shows a flowchart of acquiring training parameters according to an embodiment of the present application.
Fig. 5 shows a schematic diagram of an index matching flow provided in an embodiment of the present application.
Fig. 6 shows a schematic flow chart of determining a face recognition sample provided in an embodiment of the present application.
Fig. 7 shows a schematic structural diagram of a face screening apparatus according to an embodiment of the present application.
Fig. 8 shows a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations, and thus the following detailed description of the embodiments of the present application, as provided in the figures, is not intended to limit the scope of the application as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The deep convolutional neural network (Deep Convolutional Neural Network, DCNN) model is a pattern recognition approach that has been applied successfully to image processing. It aims to quantize the representation of image data through convolution, replacing manual feature extraction with unsupervised or semi-supervised feature learning and hierarchical features. When a DCNN is applied to face recognition training, the quality of the face sample library determines the usability and reliability of the training, so guaranteeing the quality of the face sample library is a critical problem that directly affects the effect of, and the experience with, every face recognition application. To achieve more accurate, higher-quality automatic filtering in face sample quality detection, this application fuses deep learning with traditional image processing techniques and performs face sample quality detection on multidimensional features, effectively improving the usability and reliability of 1-to-N or N-to-N face recognition training and further extending the technique to different application scenarios.
In this application, affine transformation is performed on the feature points of a face image according to a face detection model to obtain a target face image. The resolution of the target face image is calculated for the spatial distribution of its local features according to a face resolution evaluation algorithm; the face integrity is calculated for the occlusion of its key points according to a face integrity evaluation algorithm; whether the target face image is in a frontal pose is judged according to a face pose evaluation algorithm; and the feature parameters of the target face image are calculated according to a face definition evaluation algorithm. Each pixel of the target face image is then normalized to obtain a new face image; for each pixel of the new face image, the products of the pixel value and the pixel values of the eight neighboring pixels are calculated around its center position; the training parameters of the new face image in the machine learning model are trained on the feature parameters of all pixels in the eight directions; the resolution, the face integrity, the frontal-pose result and the training parameters are matched against face image indexes; and if the matching meets the preset requirement, the target face image is determined to be a face recognition sample. Specifically, affine transformation is applied to the feature points of the face image by a convolutional-neural-network face detection model to obtain the target face image; this design is contactless and highly precise, and is particularly suited to fine-grained feature point analysis during liveness detection and recognition, bringing the image processing close to human-level performance. The face quality of the target image is then checked by several strategy algorithms, namely the computed resolution, the face integrity and the frontal-pose judgment, so that faces of different quality are accurately screened. Finally, a new face image is obtained by normalizing each pixel of the target face image, the training parameters of the machine learning model are trained on the feature parameters formed by the products of each pixel value with its eight neighbors, and the resolution, face integrity, frontal-pose result and training parameters are matched against the face image indexes; if the results meet the preset requirements, the target face image is determined to be a face recognition sample for quality inspection of face images before they enter the sample library.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic flow chart of a face screening method according to an embodiment of the present application; as shown in fig. 1, the face screening method specifically includes the following steps:
step S10, affine transformation is carried out on the feature points of any face image according to the face detection model, and a target face image is obtained.
In the specific implementation of step S10, face detection and feature point positioning are performed on any face image according to the face detection model, and feature point coordinates are aligned to designated position coordinates of affine transformation through affine transformation, so as to obtain an aligned target face image.
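For illustration only, a minimal sketch of this alignment step is given below, assuming OpenCV-style APIs; the template eye coordinates and the 112×112 output size are assumptions of the example, not values specified in this application:

```python
# A minimal sketch of eye-based face alignment, assuming OpenCV is available.
# TEMPLATE_LEFT_EYE, TEMPLATE_RIGHT_EYE and the 112x112 output size are
# illustrative assumptions, not values specified in this application.
import cv2
import numpy as np

TEMPLATE_LEFT_EYE = (38.0, 52.0)    # assumed target left-eye position
TEMPLATE_RIGHT_EYE = (74.0, 52.0)   # assumed target right-eye position
OUT_SIZE = (112, 112)               # assumed aligned-image size (width, height)

def align_face(image, left_eye, right_eye):
    """Warp the image so the detected eyes land on the template coordinates."""
    src = np.array([left_eye, right_eye], dtype=np.float32)
    dst = np.array([TEMPLATE_LEFT_EYE, TEMPLATE_RIGHT_EYE], dtype=np.float32)
    # Two point pairs are enough to fit a similarity transform
    # (rotation + uniform scale + translation).
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    return cv2.warpAffine(image, matrix, OUT_SIZE)
```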
Step S20, calculating the resolution of the target face image according to the face resolution evaluation algorithm, with respect to the spatial distribution of the local features of the target face image.
In the specific implementation of step S20, an evaluation value of the target face image resolution is calculated for the spatial distribution of the target face image local features according to the face resolution evaluation algorithm.
Step S30, calculating the face integrity of the target face image according to the face integrity evaluation algorithm aiming at the situation that the key points of the target face image are shielded.
In the specific implementation of step S30, according to the face integrity evaluation algorithm, the face integrity of the target face image is determined according to the condition that 5 key points of the target face image are masked, where the 5 key points are respectively: left eye center, right eye center, nose tip, left mouth corner, right mouth corner.
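A minimal sketch of this counting step follows; it assumes a per-key-point occlusion flag is already available from an upstream detector, and maps the count to the grades used in the integrity table later in this description:

```python
# A sketch of the occlusion count, assuming an upstream detector already
# provides a per-key-point occlusion flag; the grade cut-offs follow the
# integrity table given later in this description.
KEY_POINTS = ("left_eye_center", "right_eye_center", "nose_tip",
              "left_mouth_corner", "right_mouth_corner")

def face_integrity_grade(occluded):
    """occluded: dict mapping key-point name -> True if masked."""
    n = sum(1 for p in KEY_POINTS if occluded.get(p, False))
    if n >= 3:
        return "lower"    # 3 or more key points masked
    if n >= 1:
        return "medium"   # 1 or 2 key points masked
    return "upper"        # no key point masked
```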
Step S40, judging whether the target facial image is in the frontal posture according to the facial posture evaluation algorithm.
In the specific implementation of step S40, three angles of the target facial image, namely the pitch angle, yaw angle and roll angle, are extracted according to the facial pose evaluation algorithm, and whether the facial pose orientation of the target facial image is in a side-face, bottom-view or top-view state is determined according to the thresholds of the three angles.
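The following sketch illustrates the threshold test on the three angles; the application does not state the thresholds at this point, so the 20-degree limits are placeholder assumptions:

```python
# A sketch of the frontal-pose test; the 20-degree limits below are
# placeholder assumptions, not thresholds given in this application.
PITCH_MAX = 20.0  # assumed limit before the face counts as bottom/top view
YAW_MAX = 20.0    # assumed limit before the face counts as a side face
ROLL_MAX = 20.0   # assumed limit on in-plane tilt

def is_frontal(pitch, yaw, roll):
    """Frontal pose if all three Euler angles stay within their thresholds."""
    return abs(pitch) <= PITCH_MAX and abs(yaw) <= YAW_MAX and abs(roll) <= ROLL_MAX
```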
Step S50, calculating characteristic parameters of the target face image according to a face definition evaluation algorithm; the feature parameters of the target face image include pixel mean and variance.
In the implementation of step S50, the pixel values of each pixel of the target face image are summed up according to the face sharpness evaluation algorithm, and then averaged, and the average of the differences between the pixel values of each pixel and the average is calculated, and the variance is determined according to the average.
Step S60, normalization processing is carried out on each pixel of the target face image, and a new face image is obtained.
In practice, step S60 normalizes the target face image to a new face image of a specified standard form on a per-pixel basis.
Step S70, for each pixel of the new face image, calculating the product of the pixel value and the pixel values of 8 adjacent pixels in the direction according to the central position of the pixel, and determining the characteristic parameters of the pixel according to the product.
In the implementation of step S70, for each pixel of the new face image, products between the pixel and 8 adjacent pixels in the right side, the left side, the lower side, the upper side, the lower left side, the upper left side, the lower right side and the upper right side of the new face image are calculated according to the center position of the pixel, and the calculated products are determined as the feature parameters of the pixel.
Step S80, training parameters of a new face image in a machine learning model according to characteristic parameters of all pixels in 8 directions, wherein the characteristic parameters of all pixels comprise: the shape of the pixel, the average value of the pixel darkness, the left variance of the pixel, the right variance of the pixel.
In the implementation of step S80, the feature parameters of all pixels are substituted into the linear equation system of the machine learning model and its basic solution system is solved, so as to obtain the feature vectors corresponding to the feature parameters of all pixels; the mapping scores of the feature vectors of all pixels are then calculated from these feature vectors, and the mapping scores serve as the training parameters of the machine learning model.
Step S90, matching the resolution of the target face image, the integrity of the face and the result of whether the target face image is in the frontal posture or not and training parameters of the machine learning model according to the face image indexes.
In the implementation of step S90, the resolution of the target face image, the integrity of the face, the result of whether the target face image is in the face pose, and the training parameters of the machine learning model are respectively matched with the face resolution evaluation index, the face integrity evaluation index, the face pose evaluation index, and the face definition evaluation index in the face image indexes.
Step S100, if the matching meets the preset requirement, the target face image is determined to be a face recognition sample.
In the implementation of step S100, if the resolution and the face integrity of the target face image, the result of the front pose of the target face image, and the training parameters of the machine learning model meet the matching level of the preset requirement, the target face image is determined as a face recognition sample, and the face recognition sample is stored in the sample library of the face recognition system.
In a possible implementation manner, in the step S10, affine transformation is performed on feature points of any face image according to a face detection model to obtain a target face image, including:
Step 101, carrying out face detection and feature point positioning on any face image according to a face detection model to obtain coordinates of a face region and feature points, wherein the feature points comprise left eye coordinates and right eye coordinates.
In the implementation of step 101, face detection and feature point positioning of a face image are performed on any face image according to a face detection model of a convolutional neural network, so as to obtain coordinates of feature points of each face region of the face image, wherein the feature points comprise left eye coordinates and right eye coordinates.
And 102, aligning the coordinates of the feature points to the specified coordinate positions in affine transformation to obtain the target face image after the face is aligned.
In the implementation of step 102, the left eye coordinates and the right eye coordinates of the feature points are aligned to the affine transformation appointed coordinate positions, and the target face image after the face is aligned is obtained.
In a possible implementation, in the step S20, the calculating the resolution of the target facial image according to the facial resolution evaluation algorithm for the spatial distribution of the local features of the target facial image includes:
In the specific implementation, step S20 calculates an evaluation value of the resolution of the target facial image with respect to the spatial distribution of its local features according to the facial resolution evaluation algorithm.
In a possible implementation, in the step S30, according to a face integrity evaluation algorithm, for a case where a key point of a target face image is masked, calculating the face integrity of the target face image includes:
In the specific implementation of step S30, the face integrity of the target face image is calculated according to the number of masked key points, following the face integrity evaluation algorithm, wherein the key points comprise the left eye center, right eye center, nose tip, left mouth corner and right mouth corner.
In a possible implementation, in the step S40, determining whether the target face image is in the frontal pose according to the face pose evaluation algorithm includes:
in the specific implementation, step 40 is to determine, according to a face pose evaluation algorithm, whether the target face image is in a frontal pose with respect to a face pose orientation of the target face image, where the face pose orientation includes: side facing, bottom facing, top facing.
In a possible implementation, fig. 2 shows a schematic flow chart of obtaining a new face image according to feature parameters of a target face image provided in an embodiment of the present application; in the step S50, feature parameters of the target face image are calculated according to the face definition evaluation algorithm; the feature parameters of the target face image include pixel mean and variance, including:
Step S501, summing is performed according to the pixel value of each pixel of the target face image, and the pixel average value of the target face image is obtained after summing.
Step S502, determining a variance according to an average of differences between each pixel of the target face image and the average value of the pixels.
In the specific implementation of steps S501 and S502, the pixel values of all pixels of the target face image are summed and then averaged according to the face definition evaluation algorithm; the average of the differences between each pixel value and the average value is calculated, and the variance is determined from it.
In a possible implementation manner, in the step S60, normalization processing is performed for each pixel of the target face image to obtain a new face image, including:
step S601, normalizes the target face image to a new face image of a specified standard for the pixel value of each pixel of the target face image.
When embodied, step S601 enlarges or reduces the length and width of the target face image by linear normalization for each pixel, normalizing the target face image to a new face image of the specified standard according to the linear properties of the image;
the specific formula for calculating the pixel value of the new face image is as follows:

Ĩ(i, j) = (I(i, j) − μ(i, j)) / (σ(i, j) + C)

where I(i, j) represents the pixel value of the target face image at position (i, j), Ĩ(i, j) represents the corresponding pixel value of the new face image, μ(i, j) represents the average value at position (i, j) of the target face image, σ(i, j) represents the variance at position (i, j) of the target face image, C represents a constant, i represents a horizontal pixel, and j represents a vertical pixel.
In one possible implementation, fig. 3 is a schematic diagram of a flow chart for calculating products of 8 adjacent pixel values in directions according to an embodiment of the present application; in the above step S70, for each pixel of the new face image, a product of the pixel value and the pixel values of 8 adjacent pixels in the direction is calculated according to the center position of the pixel, and the feature parameters of the pixel are determined according to the product, and specifically includes the following steps:
Step S701, for each pixel of the new face image, calculating the products of the pixel value and the pixel values of its eight neighboring pixels, namely the pixels to the right HR, left HL, below VD, above VU, lower left LD, upper left LU, lower right RD and upper right RU of the pixel, according to its center position; the specific calculation formulas are as follows:

HR(i, j) = Ĩ(i, j) · Ĩ(i+1, j)
HL(i, j) = Ĩ(i, j) · Ĩ(i−1, j)
VD(i, j) = Ĩ(i, j) · Ĩ(i, j+1)
VU(i, j) = Ĩ(i, j) · Ĩ(i, j−1)
LD(i, j) = Ĩ(i, j) · Ĩ(i−1, j+1)
LU(i, j) = Ĩ(i, j) · Ĩ(i−1, j−1)
RD(i, j) = Ĩ(i, j) · Ĩ(i+1, j+1)
RU(i, j) = Ĩ(i, j) · Ĩ(i+1, j−1)

where Ĩ(i, j) represents the pixel value of the new face image at position (i, j), and Ĩ(i+1, j), Ĩ(i−1, j), Ĩ(i, j+1), Ĩ(i, j−1), Ĩ(i−1, j+1), Ĩ(i−1, j−1), Ĩ(i+1, j+1) and Ĩ(i+1, j−1) are the pixels located to its right HR, left HL, below VD, above VU, lower left LD, upper left LU, lower right RD and upper right RU, respectively.
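The eight product images can be computed with simple array shifts, as in the sketch below; zero-padding at the image border is an assumption, since border handling is not specified here:

```python
# A sketch of the eight directional product images defined above; zero-padding
# at the border is an assumption. Offsets are (row, column), rows running down.
import numpy as np

OFFSETS = {"HR": (0, 1), "HL": (0, -1), "VD": (1, 0), "VU": (-1, 0),
           "LD": (1, -1), "LU": (-1, -1), "RD": (1, 1), "RU": (-1, 1)}

def directional_products(img):
    """Return one product image per direction for the normalized face image."""
    padded = np.pad(img.astype(np.float64), 1)
    h, w = img.shape
    center = padded[1:1 + h, 1:1 + w]
    products = {}
    for name, (dr, dc) in OFFSETS.items():
        neighbor = padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
        products[name] = center * neighbor   # pixel value times shifted neighbor
    return products
```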
In step S702, the product is determined as a characteristic parameter of a plurality of pixels whose pixel values are adjacent to 8 directions of the new face image.
In the implementation of steps S701 and S702, for each pixel of the new face image, the products of the pixel value and the pixel values of its eight neighboring pixels are calculated from the center position coordinates of the pixel according to the face definition algorithm, and the products are determined as the feature parameters of the pixel, which include: shape, average value, left variance and right variance.
In one possible implementation, fig. 4 shows a schematic flow chart of acquiring training parameters provided in an embodiment of the present application; in the step S80, training parameters of the new face image in the machine learning model are trained according to the feature parameters of all pixels in 8 directions, and specifically include the following steps:
in step S801, feature parameters of all pixels in 8 directions are input to the machine learning model.
Step S802, calculating the weight coefficient of the feature vector corresponding to the feature parameters of all pixels.
In step S803, a mapping score of all the pixel feature vectors is determined according to the weight coefficients of the feature vectors, and the mapping score is used as a training parameter of the machine learning model.
In the specific implementation of steps S801, S802 and S803, the feature parameters of all pixels of the new face image in the eight directions are input into the machine learning model; the basic solution system of the linear equation system in the machine learning model is solved to obtain the feature vectors of the feature parameters; the weight coefficients of the feature vectors corresponding to the feature parameters of all pixels are calculated; and the mapping scores of the feature vectors of all pixels are output as the training parameters of the machine learning model;
for example, the input of the weight coefficients: T = {x1, x2, x3, ..., x101, x102}; Score = 99;
where T represents the feature vector of the new face image, x1 to x102 represent the dimensions of the feature vector, and Score represents the mapping score;
selecting a kernel function K (T, z) and a penalty parameter C >0, constructing and solving, wherein the specific formula is as follows:
Y=K(T,z)+C;
where T in K(T, z) represents the feature vector, z represents the function coefficients to be solved, and Y represents the mapping score; an equation system is constructed from the feature vectors and the mapping scores:
The 102 dimensions of the input feature vector correspond to 102 feature parameters; with the penalty parameter C added, the equation system has 103 unknowns. The estimated values of the feature parameters x in the kernel function K(T, z) and of the penalty parameter C are calculated by least squares estimation, yielding the weight coefficients of the feature vector of the new face image.
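As a sketch of this estimation step, assuming a linear kernel so that K(T, z) reduces to the dot product T·z, the 103 unknowns (102 coefficients plus the penalty/intercept term C) can be fit by ordinary least squares:

```python
# A sketch of the estimation above, assuming a linear kernel so that K(T, z)
# reduces to T.z; the 102 coefficients z plus the penalty/intercept term C
# (103 unknowns) are then fit by ordinary least squares.
import numpy as np

def fit_weights(features, scores):
    """features: (n_samples, 102) stacked feature vectors T;
    scores: (n_samples,) mapping scores Y."""
    ones = np.ones((features.shape[0], 1))
    design = np.hstack([features, ones])        # last column carries C
    solution, *_ = np.linalg.lstsq(design, scores, rcond=None)
    return solution[:-1], solution[-1]          # (z estimates, C estimate)
```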
In one possible implementation, fig. 5 shows a schematic diagram of an index matching flow provided by an embodiment of the present application; in the step S90, the resolution, the integrity of the face, the result of whether the target face image is the face pose, and the training parameters of the machine learning model are matched according to the face image indexes, and specifically includes the following steps:
step S901, index matching is performed on the result of the target facial image resolution and the facial resolution evaluation index.
In the specific implementation, step S901 matches the result of the target facial image resolution against the facial resolution evaluation index preset in the machine learning model and judges whether the resolution result meets the corresponding grade of the facial resolution evaluation index, where the grades are: upper, medium and lower. The resolution grades are given in the following table:

Facial resolution | Grade of face resolution evaluation index
<50×50 | Lower
≥50×50 and <80×80 | Medium
≥80×80 | Upper
Step S902, performing index matching on the result of the face integrity and the face integrity evaluation index.
In the implementation of step S902, the number of masked key points of the target face image is matched against the face integrity evaluation index preset in the machine learning model, and it is judged whether the face integrity result meets the corresponding grade of the face integrity evaluation index, where the grades are: upper, medium and lower. The face integrity grades are given in the following table:

Number of blocked key points | Grade of face integrity evaluation index
≥3 | Lower
≥1 and <3 | Medium
<1 | Upper
Step S903, index matching is performed between the front face posture result and the face posture evaluation index.
In the specific implementation, step S903 performs index matching between the angles of the face pose orientation and the face pose evaluation index preset in the machine learning model, and judges whether the face pose meets the corresponding grade of the face pose evaluation index, where the grades are: upper, medium and lower.
step S904, performing index matching on training parameters of the machine learning model and the face definition evaluation index.
In the specific implementation, step S904 performs index matching between the quality score corresponding to the training parameters of the local-area image in the machine learning model and the face definition evaluation index, and judges whether the training parameters meet the corresponding grade of the face definition evaluation index, where the grades are: upper, medium and lower. The face definition grades are given in the following table:

Face definition score | Grade of face definition evaluation index
>60 | Lower
>30 and ≤60 | Medium
≤30 | Upper
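The four index matches and the admission rule of the following steps can be summarised in a short sketch; the grade functions follow the tables above, reading "50×50" as a minimum side length is an assumption, and the pose check is reduced to a boolean because its threshold table is not reproduced in this text:

```python
# A sketch tying the index matches of steps S901-S904 to the admission rule of
# step S100: every index must reach "upper" or "medium". Interpreting "50x50"
# as a minimum side length is an assumption; the pose grade is a boolean here.
def resolution_grade(width, height):
    side = min(width, height)
    if side < 50:
        return "lower"
    return "medium" if side < 80 else "upper"

def definition_grade(score):
    if score > 60:
        return "lower"
    return "medium" if score > 30 else "upper"

def accept_as_sample(res_grade, integrity_grade, frontal_ok, def_grade):
    passing = {"upper", "medium"}
    return (res_grade in passing and integrity_grade in passing
            and frontal_ok and def_grade in passing)
```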
In one possible implementation, fig. 6 shows a schematic flow chart of determining a face recognition sample provided in an embodiment of the present application; in the step S100, if the matching meets the preset requirement, the target face image is determined as a face recognition sample, which specifically includes the following steps:
In step S1001, it is judged whether the result of the target facial image resolution meets the upper or medium grade of the facial resolution evaluation index; if it does, it meets the preset requirement.
Step S1002, judging whether the result of the face integrity meets the upper or medium grade of the face integrity evaluation index; if it does, it meets the preset requirement.
Step S1003, judging whether the frontal pose meets the upper or medium grade of the face pose evaluation index; if it does, it meets the preset requirement.
Step S1004, judging whether the training parameters meet the upper or medium grade of the face definition evaluation index; if it does, it meets the preset requirement.
In step S1005, if the resolution, the face integrity, the frontal pose and the training parameters of the target face image all meet the preset requirements, the target face image is determined as a face recognition sample.
In the specific implementation of steps S1001 to S1005, the machine learning model is used to judge whether the resolution, face integrity, frontal pose and training parameters of the target face image meet the grade requirements of the face resolution evaluation index, the face integrity evaluation index, the face pose evaluation index and the face definition evaluation index, respectively; if they do, the target face image is determined to be a face recognition sample.
Fig. 7 is a schematic structural diagram of a face screening device according to an embodiment of the present application, as shown in fig. 7, where the device includes:
the obtaining module 1101 performs affine transformation on feature points of any face image according to the face detection model, and obtains a target face image.
The first evaluation module 1102 calculates the resolution of the target facial image for the spatial distribution of the local features of the target facial image according to a facial resolution evaluation algorithm.
The second evaluation module 1103 calculates the face integrity of the target face image for the case where the key points of the target face image are masked according to the face integrity evaluation algorithm.
The third evaluation module 1104 determines whether the target face image is in a frontal pose according to a face pose evaluation algorithm.
The processing module 1105 calculates feature parameters of the target face image according to a face definition evaluation algorithm; the feature parameters of the target face image include pixel mean and variance.
The first calculation module 1106 performs normalization processing for each pixel of the target face image to obtain a new face image;
the second calculating module 1107 calculates, for each pixel of the new face image, a product of the pixel value and the pixel values of the 8 adjacent pixels in directions according to the center position of the pixel, and determines a feature parameter of the pixel according to the product, where the product of the pixel values in each direction respectively forms an image, and specifically calculates the direction includes: right side, left side, lower, upper, lower left, upper left, lower right, upper right;
Training parameters module 1108, training parameters of the new face image in the machine learning model according to the characteristic parameters of all pixels in 8 directions; wherein, the characteristic parameters of all pixels include: the shape of the pixel, the average value of the pixel darkness, the left variance of the pixel, the right variance of the pixel;
the index matching module 1109 matches the resolution of the target face image, the integrity of the face, and whether the target face image is in the face pose, and training parameters of the machine learning model according to the face image indexes including: face resolution evaluation index, face integrity evaluation index, face posture evaluation index, and face definition evaluation index;
and a determining module 1110, configured to determine the target facial image as a face recognition sample if the matching meets the preset requirement.
The apparatus provided by the embodiments of the present application may be specific hardware on a device, or software or firmware installed on a device. The apparatus has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where the apparatus embodiment is silent, reference may be made to the corresponding content of the method embodiments. It will be clear to those skilled in the art that, for convenience and brevity, the specific operation of the system, apparatus and units described above may refer to the corresponding process in the above method embodiment and is not described in detail here.
Corresponding to the face screening method in fig. 1, the embodiment of the present application further provides a computer device 120, as shown in fig. 8, comprising a memory 1201, a processor 1202, and a computer program stored in the memory 1201 and executable on the processor 1202, where the processor 1202, when executing the computer program, implements the following steps:
Carrying out affine transformation on the feature points of any face image according to the face detection model to obtain a target face image;
according to a face resolution evaluation algorithm, calculating the resolution of the target face image aiming at the spatial distribution of the local features of the target face image;
according to a face integrity evaluation algorithm, calculating the face integrity of the target face image aiming at the situation that key points of the target face image are shielded;
judging whether the target facial image is in the front face posture or not according to a facial posture evaluation algorithm;
calculating characteristic parameters of the target face image according to a face definition evaluation algorithm; the characteristic parameters of the target facial image comprise pixel mean values and variances;
carrying out normalization processing on each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, calculating the product of the pixel value and the pixel values of 8 adjacent pixels in the direction according to the central position of the pixel, and determining the characteristic parameters of the pixel according to the product, wherein the product of the pixel values in each direction respectively forms an image, and the specific calculation direction comprises the following steps: right side, left side, lower, upper, lower left, upper left, lower right, upper right;
Training parameters of the new face image in a machine learning model according to the characteristic parameters of all pixels in 8 directions; wherein, the characteristic parameters of all pixels include: the shape of the pixel, the average value of the pixel darkness, the left variance of the pixel, the right variance of the pixel;
matching the resolution of the target face image, the integrity of the face, the result of whether the target face image is in the face gesture or not, and training parameters of the machine learning model according to face image indexes, wherein the face image indexes comprise: face resolution evaluation index, face integrity evaluation index, face posture evaluation index, and face definition evaluation index;
and if the matching meets the preset requirement, determining the target face image as a face recognition sample.
Corresponding to the face screening method in fig. 1, the embodiment of the present application further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
carrying out affine transformation on the feature points of any face image according to the face detection model to obtain a target face image;
according to a face resolution evaluation algorithm, calculating the resolution of the target face image aiming at the spatial distribution of the local features of the target face image;
According to a face integrity evaluation algorithm, calculating the face integrity of the target face image aiming at the situation that key points of the target face image are shielded;
judging whether the target facial image is in the front face posture or not according to a facial posture evaluation algorithm;
calculating characteristic parameters of the target face image according to a face definition evaluation algorithm; the characteristic parameters of the target facial image comprise pixel mean values and variances;
carrying out normalization processing on each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, calculating the product of the pixel value and the pixel values of 8 adjacent pixels in the direction according to the central position of the pixel, and determining the characteristic parameters of the pixel according to the product, wherein the product of the pixel values in each direction respectively forms an image, and the specific calculation direction comprises the following steps: right side, left side, lower, upper, lower left, upper left, lower right, upper right;
training parameters of the new face image in a machine learning model according to the characteristic parameters of all pixels in 8 directions; wherein, the characteristic parameters of all pixels include: the shape of the pixel, the average value of the pixel darkness, the left variance of the pixel, the right variance of the pixel;
Matching the resolution of the target face image, the integrity of the face, the result of whether the target face image is in the face gesture or not, and training parameters of the machine learning model according to face image indexes, wherein the face image indexes comprise: face resolution evaluation index, face integrity evaluation index, face posture evaluation index, and face definition evaluation index;
and if the matching meets the preset requirement, determining the target face image as a face recognition sample.
In the embodiments of the present application, the computer program may further execute other machine readable instructions when executed by the processor to perform other methods described in the present application, and the specific implementation of the method steps and principles are referred to in the foregoing description and will not be described in detail herein.
In the embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that: like reference numerals and letters in the following figures denote like items, and thus once an item is defined in one figure, no further definition or explanation of it is required in the following figures, and furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not restricted to them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the corresponding technical solutions and are intended to be encompassed within the scope of this application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of face screening, the method comprising:
performing affine transformation on the feature points of any face image according to a face detection model to obtain a target face image;
calculating the resolution of the target face image according to a face resolution evaluation algorithm, based on the spatial distribution of local features of the target face image;
calculating the face integrity of the target face image according to a face integrity evaluation algorithm, based on whether key points of the target face image are occluded;
judging whether the target face image is in a frontal pose according to a face pose evaluation algorithm;
calculating feature parameters of the target face image according to a face sharpness evaluation algorithm, the feature parameters of the target face image comprising a pixel mean and a variance;
performing normalization processing on each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, with the pixel as the center, calculating the products of its pixel value and the pixel values of its 8 neighboring pixels, and determining the feature parameters of the pixel from the products, wherein the products in each direction respectively form an image, and the directions of calculation are: right, left, down, up, lower-left, upper-left, lower-right, and upper-right;
training parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the 8 directions, wherein the feature parameters of all pixels comprise: the shape of the pixels, the mean of the pixel gray levels, the left variance of the pixels, and the right variance of the pixels;
matching the resolution of the target face image, the face integrity, the result of whether the target face image is in a frontal pose, and the training parameters of the machine learning model against face image indexes, wherein the face image indexes comprise: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index, and a face sharpness evaluation index;
and if the matching meets a preset requirement, determining the target face image as a sample image of the face recognition sample library.
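As a hedged illustration of the normalization step above: the claim does not fix a normalization formula, so the sketch below assumes simple min-max scaling of a grayscale image. The function name and the [0, 1] target range are illustrative choices, not values from the patent.

```python
import numpy as np

def normalize_face(target_face: np.ndarray) -> np.ndarray:
    """Map the pixel values of a grayscale face image into [0, 1].
    Min-max scaling is an assumption; the claim only requires that
    each pixel be normalized to produce the new face image."""
    img = target_face.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                   # flat image: avoid division by zero
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)  # per-pixel normalized new face image
```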
2. The method according to claim 1, wherein performing affine transformation on the feature points of any face image according to the face detection model to obtain the target face image comprises:
performing face detection and feature point positioning on any face image according to the face detection model to obtain the coordinates of the face region and the feature points, the feature points comprising left-eye coordinates and right-eye coordinates;
and aligning the coordinates of the feature points to specified coordinate positions by affine transformation to obtain the face-aligned target face image.
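The alignment step of claim 2 (detect the eyes, then map their coordinates onto fixed template positions) is commonly realized with a similarity transform. A minimal sketch under that assumption follows, using OpenCV and NumPy; the 112x112 template size and the eye destination coordinates are hypothetical placeholders, not values from the patent.

```python
import cv2
import numpy as np

def align_face(image, left_eye, right_eye, size=112,
               left_dst=(38.0, 52.0), right_dst=(74.0, 52.0)):
    """Warp `image` so the detected eye coordinates land on template points.
    The template size and eye destinations are illustrative assumptions."""
    src = np.float32([left_eye, right_eye])
    dst = np.float32([left_dst, right_dst])
    # Fit rotation + uniform scale + translation from the two point pairs.
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    return cv2.warpAffine(image, M, (size, size))
```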
3. The method according to claim 1, wherein calculating feature parameters of the target face image according to the face sharpness evaluation algorithm, the feature parameters of the target face image comprising the pixel mean and the variance, comprises:
summing the pixel values of all pixels of the target face image, and obtaining the pixel mean of the target face image from the sum;
and determining the variance from the average of the differences between each pixel of the target face image and the pixel mean.
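A minimal sketch of the two statistics in claim 3. One hedge: the translated claim text says the variance comes from the average of the differences from the mean, while the sketch below uses the standard definition (mean of squared deviations); the squaring is an interpretation, not a quotation.

```python
import numpy as np

def sharpness_features(face: np.ndarray) -> tuple[float, float]:
    """Pixel mean and variance of a face image, per claim 3."""
    img = face.astype(np.float64)
    mean = img.sum() / img.size        # sum of all pixel values / pixel count
    var = ((img - mean) ** 2).mean()   # mean squared deviation (assumption)
    return float(mean), float(var)
```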
4. The method of claim 1, wherein, for each pixel of the new face image, calculating the products of the pixel value and the pixel values of the 8 neighboring pixels with the pixel as the center, and determining the feature parameters of the pixel from the products, comprises:
for each pixel of the new face image, with the pixel as the center, multiplying its pixel value by the pixel values of its neighbors in the eight directions of the new face image: right (HR), left (HL), down (VD), up (VU), lower-left (LD), upper-left (LU), lower-right (RD), and upper-right (RU);
and determining the products as the feature parameters of the pixel with respect to its neighbors in the 8 directions of the new face image, the feature parameters of the products comprising: shape, mean, left variance, and right variance.
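The eight directional product images of claim 4 can be sketched by shifting the normalized image one pixel per direction and multiplying element-wise. Edge replication at the image borders is an assumption here; the claim does not specify border handling.

```python
import numpy as np

# (row shift, column shift) per claim 4's direction labels
DIRECTIONS = {
    "HR": (0, 1),  "HL": (0, -1),   # right, left
    "VD": (1, 0),  "VU": (-1, 0),   # down, up
    "LD": (1, -1), "LU": (-1, -1),  # lower-left, upper-left
    "RD": (1, 1),  "RU": (-1, 1),   # lower-right, upper-right
}

def directional_products(face: np.ndarray) -> dict[str, np.ndarray]:
    """One product image per direction: pixel value times shifted neighbor."""
    padded = np.pad(face, 1, mode="edge")   # border handling is an assumption
    h, w = face.shape
    return {
        name: face * padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
        for name, (dr, dc) in DIRECTIONS.items()
    }
```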
5. The method of claim 1, wherein training parameters of the new face image in the machine learning model according to the feature parameters of all pixels in the 8 directions comprises:
inputting the feature parameters of all pixels in the 8 directions into the machine learning model;
calculating weight coefficients of the feature vectors corresponding to the feature parameters of all pixels;
and determining mapping scores of all pixel feature vectors according to the weight coefficients of the feature vectors, the mapping scores serving as the training parameters of the machine learning model.
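Claim 5 only requires weight coefficients and a mapping score; the linear weighting below is one hedged reading, with the array shapes and the mean aggregation chosen purely for illustration.

```python
import numpy as np

def mapping_score(feature_vectors: np.ndarray, weights: np.ndarray) -> float:
    """feature_vectors: (n_pixels, n_features); weights: (n_features,).
    Linear weighting and mean aggregation are assumptions, not patent text."""
    scores = feature_vectors @ weights   # weighted score per feature vector
    return float(scores.mean())          # aggregate score / training parameter
```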
6. The method of claim 1, wherein matching the resolution of the target face image, the face integrity, the result of whether the target face image is in a frontal pose, and the training parameters of the machine learning model against the face image indexes comprises:
performing index matching between the resolution result of the target face image and the face resolution evaluation index;
performing index matching between the face integrity result and the face integrity evaluation index;
performing index matching between the frontal-pose result and the face pose evaluation index;
and performing index matching between the training parameters of the machine learning model and the face sharpness evaluation index.
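Index matching in claim 6 amounts to comparing each quality measure against its evaluation index. The published text gives no concrete thresholds, so the values below are placeholders.

```python
def match_indexes(resolution, integrity, is_frontal, sharpness_score,
                  res_idx=0.5, int_idx=0.8, sharp_idx=0.6):
    """Compare each measure with its evaluation index (thresholds hypothetical)."""
    return (resolution >= res_idx and
            integrity >= int_idx and
            is_frontal and
            sharpness_score >= sharp_idx)
```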
7. The method of claim 1, wherein determining the target face image as a face recognition sample if the matching meets the preset requirement comprises:
determining the target face image as a face recognition sample if the resolution of the target face image, the face integrity result, the frontal-pose result, and the training parameters of the machine learning model all reach the matching grades of the preset requirement, the matching grades being high, medium, and low, respectively.
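Claim 7 names high, medium, and low matching grades without fixing their boundaries; the cut-offs in this sketch are hypothetical.

```python
def matching_grade(score: float) -> str:
    """Map a normalized match score to a grade (cut-offs are assumptions)."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"
```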
8. A face screening apparatus, the apparatus comprising:
an acquisition module, configured to perform affine transformation on the feature points of any face image according to a face detection model to obtain a target face image;
a first evaluation module, configured to calculate the resolution of the target face image according to a face resolution evaluation algorithm, based on the spatial distribution of local features of the target face image;
a second evaluation module, configured to calculate the face integrity of the target face image according to a face integrity evaluation algorithm, based on whether key points of the target face image are occluded;
a third evaluation module, configured to judge whether the target face image is in a frontal pose according to a face pose evaluation algorithm;
a processing module, configured to normalize the pixel value of each pixel of the target face image according to a face sharpness evaluation algorithm to obtain a new face image;
a first calculation module, configured to calculate feature parameters of the new face image, the feature parameters of the new face image comprising a pixel mean and a variance;
a second calculation module, configured to calculate, for each pixel of the new face image and with the pixel as the center, the products of its pixel value and the pixel values of its 8 neighboring pixels, and to determine the feature parameters of the pixel from the products, wherein the products in each direction respectively form an image, and the directions of calculation are: right, left, down, up, lower-left, upper-left, lower-right, and upper-right;
a training module, configured to train parameters of the new face image in the machine learning model according to the feature parameters of all pixels in the 8 directions, wherein the feature parameters of all pixels comprise: the shape of the pixels, the mean of the pixel gray levels, the left variance of the pixels, and the right variance of the pixels;
an index matching module, configured to match the resolution, the face integrity, the result of whether the target face image is in a frontal pose, and the training parameters of the machine learning model against face image indexes, wherein the face image indexes comprise: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index, and a face sharpness evaluation index;
and a determining module, configured to determine the target face image as a face recognition sample if the matching meets a preset requirement.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of claims 1 to 7.
CN202110831104.7A 2021-07-22 2021-07-22 Face screening method, device, equipment and storage medium Active CN113569694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110831104.7A CN113569694B (en) 2021-07-22 2021-07-22 Face screening method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113569694A (en) 2021-10-29
CN113569694B (en) 2024-03-19

Family

ID=78166336

Country Status (1)

Country Link
CN (1) CN113569694B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711268A (en) * 2018-12-03 2019-05-03 浙江大华技术股份有限公司 A kind of facial image screening technique and equipment
CN111862126A (en) * 2020-07-09 2020-10-30 北京航空航天大学 Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN112215831A (en) * 2020-10-21 2021-01-12 厦门市美亚柏科信息股份有限公司 Method and system for evaluating quality of face image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7185186B2 (en) * 2019-02-01 2022-12-07 ブラザー工業株式会社 Image processor, method for training machine learning model, and computer program


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant