CN113569694A - Face screening method, device, equipment and storage medium

Face screening method, device, equipment and storage medium

Info

Publication number
CN113569694A
Authority
CN
China
Prior art keywords
face
pixel
face image
image
target
Prior art date
Legal status
Granted
Application number
CN202110831104.7A
Other languages
Chinese (zh)
Other versions
CN113569694B (en)
Inventor
白刚
姜卫平
郭忠武
李国华
韩煜
王荣芳
Current Assignee
Beijing Bohui Technology Inc
Original Assignee
Beijing Bohui Technology Inc
Priority date
Filing date
Publication date
Application filed by Beijing Bohui Technology Inc
Priority to CN202110831104.7A
Publication of CN113569694A
Application granted
Publication of CN113569694B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30168: Image quality inspection

Abstract

The application provides a face screening method, apparatus, device and storage medium, applied in the technical field of visual image processing. The method performs affine transformation on the feature points of any face image according to a face detection model to obtain a target face image; calculates feature parameters of the target face image according to a face sharpness evaluation algorithm; normalizes each pixel of the target face image to obtain a new face image; for each pixel of the new face image, calculates the product of the pixel value and the pixel values of the 8 directionally adjacent pixels relative to the pixel's center position; trains the parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the 8 directions; matches the resolution, the face integrity, the result of whether the target face image is in a frontal face pose, and the training parameters of the machine learning model against face image indexes; and, if the matching meets a preset requirement, determines the target face image to be a face recognition sample.

Description

Face screening method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer vision image processing technologies, and in particular, to a face screening method, apparatus, device, and storage medium.
Background
With the popularization of face recognition technology, face recognition is used in ever more places. It is an active research direction in the fields of deep learning and pattern recognition and is widely applied in intelligent video surveillance, identity authentication, public security control, sensitive-person recognition and similar fields. Because of this wide application, the required recognition accuracy keeps rising, especially in identity authentication: whether in face recognition gates or face payment, recognition is closely tied to the safety of people's lives and property, and a recognition error can threaten personal and even public safety. At the present stage, face screening is performed on the basis of manually extracted features; manual feature extraction is slow, the extracted features are single, and robustness is poor, so the accuracy requirements of face recognition cannot be met. In addition, existing multi-dimensional face sample quality detection based on various features suffers from high application cost, low reproducibility, and problems such as misjudgment and missed judgment.
Disclosure of Invention
In view of this, the embodiments of the present application provide a face screening method that can accurately identify face images of different qualities, solving the technical problems of conventional manual face sample quality detection: slow speed, reliance on a single extracted feature, and poor robustness.
In a first aspect, an embodiment of the present application provides a face screening method, where the method includes:
carrying out affine transformation on the feature points of any facial image according to the face detection model to obtain a target facial image;
calculating the resolution of the target face image from the spatial distribution of its local features, according to a face resolution evaluation algorithm;
calculating the face integrity of the target face image from the occlusion status of its key points, according to a face integrity evaluation algorithm;
determining whether the target face image is in a frontal face pose, according to a face pose evaluation algorithm;
calculating feature parameters of the target face image according to a face sharpness evaluation algorithm, where the feature parameters of the target face image include a pixel mean and a variance;
normalizing each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, calculating the product of its pixel value and the pixel values of its 8 directionally adjacent pixels, relative to the pixel's center position, and determining the feature parameters of the pixel from these products, where the products for each direction together form an image and the directions are: right, left, down, up, lower left, upper left, lower right, and upper right;
training the parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the 8 directions, where the feature parameters of all pixels include: the shape of the pixel, the average brightness of the pixel, the left variance of the pixel, and the right variance of the pixel;
matching the resolution of the target face image, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against face image indexes, where the face image indexes include: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index, and a face sharpness evaluation index;
and if the matching meets the preset requirement, determining the target face image as a face recognition sample.
In some embodiments, performing affine transformation on feature points of any face image according to a face detection model to obtain a target face image, includes:
carrying out face detection and feature point positioning on any face image according to a face detection model to obtain coordinates of a face area and feature points, wherein the feature points comprise left-eye coordinates and right-eye coordinates;
and aligning the coordinates of the feature points to the specified coordinate position in the affine transformation to obtain a target face image after the face is aligned.
In some embodiments, calculating the feature parameters of the target face image according to a face sharpness evaluation algorithm, where the feature parameters of the target face image include a pixel mean and a variance, includes:
summing the pixel values of all pixels of the target face image, and taking the pixel mean of the target face image from the sum;
determining the variance from the average of the squared differences between each pixel of the target face image and the pixel mean.
In some embodiments, for each pixel of the new face image, calculating a product of the pixel value and pixel values of 8 direction-adjacent pixels according to the center position of the pixel, and determining a characteristic parameter of the pixel according to the product, comprises:
for each pixel of the new face image, calculating the product of the pixel value and pixel values of a plurality of pixels adjacent to the right side HR, the left side HL, the lower VD, the upper VU, the lower left LD, the upper left LU, the lower right RD and the upper right RU of the new face image in 8 directions according to the central position of the pixel;
determining the products as the feature parameters of the pixel with respect to its 8 directionally adjacent pixels in the new face image, where the feature parameters derived from the products include: shape, mean, left variance, and right variance.
In some embodiments, training the parameters of the new face image in the machine learning model according to the feature parameters of all pixels in the 8 directions includes:
inputting the characteristic parameters of all the pixels in the 8 directions into a machine learning model;
calculating the weight coefficient of the characteristic vector corresponding to the characteristic parameters of all the pixels;
and determining the mapping scores of all the pixel feature vectors according to the weight coefficients of the feature vectors, wherein the mapping scores are used as training parameters of the machine learning model.
In some embodiments, matching the resolution of the target face image, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against face image indexes includes:
performing index matching on the resolution result of the target face image and the face resolution evaluation index;
matching the result of the face integrity with the evaluation index of the face integrity;
matching the result of the face posture with the face posture evaluation index;
and performing index matching on the training parameters of the machine learning model and the facial definition evaluation indexes.
In some embodiments, if the matching meets a preset requirement, determining the target facial image as a face recognition sample includes:
and if the resolution of the target face image, the face integrity, the face pose result and the training parameters of the machine learning model all meet the matching grades of the preset requirement, determining the target face image as a face recognition sample, where the matching grades are high, medium, and low.
In a second aspect, an embodiment of the present application provides a facial screening apparatus, including:
the acquisition module performs affine transformation on the feature points of any facial image according to the face detection model to obtain a target facial image;
the first evaluation module is used for calculating the resolution of the target face image according to the spatial distribution of the local features of the target face image by using a face resolution evaluation algorithm;
the second evaluation module is used for calculating the face integrity of the target face image aiming at the condition that the key points of the target face image are shielded according to a face integrity evaluation algorithm;
the third evaluation module determines whether the target face image is in a frontal face pose according to a face pose evaluation algorithm;
the processing module is used for calculating the characteristic parameters of the target face image according to a face definition evaluation algorithm; the feature parameters of the target facial image include a pixel mean and a variance;
the first calculation module is used for carrying out normalization processing on each pixel of the target face image to obtain a new face image;
the second calculation module, for each pixel of the new face image, calculates the product of the pixel value and the pixel values of the 8 directionally adjacent pixels relative to the pixel's center position, and determines the feature parameters of each pixel from these products, where the products for each direction together form an image and the directions are: right, left, down, up, lower left, upper left, lower right, and upper right;
the training parameter module is used for training the training parameters of the new face image in the machine learning model according to the characteristic parameters of all pixels in 8 directions; wherein the characteristic parameters of all the pixels comprise: the shape of the pixel, the average value of the brightness of the pixel, the left variance of the pixel, and the right variance of the pixel;
an index matching module that matches the resolution of the target face image, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against face image indexes, the face image indexes including: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index and a face sharpness evaluation index;
and the determining module is used for determining the target face image as a face recognition sample if the matching meets the preset requirement.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the face screening method according to any one of claims 1 to 7 when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the face screening method.
The beneficial effects of this application mainly lie in: the method fuses a face detection model with a machine learning model, trains the model on the feature points of the new face image through multiple algorithms, and matches the computed resolution, face integrity and frontal-pose result of the target face image, together with the training parameters of the model, against the face image indexes; if the computed results and the training parameters meet the preset requirements, the target face image is determined to be a face recognition sample, which serves as the quality check for face images to be stored in the library.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 shows a schematic flow chart of a face screening method provided in an embodiment of the present application.
Fig. 2 is a schematic flow chart illustrating a process of obtaining a new face image according to feature parameters of a target face image according to an embodiment of the present application.
Fig. 3 shows a schematic flowchart for calculating a product of 8 directional neighboring pixel values according to an embodiment of the present application.
Fig. 4 shows a schematic flow chart for acquiring training parameters according to an embodiment of the present application.
Fig. 5 shows a schematic diagram of an index matching process provided in the embodiment of the present application.
Fig. 6 is a schematic flow chart illustrating a process of determining a face recognition sample according to an embodiment of the present application.
Fig. 7 shows a schematic structural diagram of a facial screening apparatus according to an embodiment of the present application.
Fig. 8 shows a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations and, thus, the following detailed description of the embodiments of the present application, which is provided in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
A Deep Convolutional Neural Network (DCNN) is a pattern recognition model that has been applied successfully in image processing. It characterizes and quantifies image data by convolution, replacing manual feature extraction with unsupervised or semi-supervised feature learning and hierarchical features. When a DCNN is applied to face recognition training, the quality of the face sample library determines the usability and reliability of the training, so guaranteeing a high-quality face sample library is a key problem that directly affects the effect and experience of all kinds of face recognition applications. The present application integrates deep learning with traditional image processing technology to detect face sample quality from multi-dimensional features, effectively improving the usability and reliability of 1-to-N or N-to-N face recognition training and further extending the same technique to different application scenarios.
According to the method, affine transformation is performed on the feature points of any face image according to a face detection model to obtain a target face image; the resolution of the target face image is calculated from the spatial distribution of its local features according to a face resolution evaluation algorithm; the face integrity of the target face image is calculated from the occlusion status of its key points according to a face integrity evaluation algorithm; whether the target face image is in a frontal face pose is determined according to a face pose evaluation algorithm; and the feature parameters of the target face image are calculated according to a face sharpness evaluation algorithm. Each pixel of the target face image is normalized to obtain a new face image; for each pixel of the new face image, the product of its pixel value and the pixel values of its 8 directionally adjacent pixels is calculated relative to the pixel's center position; the training parameters of the new face image in a machine learning model are trained according to the feature parameters of all pixels in the 8 directions; the resolution, the face integrity and the frontal-pose result, together with the training parameters of the machine learning model, are matched against the face image indexes; and if the matching meets the preset requirement, the target face image is determined to be a face recognition sample. Specifically, a convolutional-neural-network face detection model performs affine transformation on the feature points of any face image to obtain the target face image; this design recognizes face images without contact and with high precision, and its fine-grained feature point analysis makes it particularly suitable for liveness detection and recognition, bringing image processing close to human-level performance. Then, from the computed resolution, face integrity and frontal pose of the target face image, the face quality is checked through several strategy algorithms, achieving accurate recognition of faces of different qualities. Each pixel of the target face image is normalized to obtain the new face image; the training parameters of the new face image in the machine learning model are trained from the feature parameters formed by the products of each pixel with its 8 directionally adjacent pixels; and if the computed results and the training parameters meet the preset requirements, the target face image is determined to be a face recognition sample, serving as the quality check for face images to be stored in the library. The method thus achieves high face recognition accuracy and low application cost, is reproducible, and avoids problems such as misjudgment and missed judgment.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a schematic flow chart of a face screening method provided in an embodiment of the present application; as shown in Fig. 1, the face screening method specifically includes the following steps:
and step S10, performing affine transformation on the feature points of any face image according to the face detection model to obtain a target face image.
When step S10 is specifically implemented, face detection and feature point positioning are performed on any face image according to the face detection model, and the feature point coordinates are aligned to the specified coordinates through affine transformation to obtain an aligned target face image.
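As an illustration of this alignment step, the following is a minimal sketch assuming OpenCV is used and that the face detection model returns left-eye and right-eye coordinates; the output size and the template eye positions are assumed values, not taken from the patent:

```python
import cv2
import numpy as np

def align_face(image, left_eye, right_eye, out_size=(112, 112)):
    """Warp a face so the detected eyes land on fixed template positions."""
    # Assumed template positions of the two eyes in the aligned crop.
    dst = np.float32([[38.0, 52.0], [74.0, 52.0]]).reshape(-1, 1, 2)
    src = np.float32([left_eye, right_eye]).reshape(-1, 1, 2)
    # Two point correspondences fully determine a similarity transform
    # (rotation + uniform scale + translation).
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    return cv2.warpAffine(image, matrix, out_size)
```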
Step S20, calculating the resolution of the target face image with respect to the spatial distribution of the local features of the target face image according to a face resolution evaluation algorithm.
In specific implementation, step S20 calculates an evaluation value of the target face image resolution with respect to the spatial distribution of the local features of the target face image, based on a face resolution evaluation algorithm.
Step S30, according to the face integrity evaluation algorithm, the face integrity of the target face image is calculated for the case where the key points of the target face image are occluded.
When the step S30 is implemented specifically, according to the face integrity evaluation algorithm, the face integrity of the target face image is determined according to the condition that 5 key points of the target face image are occluded, where the 5 key points are: left eye center, right eye center, nose tip, left mouth corner, right mouth corner.
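A minimal sketch of this check, assuming the landmark detector reports a confidence per key point and that a confidence below an (assumed) threshold marks the point as occluded:

```python
def count_occluded_keypoints(keypoint_confidences, threshold=0.5):
    """Count occluded key points among the 5: left eye center, right eye
    center, nose tip, left mouth corner, right mouth corner.

    keypoint_confidences: dict mapping key point name -> detector confidence.
    The 0.5 cutoff is an assumption, not a value from the patent.
    """
    return sum(1 for conf in keypoint_confidences.values() if conf < threshold)
```

The resulting count is what is later matched against the face integrity grade table in step S902.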
In step S40, it is determined whether or not the target face image is in the face pose according to the face pose evaluation algorithm.
In specific implementation, step S40 extracts the three pose angles of the target face image, namely the pitch angle, the yaw angle and the roll angle, according to the face pose evaluation algorithm, and determines from thresholds on these three angles whether the face pose orientation of the target face image is side-facing, looking up, or looking down.
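A sketch of the threshold test, assuming angles in degrees; the 20-degree limits and the sign convention for pitch are illustrative assumptions, since the patent's actual thresholds appear only in a table image:

```python
def is_frontal(pitch, yaw, roll, limit=20.0):
    """Return True if the face pose counts as frontal under assumed limits."""
    if abs(yaw) > limit:       # side-facing
        return False
    if abs(pitch) > limit:     # looking up or down (sign convention assumed)
        return False
    return abs(roll) <= limit  # excessive in-plane tilt rejected
```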
Step S50, calculating the characteristic parameters of the target face image according to the face definition evaluation algorithm; the feature parameters of the target face image include a pixel mean and a variance.
In specific implementation, step S50 sums the pixel values of all pixels of the target face image according to the face sharpness evaluation algorithm and takes the mean, then computes the mean of the squared differences between each pixel value and that mean, which gives the variance.
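A minimal numpy sketch of these two statistics over a grayscale face image:

```python
import numpy as np

def sharpness_stats(gray_face):
    """Pixel mean and variance of the target face image (2-D numpy array)."""
    mean = gray_face.mean()                      # sum of pixel values / count
    variance = ((gray_face - mean) ** 2).mean()  # mean squared deviation
    return mean, variance
```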
Step S60, a new face image is obtained by normalizing each pixel of the target face image.
Step S60, when embodied, normalizes the target face image to a new face image of a specified standard form according to each pixel of the target face image.
Step S70, for each pixel of the new face image, calculates the product of the pixel value and the pixel values of the 8 direction-adjacent pixels according to the center position of the pixel, and determines the feature parameter of the pixel according to the product.
In specific implementation, step S70 calculates, for each pixel of the new face image and relative to the pixel's center position, the products between the pixel and its 8 adjacent pixels in the right, left, down, up, lower-left, upper-left, lower-right and upper-right directions, and determines the calculated products as the feature parameters of the pixel.
Step S80, training the parameters of the new face image in the machine learning model according to the feature parameters of all pixels in the 8 directions, where the feature parameters of all pixels include: the shape of the pixel, the average brightness of the pixel, the left variance of the pixel, and the right variance of the pixel.
In the specific implementation of step S80, the feature parameters of all pixels are substituted into the linear equation system of the machine learning model to obtain a basic solution system, i.e. the feature vectors corresponding to the feature parameters of all pixels; the mapping scores of the feature vectors of all pixels are then calculated from these feature vectors, and the mapping scores are used as the training parameters of the machine learning model.
Step S90, matching the resolution, the face integrity and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against the face image indexes.
In specific implementation, step S90 matches the resolution of the target face image, the face integrity, the result of whether the target face image is in a frontal face pose, and the training parameters of the machine learning model against the face resolution evaluation index, the face integrity evaluation index, the face pose evaluation index and the face sharpness evaluation index among the face image indexes.
And step S100, if the matching meets the preset requirement, determining the target face image as a face recognition sample.
In the specific implementation of step S100, if the resolution and face integrity of the target face image, the result of whether it is in a frontal face pose, and the training parameters of the machine learning model meet the matching grades of the preset requirement, the target face image is determined to be a face recognition sample and stored in the sample library of the face recognition system.
In a possible implementation, in step S10, performing affine transformation on feature points of any face image according to the face detection model to obtain a target face image, includes:
step 101, performing face detection and feature point positioning on any face image according to a face detection model to obtain coordinates of a face area and feature points, wherein the feature points comprise left eye coordinates and right eye coordinates.
In specific implementation, step 101 performs face detection and feature point positioning of a face image on any face image according to a face detection model of a convolutional neural network to obtain coordinates of feature points of each face region of the face image, where the feature points include left-eye coordinates and right-eye coordinates.
And 102, aligning the coordinates of the feature points to the specified coordinate position in the affine transformation to obtain a target face image after the face is aligned.
In specific implementation, step 102 aligns the left-eye and right-eye coordinates of the feature points to the coordinate positions specified by the affine transformation to obtain the target face image after face alignment.
In one possible implementation, in step S20, the calculating the resolution of the target face image according to the face resolution evaluation algorithm with respect to the spatial distribution of the local features of the target face image includes:
in specific implementation, step 20 calculates an evaluation value of the resolution of the local feature of the target face image according to a face resolution evaluation algorithm with respect to the spatial distribution of the local feature of the target face image.
In one possible implementation, in step S30, calculating the face integrity of the target face image for the case that the key points of the target face image are occluded according to the face integrity evaluation algorithm includes:
in specific implementation, step 30 calculates the face integrity of the target face image according to the number of the masked key points of the target face image according to the face posture evaluation algorithm, wherein the key points include a left eye center, a right eye center, a nose tip, a left mouth angle and a right mouth angle.
In one possible implementation, the determining whether the target face image is in the face pose according to the face pose evaluation algorithm in step S40 includes:
in specific implementation, the step 40 determines, according to a face posture evaluation algorithm, whether the target face image is in a face posture with respect to a face posture orientation of the target face image, where the face posture orientation includes: side facing, looking up, looking down.
In one possible implementation, Fig. 2 shows a schematic flow chart of obtaining a new face image from the feature parameters of the target face image provided by an embodiment of the present application; in step S50, calculating the feature parameters of the target face image according to the face sharpness evaluation algorithm, where the feature parameters of the target face image include a pixel mean and a variance, includes:
step S501, summing the pixel values of each pixel of the target face image, and taking the pixel average value of the new face image after summing.
Step S502, determining the variance from the average of the squared differences between each pixel of the target face image and the pixel mean.
In specific implementation, steps S501 and S502 sum the pixel values of all pixels of the target face image according to the face sharpness evaluation algorithm and take the mean, then compute the mean of the squared differences between each pixel value and that mean to obtain the variance.
In one possible implementation, in step S60, performing normalization processing on each pixel of the target face image to obtain a new face image, including:
step S601 normalizes the target face image into a new face image of a specified standard with respect to the pixel value of each pixel of the target face image.
In specific implementation, step S60 enlarges or reduces the length and width of the target face image by linear normalization over each pixel, normalizing the target face image into a new face image of a specified standard according to the linear properties of the image;
the pixel values of the new face image are calculated by the following formula:

Î(i, j) = (I(i, j) − μ(i, j)) / (σ(i, j) + C)

where Î(i, j) is the resulting pixel value of the new face image, I(i, j) represents the pixel value at position (i, j) of the target face image, μ(i, j) the mean at position (i, j), σ(i, j) the variance at position (i, j), C a constant, i the horizontal pixel index, and j the vertical pixel index.
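A sketch of this normalization, assuming the local mean and variance are taken over a small square window via box filtering; the window size and the constant C are assumed values:

```python
import cv2
import numpy as np

def normalize_face(gray_face, window=7, C=1e-3):
    """Local mean/variance normalization following the formula above."""
    img = gray_face.astype(np.float32)
    mu = cv2.blur(img, (window, window))                   # local mean mu(i, j)
    var = cv2.blur(img * img, (window, window)) - mu * mu  # local variance sigma(i, j)
    var = np.maximum(var, 0.0)                             # guard against rounding error
    return (img - mu) / (var + C)
```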
In one possible implementation, fig. 3 shows a schematic flow chart of calculating a product of 8 directional neighboring pixel values provided by an embodiment of the present application; in step S70, for each pixel of the new face image, calculating a product of the pixel value and pixel values of 8 direction-adjacent pixels according to the center position of the pixel, and determining a feature parameter of the pixel according to the product, specifically includes the following steps:
step S701, aiming at each pixel of the new face image, calculating the product of the pixel value and the pixel values of a plurality of pixels adjacent to the right side HR, the left side HL, the lower VD, the upper VU, the lower left LD, the upper left LU, the lower right RD and the upper right RU of the new face image in 8 directions according to the central position of the pixel; the specific calculation formula is as follows:
Figure BDA0003175590750000141
Figure BDA0003175590750000142
Figure BDA0003175590750000143
Figure BDA0003175590750000144
Figure BDA0003175590750000145
Figure BDA0003175590750000146
Figure BDA0003175590750000147
Figure BDA0003175590750000148
wherein the content of the first and second substances,
Figure BDA0003175590750000149
pixel values representing the new face image, and pixels located at right HR, left HL, below VD, above VU, below LD, above LU, below RD, and above RU of the pixel
Figure BDA00031755907500001410
Pixel
Figure BDA00031755907500001411
Pixel
Figure BDA00031755907500001412
Pixel
Figure BDA00031755907500001413
Pixel
Figure BDA00031755907500001414
Pixel
Figure BDA00031755907500001415
Pixel
Figure BDA00031755907500001416
Pixel
Figure BDA00031755907500001417
In step S702, the products are determined as the feature parameters of the pixel with respect to its 8 directionally adjacent pixels.
When steps S701 and S702 are specifically implemented, for each pixel of the new face image, the products of its pixel value with the pixel values of the adjacent pixels in the 8 directions right HR, left HL, down VD, up VU, lower-left LD, upper-left LU, lower-right RD and upper-right RU are calculated relative to the pixel's center coordinates according to the face sharpness algorithm, and the products are determined as the feature parameters of the pixel with respect to its 8 directionally adjacent pixels, where the feature parameters include: shape, mean, left variance, and right variance.
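A vectorized numpy sketch of the eight direction products; border pixels are dropped so that every remaining pixel has all 8 neighbors (how the patent handles image borders is not stated):

```python
import numpy as np

def direction_products(norm_face):
    """Products of each pixel of the normalized face with its 8 neighbors.

    norm_face: 2-D array with rows as the vertical axis j and columns as
    the horizontal axis i. Returns one product image per direction.
    """
    f = norm_face
    c = f[1:-1, 1:-1]            # center pixels
    return {
        "HR": c * f[1:-1, 2:],   # right neighbor
        "HL": c * f[1:-1, :-2],  # left neighbor
        "VD": c * f[2:, 1:-1],   # neighbor below
        "VU": c * f[:-2, 1:-1],  # neighbor above
        "LD": c * f[2:, :-2],    # lower-left neighbor
        "LU": c * f[:-2, :-2],   # upper-left neighbor
        "RD": c * f[2:, 2:],     # lower-right neighbor
        "RU": c * f[:-2, 2:],    # upper-right neighbor
    }
```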
In a possible implementation, fig. 4 shows a schematic flow chart of acquiring training parameters provided in the embodiment of the present application; in the step S80, training parameters of the new face image in the machine learning model according to the feature parameters of all pixels in 8 directions includes the following steps:
in step S801, the feature parameters of all the pixels in 8 directions are input to the machine learning model.
Step S802, calculating weight coefficients of feature vectors corresponding to the feature parameters of all pixels.
In step S803, a mapping score of all pixel feature vectors is determined according to the weight coefficient of the feature vector, and the mapping score is used as a training parameter of the machine learning model.
In the specific implementation of steps S801, S802 and S803, the feature parameters of all pixels in the 8 directions of the new face image are input into the machine learning model; a basic solution system is obtained through the linear equation system in the machine learning model, giving the feature vectors of the feature parameters; the weight coefficients of the feature vectors corresponding to the feature parameters of all pixels are calculated; and the mapping scores of the feature vectors of all pixels are output as the training parameters of the machine learning model;
for example, the input is: T = {x1, x2, x3, …, x101, x102}; Score = 99;
where T represents a feature vector of the new face image, x1 to x102 represent the dimensions of the feature vector, and Score represents the mapping score;
a kernel function K(T, z) and a penalty parameter C > 0 are selected, and the following is constructed and solved:
Y = K(T, z) + C;
where T in K(T, z) represents the feature vector, z represents the function coefficients, and Y represents the mapping score. An equation system is then constructed from the feature vectors and the mapping scores:
Y1 = K(T1, z) + C
Y2 = K(T2, z) + C
...
Yn = K(Tn, z) + C
The input feature vectors have 102 dimensions, corresponding to the 102-dimensional feature parameters; with the penalty parameter C added, the equation system contains 103 parameters. The estimates of the feature parameters x in the kernel function K(T, z) and of the penalty parameter C are computed by least-squares estimation, yielding the weight coefficients of the feature vectors of the new face image.
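A minimal sketch of this estimation step, reading K(T, z) as a linear kernel (the dot product of the feature vector T with coefficients z), so that Y = T·z + C; this linear reading and the use of numpy's least-squares solver are assumptions:

```python
import numpy as np

def fit_score_model(features, scores):
    """Least-squares estimate of the 102 weights z and the parameter C.

    features: (n_samples, 102) array of feature vectors T.
    scores:   (n_samples,) array of mapping scores Y.
    """
    n = features.shape[0]
    # A column of ones lets the solver estimate C as well
    # (102 weights + C = 103 parameters, as in the description).
    design = np.hstack([features, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(design, scores, rcond=None)
    return params[:-1], params[-1]   # z, C

def mapping_score(T, z, C):
    """Score a new face: Y = K(T, z) + C with the assumed linear kernel."""
    return float(T @ z + C)
```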
In one possible implementation, fig. 5 shows a schematic diagram of an index matching process provided in an embodiment of the present application; in the step S90, matching the resolution, the face integrity, the result of whether the target face image is in the face pose, and the training parameters of the machine learning model according to the face image index includes the following steps:
step S901 performs index matching between the result of the target face image resolution and the face resolution evaluation index.
In specific implementation, step S901 performs index matching between the target face image resolution and the face resolution evaluation index preset in the machine learning model, and determines which grade of the face resolution evaluation index the resolution meets, the grades being high, medium, and low, as shown in the following table:

Face resolution            Grade of face resolution evaluation index
< 50×50                    Low
≥ 50×50 and < 80×80        Medium
≥ 80×80                    High
And step S902, performing index matching on the result of the face integrity and the face integrity evaluation index.
In specific implementation, step S902 performs index matching between the number of occluded key points of the target face image and the face integrity evaluation index preset in the machine learning model, and determines which grade of the face integrity evaluation index the face integrity meets, the grades being high, medium, and low, as shown in the following table:

Number of occluded key points    Grade of face integrity evaluation index
≥ 3                              Low
> 1 and < 3                      Medium
< 1                              High
Step S903, the result of the face posture is matched with the face posture evaluation index.
In specific implementation, step S903 performs index matching between the angles of the face pose orientation and the face pose evaluation index preset in the machine learning model, and determines which grade of the face pose evaluation index the face pose meets, the grades being high, medium, and low. (The table mapping the pose angle thresholds to the face pose evaluation grades appears only as an image in the source.)
and step S904, performing index matching on the training parameters of the machine learning model and the face definition evaluation indexes.
In specific implementation, step S904 performs index matching between the quality score corresponding to the training parameters of the local area images in the machine learning model and the face sharpness evaluation index, and determines which grade of the face sharpness evaluation index the training parameters meet, the grades being high, medium, and low, as shown in the following table:

Face sharpness score    Grade of face sharpness evaluation index
> 60                    Low
> 30 and ≤ 60           Medium
≤ 30                    High
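The three tables above can be encoded directly; a sketch follows. The integrity table leaves the case of exactly one occluded key point unassigned, so the sketch treats it as medium, which is an assumption:

```python
def resolution_grade(width, height):
    """Grade per the face resolution table above."""
    if width >= 80 and height >= 80:
        return "high"
    if width >= 50 and height >= 50:
        return "medium"
    return "low"

def integrity_grade(occluded_keypoints):
    """Grade per the face integrity table above (1 occluded point assumed medium)."""
    if occluded_keypoints >= 3:
        return "low"
    if occluded_keypoints >= 1:
        return "medium"
    return "high"

def sharpness_grade(score):
    """Grade per the face sharpness table above."""
    if score > 60:
        return "low"
    if score > 30:
        return "medium"
    return "high"
```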
In one possible implementation, fig. 6 shows a schematic flow chart of determining a face recognition sample provided by an embodiment of the present application; in the step S100, if the matching meets the preset requirement, the target face image is determined to be a face recognition sample, which specifically includes the following steps:
step S1001, determining whether the result of the target face image resolution meets the upper or middle level of the face resolution evaluation index, and if so, meeting a preset requirement.
Step S1002, determining whether the result of the face integrity meets the first-class or medium-class face integrity evaluation index, and if so, meeting a preset requirement.
Step S1003, determining whether the face posture meets the first-class or medium-class face posture evaluation index, and if yes, meeting the preset requirement.
Step S1004, determining whether the training parameter meets the first or medium level of the face sharpness evaluation index, and if so, meeting a preset requirement.
Step S1005, if the resolution, the face integrity, the face pose and the training parameters of the target face image all meet the preset requirements, determining the target face image as a face recognition sample.
In specific implementation, steps S1001 to S1005 use the machine learning model to determine whether the resolution, the face integrity, the face pose and the training parameters of the target face image respectively meet the grade requirements of the face resolution evaluation index, the face integrity evaluation index, the face pose evaluation index and the face sharpness evaluation index; if they do, the target face image is determined to be a face recognition sample.
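Steps S1001 to S1005 reduce to requiring at least the medium grade on every index; a one-function sketch, with grade names as in the tables above:

```python
def is_face_recognition_sample(resolution_grade, integrity_grade,
                               pose_grade, sharpness_grade):
    """Accept the target face image only if every index is high or medium."""
    acceptable = {"high", "medium"}
    return all(g in acceptable for g in (
        resolution_grade, integrity_grade, pose_grade, sharpness_grade))
```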
Fig. 7 is a schematic structural diagram of a facial screening apparatus according to an embodiment of the present application, and as shown in fig. 7, the apparatus includes:
the obtaining module 1101 performs affine transformation on feature points of any face image according to the face detection model to obtain a target face image.
The first evaluation module 1102 calculates the resolution of the target face image with respect to the spatial distribution of the local features of the target face image according to a face resolution evaluation algorithm.
The second evaluation module 1103 calculates the face integrity of the target face image in accordance with a face integrity evaluation algorithm for the case where the key points of the target face image are occluded.
The third evaluation module 1104 determines whether the target face image is in a frontal facial pose according to a facial pose evaluation algorithm.
The processing module 1105, according to the face definition evaluation algorithm, calculates the feature parameters of the target face image; the feature parameters of the target face image include a pixel mean and a variance.
A first calculation module 1106, which performs normalization processing on each pixel of the target face image to obtain a new face image;
the second calculating module 1107, for each pixel of the new face image, calculates the product of the pixel value and the pixel values of the 8 directionally adjacent pixels relative to the pixel's center position, and determines the feature parameters of the pixel from these products, where the products for each direction together form an image and the directions are: right, left, down, up, lower left, upper left, lower right, and upper right;
the training parameter module 1108 is used for training the training parameters of the new facial image in the machine learning model according to the characteristic parameters of all the pixels in the 8 directions; wherein, the characteristic parameters of all pixels comprise: the shape of the pixel, the average value of the brightness of the pixel, the left variance of the pixel, and the right variance of the pixel;
the index matching module 1109 matches the resolution of the target face image, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against the face image indexes, which include: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index and a face sharpness evaluation index;
the determining module 1110 determines the target facial image as a face recognition sample if the matching meets the preset requirement.
The apparatus provided in the embodiments of the present application may be specific hardware on a device, or software or firmware installed on a device, etc. The device provided by the embodiment of the present application has the same implementation principle and technical effect as the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments where no part of the device embodiments is mentioned. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the foregoing systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Corresponding to the face screening method in Fig. 1, an embodiment of the present application further provides a computer device 120. As shown in Fig. 8, the device includes a memory 1201, a processor 1202, and a computer program stored in the memory 1201 and executable on the processor 1202, where the processor 1202 performs the following steps when executing the computer program:
Carrying out affine transformation on the feature points of any facial image according to the face detection model to obtain a target facial image;
calculating the resolution of the target face image from the spatial distribution of its local features, according to a face resolution evaluation algorithm;
calculating the face integrity of the target face image from the occlusion status of its key points, according to a face integrity evaluation algorithm;
determining whether the target face image is in a frontal face pose, according to a face pose evaluation algorithm;
calculating feature parameters of the target face image according to a face sharpness evaluation algorithm, where the feature parameters of the target face image include a pixel mean and a variance;
normalizing each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, calculating the product of its pixel value and the pixel values of its 8 directionally adjacent pixels, relative to the pixel's center position, and determining the feature parameters of the pixel from these products, where the products for each direction together form an image and the directions are: right, left, down, up, lower left, upper left, lower right, and upper right;
training the parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the 8 directions, where the feature parameters of all pixels include: the shape of the pixel, the average brightness of the pixel, the left variance of the pixel, and the right variance of the pixel;
matching the resolution of the target face image, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against face image indexes, where the face image indexes include: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index, and a face sharpness evaluation index;
and if the matching meets the preset requirement, determining the target face image as a face recognition sample.
Corresponding to the face screening method in fig. 1, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the following steps:
carrying out affine transformation on the feature points of any facial image according to the face detection model to obtain a target facial image;
calculating the resolution of the target face image from the spatial distribution of its local features, according to a face resolution evaluation algorithm;
calculating the face integrity of the target face image from the occlusion status of its key points, according to a face integrity evaluation algorithm;
determining whether the target face image is in a frontal face pose, according to a face pose evaluation algorithm;
calculating feature parameters of the target face image according to a face sharpness evaluation algorithm, where the feature parameters of the target face image include a pixel mean and a variance;
normalizing each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, calculating the product of its pixel value and the pixel values of its 8 directionally adjacent pixels, relative to the pixel's center position, and determining the feature parameters of the pixel from these products, where the products for each direction together form an image and the directions are: right, left, down, up, lower left, upper left, lower right, and upper right;
training the parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the 8 directions, where the feature parameters of all pixels include: the shape of the pixel, the average brightness of the pixel, the left variance of the pixel, and the right variance of the pixel;
matching the resolution of the target face image, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against face image indexes, where the face image indexes include: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index, and a face sharpness evaluation index;
and if the matching meets the preset requirement, determining the target face image as a face recognition sample.
In the embodiments of the present application, the computer program, when executed by a processor, may also execute other machine-readable instructions to perform the other methods described in the present application; for the specific implementation steps and principles, reference is made to the description above, which is not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art can still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of their technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of face screening, the method comprising:
carrying out affine transformation on feature points of an arbitrary face image according to a face detection model to obtain a target face image;
calculating, according to a face resolution evaluation algorithm, the resolution of the target face image based on the spatial distribution of local features of the target face image;
calculating, according to a face integrity evaluation algorithm, the face integrity of the target face image based on whether key points of the target face image are occluded;
determining whether the target face image is in a frontal face pose according to a face pose evaluation algorithm;
calculating feature parameters in the target face image according to a face sharpness evaluation algorithm, the feature parameters of the target face image including a pixel mean and a variance;
normalizing each pixel of the target face image to obtain a new face image;
for each pixel of the new face image, calculating, with the pixel as the center, the product of its pixel value and the pixel value of the adjacent pixel in each of 8 directions, and determining the feature parameters of the pixel according to the products, wherein the products for each direction respectively form an image, and the 8 directions are: right, left, down, up, lower-left, upper-left, lower-right, and upper-right;
obtaining training parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the 8 directions, wherein the feature parameters of all the pixels comprise: the pixel shape, the pixel brightness mean, the pixel left variance, and the pixel right variance;
matching the resolution, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against face image indexes, the face image indexes comprising: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index, and a face sharpness evaluation index;
and if the matching meets a preset requirement, determining that the target face image is a face recognition sample library image.
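As an illustration of the normalization step in claim 1 (which no dependent claim elaborates), the following minimal Python sketch applies min-max normalization to obtain the new face image; the patent does not fix the normalization scheme, so the [0, 1] min-max choice and the function name are assumptions.

```python
import numpy as np

def normalize(face: np.ndarray) -> np.ndarray:
    # Min-max normalization of every pixel to [0, 1] is assumed here;
    # the claim only requires per-pixel "normalization processing".
    pixels = face.astype(np.float64)
    lo, hi = pixels.min(), pixels.max()
    if hi == lo:  # flat image: avoid division by zero
        return np.zeros_like(pixels)
    return (pixels - lo) / (hi - lo)
```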
2. The method of claim 1, wherein carrying out affine transformation on the feature points of the arbitrary face image according to the face detection model to obtain the target face image comprises:
performing face detection and feature point positioning on the arbitrary face image according to the face detection model to obtain a face region and coordinates of feature points, the feature points including left-eye coordinates and right-eye coordinates;
and aligning the coordinates of the feature points to specified coordinate positions through affine transformation to obtain a face-aligned target face image.
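A minimal sketch of such eye-based alignment, assuming OpenCV and a 112x112 output template; the template eye positions, output size, and function name are illustrative assumptions, not values from the patent.

```python
import numpy as np
import cv2

def align_face(image, left_eye, right_eye, out_size=(112, 112),
               tpl_left=(38.3, 51.7), tpl_right=(73.5, 51.7)):
    # Estimate a similarity transform (rotation, scale, translation)
    # that maps the detected eye coordinates onto fixed template
    # positions, then warp the whole image with it.
    src = np.float32([left_eye, right_eye])
    dst = np.float32([tpl_left, tpl_right])
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    return cv2.warpAffine(image, matrix, out_size)

# Usage (hypothetical eye coordinates from a face detector):
# aligned = align_face(cv2.imread("face.jpg"), (120, 150), (182, 148))
```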
3. The method of claim 1, wherein calculating the feature parameters in the target face image according to the face sharpness evaluation algorithm, the feature parameters of the target face image including a pixel mean and a variance, comprises:
summing the pixel values of all pixels of the target face image, and dividing the sum by the number of pixels to obtain the pixel mean of the target face image;
and determining the variance as the average of the squared differences between each pixel value of the target face image and the pixel mean.
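A minimal NumPy sketch of these two feature parameters (the helper name is an illustrative assumption):

```python
import numpy as np

def mean_and_variance(face: np.ndarray) -> tuple[float, float]:
    # Pixel mean: sum of all pixel values divided by the pixel count.
    pixels = face.astype(np.float64)
    mean = pixels.mean()
    # Variance: average of the squared deviations from the pixel mean.
    variance = ((pixels - mean) ** 2).mean()
    return float(mean), float(variance)
```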
4. The method of claim 1, wherein calculating, for each pixel of the new face image, the product of the pixel value and the pixel values of the 8 direction-adjacent pixels according to the center position of the pixel, and determining the feature parameters of the pixel according to the products, comprises:
for each pixel of the new face image, calculating, with the pixel as the center, the product of its pixel value and the pixel value of the adjacent pixel in each of the 8 directions: right (HR), left (HL), down (VD), up (VU), lower-left (LD), upper-left (LU), lower-right (RD), and upper-right (RU);
and determining the products as the feature parameters of the pixels adjacent in the 8 directions in the new face image, the feature parameters derived from the products comprising: shape, mean, left variance, and right variance.
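A sketch of the 8 directional product images, using NumPy shifts; the wrap-around edge handling via np.roll is an implementation assumption, since the patent does not state how image borders are treated.

```python
import numpy as np

# (row, col) offsets of the neighbour in each claimed direction.
OFFSETS = {
    "HR": (0, 1),  "HL": (0, -1),  "VD": (1, 0),  "VU": (-1, 0),
    "LD": (1, -1), "LU": (-1, -1), "RD": (1, 1),  "RU": (-1, 1),
}

def directional_products(face: np.ndarray) -> dict[str, np.ndarray]:
    # For each direction, multiply every pixel by its neighbour in
    # that direction; each direction yields one product image of the
    # same size as the input.
    out = {}
    for name, (dy, dx) in OFFSETS.items():
        neighbour = np.roll(np.roll(face, -dy, axis=0), -dx, axis=1)
        out[name] = face * neighbour
    return out
```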
5. The method of claim 1, wherein obtaining the training parameters of the new face image in the machine learning model according to the feature parameters of all pixels in the 8 directions comprises:
inputting the feature parameters of all the pixels in the 8 directions into the machine learning model;
calculating the weight coefficients of the feature vectors corresponding to the feature parameters of all the pixels;
and determining, according to the weight coefficients of the feature vectors, the mapping scores of all the pixel feature vectors, the mapping scores serving as the training parameters of the machine learning model.
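The patent does not specify the learner, so the following sketch assumes a plain linear model: least-squares fitting yields the weight coefficients, and the dot product of each 4-dimensional pixel feature vector (shape, brightness mean, left variance, right variance) with those weights gives its mapping score. Both function names are hypothetical.

```python
import numpy as np

def fit_weights(features: np.ndarray, targets: np.ndarray) -> np.ndarray:
    # features: (n_pixels, 4) matrix of per-pixel feature vectors;
    # targets: (n_pixels,) quality labels. Least squares is an assumed
    # stand-in for the unspecified learning procedure.
    weights, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return weights

def mapping_scores(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Each feature vector is mapped to a scalar score by its weighted
    # sum; these scores act as the model's training parameters.
    return features @ weights
```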
6. The method of claim 1, wherein matching the resolution, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against the face image indexes comprises:
matching the resolution result of the target face image against the face resolution evaluation index;
matching the face integrity result against the face integrity evaluation index;
matching the face pose result against the face pose evaluation index;
and matching the training parameters of the machine learning model against the face sharpness evaluation index.
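A sketch of the four index comparisons, assuming each evaluation index reduces to a scalar threshold; the patent leaves the index form open, so the names and threshold values below are illustrative.

```python
def match_indexes(resolution: float, integrity: float, is_frontal: bool,
                  sharpness: float, idx: dict) -> dict:
    # Compare each measured result against its evaluation index.
    return {
        "resolution": resolution >= idx["resolution"],
        "integrity": integrity >= idx["integrity"],
        "pose": is_frontal,
        "sharpness": sharpness >= idx["sharpness"],
    }

# Usage with illustrative thresholds:
# ok = match_indexes(96.0, 0.9, True, 0.75,
#                    {"resolution": 80, "integrity": 0.8, "sharpness": 0.6})
```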
7. The method of claim 1, wherein determining the target face image as a face recognition sample if the matching meets the preset requirement comprises:
determining that the target face image is a face recognition sample if the resolution of the target face image, the face integrity, the face pose result, and the training parameters of the machine learning model meet the matching grades of the preset requirement, the matching grades being upper, middle, and lower.
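One way to realize the three-grade test is to bucket an overall matching score and accept only the grades that satisfy the preset requirement; the cut points and the choice of accepted grades below are placeholders, not values disclosed in the patent.

```python
def matching_grade(score: float, mid_cut: float = 0.5,
                   upper_cut: float = 0.8) -> str:
    # Bucket the combined matching score into the claimed grades.
    if score >= upper_cut:
        return "upper"
    if score >= mid_cut:
        return "middle"
    return "lower"

def is_sample(score: float, accepted=("upper", "middle")) -> bool:
    # The image becomes a face recognition sample only when its grade
    # is among the accepted grades (which grades qualify is assumed).
    return matching_grade(score) in accepted
```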
8. A face screening apparatus, comprising:
an acquisition module configured to perform affine transformation on feature points of an arbitrary face image according to a face detection model to obtain a target face image;
a first evaluation module configured to calculate, according to a face resolution evaluation algorithm, the resolution of the target face image based on the spatial distribution of local features of the target face image;
a second evaluation module configured to calculate, according to a face integrity evaluation algorithm, the face integrity of the target face image based on whether key points of the target face image are occluded;
a third evaluation module configured to determine whether the target face image is in a frontal face pose according to a face pose evaluation algorithm;
a processing module configured to normalize the pixel value of each pixel of the target face image according to a face sharpness evaluation algorithm to obtain a new face image;
a first calculation module configured to calculate feature parameters of the new face image, the feature parameters of the new face image including a pixel mean and a variance;
a second calculation module configured to calculate, for each pixel of the new face image and with the pixel as the center, the product of its pixel value and the pixel value of the adjacent pixel in each of 8 directions, and to determine the feature parameters of each pixel according to the products, wherein the products for each direction respectively form an image, and the 8 directions are: right, left, down, up, lower-left, upper-left, lower-right, and upper-right;
a training parameter module configured to obtain training parameters of the new face image in a machine learning model according to the feature parameters of all pixels in the 8 directions, wherein the feature parameters of all the pixels comprise: the pixel shape, the pixel brightness mean, the pixel left variance, and the pixel right variance;
an index matching module configured to match the resolution, the face integrity, and the result of whether the target face image is in a frontal face pose, together with the training parameters of the machine learning model, against face image indexes, the face image indexes including: a face resolution evaluation index, a face integrity evaluation index, a face pose evaluation index, and a face sharpness evaluation index;
and a determining module configured to determine that the target face image is a face recognition sample if the matching meets a preset requirement.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202110831104.7A 2021-07-22 2021-07-22 Face screening method, device, equipment and storage medium Active CN113569694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110831104.7A CN113569694B (en) 2021-07-22 2021-07-22 Face screening method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113569694A (en) 2021-10-29
CN113569694B CN113569694B (en) 2024-03-19

Family

ID=78166336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110831104.7A Active CN113569694B (en) 2021-07-22 2021-07-22 Face screening method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113569694B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711268A (en) * 2018-12-03 2019-05-03 浙江大华技术股份有限公司 A kind of facial image screening technique and equipment
US20200247138A1 (en) * 2019-02-01 2020-08-06 Brother Kogyo Kabushiki Kaisha Image processing device generating dot data using machine learning model and method for training machine learning model
CN111862126A (en) * 2020-07-09 2020-10-30 北京航空航天大学 Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN112215831A (en) * 2020-10-21 2021-01-12 厦门市美亚柏科信息股份有限公司 Method and system for evaluating quality of face image

Also Published As

Publication number Publication date
CN113569694B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
Cozzolino et al. Splicebuster: A new blind image splicing detector
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
CN106372629B (en) Living body detection method and device
EP1271394A2 (en) Method for automatically locating eyes in an image
Boehnen et al. A fast multi-modal approach to facial feature detection
Ramachandra et al. Towards making morphing attack detection robust using hybrid scale-space colour texture features
CN106778517A Vehicle re-identification method for surveillance video sequence images
JP2008146539A (en) Face authentication device
CN109816051B (en) Hazardous chemical cargo feature point matching method and system
WO2019102608A1 (en) Image processing device, image processing method, and image processing program
CN111160284A (en) Method, system, equipment and storage medium for evaluating quality of face photo
US7646915B2 (en) Image recognition apparatus, image extraction apparatus, image extraction method, and program
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
KR101326691B1 (en) Robust face recognition method through statistical learning of local features
WO2021084972A1 (en) Object tracking device and object tracking method
CN113011385A (en) Face silence living body detection method and device, computer equipment and storage medium
Rahman et al. Human ear recognition using geometric features
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
WO2015131710A1 (en) Method and device for positioning human eyes
CN107729879A (en) Face identification method and system
CN113569694B (en) Face screening method, device, equipment and storage medium
CN111104857A (en) Identity recognition method and system based on gait energy diagram
CN116129195A (en) Image quality evaluation device, image quality evaluation method, electronic device, and storage medium
CN114267031A (en) License plate detection method, license plate detection device, equipment terminal and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant