CN113657315B - Quality screening method, device, equipment and storage medium for face image - Google Patents


Info

Publication number
CN113657315B
CN113657315B (application CN202110965436.4A)
Authority
CN
China
Prior art keywords
face
frame sequence
quality
preset
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110965436.4A
Other languages
Chinese (zh)
Other versions
CN113657315A (en)
Inventor
叶明�
戴磊
刘玉宇
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202110965436.4A priority Critical patent/CN113657315B/en
Publication of CN113657315A publication Critical patent/CN113657315A/en
Priority to PCT/CN2022/071692 priority patent/WO2023024417A1/en
Application granted granted Critical
Publication of CN113657315B publication Critical patent/CN113657315B/en
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence and discloses a quality screening method, device, and equipment for face images, and a storage medium, which are used to improve the accuracy of quality screening of face images. The quality screening method for face images comprises the following steps: invoking a preset tracking algorithm to detect an initial face video to obtain a face picture frame sequence; invoking a preset Hungarian matching algorithm to perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence; classifying the model training frame sequence according to a preset classification strategy and determining a plurality of target relative quality scores based on the classification result; training a preset face quality model with the plurality of target relative quality scores to obtain an optimized face quality model; and invoking the optimized face quality model to perform quality evaluation and quality screening on a face image to be identified to obtain a target face image. In addition, the invention also relates to blockchain technology; the target face image can be stored in a blockchain node.

Description

Quality screening method, device, equipment and storage medium for face image
Technical Field
The present invention relates to the field of face comparison, and in particular, to a method, apparatus, device, and storage medium for quality screening of face images.
Background
Face recognition is currently a very common biometric system and is widely applied in various scenarios. A face quality model serves as a front-end model for face recognition, performing quality screening on input data to ensure the accuracy of the recognition model, and is therefore a very important module.
In the prior art, the more common implementations of a face quality model are as follows. 1) The non-end-to-end mode: some pictures that can affect face recognition accuracy are manually collected and labeled, the quality model learns their features, and picture quality is finally scored by constraining those features to lie within a certain range. Its defects are that, first, such data is difficult to collect and annotate in large quantities, and second, the few low-quality scenes must be designed manually and cannot adapt to the varied conditions of a real environment. 2) The end-to-end mode: the training data of the recognition model is used directly to train the quality model according to the similarity output by the recognition model. Its defects are that, first, the difference in similarity between different pictures and a low-quality face picture library is not necessarily caused by quality and can also arise from other factors such as age and dress; second, the training data of a general recognition model does not contain many low-quality pictures, so low-quality pictures are added by data augmentation, and the adaptability of the face quality model to the varied conditions of a real environment is poor, so the accuracy of face image quality screening is low.
Disclosure of Invention
The invention provides a quality screening method, device, equipment, and storage medium for face images. A preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on a face picture frame sequence to obtain a model training frame sequence; the model training frame sequence is classified according to a preset classification strategy to obtain a classification result; a plurality of target relative quality scores are determined based on the classification result; a preset face quality model is trained with the plurality of target relative quality scores to obtain an optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified, thereby improving the accuracy of quality screening of face images.
The first aspect of the present invention provides a quality screening method for face images, comprising: acquiring an initial face video and invoking a preset tracking algorithm to detect the initial face video to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person; invoking a preset Hungarian matching algorithm to perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence; classifying the model training frame sequence according to a preset classification strategy to obtain a classification result, and determining a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence; training a preset face quality model with the plurality of target relative quality scores to obtain an optimized face quality model; and acquiring a face image to be identified and invoking the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain a target face image.
Optionally, in a first implementation manner of the first aspect of the present invention, acquiring the initial face video and invoking the preset tracking algorithm to detect the initial face video to obtain the face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person, comprises: acquiring the initial face video and capturing pictures from the initial face video based on a preset capture frame number to obtain an initial picture frame sequence; and invoking a preset tracking algorithm to perform face detection and tracking on the initial picture frame sequence to obtain a detection result, and filtering invalid picture frames from the detection result to obtain the face picture frame sequence, wherein the invalid picture frames are picture frames without face information and the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person.
Optionally, in a second implementation manner of the first aspect of the present invention, invoking the preset Hungarian matching algorithm to perform matching and cleaning processing on the face picture frame sequence to obtain the model training frame sequence comprises: invoking a preset Hungarian matching algorithm to match each frame in the face picture frame sequence against a preset standard picture to obtain a plurality of matching scores; and comparing the plurality of matching scores respectively with a preset score threshold to obtain a comparison result, and screening the face picture frame sequence according to the comparison result to obtain the model training frame sequence.
Optionally, in a third implementation manner of the first aspect of the present invention, comparing the plurality of matching scores respectively with the preset score threshold to obtain the comparison result and screening the face picture frame sequence according to the comparison result to obtain the model training frame sequence comprises: comparing the plurality of matching scores respectively with the preset score threshold to obtain a comparison result; if the comparison result is that the plurality of matching scores are all greater than the score threshold, deleting the face picture frame sequence and re-acquiring a face picture frame sequence to obtain the model training frame sequence; and if the comparison result is that matching scores smaller than the score threshold exist among the plurality of matching scores, retaining all face picture frames corresponding to matching scores smaller than the score threshold in the face picture frame sequence together with a preset number of face picture frames corresponding to matching scores greater than the score threshold, to obtain the model training frame sequence.
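The retention rule described above can be sketched as a short routine; the score threshold and the number of high-scoring frames to keep are illustrative assumptions rather than values fixed by the description:

```python
def clean_frame_sequence(frames, scores, score_threshold=0.8, keep_high=5):
    """Cleaning-step sketch: score_threshold and keep_high are hypothetical."""
    if all(s > score_threshold for s in scores):
        # All frames match the standard picture too well: the sequence holds
        # no low-quality examples, so discard it and re-acquire a new one.
        return None
    low = [f for f, s in zip(frames, scores) if s < score_threshold]
    high = [f for f, s in zip(frames, scores) if s >= score_threshold]
    # Keep every low-scoring frame plus a preset number of high-scoring ones.
    return low + high[:keep_high]
```

A `None` result signals that the face picture frame sequence should be deleted and re-acquired.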
Optionally, in a fourth implementation manner of the first aspect of the present invention, classifying the model training frame sequence according to the preset classification strategy to obtain the classification result and determining the plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence, comprises: dividing the model training frame sequence into preset categories according to a preset classification strategy to obtain a classification result, wherein the classification result comprises a positive sample frame sequence and a negative sample frame sequence; setting the relative quality score of each picture frame in the positive sample frame sequence to a preset quality score to obtain a plurality of positive sample relative quality scores; sorting the matching scores corresponding to each frame picture in the negative sample frame sequence in descending order to obtain a sorting result, extracting the first-ranked matching score in the sorting result, and determining the first-ranked matching score as a standard matching score; subtracting the standard matching score from the matching score corresponding to each frame picture in the negative sample frame sequence to obtain a plurality of subtraction results, and taking the absolute value of each of the plurality of subtraction results to obtain a plurality of negative sample relative quality scores; and combining the plurality of positive sample relative quality scores and the plurality of negative sample relative quality scores to obtain the plurality of target relative quality scores, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence.
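A minimal sketch of the score computation, assuming the preset positive-sample quality score is 0 (as in the detailed description) and taking the highest negative matching score as the standard matching score:

```python
def relative_quality_scores(pos_match_scores, neg_match_scores, preset=0.0):
    """Positive frames get the preset score; each negative frame gets the
    absolute difference between its matching score and the standard
    (first-ranked, i.e. highest) negative matching score."""
    pos_rel = [preset for _ in pos_match_scores]
    standard = max(neg_match_scores)  # first-ranked score after descending sort
    neg_rel = [abs(s - standard) for s in neg_match_scores]
    return pos_rel + neg_rel
```

The returned list holds one relative quality score per frame picture, positives first.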
Optionally, in a fifth implementation manner of the first aspect of the present invention, training the preset face quality model with the plurality of target relative quality scores to obtain the optimized face quality model comprises: taking values at a preset interval within a preset value range to obtain a plurality of initial hyperparameters, and substituting each initial hyperparameter and the plurality of target relative quality scores into a preset model quality score calculation formula to obtain a plurality of model quality scores corresponding to each initial hyperparameter; drawing a receiver operating characteristic (ROC) curve corresponding to each initial hyperparameter based on the plurality of model quality scores corresponding to that hyperparameter to obtain a set of ROC curves, and determining a target model hyperparameter from the set of ROC curves; and updating the hyperparameter of a preset face quality model with the target model hyperparameter to obtain the optimized face quality model.
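The description does not give the model quality score calculation formula or the exact ROC-based selection rule, so the sketch below substitutes a hypothetical formula `exp(-alpha * r)` that maps a relative quality score `r` into (0, 1], and compares hyperparameter candidates by accuracy at a fixed quality threshold as a simple stand-in for inspecting the ROC curve set:

```python
import math

def sweep_hyperparameter(rel_scores, labels, lo=0.5, hi=5.0, step=0.5,
                         quality_threshold=0.5):
    """Grid search over hyperparameter alpha; the quality-score formula and
    the selection criterion are hypothetical stand-ins for the unspecified
    preset formula and ROC-set comparison."""
    best = (None, -1.0)
    alpha = lo
    while alpha <= hi + 1e-9:
        # Model quality score per frame under this hyperparameter candidate,
        # thresholded into a qualified/unqualified prediction.
        preds = [1 if math.exp(-alpha * r) > quality_threshold else 0
                 for r in rel_scores]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best[1]:
            best = (alpha, acc)
        alpha += step
    return best
```

With relative scores `[0.0, 0.0, 1.0, 2.0]` and labels `[1, 1, 0, 0]`, the sweep settles on the first alpha that cleanly separates positives from negatives at the fixed threshold.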
Optionally, in a sixth implementation manner of the first aspect of the present invention, acquiring the face image to be identified and invoking the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain the target face image comprises: acquiring a face image to be identified, invoking the optimized face quality model, and calculating the quality score of the face image to be identified; and comparing the quality score of the face image to be identified with a preset quality score threshold to obtain a quality identification result, and determining the quality-qualified images in the quality identification result as target face images, wherein the quality identification result comprises quality-qualified images and quality-unqualified images.
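The final evaluation-and-screening step reduces to a threshold comparison; here `quality_fn` stands in for the optimized face quality model, and the threshold value is an illustrative assumption:

```python
def screen_faces(images, quality_fn, quality_threshold=0.6):
    """Keep only quality-qualified images: those whose model quality score
    reaches the preset quality score threshold (value is hypothetical)."""
    return [img for img in images if quality_fn(img) >= quality_threshold]
```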
The second aspect of the present invention provides a quality screening apparatus for face images, comprising: an acquisition module for acquiring an initial face video and invoking a preset tracking algorithm to detect the initial face video to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person; a matching module for invoking a preset Hungarian matching algorithm to perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence; a classification module for classifying the model training frame sequence according to a preset classification strategy to obtain a classification result and determining a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence; a training module for training a preset face quality model with the plurality of target relative quality scores to obtain an optimized face quality model; and a recognition module for acquiring a face image to be identified and invoking the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain a target face image.
Optionally, in a first implementation manner of the second aspect of the present invention, the acquisition module comprises: an acquisition unit for acquiring an initial face video and capturing pictures from the initial face video based on a preset capture frame number to obtain an initial picture frame sequence; and a detection unit for invoking a preset tracking algorithm to perform face detection and tracking on the initial picture frame sequence to obtain a detection result and filtering invalid picture frames from the detection result to obtain a face picture frame sequence, wherein the invalid picture frames are picture frames without face information and the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person.
Optionally, in a second implementation manner of the second aspect of the present invention, the matching module comprises: a matching unit for invoking a preset Hungarian matching algorithm to match each frame in the face picture frame sequence against a preset standard picture to obtain a plurality of matching scores; and a first comparison unit for comparing the plurality of matching scores respectively with a preset score threshold to obtain a comparison result and screening the face picture frame sequence according to the comparison result to obtain a model training frame sequence.
Optionally, in a third implementation manner of the second aspect of the present invention, the first comparison unit is specifically configured to: compare the plurality of matching scores respectively with a preset score threshold to obtain a comparison result; if the comparison result is that the plurality of matching scores are all greater than the score threshold, delete the face picture frame sequence and re-acquire a face picture frame sequence to obtain a model training frame sequence; and if the comparison result is that matching scores smaller than the score threshold exist among the plurality of matching scores, retain all face picture frames corresponding to matching scores smaller than the score threshold in the face picture frame sequence together with a preset number of face picture frames corresponding to matching scores greater than the score threshold, to obtain a model training frame sequence.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the classification module comprises: a classification unit for dividing the model training frame sequence into preset categories according to a preset classification strategy to obtain a classification result, wherein the classification result comprises a positive sample frame sequence and a negative sample frame sequence; a setting unit for setting the relative quality score of each picture frame in the positive sample frame sequence to a preset quality score to obtain a plurality of positive sample relative quality scores; a sorting unit for sorting the matching scores corresponding to each frame picture in the negative sample frame sequence in descending order to obtain a sorting result, extracting the first-ranked matching score in the sorting result, and determining the first-ranked matching score as a standard matching score; a computing unit for subtracting the standard matching score from the matching score corresponding to each frame picture in the negative sample frame sequence to obtain a plurality of subtraction results and taking the absolute value of each of the plurality of subtraction results to obtain a plurality of negative sample relative quality scores; and a merging unit for combining the plurality of positive sample relative quality scores and the plurality of negative sample relative quality scores to obtain a plurality of target relative quality scores, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the training module comprises: a value-taking unit for taking values at a preset interval within a preset value range to obtain a plurality of initial hyperparameters and substituting each initial hyperparameter and the plurality of target relative quality scores into a preset model quality score calculation formula to obtain a plurality of model quality scores corresponding to each initial hyperparameter; a determining unit for drawing a receiver operating characteristic (ROC) curve corresponding to each initial hyperparameter based on the plurality of model quality scores corresponding to that hyperparameter to obtain a set of ROC curves and determining a target model hyperparameter from the set of ROC curves; and an updating unit for updating the hyperparameter of the preset face quality model with the target model hyperparameter to obtain the optimized face quality model.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the recognition module comprises: a judging unit for acquiring a face image to be identified, invoking the optimized face quality model, and calculating the quality score of the face image to be identified; and a second comparison unit for comparing the quality score of the face image to be identified with a preset quality score threshold to obtain a quality identification result and determining the quality-qualified images in the quality identification result as target face images, wherein the quality identification result comprises quality-qualified images and quality-unqualified images.
A third aspect of the present invention provides a quality screening device for face images, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the quality screening device for face images to perform the quality screening method for face images described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein that, when executed on a computer, cause the computer to perform the quality screening method for face images described above.
In the technical scheme provided by the invention, an initial face video is acquired and a preset tracking algorithm is invoked to detect the initial face video to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person; a preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence; the model training frame sequence is classified according to a preset classification strategy to obtain a classification result, and a plurality of target relative quality scores are determined based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence; a preset face quality model is trained with the plurality of target relative quality scores to obtain an optimized face quality model; and a face image to be identified is acquired and the optimized face quality model is invoked to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain a target face image.
In the embodiment of the invention, a preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence; the model training frame sequence is classified according to a preset classification strategy to obtain a classification result; a plurality of target relative quality scores are determined based on the classification result; a preset face quality model is trained with the plurality of target relative quality scores to obtain an optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified, thereby improving the accuracy of quality screening of face images.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a quality screening method for face images according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of a quality screening method for face images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a quality screening apparatus for face images according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another embodiment of a quality screening apparatus for face images according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of quality screening equipment for face images according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a quality screening method, device, equipment, and storage medium for face images. A preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on a face picture frame sequence to obtain a model training frame sequence; the model training frame sequence is classified according to a preset classification strategy to obtain a classification result; a plurality of target relative quality scores are determined based on the classification result; a preset face quality model is trained with the plurality of target relative quality scores to obtain an optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified, thereby improving the accuracy of quality screening of face images.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For easy understanding, the following describes a specific flow of an embodiment of the present invention, referring to fig. 1, and an embodiment of a method for screening quality of a face image in an embodiment of the present invention includes:
101. The method comprises the steps of obtaining an initial face video, and calling a preset tracking algorithm to detect the initial face video to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person.
It can be understood that the execution body of the present invention may be a quality screening device for face images, and may also be a terminal or a server, which is not limited here. The embodiment of the invention is described by taking a server as the execution body as an example.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
In implementation, the server acquires an initial face video and detects it by invoking a preset tracking algorithm to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person. When the mean-shift algorithm is used for visual tracking, pictures are first captured from the initial face video based on a preset capture frame number to obtain an initial picture frame sequence; a color histogram corresponding to each frame picture in the initial picture frame sequence is drawn; a confidence map is created from the color histogram corresponding to each frame picture, and the peak of the confidence map is found by mean shift, where the confidence map represents a probability density function of the image; images whose probability density is lower than a preset threshold are determined to be invalid picture frames, and the invalid picture frames are deleted, finally yielding a plurality of face picture frames corresponding to the same person (that is, the face picture frame sequence).
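The confidence-based filtering of invalid frames can be illustrated with a toy grayscale stand-in for the color histogram; the bin count and probability threshold below are assumptions for illustration, not values from the description:

```python
def frame_confidence(frame_pixels, ref_hist, bins=8):
    """Mean probability of the frame's pixel values (0-255) under the
    reference histogram: a 1-D stand-in for the confidence map's
    probability density."""
    width = 256 // bins
    total = sum(ref_hist)
    return sum(ref_hist[p // width] / total for p in frame_pixels) / len(frame_pixels)

def filter_invalid_frames(frames, ref_hist, threshold=0.05):
    """Delete frames whose probability density falls below the preset
    threshold, mirroring the invalid-frame filtering step."""
    return [f for f in frames if frame_confidence(f, ref_hist) >= threshold]
```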
102. And calling a preset Hungarian matching algorithm, and performing matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence.
The server invokes a preset Hungarian matching algorithm to match and clean the face picture frame sequence, obtaining a model training frame sequence. The Hungarian algorithm is based on the idea of the sufficiency proof in Hall's theorem; the core of the algorithm is finding augmenting paths, and it solves the maximum matching of a bipartite graph by means of augmenting paths.
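The assignment problem the Hungarian algorithm solves can be illustrated with a brute-force equivalent for small inputs. In practice `scipy.optimize.linear_sum_assignment` provides an O(n³) implementation; the sketch below, with illustrative names, only enumerates permutations but yields the same optimal matching value:

```python
from itertools import permutations

def max_matching_score(score_matrix):
    """Maximum-score assignment between two equal-size sets, found by
    brute force; equivalent in result to the Hungarian algorithm."""
    n = len(score_matrix)
    best = None
    for perm in permutations(range(n)):
        total = sum(score_matrix[i][perm[i]] for i in range(n))
        if best is None or total > best[0]:
            best = (total, perm)
    return best  # (total score, assignment tuple)
```

For example, with two frames and two candidates scored `[[3, 1], [2, 4]]`, the optimal matching pairs frame 0 with candidate 0 and frame 1 with candidate 1.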
103. Classifying the model training frame sequence according to a preset classification strategy to obtain a classification result, and determining a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence.
The server classifies the model training frame sequence according to a preset classification strategy to obtain a classification result, and determines a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence. Each frame in the face picture frame sequence is matched with a preset standard picture by calling the Hungarian matching algorithm, yielding a plurality of matching scores. The preset classification strategy is to divide the frames whose matching score is higher than a preset score threshold into a positive sample frame sequence, divide the frames whose matching score is smaller than or equal to the preset score threshold into a negative sample frame sequence, and determine the positive sample frame sequence and the negative sample frame sequence as the classification result. The relative quality score of each frame in the positive sample frame sequence is set to 0. From the negative sample frame sequence, the highest matching score (corresponding to a certain negative sample frame) is extracted; the matching score of each remaining frame in the negative sample frame sequence is subtracted from this highest matching score, and the absolute value is taken. A plurality of relative quality scores are thus finally obtained, with each frame picture corresponding to one relative quality score.
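The scoring strategy above can be sketched as follows. This is an illustrative reading of the described steps, with hypothetical names and integer matching scores:

```python
def relative_quality_scores(match_scores, score_threshold):
    """Assign each frame a relative quality score: positives (above the
    threshold) get 0; negatives get the absolute difference between the
    highest negative matching score and their own score."""
    positives = [s for s in match_scores if s > score_threshold]
    negatives = [s for s in match_scores if s <= score_threshold]
    pos_scores = [0.0] * len(positives)
    neg_scores = []
    if negatives:
        standard = max(negatives)  # highest matching score among negatives
        neg_scores = [abs(standard - s) for s in negatives]
    return pos_scores + neg_scores
```

With matching scores `[90, 30, 50]` and a threshold of 60, the single positive frame scores 0, and the negatives score 20 and 0 relative to the highest negative score (50).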
104. Training a preset face quality model through the plurality of target relative quality scores to obtain an optimized face quality model.
The server trains a preset face quality model through the plurality of target relative quality scores to obtain an optimized face quality model. The process of optimizing the face quality model mainly comprises: under different initial hyperparameter values, calculating a plurality of model quality scores corresponding to each initial hyperparameter based on the plurality of target relative quality scores; drawing a receiver operating characteristic (ROC) curve corresponding to each initial hyperparameter, where the ROC curve takes the true positive rate (TPR) as the vertical axis and the false positive rate (FPR) as the horizontal axis, obtaining coordinate points under different thresholds and connecting all coordinate points to obtain a set of receiver operating characteristic curves; and determining the target model hyperparameter and updating the hyperparameter of the model to obtain the optimized face quality model.
105. And acquiring a face image to be identified, and calling the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain a target face image.
The server acquires a face image to be identified, invokes the optimized face quality model, and sequentially performs quality evaluation and quality screening on the face image to be identified to obtain a target face image. The optimized face quality model is called to score the face image to be identified, and the resulting quality score is compared with a preset quality score threshold, so that whether the quality of the face image is qualified can be rapidly judged; the quality-qualified image is determined as the target face image, thereby improving the accuracy and efficiency of face image quality screening.
In the embodiment of the present invention, the preset Hungarian matching algorithm is called to match and clean the face picture frame sequence to obtain the model training frame sequence; the model training frame sequence is classified according to the preset classification strategy to obtain the classification result, and a plurality of target relative quality scores are determined based on the classification result; the preset face quality model is trained through the plurality of target relative quality scores to obtain the optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified to obtain the target face image. After the target face image is obtained, a preset face model can be called to perform face recognition on it. Quality screening of face images in this way ensures the precision of the face recognition model and improves the face recognition accuracy.
Referring to fig. 2, another embodiment of a method for screening quality of face images according to an embodiment of the present invention includes:
201. The method comprises the steps of obtaining an initial face video, and calling a preset tracking algorithm to detect the initial face video to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person.
The server obtains an initial face video, and a preset tracking algorithm is called to detect the initial face video, obtaining a face picture frame sequence that comprises a plurality of face picture frames corresponding to the same person. Specifically, the server acquires an initial face video and performs picture interception on the initial face video based on a preset interception frame number to obtain an initial picture frame sequence; the server then invokes the preset tracking algorithm to detect and track the face in the initial picture frame sequence to obtain a detection result, and filters invalid picture frames in the detection result to obtain the face picture frame sequence, wherein the invalid picture frames are picture frames without face information, and the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person. For example, suppose an initial face video lasts 10 seconds and the preset interception frame number is 25 frames per second: 10 × 25 = 250 pictures (the initial picture frame sequence) are obtained in total, and if person A takes 3 seconds from entering the camera view to leaving it, 3 × 25 = 75 face pictures are detected, yielding the face picture frame sequence.
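The arithmetic of this example can be checked directly (a trivial sketch; the function name is illustrative):

```python
def count_frames(seconds, frames_per_second):
    """Number of picture frames obtained by intercepting at a fixed rate."""
    return seconds * frames_per_second

initial_sequence_len = count_frames(10, 25)  # full video: 250 pictures
face_sequence_len = count_frames(3, 25)      # person A in view: 75 pictures
```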
202. And calling a preset Hungarian matching algorithm, and matching each frame in the face picture frame sequence with a preset standard picture to obtain a plurality of matching scores.
The server invokes a preset Hungarian matching algorithm to match each frame in the face picture frame sequence with a preset standard picture, obtaining a plurality of matching scores, wherein each frame picture corresponds to one matching score. The Hungarian algorithm is based on the idea of the sufficiency proof in Hall's theorem; the core of the algorithm is finding augmenting paths, and it solves the maximum matching of a bipartite graph by means of augmenting paths.
203. And respectively comparing the plurality of matching scores with a preset score threshold to obtain a comparison result, and screening the face picture frame sequence according to the comparison result to obtain a model training frame sequence.
The server compares the plurality of matching scores with a preset score threshold respectively to obtain a comparison result, and screens the face picture frame sequence according to the comparison result to obtain a model training frame sequence. Specifically, if the comparison result is that the plurality of matching scores are all larger than the score threshold, the face picture frame sequence is deleted and a new face picture frame sequence is acquired to obtain the model training frame sequence; if the comparison result is that matching scores smaller than the score threshold exist among the plurality of matching scores, the server retains all face picture frames corresponding to matching scores smaller than the score threshold and a preset number of face picture frames corresponding to matching scores larger than the score threshold, obtaining the model training frame sequence.
That is, the server cleans the face picture frame sequence according to the plurality of matching scores, discarding pictures that are not helpful for optimizing the face quality model. If the matching scores are all larger than the score threshold, the collected data are not useful and the face picture frame sequence needs to be collected again. Otherwise, the pictures corresponding to matching scores below the score threshold (assume there are N of them) are retained, and N pictures are randomly taken from the remaining pictures whose matching scores are above the threshold, which ensures the proportion of positive and negative samples.
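The cleaning rule described above can be sketched as follows, under the assumption that the "preset number" of retained above-threshold frames equals the number N of below-threshold frames, as the balance argument suggests; all names are illustrative:

```python
import random

def clean_frame_sequence(match_scores, score_threshold, seed=0):
    """If every score is above the threshold, the sequence is useless and
    must be re-collected (return None); otherwise keep all below-threshold
    frames plus an equal number of randomly chosen above-threshold frames.
    Returns the sorted indices of the retained frames."""
    below = [i for i, s in enumerate(match_scores) if s < score_threshold]
    above = [i for i, s in enumerate(match_scores) if s >= score_threshold]
    if not below:
        return None  # re-acquire the face picture frame sequence
    rng = random.Random(seed)
    kept_above = rng.sample(above, min(len(below), len(above)))
    return sorted(below + kept_above)
```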
204. Classifying the model training frame sequence according to a preset classification strategy to obtain a classification result, and determining a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence.
The server classifies the model training frame sequence according to a preset classification strategy to obtain a classification result, and determines a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame picture in the model training frame sequence. Specifically, the server divides the model training frame sequence into preset categories according to the preset classification strategy to obtain a classification result, wherein the classification result comprises a positive sample frame sequence and a negative sample frame sequence; the server sets the relative quality score of each picture frame in the positive sample frame sequence to a preset quality score, obtaining a plurality of positive sample relative quality scores; the server sorts the matching scores corresponding to each frame picture in the negative sample frame sequence in descending order to obtain a sorting result, extracts the matching score ranked first in the sorting result, and determines it as the standard matching score; the server subtracts the matching score corresponding to each frame picture in the negative sample frame sequence from the standard matching score to obtain a plurality of subtraction results, and takes the absolute value of each subtraction result, obtaining a plurality of negative sample relative quality scores; finally, the server combines the plurality of positive sample relative quality scores and the plurality of negative sample relative quality scores to obtain the plurality of target relative quality scores, wherein the target relative quality scores comprise the relative quality score corresponding to each frame picture in the model training frame sequence.
Each picture frame in the positive sample frame sequence corresponds to a positive sample relative quality score, and each picture frame in the negative sample frame sequence corresponds to a negative sample relative quality score. The server sets the relative quality score of each picture frame in the positive sample frame sequence to 0. The matching scores corresponding to each picture in the negative sample frame sequence are sorted in descending order, and the matching score ranked first in the sorting result is extracted as the standard matching score; the score corresponding to each remaining picture in the negative sample frame sequence is subtracted from the standard matching score and the absolute value is taken, obtaining a plurality of negative sample relative quality scores. The plurality of positive sample relative quality scores and the plurality of negative sample relative quality scores are then combined, finally yielding a plurality of target relative quality scores. For example: 100 positive sample relative quality scores and 100 negative sample relative quality scores are combined to finally obtain 200 target relative quality scores.
205. Training a preset face quality model through the plurality of target relative quality scores to obtain an optimized face quality model.
The server trains a preset face quality model through the plurality of target relative quality scores to obtain an optimized face quality model. Specifically, the server takes values within a preset value interval at a preset step to obtain a plurality of initial hyperparameters, and substitutes each initial hyperparameter together with the plurality of target relative quality scores into a preset model quality score calculation formula to obtain a plurality of model quality scores corresponding to each initial hyperparameter; the server draws a receiver operating characteristic curve corresponding to each initial hyperparameter based on the plurality of model quality scores corresponding to that hyperparameter, obtains a set of receiver operating characteristic curves, and determines the target model hyperparameter through the set of curves; and the server updates the hyperparameter of the preset face quality model with the target model hyperparameter to obtain the optimized face quality model.
For example, the preset step is 0.05, and the server takes values from 0 to 1 every 0.05 to obtain a plurality of initial hyperparameters and calculates the corresponding model quality score under each value. The calculation formula of the model quality score is: model quality score = α × matching score − (1 − α) × target relative quality score. A receiver operating characteristic curve corresponding to each initial hyperparameter is drawn from that hyperparameter and the plurality of target relative quality scores, obtaining a set of receiver operating characteristic curves. The area under the curve (AUC) of the receiver operating characteristic curve under each initial hyperparameter value is then calculated, obtaining a set of areas under the curve; this set is sorted in descending order, the largest area is determined as the target area under the curve, and the initial hyperparameter value corresponding to the target area is queried to obtain the target model hyperparameter. The hyperparameter of the model is updated with the target model hyperparameter, finally obtaining the optimized face quality model.
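The hyperparameter sweep can be sketched as follows. This is a hedged illustration: the AUC is computed with the rank-statistic definition rather than by drawing the curve, the labels distinguishing qualified and unqualified frames are assumed to be available, and all names are hypothetical:

```python
def auc(scores, labels):
    """Area under the ROC curve, computed as the probability that a random
    positive sample scores higher than a random negative one (ties = 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def select_alpha(match_scores, rel_scores, labels, step=0.05):
    """Sweep alpha from 0 to 1 in the given step; the model quality score is
    alpha * matching score - (1 - alpha) * relative quality score, as in the
    formula above; keep the alpha whose scores give the largest AUC."""
    best_alpha, best_auc = None, -1.0
    for k in range(int(round(1 / step)) + 1):
        a = k * step
        model_scores = [a * m - (1 - a) * r
                        for m, r in zip(match_scores, rel_scores)]
        cur = auc(model_scores, labels)
        if cur > best_auc:
            best_alpha, best_auc = a, cur
    return best_alpha, best_auc
```

On ties the sweep keeps the first (smallest) α reaching the best AUC; any tie-breaking rule would be consistent with the description above.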
206. And acquiring a face image to be identified, and calling the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain a target face image.
The server acquires a face image to be identified, invokes the optimized face quality model, and sequentially performs quality evaluation and quality screening on the face image to be identified to obtain a target face image. Specifically, the server acquires the face image to be identified, invokes the optimized face quality model, and calculates a quality score for the face image; the server then compares this quality score with a preset quality score threshold to obtain a quality identification result, and determines the quality-qualified image in the quality identification result as the target face image, wherein the quality identification result comprises quality-qualified images and quality-unqualified images. A picture whose quality score is larger than or equal to the quality score threshold is determined to be quality-qualified, and a picture whose quality score is smaller than the threshold is determined to be quality-unqualified; the quality-qualified image is determined as the target face image, so that the quality-qualified images among the face images to be identified can be rapidly screened out, improving the accuracy and efficiency of face image quality screening.
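The final screening step amounts to a simple threshold comparison; a minimal sketch with illustrative names:

```python
def screen_faces(images_with_scores, quality_threshold):
    """Split (image, quality score) pairs into quality-qualified images
    (the target face images) and quality-unqualified images."""
    qualified = [img for img, score in images_with_scores
                 if score >= quality_threshold]
    unqualified = [img for img, score in images_with_scores
                   if score < quality_threshold]
    return qualified, unqualified
```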
In the embodiment of the present invention, the preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on the face picture frame sequence to obtain the model training frame sequence; the model training frame sequence is classified according to the preset classification strategy to obtain the classification result, and a plurality of target relative quality scores are determined based on the classification result; the preset face quality model is trained through the plurality of target relative quality scores to obtain the optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified, so that the accuracy of the quality screening of the face image is improved.
The above describes a method for screening the quality of a face image in an embodiment of the present invention, and the following describes a device for screening the quality of a face image in an embodiment of the present invention, referring to fig. 3, an embodiment of the device for screening the quality of a face image in an embodiment of the present invention includes:
The acquiring module 301 is configured to acquire an initial face video, invoke a preset tracking algorithm to detect the initial face video, and obtain a face picture frame sequence, where the face picture frame sequence includes a plurality of face picture frames corresponding to the same person;
The matching module 302 is configured to invoke a preset Hungarian matching algorithm, and perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence;
the classification module 303 is configured to classify the model training frame sequence according to a preset classification policy, obtain a classification result, and determine a plurality of target relative quality scores based on the classification result, where the plurality of target relative quality scores include a relative quality score corresponding to each frame of picture in the model training frame sequence;
the training module 304 is configured to train a preset face quality model through the plurality of target relative quality scores, so as to obtain an optimized face quality model;
The recognition module 305 is configured to acquire a face image to be recognized, call the optimized face quality model, and sequentially perform quality evaluation and quality screening on the face image to be recognized to obtain a target face image.
In the embodiment of the present invention, the preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on the face picture frame sequence to obtain the model training frame sequence; the model training frame sequence is classified according to the preset classification strategy to obtain the classification result, and a plurality of target relative quality scores are determined based on the classification result; the preset face quality model is trained through the plurality of target relative quality scores to obtain the optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified, so that the accuracy of the quality screening of the face image is improved.
Referring to fig. 4, another embodiment of a quality filtering apparatus for face images according to an embodiment of the present invention includes:
The acquiring module 301 is configured to acquire an initial face video, invoke a preset tracking algorithm to detect the initial face video, and obtain a face picture frame sequence, where the face picture frame sequence includes a plurality of face picture frames corresponding to the same person;
The matching module 302 is configured to invoke a preset Hungarian matching algorithm, and perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence;
The matching module 302 specifically includes:
a matching unit 3021, configured to invoke a preset Hungarian matching algorithm, and match each frame in the face picture frame sequence with a preset standard picture to obtain a plurality of matching scores;
the first comparing unit 3022 is configured to compare the plurality of matching scores with a preset score threshold value respectively to obtain a comparison result, and screen the face picture frame sequence according to the comparison result to obtain a model training frame sequence;
the classification module 303 is configured to classify the model training frame sequence according to a preset classification policy, obtain a classification result, and determine a plurality of target relative quality scores based on the classification result, where the plurality of target relative quality scores include a relative quality score corresponding to each frame of picture in the model training frame sequence;
the training module 304 is configured to train a preset face quality model through the plurality of target relative quality scores, so as to obtain an optimized face quality model;
The recognition module 305 is configured to acquire a face image to be recognized, call the optimized face quality model, and sequentially perform quality evaluation and quality screening on the face image to be recognized to obtain a target face image.
In the embodiment of the present invention, the preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on the face picture frame sequence to obtain the model training frame sequence; the model training frame sequence is classified according to the preset classification strategy to obtain the classification result, and a plurality of target relative quality scores are determined based on the classification result; the preset face quality model is trained through the plurality of target relative quality scores to obtain the optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified, so that the accuracy of the quality screening of the face image is improved.
Optionally, the acquiring module 301 includes:
an obtaining unit 3011, configured to obtain an initial face video, and perform picture interception on the initial face video based on a preset interception frame number to obtain an initial picture frame sequence;
The detecting unit 3012 is configured to invoke a preset tracking algorithm to perform face detection and tracking on the initial image frame sequence, obtain a detection result, filter invalid image frames in the detection result, obtain a face image frame sequence, where the invalid image frames are image frames without face information, and the face image frame sequence includes a plurality of face image frames corresponding to the same person.
Optionally, the first comparing unit 3022 may be further specifically configured to:
comparing the plurality of matching scores with a preset score threshold to obtain a comparison result; if the comparison result is that the plurality of matching scores are all larger than the score threshold, deleting the face picture frame sequence and re-acquiring a face picture frame sequence to obtain a model training frame sequence; and if the comparison result is that matching scores smaller than the score threshold exist among the plurality of matching scores, retaining all face picture frames corresponding to matching scores smaller than the score threshold and a preset number of face picture frames corresponding to matching scores larger than the score threshold, obtaining a model training frame sequence.
Optionally, the classification module 303 includes:
The dividing unit 3031 is configured to divide the model training frame sequence into preset categories according to a preset classification strategy to obtain a classification result, where the classification result includes a positive sample frame sequence and a negative sample frame sequence;
A setting unit 3032, configured to set the relative quality score of each picture frame in the positive sample frame sequence to a preset quality score, so as to obtain a plurality of positive sample relative quality scores;
A ranking unit 3033, configured to rank the matching scores corresponding to each frame of picture in the negative sample frame sequence in descending order, obtain a ranking result, extract the matching score ranked first in the ranking result, and determine it as a standard matching score;
the computing unit 3034 is configured to subtract the matching score corresponding to each frame of picture in the negative sample frame sequence from the standard matching score to obtain a plurality of subtraction results, and take absolute values of the plurality of subtraction results to obtain a plurality of negative sample relative quality scores;
The merging unit 3035 is configured to merge the plurality of positive sample relative quality scores and the plurality of negative sample relative quality scores to obtain a plurality of target relative quality scores, where the plurality of target relative quality scores includes a relative quality score corresponding to each frame of picture in the model training frame sequence.
Optionally, the training module 304 includes:
The value unit 3041 is configured to take values within a preset value interval at a preset step to obtain a plurality of initial hyperparameters, and substitute each initial hyperparameter and the plurality of target relative quality scores into a preset model quality score calculation formula to obtain a plurality of model quality scores corresponding to each initial hyperparameter;
A determining unit 3042, configured to draw a receiver operating characteristic curve corresponding to each initial hyperparameter based on the plurality of model quality scores corresponding to that hyperparameter, obtain a set of receiver operating characteristic curves, and determine a target model hyperparameter through the set of curves;
and the updating unit 3043 is configured to update the hyperparameter of the preset face quality model with the target model hyperparameter to obtain the optimized face quality model.
Optionally, the identification module 305 includes:
the judging unit 3051 is used for acquiring a face image to be identified, calling an optimized face quality model and calculating a quality score of the face image to be identified;
The second comparing unit 3052 is configured to compare the quality score of the face image to be identified with a preset quality score threshold to obtain a quality identification result, and determine a quality qualified image in the quality identification result as a target face image, where the quality identification result includes a quality qualified image and a quality unqualified image.
In the embodiment of the present invention, the preset Hungarian matching algorithm is invoked to perform matching and cleaning processing on the face picture frame sequence to obtain the model training frame sequence; the model training frame sequence is classified according to the preset classification strategy to obtain the classification result, and a plurality of target relative quality scores are determined based on the classification result; the preset face quality model is trained through the plurality of target relative quality scores to obtain the optimized face quality model; and quality evaluation and quality screening are performed on the face image to be identified, so that the accuracy of the quality screening of the face image is improved.
The above-mentioned fig. 3 and fig. 4 describe the quality screening apparatus for face images in the embodiment of the present invention in detail from the point of view of modularized functional entities, and the following describes the quality screening apparatus for face images in the embodiment of the present invention in detail from the point of view of hardware processing.
Fig. 5 is a schematic structural diagram of a quality screening apparatus for face images according to an embodiment of the present invention. The quality screening apparatus 500 for face images may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 510 (e.g., one or more processors), a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing application programs 533 or data 532. The memory 520 and the storage medium 530 may be transitory or persistent storage. A program stored in the storage medium 530 may include one or more modules (not shown), and each module may include a series of instruction operations in the quality screening apparatus 500 for face images. Still further, the processor 510 may be configured to communicate with the storage medium 530 to execute a series of instruction operations in the storage medium 530 on the quality screening apparatus 500 for face images.
The face image quality screening apparatus 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the structure of the quality screening apparatus for face images shown in fig. 5 does not constitute a limitation of the apparatus, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The present invention also provides a quality screening device for face images. The computer device comprises a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the quality screening method for face images in the above embodiments.
The present invention also provides a computer-readable storage medium, which may be a non-volatile or a volatile computer-readable storage medium, storing instructions that, when run on a computer, cause the computer to perform the steps of the quality screening method for face images.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working procedures of the systems, apparatuses, and units described above may refer to the corresponding procedures in the foregoing method embodiments, and are not repeated herein.
If implemented in the form of software functional units and sold or used as stand-alone products, the integrated units may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A quality screening method for face images, characterized by comprising the following steps:
acquiring an initial face video, and calling a preset tracking algorithm to detect the initial face video to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person;
calling a preset Hungarian matching algorithm to perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence;
classifying the model training frame sequence according to a preset classification strategy to obtain a classification result, and determining a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame of picture in the model training frame sequence;
wherein classifying the model training frame sequence according to the preset classification strategy to obtain the classification result, and determining the plurality of target relative quality scores based on the classification result, comprises the following steps:
dividing the model training frame sequence into preset categories according to the preset classification strategy to obtain the classification result, wherein the classification result comprises a positive sample frame sequence and a negative sample frame sequence;
setting the relative quality score of each picture frame in the positive sample frame sequence to a preset quality score to obtain a plurality of positive sample relative quality scores;
sorting the matching scores corresponding to each frame of picture in the negative sample frame sequence in descending order to obtain a sorting result, extracting the matching score ranked first in the sorting result, and determining the matching score ranked first as a standard matching score;
subtracting the standard matching score from the matching score corresponding to each frame of picture in the negative sample frame sequence to obtain a plurality of subtraction results, and taking the absolute value of each of the subtraction results to obtain a plurality of negative sample relative quality scores;
combining the plurality of positive sample relative quality scores and the plurality of negative sample relative quality scores to obtain the plurality of target relative quality scores, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame of picture in the model training frame sequence;
training a preset face quality model with the plurality of target relative quality scores to obtain an optimized face quality model;
acquiring a face image to be identified, and calling the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain a target face image.
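To make the labelling scheme in claim 1 concrete, the relative quality score computation can be sketched as follows. This is an illustrative sketch only: the function names, the list-based data layout, and the preset quality score of 1.0 are assumptions, not values given in the patent.

```python
# Illustrative sketch of the relative-quality-score labelling in claim 1.
# The preset quality score of 1.0 and all names are assumptions.

def label_relative_quality(positive_scores, negative_scores, preset_score=1.0):
    """Assign one relative quality score to every frame.

    Every positive-sample frame receives the preset quality score; each
    negative-sample frame receives the absolute difference between its
    matching score and the standard (highest) matching score.
    """
    positive_labels = [preset_score for _ in positive_scores]

    # Sorting in descending order and taking the first entry is
    # equivalent to taking the maximum matching score.
    standard = max(negative_scores)
    negative_labels = [abs(s - standard) for s in negative_scores]

    # Combine into one target relative quality score per frame.
    return positive_labels + negative_labels
```

For example, with negative-frame matching scores 0.80, 0.65, and 0.72, the standard matching score is 0.80 and the negative labels come out as 0.0, 0.15, and 0.08.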
2. The quality screening method for face images according to claim 1, wherein acquiring the initial face video and calling the preset tracking algorithm to detect the initial face video to obtain the face picture frame sequence comprising the plurality of face picture frames corresponding to the same person comprises:
acquiring the initial face video, and performing picture interception on the initial face video based on a preset interception frame count to obtain an initial picture frame sequence;
calling the preset tracking algorithm to perform face detection and tracking on the initial picture frame sequence to obtain a detection result, and filtering invalid picture frames in the detection result to obtain the face picture frame sequence, wherein the invalid picture frames are picture frames without face information, and the face picture frame sequence comprises the plurality of face picture frames corresponding to the same person.
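The interception-and-filtering flow of claim 2 can be sketched as follows; the fixed-interval sampling and the face-detector callback are assumptions standing in for the patent's preset interception frame count and tracking algorithm.

```python
# Hypothetical sketch of claim 2: sample the decoded video at a preset
# interval, then drop frames carrying no face information.

def intercept_frames(video_frames, step):
    """Take every `step`-th decoded frame (preset interception interval)."""
    return video_frames[::step]

def filter_invalid(frames, has_face):
    """Drop invalid picture frames, i.e. frames with no detected face;
    `has_face` stands in for the tracking algorithm's detection result."""
    return [f for f in frames if has_face(f)]
```

With a 10-frame clip and a step of 3 this yields frames 0, 3, 6, and 9, after which any frame the detector rejects is discarded.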
3. The quality screening method for face images according to claim 1, wherein calling the preset Hungarian matching algorithm to perform matching and cleaning processing on the face picture frame sequence to obtain the model training frame sequence comprises:
calling the preset Hungarian matching algorithm to match each frame in the face picture frame sequence with a preset standard picture to obtain a plurality of matching scores;
comparing the plurality of matching scores with a preset score threshold respectively to obtain a comparison result, and screening the face picture frame sequence according to the comparison result to obtain the model training frame sequence.
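As an illustration of the matching step in claim 3, the snippet below scores each frame against a standard picture by an optimal one-to-one key-point assignment. A brute-force search stands in for a real Hungarian solver, and the similarity matrices are assumed inputs; the patent does not specify how similarities are computed.

```python
# Toy stand-in for the Hungarian matching of claim 3. A production
# implementation would use the O(n^3) Hungarian algorithm; brute force
# over permutations is used here only to keep the sketch self-contained.
from itertools import permutations

def best_assignment_score(sim):
    """Maximum mean similarity over one-to-one key-point assignments.

    `sim[i][j]` is the (assumed) similarity between key point i of the
    frame and key point j of the preset standard picture.
    """
    n = len(sim)
    return max(
        sum(sim[i][p[i]] for i in range(n)) / n
        for p in permutations(range(n))
    )

def match_scores(frame_sims):
    """One matching score per frame in the face picture frame sequence."""
    return [best_assignment_score(sim) for sim in frame_sims]
```

Each resulting score can then be compared against the preset score threshold as the claim describes.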
4. The quality screening method for face images according to claim 3, wherein comparing the plurality of matching scores with the preset score threshold respectively to obtain the comparison result, and screening the face picture frame sequence according to the comparison result to obtain the model training frame sequence comprises:
comparing the plurality of matching scores with the preset score threshold respectively to obtain the comparison result, and if the comparison result is that the plurality of matching scores are all greater than the score threshold, deleting the face picture frame sequence and re-acquiring a face picture frame sequence to obtain the model training frame sequence;
if the comparison result is that matching scores smaller than the score threshold exist among the plurality of matching scores, retaining all face picture frames corresponding to the matching scores smaller than the score threshold in the face picture frame sequence, together with a preset number of face picture frames corresponding to matching scores greater than the score threshold, to obtain the model training frame sequence.
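The branch logic of claim 4 can be sketched as follows; the sequence representation and the `keep_high` retained count are assumptions (the claim only speaks of "a preset number" of frames above the threshold).

```python
# Minimal sketch of the comparison-and-screening logic of claims 3-4.

def screen_frames(frames, scores, threshold, keep_high=2):
    """Return the model training frame sequence, or None when the whole
    sequence must be deleted and re-acquired.

    If every matching score exceeds the threshold, the sequence is
    discarded; otherwise all frames below the threshold are retained,
    plus a preset number of frames above it.
    """
    if all(s > threshold for s in scores):
        return None  # delete and re-acquire the face picture frame sequence

    low = [f for f, s in zip(frames, scores) if s < threshold]
    high = [f for f, s in zip(frames, scores) if s > threshold]
    return low + high[:keep_high]
```

A `None` result signals the caller to go back and capture a fresh face picture frame sequence.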
5. The quality screening method for face images according to claim 1, wherein training the preset face quality model with the plurality of target relative quality scores to obtain the optimized face quality model comprises:
taking values within a preset value interval at a preset step to obtain a plurality of initial hyperparameters, and substituting each initial hyperparameter and the plurality of target relative quality scores into a preset model quality score calculation formula to obtain a plurality of model quality scores corresponding to each initial hyperparameter;
drawing a receiver operating characteristic curve for each initial hyperparameter based on the plurality of model quality scores corresponding to that initial hyperparameter to obtain a set of receiver operating characteristic curves, and determining a target model hyperparameter from the set of receiver operating characteristic curves;
updating the hyperparameter of the preset face quality model with the target model hyperparameter to obtain the optimized face quality model.
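A rough sketch of the hyperparameter search in claim 5. The patent does not disclose the model quality score calculation formula, so a weighted blend of two per-frame cues is assumed here, and the ROC-curve comparison is reduced to picking the candidate with the highest area under the curve, computed via the rank-sum identity (valid when scores are untied).

```python
# Illustrative grid search over a hyperparameter, scored by ROC AUC.
# The blend formula, grid bounds, and step size are all assumptions.

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.

    Assumes binary labels (1 = positive) and no tied scores.
    """
    pairs = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(i + 1 for i, (_, lab) in enumerate(pairs) if lab == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

def select_hyperparameter(cue_a, cue_b, labels, lo=0.0, hi=1.0, step=0.1):
    """Take candidate values at a preset step over a preset interval and
    keep the value whose model quality scores yield the best ROC curve."""
    best_alpha, best_auc = None, -1.0
    steps = int(round((hi - lo) / step))
    for i in range(steps + 1):
        alpha = lo + i * step
        # Assumed model quality score formula: a weighted blend of cues.
        scores = [alpha * a + (1 - alpha) * b for a, b in zip(cue_a, cue_b)]
        value = auc(labels, scores)
        if value > best_auc:
            best_alpha, best_auc = alpha, value
    return best_alpha
```

The returned value plays the role of the target model hyperparameter used to update the preset face quality model.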
6. The quality screening method for face images according to any one of claims 1 to 5, wherein acquiring the face image to be identified and calling the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain the target face image comprises:
acquiring the face image to be identified, calling the optimized face quality model, and calculating a quality score of the face image to be identified;
comparing the quality score of the face image to be identified with a preset quality score threshold to obtain a quality identification result, and determining a quality-qualified image in the quality identification result as the target face image, wherein the quality identification result comprises quality-qualified images and quality-unqualified images.
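The evaluation-and-screening step of claim 6 reduces to thresholding model scores. In this sketch `quality_model` is any callable standing in for the optimized face quality model, and the threshold value and boundary handling (`>=`) are assumptions.

```python
# Minimal sketch of claim 6: score each face image with the quality
# model and keep only images whose score passes the preset threshold.

def screen_by_quality(images, quality_model, threshold=0.5):
    """Split images into quality-qualified (target) and unqualified sets."""
    qualified, unqualified = [], []
    for img in images:
        score = quality_model(img)  # optimized face quality model stand-in
        # Boundary handling (>=) is an assumption; the claim says "compare".
        (qualified if score >= threshold else unqualified).append(img)
    return qualified, unqualified
```

For instance, a dummy model backed by precomputed scores (`{"a.jpg": 0.82, "b.jpg": 0.31}.get`) keeps `a.jpg` as the target face image and rejects `b.jpg` at a threshold of 0.5.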
7. A quality screening device for face images, characterized in that the quality screening device for face images comprises:
an acquisition module, configured to acquire an initial face video and call a preset tracking algorithm to detect the initial face video to obtain a face picture frame sequence, wherein the face picture frame sequence comprises a plurality of face picture frames corresponding to the same person;
a matching module, configured to call a preset Hungarian matching algorithm to perform matching and cleaning processing on the face picture frame sequence to obtain a model training frame sequence;
a classification module, configured to classify the model training frame sequence according to a preset classification strategy to obtain a classification result, and determine a plurality of target relative quality scores based on the classification result, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame of picture in the model training frame sequence;
wherein classifying the model training frame sequence according to the preset classification strategy to obtain the classification result, and determining the plurality of target relative quality scores based on the classification result, comprises the following steps:
dividing the model training frame sequence into preset categories according to the preset classification strategy to obtain the classification result, wherein the classification result comprises a positive sample frame sequence and a negative sample frame sequence;
setting the relative quality score of each picture frame in the positive sample frame sequence to a preset quality score to obtain a plurality of positive sample relative quality scores;
sorting the matching scores corresponding to each frame of picture in the negative sample frame sequence in descending order to obtain a sorting result, extracting the matching score ranked first in the sorting result, and determining the matching score ranked first as a standard matching score;
subtracting the standard matching score from the matching score corresponding to each frame of picture in the negative sample frame sequence to obtain a plurality of subtraction results, and taking the absolute value of each of the subtraction results to obtain a plurality of negative sample relative quality scores;
combining the plurality of positive sample relative quality scores and the plurality of negative sample relative quality scores to obtain the plurality of target relative quality scores, wherein the plurality of target relative quality scores comprise one relative quality score corresponding to each frame of picture in the model training frame sequence;
a training module, configured to train a preset face quality model with the plurality of target relative quality scores to obtain an optimized face quality model;
a recognition module, configured to acquire a face image to be identified and call the optimized face quality model to sequentially perform quality evaluation and quality screening on the face image to be identified to obtain a target face image.
8. An electronic device, characterized by comprising:
a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the electronic device to perform the quality screening method for face images according to any one of claims 1-6.
9. A computer-readable storage medium having instructions stored thereon, characterized in that the instructions, when executed by a processor, implement the quality screening method for face images according to any one of claims 1-6.
CN202110965436.4A 2021-08-23 2021-08-23 Quality screening method, device, equipment and storage medium for face image Active CN113657315B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110965436.4A CN113657315B (en) 2021-08-23 2021-08-23 Quality screening method, device, equipment and storage medium for face image
PCT/CN2022/071692 WO2023024417A1 (en) 2021-08-23 2022-01-13 Face image quality screening method, apparatus, and device, and storage medium


Publications (2)

Publication Number Publication Date
CN113657315A CN113657315A (en) 2021-11-16
CN113657315B true CN113657315B (en) 2024-05-14

Family

ID=78491910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110965436.4A Active CN113657315B (en) 2021-08-23 2021-08-23 Quality screening method, device, equipment and storage medium for face image

Country Status (2)

Country Link
CN (1) CN113657315B (en)
WO (1) WO2023024417A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657315B (en) * 2021-08-23 2024-05-14 平安科技(深圳)有限公司 Quality screening method, device, equipment and storage medium for face image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171256A (en) * 2017-11-27 2018-06-15 深圳市深网视界科技有限公司 Face image quality evaluation model construction, screening and recognition methods, device, and medium
CN111079670A (en) * 2019-12-20 2020-04-28 北京百度网讯科技有限公司 Face recognition method, face recognition device, face recognition terminal and face recognition medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10628700B2 (en) * 2016-05-23 2020-04-21 Intel Corporation Fast and robust face detection, region extraction, and tracking for improved video coding
US10691925B2 (en) * 2017-10-28 2020-06-23 Altumview Systems Inc. Enhanced face-detection and face-tracking for resource-limited embedded vision systems
CN111753731A (en) * 2020-06-24 2020-10-09 上海立可芯半导体科技有限公司 Face quality evaluation method, device and system and training method of face quality evaluation model
CN112991393A (en) * 2021-04-15 2021-06-18 北京澎思科技有限公司 Target detection and tracking method and device, electronic equipment and storage medium
CN113657315B (en) * 2021-08-23 2024-05-14 平安科技(深圳)有限公司 Quality screening method, device, equipment and storage medium for face image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171256A (en) * 2017-11-27 2018-06-15 深圳市深网视界科技有限公司 Face image quality evaluation model construction, screening and recognition methods, device, and medium
CN111079670A (en) * 2019-12-20 2020-04-28 北京百度网讯科技有限公司 Face recognition method, face recognition device, face recognition terminal and face recognition medium

Also Published As

Publication number Publication date
WO2023024417A1 (en) 2023-03-02
CN113657315A (en) 2021-11-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant