CN109389019B - Face image selection method and device and computer equipment


Publication number
CN109389019B
Authority
CN
China
Prior art keywords
face
attribute
image group
image
face image
Prior art date
Legal status
Active
Application number
CN201710692300.4A
Other languages
Chinese (zh)
Other versions
CN109389019A (en)
Inventor
何海峰
钮毅
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201710692300.4A
Publication of CN109389019A
Application granted
Publication of CN109389019B

Classifications

    • G06V40/161 Human faces — Detection; Localisation; Normalisation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The embodiment of the invention provides a face image selection method, a face image selection device and computer equipment. The face image selection method comprises the following steps: acquiring a face image group; determining a target grade of each face image based on the first-class face attribute values of each face image in the face image group; constructing a plurality of alternative image groups corresponding to the face image group; after each alternative image group is formed, determining a screening rule for the alternative image group based on the target grades of the face images in the alternative image group, and screening the face images in the alternative image group with the corresponding screening rule to obtain face images to be utilized; and determining a target face image corresponding to the face image group based on all the face images to be utilized corresponding to the face image group. The face image selection method provided by the embodiment of the invention can improve the accuracy of face recognition.

Description

Face image selection method and device and computer equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for selecting a face image and computer equipment.
Background
Face recognition, also commonly referred to as portrait recognition or facial recognition, is one of the active research directions in the field of pattern recognition; it is a biometric technology that performs identification based on the facial feature information of a person. The technology uses camera equipment to collect images containing human faces, automatically detects and tracks the faces in the collected images, and then identifies the detected faces.
The specific process of face recognition is as follows: determining a target face image from a face image group, wherein the face image group is composed of face images collected for one person, each face image is an image of a face region captured by a face detection algorithm, and the target face image is a face image of relatively high quality; calculating the similarity between the target face image and a pre-recorded template face image corresponding to the target face; and outputting a corresponding recognition result based on the calculated similarity, wherein if the similarity is not less than a preset threshold, the recognition result is success, and otherwise the recognition result is failure.
In the process of face recognition, the selected target face image has a great influence on whether recognition succeeds. At present, one method for selecting a face image is as follows: calculating three face attributes, namely the sharpness, the face size and the eye-opening degree, of each face image in the face image group; for each face image, determining a comprehensive evaluation score of the face image based on the sharpness, face size and eye-opening degree calculated for that face image; and determining the face image corresponding to the highest comprehensive evaluation score as the target face image.
In the above face image selection method, the comprehensive evaluation score is the arithmetic sum of the attribute values of three face attributes: sharpness, face size and eye-opening degree. The face image with the highest comprehensive evaluation score is selected as the target face image, yet that image may have a small attribute value for one of the three face attributes while the other two attribute values are large. If the face attribute with the small attribute value has a large influence on the quality of the face image, the quality of the target face image is poor. Because the quality of the selected target face image is poor, the probability of false recognition when using the selected target face image is higher, which reduces the accuracy of face recognition.
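To illustrate this weakness, consider a minimal numeric sketch (the attribute scales and the scores below are hypothetical and not taken from the patent): an image with one very poor attribute can still obtain the highest arithmetic sum.

```python
# Hypothetical 0-100 scores for (sharpness, face size, eye-opening degree).
image_a = (98, 95, 5)    # eyes almost closed, but very sharp and a large face
image_b = (65, 65, 65)   # moderate on every attribute

print(sum(image_a))  # 198 -> highest sum, so image_a is selected
print(sum(image_b))  # 195 -> rejected, although its worst attribute is far better
```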
Disclosure of Invention
The embodiment of the invention aims to provide a face image selection method, a face image selection device, computer equipment and a computer readable storage medium, so as to improve the accuracy of face recognition. The specific technical scheme is as follows:
in a first aspect, to achieve the above object, an embodiment of the present invention provides a method for selecting a face image, where the method includes:
acquiring a face image group, wherein the face image group comprises a plurality of face images;
determining a target grade of each face image based on a first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to a first type face attribute, and the first type face attribute comprises at least one face attribute;
constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images;
after each alternative image group is formed, determining a screening rule for the alternative image group based on the target grade of each face image in the alternative image group, and screening the face images in the alternative image group through the corresponding screening rule to obtain a face image to be utilized;
and determining a target face image corresponding to the face image group based on all face images to be utilized corresponding to the face image group.
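For illustration only, the overall flow of the above steps can be sketched as follows; the helper callables (determine_target_grade, build_candidate_groups, choose_screening_rule, pick_best) are hypothetical placeholders for the steps described above, not an implementation disclosed by the patent.

```python
def select_target_face_image(face_image_group, determine_target_grade,
                             build_candidate_groups, choose_screening_rule, pick_best):
    # Step 1: determine a target grade for every face image from its
    # first-class face attribute values.
    grades = {img: determine_target_grade(img) for img in face_image_group}

    faces_to_utilize = []
    # Step 2: build alternative image groups that together cover the whole group.
    for candidate_group in build_candidate_groups(face_image_group):
        # Step 3: choose a screening rule from the target grades of the images
        # in this group, then screen the group with that rule.
        rule = choose_screening_rule([grades[img] for img in candidate_group])
        faces_to_utilize.extend(rule(candidate_group))

    # Step 4: the target face image is determined among all faces to be utilized.
    return pick_best(faces_to_utilize)
```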
Optionally, the step of constructing a plurality of candidate image groups corresponding to the face image group includes:
selecting a first preset number of face images from the face image group to construct a first alternative image group;
for any non-first alternative image group, determining the target number of face images which are not selected currently in the face image group, if the target number is larger than a second preset number, selecting the face images with the second preset number from the face images which are not selected currently in the face image group, and constructing an alternative image group by using the selected face images with the second preset number and face images to be utilized which are obtained by screening from the previous alternative image group; and if the number of the targets is not more than a second preset number, constructing an alternative image group by using all face images which are not selected currently in the face image group and face images to be utilized which are obtained by screening from the previous alternative image group.
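A minimal sketch of this incremental construction, assuming the face images are ordered by acquisition time and that `screen(...)` stands for the screening described above (the function and parameter names are illustrative assumptions):

```python
def iterative_candidate_groups(face_images, first_size, second_size, screen):
    # First alternative image group: the first `first_size` face images.
    group = face_images[:first_size]
    index = first_size
    winners = screen(group)          # face images to be utilized from this group

    while index < len(face_images):
        remaining = len(face_images) - index      # "target number" of unselected images
        take = second_size if remaining > second_size else remaining
        # Next alternative image group: last round's winners plus newly selected images.
        group = winners + face_images[index:index + take]
        index += take
        winners = screen(group)

    return winners   # the target face image is then chosen among these
```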
Optionally, the step of constructing a plurality of candidate image groups corresponding to the face image group includes:
dividing the face images in the face image group into a plurality of alternative image groups;
or,
equally dividing the face images in the face image group into a plurality of alternative image groups.
Optionally, the step of determining the target level of each face image based on the first type of face attribute value of each face image in the face image group includes:
determining the target grade of each face image in the face image group according to the following modes:
determining a first face attribute value of a current face image; the first face attribute value is a value corresponding to a first face attribute, and the first face attribute is one of the first class of face attributes;
judging whether the first face attribute value meets a preset condition corresponding to the first face attribute, wherein the first face attribute corresponds to at least one preset condition, the preset condition corresponding to the first face attribute is an attribute value range set for one grade of the first face attribute, and one preset condition corresponds to one grade;
when the judgment result is yes, determining the grade corresponding to the met preset condition as the target grade of the current face image;
and when the judgment result is negative, selecting an unused face attribute from the first class of face attributes, replacing the first face attribute with the selected face attribute, and returning to the step of determining the first face attribute value of the current face image.
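The cascade described above can be sketched as follows; the structure of `preset_conditions` (a list of (grade, predicate) pairs per attribute) and the helper `attribute_value` are assumptions made for illustration only.

```python
def determine_target_grade(image, first_class_attributes, preset_conditions, attribute_value):
    # `attribute_value(image, attr)` computes the value of one face attribute;
    # `preset_conditions[attr]` lists (grade, predicate) pairs, one per grade.
    for attr in first_class_attributes:
        value = attribute_value(image, attr)
        for grade, condition in preset_conditions[attr]:
            if condition(value):
                # The grade of the first satisfied preset condition is the target grade.
                return grade
    # The description implies a grade is eventually found; None signals "undetermined".
    return None
```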
Optionally, the first type of face attributes include positive and negative faces, sharpness, brightness, shielding degree, deflection angle, and pitch angle.
Optionally, each face attribute in the first class of face attributes has a correspondence with at least one level;
the step of determining a screening rule for the candidate image group based on the target level of each face image in the candidate image group includes:
judging whether the face images in the alternative image group meet a first preset screening condition, wherein the first preset screening condition is as follows: the target grades of the face images in the alternative image group are different from one another and are not all grades corresponding to the second face attribute; the second face attribute is one of the first class of face attributes;
and if so, determining a preset first screening rule as a screening rule aiming at the alternative image group, wherein the first screening rule is a rule for screening the face image with the highest target grade.
Optionally, in a case that it is determined that the face image in the candidate image group does not satisfy the first preset screening condition, the method further includes:
determining a third face attribute value of each face image in the alternative image group; the third face attribute value is a value corresponding to a third face attribute, and the third face attribute is a face attribute except for a face attribute in the first class of face attributes;
judging whether the face images in the alternative image group meet a second preset screening condition, wherein the second preset screening condition is as follows: the target grade of the face images in the alternative image group is simultaneously the grade corresponding to the second face attribute, the maximum difference value of the difference values between the third face attribute values of every two face images is larger than a first preset threshold value, and the number of the third face images is smaller than a second preset threshold value; the third face image is a face image of which the third face attribute value is smaller than a third preset threshold value;
if yes, determining a preset second screening rule as a screening rule aiming at the alternative image group; the second screening rule is a rule for screening the face image with the highest first comprehensive score;
the first composite score is calculated by the following steps:
obtaining a reference value of each face attribute in second type face attributes of the face image aiming at each face image in the alternative image group, wherein the second type face attributes comprise third face attributes, and the reference value of the face attributes is determined based on a value corresponding to the face attributes;
and for each face image in the alternative image group, carrying out weighted calculation on the reference value of each face attribute of the face image according to a first weight combination which is preset for the third face attribute, and obtaining a first comprehensive score of the face image.
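A minimal sketch of the first composite score under the second screening rule; the attribute names and the first weight combination are illustrative placeholders, and `reference_value` stands for the mapping from an attribute value to its reference value mentioned above.

```python
# Hypothetical first weight combination preset for the third face attribute
# (pupil distance); the weights are illustrative only.
FIRST_WEIGHTS = {"pupil_distance": 0.4, "pitch_angle": 0.2,
                 "occlusion_degree": 0.2, "deflection_angle": 0.2}

def first_composite_score(image, reference_value, weights=FIRST_WEIGHTS):
    # Weighted sum of the reference values of the second-class face attributes,
    # where reference_value(image, attr) returns the reference value of one attribute.
    return sum(w * reference_value(image, attr) for attr, w in weights.items())

def screen_by_first_score(candidate_group, reference_value):
    # Second screening rule: keep the face image with the highest first composite score.
    return max(candidate_group, key=lambda img: first_composite_score(img, reference_value))
```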
Optionally, in a case that it is determined that the face image in the candidate image group does not satisfy the second preset screening condition, the method further includes:
judging whether the face images in the alternative image group meet a third preset screening condition, wherein the third preset screening condition is as follows: the target grade of the face image in the alternative image group is the grade corresponding to the second face attribute, and the grade difference value between every two target grades is greater than a fourth preset threshold value;
and if so, determining the first screening rule as the screening rule aiming at the alternative image group.
Optionally, in a case that it is determined that the face image in the candidate image group does not satisfy the third preset filtering condition, the method further includes:
determining a preset third screening rule as a screening rule aiming at the alternative image group; the third screening rule is a rule for screening the face image with the highest second comprehensive score;
the second composite score is calculated by the following steps:
determining a third type face attribute value of each face image in the alternative image group; the third type face attribute value is a value corresponding to a third type face attribute, and the third type face attribute is a face attribute except for a face attribute in the first type face attribute and the second type face attribute;
determining a second weight combination of each face image in the alternative image group according to the relation between a preset target level and the second weight combination;
and aiming at each face image in the alternative image group, obtaining a second comprehensive score of the face image according to the determined second weight combination of the face image and each face attribute value in a fourth type face attribute value, wherein the fourth type face attribute value is a value corresponding to a fourth type face attribute, and the fourth type face attribute comprises the first type face attribute, the second type face attribute and the face attribute in the third type face attribute.
Optionally, the step of, for each face image in the candidate image group, performing weighted calculation according to the determined second weight combination of the face image and each face attribute value in the fourth class of face attribute values to obtain a second comprehensive score of the face image includes:
determining the value of each face attribute in a fourth type of face attributes of the face image according to the mapping relation between the value and the value corresponding to each face attribute in the preset fourth type of face attributes, wherein each type of face attribute corresponds to the same value range;
and according to the determined second weight combination of the face image, carrying out weighted calculation on the value of each face attribute corresponding to the face image to obtain a second comprehensive score of the face image.
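A sketch of the second composite score; the level-dependent weight combinations and the mapping to a common value range are represented by placeholder callables and are not values disclosed by the patent.

```python
def second_composite_score(image, target_grade, weight_combinations,
                           attribute_value, to_common_scale):
    # The second weight combination is looked up from the image's target grade.
    weights = weight_combinations[target_grade]
    score = 0.0
    for attr, weight in weights.items():
        raw = attribute_value(image, attr)
        # Map every raw attribute value into the same common value range first
        # (to_common_scale stands for the preset mapping), then accumulate the
        # weighted sum over the fourth-class face attributes.
        score += weight * to_common_scale(attr, raw)
    return score
```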
Optionally, the second face attribute is a deflection angle;
the third face attribute is a pupil distance;
the second type of face attributes comprise a pupil distance, a pitch angle, a shielding degree and a deflection angle;
the third type of face attributes comprise whether the face is yin-yang face, open and close eyes and open and close mouth;
the fourth type of face attributes comprise definition, brightness, shielding degree, deflection angle, pitch angle, interpupillary distance, whether the face is yin-yang, eyes are opened and closed, and mouth is opened and closed.
In a second aspect, to achieve the above object, an embodiment of the present invention further provides a face image selection apparatus, where the apparatus includes:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a face image group, and the face image group comprises a plurality of face images;
the first determining module is used for determining the target grade of each face image based on the first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to the first type face attribute, and the first type face attribute comprises at least one face attribute;
the construction module is used for constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images;
the first obtaining module is used for determining a screening rule aiming at each alternative image group based on the target grade of each face image in the alternative image group after each alternative image group is formed, and screening the face images in the alternative image group through the corresponding screening rule to obtain the face images to be utilized;
and the second determining module is used for determining a target face image corresponding to the face image group based on all the face images to be utilized corresponding to the face image group.
Optionally, the building module includes:
the first construction submodule is used for selecting a first preset number of face images from the face image group and constructing a first alternative image group;
the second construction sub-module is used for determining the target number of the face images which are not selected currently in the face image group for any non-first alternative image group, if the target number is larger than a second preset number, selecting the face images with the second preset number from the face images which are not selected currently in the face image group, and constructing an alternative image group by using the selected face images with the second preset number and the face images to be utilized which are obtained by screening from the previous alternative image group; and if the number of the targets is not more than a second preset number, constructing an alternative image group by using all face images which are not selected currently in the face image group and face images to be utilized which are obtained by screening from the previous alternative image group.
Optionally, the constructing module is specifically configured to divide the face images in the face image group into a plurality of alternative image groups;
alternatively, the first and second electrodes may be,
and equally dividing the face image in the face image group into a plurality of alternative image groups.
Optionally, the first determining module is configured to determine the target grade of each face image in the face image group in the following manner:
determining a first face attribute value of a current face image; the first face attribute value is a value corresponding to a first face attribute, and the first face attribute is one of the first class of face attributes;
judging whether the first face attribute value meets a preset condition corresponding to the first face attribute, wherein the first face attribute corresponds to at least one preset condition, the preset condition corresponding to the first face attribute is an attribute value range set for one grade of the first face attribute, and one preset condition corresponds to one grade;
when the judgment result is yes, determining the grade corresponding to the met preset condition as the target grade of the current face image;
and when the judgment result is negative, selecting an unused face attribute from the first class of face attributes, replacing the first face attribute with the selected face attribute, and returning to the step of determining the first face attribute value of the current face image.
Optionally, the first type of face attributes include positive and negative faces, sharpness, brightness, shielding degree, deflection angle, and pitch angle.
Optionally, each face attribute in the first class of face attributes has a correspondence with at least one level;
the first obtaining module includes:
the judging submodule is used for judging whether the face images in the alternative image group meet a first preset screening condition, wherein the first preset screening condition is as follows: the target grades of the face images in the alternative image group are different and are not the grades corresponding to the second face attributes at the same time; the second face attribute is one of the first class of face attributes;
and the first determining submodule is used for determining a preset first screening rule as the screening rule aiming at the alternative image group under the condition that the judgment result of the judging submodule is satisfied, wherein the first screening rule is a rule for screening the face image with the highest target level.
Optionally, the apparatus further comprises:
a third determining module, configured to determine a third face attribute value of each face image in the candidate image group when the judgment result of the judging submodule is that the condition is not satisfied; the third face attribute value is a value corresponding to a third face attribute, and the third face attribute is a face attribute other than the face attributes in the first class of face attributes;
the first judging module is used for judging whether the face images in the alternative image group meet a second preset screening condition, wherein the second preset screening condition is as follows: the target grade of the face images in the alternative image group is simultaneously the grade corresponding to the second face attribute, the maximum difference value of the difference values between the third face attribute values of every two face images is larger than a first preset threshold value, and the number of the third face images is smaller than a second preset threshold value; the third face image is a face image of which the third face attribute value is smaller than a third preset threshold value;
a fourth determining module, configured to determine a preset second screening rule as the screening rule for the candidate image group if the judgment result of the first judging module is that the condition is satisfied; the second screening rule is a rule for screening the face image with the highest first comprehensive score;
a second obtaining module, configured to obtain, for each face image in the candidate image group, a reference value of each face attribute in second types of face attributes of the face image, where the second types of face attributes include a third face attribute, and the reference value of the face attribute is determined based on a value corresponding to the face attribute;
and the third obtaining module is used for carrying out weighted calculation on the reference value of each face attribute of the face image according to a first weight combination which is preset aiming at the third face attribute aiming at each face image in the alternative image group so as to obtain a first comprehensive score of the face image.
Optionally, the apparatus further comprises:
a second judging module, configured to, when the judgment result of the first judging module is not satisfied, judge whether the face image in the candidate image group satisfies a third preset screening condition, where the third preset screening condition is: the target grade of the face image in the alternative image group is the grade corresponding to the second face attribute, and the grade difference value between every two target grades is greater than a fourth preset threshold value;
a fifth determining module, configured to determine the first screening rule as the screening rule for the candidate image group if the judgment result of the second judging module is that the condition is satisfied.
Optionally, the apparatus further comprises:
a sixth determining module, configured to determine a preset third screening rule as the screening rule for the candidate image group when the judgment result of the second judging module is that the condition is not satisfied; the third screening rule is a rule for screening the face image with the highest second comprehensive score;
a seventh determining module, configured to determine a third type face attribute value of each face image in the candidate image group; the third type face attribute value is a value corresponding to a third type face attribute, and the third type face attribute is a face attribute except for a face attribute in the first type face attribute and the second type face attribute;
the eighth determining module is used for determining a second weight combination of each face image in the alternative image group according to the relation between the preset target level and the second weight combination;
a fourth obtaining module, configured to obtain, for each face image in the candidate image group, a second comprehensive score of the face image according to each face attribute value in a fourth type face attribute value and a second weight combination of the determined face image, where the fourth type face attribute value is a value corresponding to a fourth type face attribute, and the fourth type face attribute includes a face attribute in the first type face attribute, the second type face attribute, and the third type face attribute.
Optionally, the fourth obtaining module includes:
the second determining submodule is used for determining the value of each face attribute in the fourth type face attributes of the face image according to the mapping relation between the value corresponding to each face attribute in the preset fourth type face attributes and the value, wherein each type of face attribute corresponds to the same value range;
and the obtaining submodule is used for carrying out weighted calculation on the value of each face attribute corresponding to the face image according to the determined second weight combination of the face image to obtain a second comprehensive score of the face image.
Optionally, the second face attribute is a deflection angle;
the third face attribute is a pupil distance;
the second type of face attributes comprise a pupil distance, a pitch angle, a shielding degree and a deflection angle;
the third type of face attributes comprises whether the face is a yin-yang face, whether the eyes are open or closed, and whether the mouth is open or closed;
the fourth type of face attributes comprises sharpness, brightness, shielding degree, deflection angle, pitch angle, pupil distance, whether the face is a yin-yang face, whether the eyes are open or closed, and whether the mouth is open or closed.
In a third aspect, to achieve the above object, an embodiment of the present invention further provides a computer device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method steps of any one of the above-described face image selection methods when executing the program stored in the memory.
In a fourth aspect, to achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements method steps in any one of the above face image selection methods.
According to the face image selection method, device, computer equipment and readable storage medium provided by the embodiments of the invention, the screening rule for an alternative image group can be determined according to the target grades of the face images in the alternative image group, face images to be utilized are determined through the screening rule, and the target image is then determined. Compared with the prior art, the selection is more targeted: different screening rules can be used to screen different alternative image groups, so that face images of better quality are screened out and the accuracy of face recognition is improved. Of course, it is not necessary for any product or method embodying the invention to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a face image selection method according to an embodiment of the present invention;
fig. 2 is another schematic flow chart of a face image selection method according to an embodiment of the present invention;
fig. 3 is another schematic flow chart of a face image selection method according to an embodiment of the present invention;
fig. 4 is another schematic flow chart of a face image selection method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a face image selection apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to improve the accuracy of face recognition, the embodiment of the invention provides a face image selection method, a face image selection device, computer equipment and a computer readable storage medium.
First, a method for selecting a face image according to an embodiment of the present invention is described below.
It should be noted that the face image selection method provided by the embodiment of the present invention is applied to a computer device. In specific application, the computer device may be a camera device or a non-camera device, where the camera device includes a camera, an attendance terminal or other intelligent terminals with an image acquisition function, and the non-camera device is a device without an image acquisition function.
Referring to fig. 1, an embodiment of the present invention provides a method for selecting a face image, including the following steps:
s101: the method comprises the steps of obtaining a face image group, wherein the face image group comprises a plurality of face images. In the embodiment of the present invention, the face images in the face image group may be acquired by a camera device, where the camera device includes a camera, an intelligent terminal with an image acquisition function, and the like. The image pickup equipment acquires images of the face area in a time period from the moment when the face appears to the moment when the face disappears.
It can be understood that when the computer device is a camera device, the face image group can be acquired locally; when the computer equipment is non-camera equipment, the face image group collected by other camera equipment can be obtained, which is reasonable.
S102: and determining the target grade of each face image based on the first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to the first type face attribute, and the first type face attribute comprises at least one face attribute.
It should be noted that the first-class face attribute value is a value corresponding to a face attribute in the first-class face attribute. The face attributes included in the first type of face attributes are predetermined, and specifically may be determined according to actual conditions, or may be determined according to past experience, or may be determined according to the degree of influence of the face attributes on the quality of the face image, which is not limited to this, and is not listed here.
In the embodiment of the invention, after the face image group is obtained, the first class attribute value of each face image is determined, and the target grade of one face image is determined based on the first class face attribute value of the face image.
It can be understood that the method of determining the value of a face attribute differs for different face attributes. Illustratively, one face attribute is brightness and another face attribute is sharpness. The method of determining the brightness value may be: calculating the brightness value of each pixel point of the face image, and calculating the average of the brightness values of all the pixel points as the brightness value of the face image. The method of determining the sharpness value may be: inputting the face image into a pre-trained sharpness classifier to obtain the sharpness value of the face image. Further, the sharpness classifier may be a neural network, a support vector machine, a Haar classifier or the like, where the Haar classifier is a tree-based classifier. Of course, the above examples are merely specific examples of the embodiments of the present invention and are not intended to be limiting.
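For example, the brightness value described above could be computed as a simple mean over a grayscale face crop; this is only one possible realisation of the description, shown here with OpenCV and NumPy.

```python
import cv2
import numpy as np

def brightness_value(face_image_bgr):
    # Convert the face crop to grayscale and average the per-pixel brightness values.
    gray = cv2.cvtColor(face_image_bgr, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray))
```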
It should be noted that a face attribute value may be a specific numerical value or a piece of text, as determined by the nature of the face attribute itself; of course, text may also be converted into a number. Illustratively, if the face attribute is brightness, the face attribute value is a brightness value; if the face attribute is whether the mouth is open or closed, the face attribute value may be "open mouth" or "closed mouth", or may be a preset numerical value corresponding to an open mouth or a preset numerical value corresponding to a closed mouth. Of course, the above examples are merely specific examples of the embodiments of the present invention and are not intended to be limiting.
The manner of determining the target grade of each face image may be: selecting the grade corresponding to one face attribute as the target grade of the face image; or performing an operation on the face attribute values among the first-class face attribute values of the face image to obtain an operation result, and taking the grade corresponding to the operation result as the target grade of the face image.
When the grade corresponding to one face attribute is selected as the target grade of the face image, the face attribute may be selected according to a preset selection rule; when the selected face attribute corresponds to only one grade, that grade is taken as the target grade of the face image; and when the selected face attribute corresponds to a plurality of grades, one of the grades is selected as the target grade of the face image.
S103: and constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images.
It can be understood that the plurality of alternative image groups covers all the face images in the face image group, so that the situation that the face images are missed can be avoided. Each alternative image group comprises at least two face images, so that the face images in the alternative image group can be screened according to the screening rule determined aiming at the alternative image group in the subsequent process, the selection of the face images is more targeted and more detailed, and the face images with better quality are determined. If only one face image exists in the alternative image group, subsequent screening cannot be carried out.
Constructing the alternative image group can be understood as selecting a face image from the face image group, and using the selected face image as the face image in the alternative image group. It should be noted that the face images in different alternative image groups may be all different, or may be partially the same.
There are multiple ways of constructing the multiple alternative image groups corresponding to the face image group, and in a specific implementation manner, the step of constructing the multiple alternative image groups corresponding to the face image group may include:
dividing the face images in the face image group into a plurality of alternative image groups;
alternatively, the first and second electrodes may be,
and equally dividing the face image in the face image group into a plurality of alternative image groups.
It can be understood that when the face images are not equally divided into a plurality of alternative image groups, the alternative image groups do not all contain the same number of face images: the numbers of face images in some alternative image groups may be the same, or the number of face images in every alternative image group may be different. Equally dividing the face images in the face image group into a plurality of alternative image groups means that the number of face images in all the alternative image groups is the same.
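A sketch of the two division strategies (the sizes chosen here are arbitrary examples, not values prescribed by the patent):

```python
def divide_equally(face_images, group_size):
    # Equal division: every alternative image group holds `group_size` face images
    # (this assumes the total count is a multiple of `group_size`).
    return [face_images[i:i + group_size]
            for i in range(0, len(face_images), group_size)]

def divide_unequally(face_images, sizes):
    # Unequal division: e.g. sizes = [4, 3, 5] gives groups of different lengths.
    groups, start = [], 0
    for size in sizes:
        groups.append(face_images[start:start + size])
        start += size
    return groups
```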
S104: after each alternative image group is formed, determining a screening rule aiming at each face image in the alternative image group based on the target grade of each face image in the alternative image group, and screening the face images in the alternative image group through the corresponding screening rule to obtain the face images to be utilized.
In the embodiment of the present invention, a target grade may be selected from the target grades of the face images in the alternative image group, and the screening rule corresponding to the selected target grade may be determined as the screening rule for the alternative image group; alternatively, an operation may be performed on the target grades of the face images in the alternative image group, and the screening rule for the alternative image group may be determined according to the operation result; alternatively, the condition satisfied by the target grades of the face images in the alternative image group may be judged, and the screening rule corresponding to the satisfied condition may be determined as the screening rule for the alternative image group.
It should be noted that each screening rule is preset, and different screening rules are different in the way of screening the face image. In addition, the face image to be utilized, which is obtained by screening from one alternative image group, can be used as the face image to be screened in another alternative image group.
S105: and determining a target face image corresponding to the face image group based on all face images to be utilized corresponding to the face image group.
It should be noted that at least one face image to be utilized is screened out from each alternative image group, and one face image can be selected from all the face images to be utilized as the target face image; of course, the target face image is the face image with the best quality among all the face images to be utilized.
In the embodiment of the invention, the screening rule of the alternative image group is determined according to the target grade of the face image in the alternative image group, the face image to be utilized is determined according to the screening rule, and then the target image is determined. Compared with the prior art, the selection is more targeted, different screening rules can be utilized to screen different alternative image groups, the face images with better quality are screened, and the accuracy of face recognition is further improved.
The following describes a face image selection method provided by the embodiment of the present invention with reference to specific embodiments.
As shown in fig. 2, a method for selecting a face image according to an embodiment of the present invention may include the following steps:
s201: the method comprises the steps of obtaining a face image group, wherein the face image group comprises a plurality of face images.
S202: and determining the target grade of each face image based on the first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to the first type face attribute, and the first type face attribute comprises at least one face attribute.
It should be noted that S201 to S202 are the same as S101 to S102 in the above embodiments, and are not described herein again.
S203: and selecting a first preset number of face images from the face image group to construct a first alternative image group.
It should be noted that the first preset number may be an empirical value set manually, or may be determined according to the number of face images in the face image group. The selected first preset number of face images constitute the first alternative image group. Since a plurality of alternative image groups need to be constructed, the first preset number is certainly smaller than the number of face images in the face image group.
It can be understood that each face image in the face image group corresponds to an acquisition time, and the acquisition times of the face images are different. The selected first preset number of face images may be face images whose acquisition times are consecutive. Consecutive acquisition times indicate that the corresponding face images were acquired continuously, and each face image has continuity with the previous or next face image. Generally, the differences between face images acquired consecutively in time are not too large, and the states of the faces in such face images are continuous, so the probability of screening out the face image to be utilized with the best quality from the face images in one alternative image group is higher.
S204: for any non-first alternative image group, determining the target number of face images which are not selected currently in the face image group, if the target number is larger than a second preset number, selecting the face images with the second preset number from the face images which are not selected currently in the face image group, and constructing an alternative image group by using the selected face images with the second preset number and face images to be utilized which are obtained by screening from the previous alternative image group; and if the number of the targets is not more than a second preset number, constructing an alternative image group by using all face images which are not selected currently in the face image group and face images to be utilized which are obtained by screening from the previous alternative image group.
It is understood that the target number may be understood as the number of face images in the face image group that are not currently selected, i.e., the number of face images in the current face image group except for face images that have already been in the candidate image group. The second preset number may be an experience value set manually, or may be a difference between the number of face images in the candidate image group that needs to be constructed this time and the number of face images to be used determined last time. The second preset number may be the same or different for different alternative image groups. The number of the face images to be utilized obtained by screening each alternative image group can be one or more. It should be noted that the number of face images to be utilized obtained by screening from the candidate image group is smaller than the number of face images in the candidate image group. In addition, under the condition that the number of the face images in each alternative image group is the same, the sum of the second preset number and the number of the face images to be utilized obtained by the last screening is the first preset number.
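For instance, with 10 face images, a first preset number of 4 and one face image to be utilized kept per round, the second preset number is 3 and the groups evolve as traced below (a purely hypothetical example):

```python
face_images = list(range(1, 11))      # ten hypothetical face images, in acquisition order
first_preset, second_preset = 4, 3    # 4 = 3 new images + 1 face image to be utilized

# Round 1: group [1, 2, 3, 4]    -> suppose image 2 is kept as the image to be utilized
# Round 2: group [2, 5, 6, 7]    -> suppose image 6 is kept
# Round 3: group [6, 8, 9, 10]   -> target number (3) equals the second preset number
# No unselected images remain, so the target face image is chosen from round 3's result.
```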
In the embodiment of the present invention, when the number of targets is greater than the second preset number, a second preset number of face images may be selected from face images that are not currently selected in the face image group. Specifically, the acquisition time corresponding to the selected second preset number of face images is later than the acquisition time of the previously selected face images. The acquisition time is later than that of the face image selected before, so that the condition of missing selection in the face image group can be avoided, and the probability of repeated selection can be reduced.
In addition, the acquisition time of the selected face image can be continuous with the acquisition time of the face image selected last time. The acquisition time is continuous with the acquisition time of the face image selected last time, so that the time relevance between the candidate image group constructed this time and the candidate image group constructed last time can be ensured, and the two candidate image groups have time relevance, which indicates that the face images in the two candidate image groups have relevance. When the to-be-utilized face images obtained by screening from one alternative image group and the selected second preset number of face images jointly construct the current alternative image group, because the to-be-utilized face images and the second preset number of face images have correlation, a uniform screening rule can be set for the current constructed alternative image group, and the probability of screening the face images with the best quality from the current constructed alternative image group is higher.
If the target number is not greater than the second preset number, an alternative image group is constructed by using the face images which are not currently selected in the face image group and the face images to be utilized obtained by the last screening. It should be noted that when the target number is determined to be 0, if only one face image to be utilized was obtained by the last screening, that face image is the target face image; and if a plurality of face images to be utilized were obtained, one face image is selected from them as the target face image.
S205: after each alternative image group is formed, determining a screening rule aiming at each face image in the alternative image group based on the target grade of each face image in the alternative image group, and screening the face images in the alternative image group through the corresponding screening rule to obtain the face images to be utilized.
S206: and determining a target face image corresponding to the face image group based on all face images to be utilized corresponding to the face image group.
In this embodiment, S205-S206 are the same as S104-S105 of the above embodiments, and are not described herein again.
In the embodiment of the invention, an alternative image group is formed by the face images to be utilized obtained by the last screening together with the face images selected this time. The face images to be utilized are added into the next alternative image group for further screening, so the face images to be utilized obtained later are more likely to be of better quality than those obtained earlier. The target face image is selected from the finally obtained face images to be utilized, so a target face image of better quality can be obtained, and compared with the prior art the accuracy of face recognition can be improved.
A method for selecting a face image according to an embodiment of the present invention is described below with reference to another embodiment.
As shown in fig. 3, a method for selecting a face image according to an embodiment of the present invention may include the following steps:
s301: the method comprises the steps of obtaining a face image group, wherein the face image group comprises a plurality of face images.
In this embodiment, S301 is the same as S101 in the above embodiment, and the relevant description content of S301 may refer to the relevant description content of S101, which is not described herein again.
S302: determining the target grade of each face image in the face image group according to the following modes:
step A1: determining a first face attribute value of a current face image; the first face attribute value is a value corresponding to a first face attribute, and the first face attribute is one of the first class of face attributes.
In the embodiment of the present invention, the determination method of the target level of each face image is the same, and the contents of step a1 to step a4 are performed for each face image.
It is understood that the current face image may be understood as one face image of face images of which the target level is not currently determined. The first face attribute may be a face attribute randomly selected from the first type of face attributes, or may be a face attribute having the largest influence on the quality of the face image in the first type of face attributes.
Step A2: and judging whether the first face attribute value meets a preset condition corresponding to the first face attribute, wherein the first face attribute corresponds to at least one preset condition, the preset condition corresponding to the first face attribute is an attribute value range set for one grade of the first face attribute, and one preset condition corresponds to one grade.
It should be noted that only one condition is preset for each grade, and the attribute value range set by one preset condition may be at least one attribute value interval or a specific attribute value. For example, if the first face attribute is sharpness, the preset condition corresponding to one of its grades may be: greater than or equal to 0 and less than 20. If the first face attribute is brightness, the preset condition corresponding to one of its grades may be: greater than or equal to 0 and less than 50, or greater than or equal to 200 and less than or equal to 255. If the first face attribute is the positive/negative face attribute, the corresponding preset condition may be: negative face. Of course, the above examples are merely specific examples of the embodiments of the present invention and are not intended to be limiting. The positive/negative face attribute distinguishes a positive face from a negative face: a positive face refers to a face whose deflection angle is smaller than a first angle threshold or whose pitch angle is smaller than a second angle threshold, and can be understood as a relatively frontal face; a negative face refers to a face whose deflection angle is larger than or equal to the first angle threshold or whose pitch angle is larger than or equal to the second angle threshold, and may be a large side face, a face with a large pitch, or a falsely detected non-face. It can be understood that the deflection angle is one of the three angles describing the pose of the face, namely the angle formed by the face rotating left and right in the horizontal direction; the pitch angle is also one of the three angles describing the pose of the face, namely the angle formed by the face rotating up and down in the vertical direction.
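The preset conditions quoted above could be stored, for example, as per-attribute lists of (grade, predicate) pairs; the grade names below are hypothetical, while the value ranges simply restate the examples in the preceding paragraph.

```python
# Per-attribute preset conditions as (grade, predicate) pairs.
PRESET_CONDITIONS = {
    # sharpness: one grade covers values in [0, 20)
    "sharpness": [("sharpness_low", lambda v: 0 <= v < 20)],
    # brightness: one grade covers under-exposed or over-exposed images
    "brightness": [("brightness_bad", lambda v: 0 <= v < 50 or 200 <= v <= 255)],
    # positive/negative face: one grade is triggered when the face is a negative face
    "positive_negative_face": [("negative_face", lambda v: v == "negative face")],
}
```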
Step A3: and when the judgment result is yes, determining the grade corresponding to the met preset condition as the target grade of the current face image.
It can be understood that when it is determined that the first face attribute value satisfies a preset condition corresponding to the first face attribute, the grade corresponding to the satisfied preset condition is determined as the target grade of the current face image. Once the target grade of a face image has been determined in this way, the values corresponding to the other face attributes in the first class of face attributes of that face image no longer need to be determined.
Step A4: and when the judgment result is negative, selecting an unused face attribute from the first class of face attributes, replacing the first face attribute with the selected face attribute, and returning to the step of determining the first face attribute value of the current face image.
It can be understood that if the judgment result is negative, the first face attribute value does not satisfy any preset condition corresponding to the first face attribute, so the target grade of the face image cannot yet be determined. In this case, an unused face attribute may be selected from the first class of face attributes as a new first face attribute, and step A1 is executed again, until the target grade of the face image is determined.
It should be noted that, for different face images, the number of face attribute values to be determined may be the same or may be different.
As an embodiment of the present invention, the first type of face attributes include positive and negative faces, sharpness, brightness, degree of occlusion, deflection angle, and pitch angle.
It is to be understood that the value corresponding to each face attribute in the first class of face attributes may be determined by using a classifier, or may be obtained by calculation, which is not limited herein.
Here, the degree of occlusion refers to how much of the face region is occluded. The degree of occlusion may be divided into mouth-nose occlusion, both-eyes occlusion, severe occlusion, slight occlusion, and no occlusion. Severe occlusion means that, apart from mouth-nose occlusion and both-eyes occlusion, the occluded area exceeds a preset upper limit; for example, in an image of a person drinking water, the cup blocks part of the face, so the face image may be determined to be severely occluded. Slight occlusion means that the occluded area, apart from mouth-nose occlusion and both-eyes occlusion, does not exceed the preset upper limit. Of course, the degree of occlusion is not limited to the above classification; it may also be divided by occluded area, or by occluded region.
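To make the flow of steps A1 to A4 concrete, the following is a minimal Python sketch of the per-image target-level loop. The attribute names, their ordering, and the structure of the preset-condition table are illustrative assumptions, not values fixed by the embodiment.

```python
# Minimal sketch of the per-image target-level loop (steps A1-A4).
# Attribute names, ordering and preset conditions are assumptions for illustration.

def determine_target_grade(face_image, attribute_order, preset_conditions, extractors):
    """attribute_order:   first-type face attributes in the order they are tried
    preset_conditions: {attribute: [(grade, predicate), ...]}
    extractors:        {attribute: callable(face_image) -> attribute value}
    """
    for attribute in attribute_order:                  # step A4: move on to the next unused attribute
        value = extractors[attribute](face_image)      # step A1: determine the attribute value
        for grade, predicate in preset_conditions.get(attribute, []):
            if predicate(value):                       # step A2: check the preset condition
                return grade                           # step A3: matched grade is the target level
    return None  # no condition matched; later attributes were never evaluated
```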
S303: constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images;
S304: after each alternative image group is formed, determining a screening rule for the alternative image group based on the target grade of each face image in the alternative image group, and screening the face images in the alternative image group through the corresponding screening rule to obtain a face image to be utilized;
s305: and determining a target face image corresponding to the face image group based on all face images to be utilized corresponding to the face image group.
The descriptions of S303 to S305 may refer to the descriptions of the corresponding parts of the above embodiments, and are not described herein again.
In the embodiment of the invention, after the target level of a face image is determined, the values corresponding to the other face attributes in the first type of face attributes of that face image are no longer determined, so the number of face attribute values that need to be determined is gradually reduced, which saves resources and speeds up the determination of the target levels of the face images.
A method for selecting a face image according to an embodiment of the present invention is described below with reference to another embodiment.
As shown in fig. 4, a method for selecting a face image according to an embodiment of the present invention may include the following steps:
S401: obtaining a face image group, wherein the face image group comprises a plurality of face images.
S402: and determining the target grade of each face image based on the first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to the first type face attribute, and the first type face attribute comprises at least one face attribute.
S403: and constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images.
The descriptions of S401 to S403 may refer to the descriptions of the corresponding parts of the above embodiments, which are not described herein again.
S404: after each alternative image group is formed, judging whether the face images in the alternative image group meet a first preset screening condition; if so, executing S405, otherwise ending. The first preset screening condition is: the target grades of the face images in the alternative image group are different from each other and are not all grades corresponding to the second face attribute; the second face attribute is one of the first type of face attributes.
As an embodiment of the present invention, each face attribute in the first class of face attributes has a correspondence with at least one level.
It is to be understood that the target level of a face image is determined according to face attribute values, and each level can therefore be associated with a face attribute, so that each face attribute in the first type of face attributes has a correspondence with at least one level. If a face attribute in the first type of face attributes had no corresponding level, determining its value would not help determine the target level of the face image; evaluating it would only waste time and slow down the determination of the target level of the face image.
As an embodiment of the present invention, the second face attribute is a deflection angle.
It should be noted that, according to an empirical value, a previous experiment, or a degree of influence on the image quality, the deflection angle may be used as the second face attribute.
In the embodiment of the present invention, the greater the influence of a face attribute on image quality, the lower the level corresponding to that face attribute; conversely, the smaller the influence, the higher the corresponding level. The second face attribute may be the face attribute having the smallest influence on image quality among the first type of face attributes.
Illustratively, the candidate image group includes a face image C and a face image D; the target level of the face image C is A02, the target level of the face image D is A04, and the levels corresponding to the second face attribute are A07-A13. The target levels of the face images in the candidate image group are different and are not both among A07-A13, so it may be determined that the face images in the candidate image group satisfy the first preset screening condition, and S405 is executed.
S405: and determining a preset first screening rule as a screening rule aiming at the alternative image group, wherein the first screening rule is a rule for screening the face image with the highest target level.
It can be understood that, when the target levels of the face images in the alternative image group are not all levels corresponding to the second face attribute and differ from each other, and since the second face attribute has the smallest influence on image quality while the other face attributes in the first type of face attributes have a larger influence, the face images in the alternative image group can be determined to differ considerably in quality. The face image with the highest target level can therefore be screened out directly; this screening manner is simple and fast and saves screening time.
In the embodiment of the present invention, the first filtering rule may be a filtering rule set in advance for satisfying a first preset filtering condition.
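As a rough sketch, and assuming the levels are encoded as zero-padded strings "A01" to "A13" (a larger suffix meaning a higher level) and reading "not ... at the same time" as "not all levels fall in the range corresponding to the second face attribute", the first preset screening condition and the first screening rule might look as follows; the set of second-attribute levels is an assumption taken from the later example.

```python
# Sketch of the first preset screening condition (S404) and first screening rule (S405).

SECOND_ATTRIBUTE_GRADES = {f"A{i:02d}" for i in range(7, 14)}  # A07-A13 (deflection angle)

def meets_first_condition(grades, second_grades=SECOND_ATTRIBUTE_GRADES):
    all_distinct = len(set(grades)) == len(grades)
    all_in_second = all(g in second_grades for g in grades)
    return all_distinct and not all_in_second

def first_rule(group):
    # group: list of (face_image, target_grade); keep the image with the highest target level
    return max(group, key=lambda item: item[1])[0]

# With the example above: levels A02 and A04 differ and are not both in A07-A13,
# so the condition holds and the image with the higher level (A04) is kept.
assert meets_first_condition(["A02", "A04"]) is True
```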
S406: and screening the face images in the alternative image group according to the corresponding screening rules to obtain the face images to be utilized.
Continuing with the example in S404, the face image with the highest target level is screened, and the obtained face image to be utilized is the face image D.
S407: and determining a target face image corresponding to the face image group based on all face images to be utilized corresponding to the face image group.
The descriptions of S406 to S407 may refer to the descriptions of the corresponding parts of the above embodiments, which are not described herein again.
As an embodiment of the present invention, in a case that it is determined that the face image in the candidate image group does not satisfy the first preset screening condition, the method may further include:
step B1: determining a third face attribute value of each face image in the alternative image group; the third face attribute value is a value corresponding to a third face attribute, and the third face attribute is a face attribute except for a face attribute in the first class of face attributes.
When the face images in the alternative image group do not meet the first preset screening condition, the third face attribute value of each face image in the alternative image group needs to be determined. The manner of determining the third face attribute value depends on the nature of the third face attribute. Illustratively, if the third face attribute is the interpupillary distance, the value may be determined as follows: according to a key-point localization algorithm, a plurality of key points are marked in the face image, the pupils of the two eyes are located, and the number of pixels on the line connecting the centers of the two pupils is calculated and used as the interpupillary distance. Of course, the above examples are merely specific examples of the embodiments of the present invention, and are not intended to be limiting.
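A small sketch of the interpupillary-distance measurement just described, assuming a hypothetical key-point localizer locate_pupils(face_image) that returns the two pupil-centre coordinates in pixels; the helper name is illustrative, not an API of any particular library.

```python
import math

def interpupillary_distance(face_image, locate_pupils):
    # locate_pupils(face_image) -> ((lx, ly), (rx, ry)); key-point localization step (assumed helper)
    (lx, ly), (rx, ry) = locate_pupils(face_image)
    # number of pixels along the line joining the two pupil centres
    return math.hypot(rx - lx, ry - ly)
```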
As an embodiment of the present invention, the third face attribute may be a pupil distance.
In the embodiment of the invention, the interpupillary distance is selected in consideration of the contribution of each face attribute of the face image to face screening.
Step B2: judging whether the face images in the alternative image group meet a second preset screening condition, if so, executing the step B3; wherein the second preset screening condition is as follows: the target grade of the face images in the alternative image group is simultaneously the grade corresponding to the second face attribute, the maximum difference value of the difference values between the third face attribute values of every two face images is larger than a first preset threshold value, and the number of the third face images is smaller than a second preset threshold value; the third face image is a face image of which the third face attribute value is smaller than a third preset threshold value.
It should be noted that, if the target levels of the face images in a candidate image group are all levels corresponding to the second face attribute, then, in order to take the contribution of the third face attribute to face image quality into account and screen the face images more accurately, the screening rule for the candidate image group needs to be further determined according to the third face attribute values.
Illustratively, the face images in the alternative image group are a face image A and a face image B; the target level of the face image A is A07 and its interpupillary distance is 50; the target level of the face image B is A08 and its interpupillary distance is 35, so the face image B can be considered a small-interpupillary-distance face image. The levels corresponding to the second face attribute are A07-A13, the first preset threshold is 10, and the second preset threshold is 2; it can then be determined that the face images in the alternative image group satisfy the second preset screening condition. Of course, the above examples are merely specific examples of the embodiments of the present invention, and are not intended to be limiting.
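A sketch of the second preset screening condition follows, reusing SECOND_ATTRIBUTE_GRADES from the earlier sketch; the first and second preset thresholds take the illustrative values above, and the small-interpupillary-distance threshold of 40 is an assumption borrowed from the later example.

```python
def meets_second_condition(grades, pupil_distances,
                           second_grades=SECOND_ATTRIBUTE_GRADES,
                           first_threshold=10,    # largest pairwise interpupillary-distance gap must exceed this
                           second_threshold=2,    # allowed count of small-interpupillary-distance images
                           third_threshold=40):   # distances below this count as "small"
    all_in_second = all(g in second_grades for g in grades)
    max_gap = max(abs(a - b) for a in pupil_distances for b in pupil_distances)
    small_count = sum(1 for d in pupil_distances if d < third_threshold)
    return all_in_second and max_gap > first_threshold and small_count < second_threshold

# Face images A and B above: levels A07 and A08, interpupillary distances 50 and 35.
assert meets_second_condition(["A07", "A08"], [50, 35]) is True
```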
Step B3: determining a preset second screening rule as a screening rule aiming at the alternative image group; and the second screening rule is a rule for screening the face image with the highest first comprehensive score.
It is understood that the second screening rule is set in advance for alternative image groups satisfying the second preset screening condition. Because the face images in each alternative image group differ, the determined target levels may differ and the determined screening rules may therefore differ; different screening rules lead to different ways of screening the face images in the alternative image groups.
As an embodiment of the present invention, the first composite score is calculated by the following two steps:
the first step is as follows: and obtaining a reference value of each face attribute in second type face attributes of the face image aiming at each face image in the alternative image group, wherein the second type face attributes comprise third face attributes, and the reference value of the face attributes is determined based on corresponding values of the face attributes.
In the embodiment of the present invention, the value corresponding to the face attribute is a face attribute value, and for each face image in the candidate image group, a second type face attribute value of the face image is obtained first, where the second type face attribute value is a value corresponding to the second type face attribute, that is, a value corresponding to each face attribute in the second type face attribute. The reference values of the face attributes are obtained in different manners for different face attributes, and may be determined in a reference value determination manner preset for the face attributes based on the face attribute values. The reference value determination mode set for the face attribute is set in advance in consideration of factors such as the nature of the face attribute.
As an embodiment of the present invention, the second type of face attributes are a pupil distance, a pitch angle, an occlusion degree, and a deflection angle.
In the embodiment of the present invention, considering the influence of interpupillary distance differences between face images on face image screening, the reference value of the interpupillary distance of a face image is determined based on the difference between the interpupillary distances of the face images. Illustratively, the alternative image group includes a face image C and a face image D: the interpupillary distance of the face image C is 32, that of the face image D is 45, and the difference between them is 13; one interpupillary distance is greater than the preset interpupillary-distance threshold and the other is smaller than it, so the influence of the interpupillary distance is large, and according to the reference value determination manner set in advance for the interpupillary distance, the reference value of the interpupillary distance of the face image C is determined to be 0 and that of the face image D to be 13. If the interpupillary distance of the face image C is unchanged and that of the face image D is 40, the difference is 8 and both are small-interpupillary-distance face images; the reference value of the interpupillary distance of the face image C is 0 and that of the face image D is 16. If the interpupillary distance of the face image C is unchanged and that of the face image D is 60, the difference is 28 and the two face images differ greatly; according to the reference value determination manner set in advance for the interpupillary distance, the reference value of the interpupillary distance of the face image C is 0 and that of the face image D is 14. Of course, the above examples are merely specific examples of the embodiments of the present invention, and are not intended to be limiting.
The reference value determination manner set for the deflection angle provides that the larger the deflection angle, the smaller the reference value. A specific determination manner is: taking the difference between the deflection angles of the two face images as the reference value of the deflection angle of the face image with the smaller deflection angle, and taking a preset value as the reference value of the deflection angle of the other face image, wherein the preset value is a non-negative number smaller than the difference between the deflection angles of the two face images. Of course, an integral multiple of the difference between the deflection angles of the two face images may also be used as the reference value of the deflection angle of the face image with the smaller deflection angle, or a value obtained by adding a fixed value to, or subtracting a fixed value from, that difference may be used. Illustratively, if the deflection angle of the face image C is 10 degrees and that of the face image D is 26 degrees, the difference between the deflection angles is 16 degrees, so the reference value of the deflection angle of the face image C is 16 and that of the face image D is 0.
It should be noted that the reference value determining manner for the pitch angle may refer to the reference value determining manner for the yaw angle, and the manner of obtaining the reference value for the pitch angle of the face image is similar to the manner of obtaining the reference value for the yaw angle of the face image, which is not described herein again.
The reference value of the sharpness is obtained as follows: for each face image in the alternative image group, the sharpness of the face image is obtained, the sharpness value range to which it belongs is determined according to a preset correspondence between sharpness value ranges and sharpness reference values, and the reference value corresponding to the determined range is used as the sharpness reference value of the face image. Illustratively, suppose the sharpness takes values from 0 to 100: a sharpness in 0-20 corresponds to a reference value of 1; 21-40 corresponds to 2; 41-60 corresponds to 3; 61-80 corresponds to 4; and 81-100 corresponds to 5. If the sharpness of the face image C is 60, its sharpness reference value is 3; if the sharpness of the face image D is 80, its sharpness reference value is 4.
The way of obtaining the reference value of the occlusion degree is as follows: and aiming at each face image in the alternative image group, obtaining the shielding degree of the face image, and taking the reference value corresponding to the determined shielding degree as the reference value of the shielding degree of the face image according to the preset corresponding relation between the shielding degree and the reference value of the shielding degree. Illustratively, the occlusion degree of the face image C is severe occlusion, and the reference value set in advance for severe occlusion is 3, and then the reference value of the occlusion degree of the face image C is 3. The shielding degree of the face image D is slightly shielded, the reference value set for the slight shielding in advance is 4, and then the reference value of the shielding degree of the face image D is 4.
The second step is that: and for each face image in the alternative image group, carrying out weighted calculation on the reference value of each face attribute of the face image according to a first weight combination which is preset for the third face attribute, and obtaining a first comprehensive score of the face image.
It should be noted that there is only one first weight combination preset for the third face attribute, and the first weight combination includes the weight of each face attribute in the second type of face attributes. The weights in the first weight combination are in one-to-one correspondence with the face attributes in the second type of face attributes.
Illustratively, the determined first weight combination is 20%, 30%, 15%, 25% and 10%, the reference values of the face attributes of the face image C are 0, 16, 10, 3 and 3, and the reference values of the face attributes of the face image D are 13, 0, 0, 4 and 4, respectively; the first comprehensive score of the face image C is then 7.35 and that of the face image D is 4. The above examples are merely specific examples of the embodiments of the present invention, and are not intended to be limiting.
In the embodiment of the invention, the reference values are determined based on the face attribute values, which places different face attributes on a comparable scale, so the calculated first comprehensive score is more accurate and can better reflect the quality of the face images.
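The weighted calculation of the second step can be written in a couple of lines. The weight combination below is the illustrative one from the example, and the attribute order is assumed to match the order in which the reference values are listed.

```python
FIRST_WEIGHT_COMBINATION = [0.20, 0.30, 0.15, 0.25, 0.10]  # illustrative first weight combination

def first_comprehensive_score(reference_values, weights=FIRST_WEIGHT_COMBINATION):
    # weighted sum of the reference values of the second-type face attributes
    return sum(w * r for w, r in zip(weights, reference_values))

# Reference values of face image C from the example: 0, 16, 10, 3, 3 -> 7.35
assert abs(first_comprehensive_score([0, 16, 10, 3, 3]) - 7.35) < 1e-9
```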
As an embodiment of the present invention, in a case that it is determined that the face image in the candidate image group does not satisfy the second preset screening condition, the method may further include:
step C1: judging whether the face images in the alternative image group meet a third preset screening condition, if so, executing a step C2; wherein the third preset screening condition is: the target grade of the face image in the alternative image group is the grade corresponding to the second face attribute, and the grade difference value between every two target grades is larger than a fourth preset threshold value.
If the face images in the candidate image group satisfy neither the first nor the second preset screening condition, then, in order to determine the screening rule for the candidate image group, it is further necessary to judge whether they satisfy the third preset screening condition.
Step C2: and determining the first screening rule as the screening rule aiming at the alternative image group.
If the target grades of the face images in the alternative image group are all grades corresponding to the second face attribute and the grade difference between every two target grades is greater than the fourth preset threshold, then what mainly matters at this point is the influence of the second face attribute, that is, of the target grade, on face image quality. Since every pairwise grade difference in the alternative image group exceeds the fourth preset threshold, the quality of the face images differs considerably, so the face images can be screened directly by target grade and the face image with the highest target grade is selected; this screening is simple and fast and saves screening time.
As an embodiment of the present invention, in a case that it is determined that the face image in the candidate image group does not satisfy the third preset screening condition, the method may further include:
determining a preset third screening rule as a screening rule aiming at the alternative image group; and the third screening rule is a rule for screening the face image with the highest second comprehensive score.
In the embodiment of the present invention, the third screening rule may be a screening rule set in advance for the case in which the third preset screening condition is not satisfied. A candidate image group to which the third screening rule applies may be one in which the face images have the same target level, or one in which the target levels are adjacent levels among the levels corresponding to the second face attribute.
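Putting the three conditions together, the rule selection for one candidate image group might be dispatched as sketched below. It reuses the condition helpers from the earlier sketches, and the fourth preset threshold value is an assumption.

```python
def meets_third_condition(grades, second_grades=SECOND_ATTRIBUTE_GRADES, fourth_threshold=2):
    # all levels correspond to the second face attribute, and every pairwise
    # level difference exceeds the fourth preset threshold (value assumed)
    nums = [int(g[1:]) for g in grades]
    all_in_second = all(g in second_grades for g in grades)
    gaps_ok = all(abs(a - b) > fourth_threshold
                  for i, a in enumerate(nums) for b in nums[i + 1:])
    return all_in_second and gaps_ok

def select_screening_rule(grades, pupil_distances):
    if meets_first_condition(grades):
        return "first rule: keep the highest target level"
    if meets_second_condition(grades, pupil_distances):
        return "second rule: keep the highest first comprehensive score"
    if meets_third_condition(grades):
        return "first rule: keep the highest target level"
    return "third rule: keep the highest second comprehensive score"
```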
The second composite score is calculated by the following steps:
1. determining a third type face attribute value of each face image in the alternative image group; the third type face attribute value is a value corresponding to a third type face attribute, and the third type face attribute is a face attribute except for the face attributes in the first type face attribute and the second type face attribute.
It should be noted that the face attributes in the third type of face attributes do not belong to the first type of face attributes, nor do they belong to the second type of face attributes. The face attribute in the third type of face attribute is preset.
As an embodiment of the present invention, the third type of facial attributes includes whether it is yin-yang face, open and closed eyes, and open and closed mouth.
It is understood that a yin-yang face refers to a face in which part of the face has a brightness greater than a preset brightness value, or part of the face has a brightness less than another preset brightness value. Illustratively, when light falls on one side of the face, that side appears brighter while the other side appears darker, so the darker side is unclear and shadowed; such a face may be called a yin-yang face. Of course, the above examples are merely specific examples of the embodiments of the present invention, and are not intended to be limiting.
Whether the face is a yin-yang face, whether the mouth is open or closed, and whether the eyes are open or closed may each be determined by a predetermined calculation rule, or by a trained neural network, among other approaches, which are not listed one by one here. If these face attribute values are obtained through trained neural networks, the three attributes of yin-yang face, mouth open/closed and eyes open/closed may each correspond to a separate neural network; the networks perform different functions, but their structures may be the same.
2. And determining a second weight combination of each face image in the alternative image group according to the relation between the preset target level and the second weight combination.
In the embodiment of the invention, each face image in the alternative image group has a target level, and the second weight combination of each face image can be determined according to the relationship between the preset target level and the second weight combination.
3. And aiming at each face image in the alternative image group, obtaining a second comprehensive score of the face image according to the determined second weight combination of the face image and each face attribute value in a fourth type face attribute value, wherein the fourth type face attribute value is a value corresponding to a fourth type face attribute, and the fourth type face attribute comprises the first type face attribute, the second type face attribute and the face attribute in the third type face attribute.
As an embodiment of the present invention, the fourth type of face attribute includes sharpness, brightness, degree of occlusion, deflection angle, pitch angle, interpupillary distance, whether it is yin-yang face, open and closed eyes, and open and closed mouth.
In the embodiment of the present invention, the fourth type of face attributes includes the above-mentioned face attributes other than positive and negative faces.
The second comprehensive score of the face image is obtained according to the determined second weight combination and each face attribute value in the fourth type of face attribute values; it is therefore based on all of the face attribute values in the fourth type of face attribute values, each of which reflects the quality of one face attribute. The second comprehensive score thus comprehensively evaluates indexes of different dimensions of the quality of the face image and can reflect the quality of each face image.
As an embodiment of the present invention, the step of performing, for each face image in the candidate image group, a weighted calculation according to each face attribute value in the fourth type of face attribute values and the determined second weight combination of the face image, to obtain the second comprehensive score of the face image, may include:
step D1: and determining the value of each face attribute in the fourth type of face attributes of the face image according to the mapping relation between the value corresponding to each face attribute in the preset fourth type of face attributes and the value, wherein each type of face attribute corresponds to the same value range.
The values corresponding to different face attributes differ in nature: some face attributes take large values and some take small values. In order to evaluate the quality of each face image more accurately, in the embodiment of the invention the values corresponding to different face attributes are normalized to the same value range, so that the values of different face attributes lie in the same range and can be compared with each other.
Illustratively, the value range is 0-9, the interpupillary distance is 50, and the value of the interpupillary distance can be determined to be 8 according to a preset mapping relation. Of course, the above examples are merely specific examples of the embodiments of the present invention, and are not intended to be limiting.
It should be noted that, for whether the yin-yang face is present, there are only two values, where the yin-yang face corresponds to one value, the non-yin-yang face corresponds to another value, and the value corresponding to the yin-yang face is smaller than the value corresponding to the non-yin-yang face; similarly, for the open and closed eyes, only two values are provided, one value corresponds to the open eye, the other value corresponds to the closed eye, and the value corresponding to the open eye is larger than the value corresponding to the closed eye; similarly, for the open mouth and the closed mouth, only two values are provided, wherein the open mouth corresponds to one value, the closed mouth corresponds to the other value, and the value corresponding to the open mouth is larger than the value corresponding to the closed mouth.
Step D2: and according to the determined second weight combination of the face image, carrying out weighted calculation on the value of each face attribute corresponding to the face image to obtain a second comprehensive score of the face image.
The principle of obtaining the second composite score is the same as that of obtaining the first composite score, and the details are not repeated herein.
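A sketch of steps D1 and D2: each fourth-type attribute value is first mapped into a shared range (0-9 here, following the interpupillary-distance illustration above) by a preset per-attribute mapping, then weighted with the grade-dependent second weight combination. Both the mapping functions and the weights are assumptions supplied by the caller.

```python
def second_comprehensive_score(raw_values, mappings, second_weight_combination):
    """raw_values:                {attribute: raw fourth-type attribute value}
    mappings:                  {attribute: callable mapping a raw value into the common 0-9 range}  (step D1)
    second_weight_combination: {attribute: weight chosen according to the image's target level}     (step D2)
    """
    return sum(second_weight_combination[a] * mappings[a](v) for a, v in raw_values.items())
```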
In the embodiment of the invention, the specific situation of the face image in each alternative image group is considered, the screening rule in each alternative image group is determined in a targeted manner, the determined screening rule is more accurate, the face image to be utilized with better quality can be screened out, and the accuracy of face recognition is improved.
The following describes a face image selection method provided in the embodiment of the present invention with reference to specific examples.
Firstly, the grade corresponding to each face attribute and the preset condition of each grade are explained: the grade corresponding to the positive/negative face attribute is A01, and the preset condition of A01 is a negative face; the grade corresponding to the sharpness is A02, and the preset condition of A02 is that the sharpness value is less than 30; the grade corresponding to the brightness is A03, and the preset condition of A03 is that the brightness value is less than 55 or greater than or equal to 200; the grades corresponding to the shielding degree are A04 and A05, the preset condition of A04 is mouth-nose shielding, and the preset condition of A05 is binocular shielding;
the grade corresponding to the deflection angle is A06-A13, the preset condition of A06 is that the absolute value of the deflection angle is more than 35 or the absolute value of the pitch angle is more than 30, and A06 is also the grade corresponding to the pitch angle; the preset condition of A07 is that the absolute value of the deflection angle is less than or equal to 35 degrees and greater than 30 degrees; the preset condition of A08 is that the absolute value of the deflection angle is less than or equal to 30 degrees and greater than 25 degrees; the preset condition of A09 is that the absolute value of the deflection angle is less than or equal to 25 degrees and greater than 20 degrees; the preset condition of A10 is that the absolute value of the deflection angle is less than or equal to 20 degrees and greater than 15 degrees; the preset condition of A11 is that the absolute value of the deflection angle is less than or equal to 15 degrees and greater than 10 degrees; the preset condition of A12 is that the absolute value of the deflection angle is less than or equal to 10 degrees and greater than 5 degrees; the preset condition of a13 is that the absolute value of the deflection angle is 5 degrees or less and 0 degree or more.
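For reference, the grade table just listed can be written directly as the ordered preset-condition list consumed by the level-determination sketch shown earlier; the attribute keys and the tuple layout are assumptions for illustration only.

```python
# Grades A01-A13 and their preset conditions from the example above,
# as an ordered list of (grade, attribute, predicate) entries.
GRADE_TABLE = [
    ("A01", "frontal",    lambda is_frontal: not is_frontal),           # negative (non-frontal) face
    ("A02", "sharpness",  lambda s: s < 30),
    ("A03", "brightness", lambda b: b < 55 or b >= 200),
    ("A04", "occlusion",  lambda o: o == "mouth_nose"),
    ("A05", "occlusion",  lambda o: o == "both_eyes"),
    ("A06", "angles",     lambda a: abs(a[0]) > 35 or abs(a[1]) > 30),  # a = (deflection, pitch)
    ("A07", "angles",     lambda a: 30 < abs(a[0]) <= 35),
    ("A08", "angles",     lambda a: 25 < abs(a[0]) <= 30),
    ("A09", "angles",     lambda a: 20 < abs(a[0]) <= 25),
    ("A10", "angles",     lambda a: 15 < abs(a[0]) <= 20),
    ("A11", "angles",     lambda a: 10 < abs(a[0]) <= 15),
    ("A12", "angles",     lambda a: 5 < abs(a[0]) <= 10),
    ("A13", "angles",     lambda a: abs(a[0]) <= 5),
]
```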
The obtained face image group A comprises 5 face images, namely face images 1-5, which are ordered by acquisition time. Positive/negative face screening is performed on the face image 1; it is determined to be a negative face, so the target grade of the face image 1 is A01. Positive/negative face screening is then performed on the face image 2; it is determined to be a positive face, so the sharpness value of the face image 2 is determined and found to be 80, and its brightness value is then determined and found to be 100. Since the target grade of the face image 2 has still not been confirmed, its shielding degree is determined to be slight shielding, its deflection angle is determined to be 10 degrees, and its pitch angle is determined to be 4 degrees; the target grade of the face image 2 can then be confirmed as A12. Following the same procedure as for the face image 2, it is determined that the target grade of the face image 3 is A08, that of the face image 4 is A04, and that of the face image 5 is A09.
After the target grades of the 5 face images are determined, an alternative image group 1 is constructed, comprising the face image 1 and the face image 2. The face images in the alternative image group 1 are determined to meet the first preset screening condition, so the screening rule for the alternative image group 1 is the preset first screening rule; the alternative image group 1 is screened by the first screening rule, and the obtained face image to be utilized is the face image 2.
After the alternative image group 1 is screened, an alternative image group 2 is constructed from the face image 2 and the face image 3. The face images in the alternative image group 2 do not meet the first preset screening condition, so the interpupillary distances of the face image 2 and the face image 3 need to be determined: the interpupillary distance of the face image 2 is 45 and that of the face image 3 is 32. With the first preset threshold being 15, the second preset threshold being 2 and the third preset threshold being 40, the alternative image group 2 meets the second preset screening condition. The face attribute reference values of the face image 2 are 13, 10, 0, 3 and 4, the face attribute reference values of the face image 3 are 0, 5, 4 and 5, and the first weight combination is 20%, 30%, 10%, 25% and 15%; the first comprehensive score of the face image 2 is 6.95 and that of the face image 3 is 1.95. The obtained face image to be utilized is the face image 2.
After the alternative image group 2 is screened, an alternative image group 3 is constructed from the face image 2 and the face image 4. The alternative image group 3 meets the first preset screening condition, so the obtained face image to be utilized is the face image 2.
After the alternative image group 3 is screened, an alternative image group 4 is constructed from the face image 2 and the face image 5. The face images in the alternative image group 4 do not meet the first preset screening condition, and since the interpupillary distance of the face image 5 is determined to be 50, they do not meet the second preset screening condition either. With the fourth preset threshold being 2, it can be determined that the face images in the alternative image group 4 meet the third preset screening condition, so the preset first screening rule is determined as the screening rule for the alternative image group 4, and the obtained face image to be utilized is the face image 2. At this point, the face image 2 is the target face image of the face image group A.
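The sequential grouping followed in this example (pair the first two images, then pair each later image with the previous winner) can be sketched as below. Here screen_group stands for choosing and applying the screening rule for one candidate image group and is an assumed helper; the group size of two follows the example, although the embodiment also allows other group constructions.

```python
def select_target_face(face_images, screen_group, group_size=2):
    # screen_group(group) -> face image to be utilized for that candidate image group (assumed helper)
    winner = screen_group(face_images[:group_size])         # first candidate image group
    for face_image in face_images[group_size:]:
        winner = screen_group([winner, face_image])          # previous winner meets the next image
    return winner                                            # target face image of the face image group
```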
Corresponding to the above method embodiment, an embodiment of the present invention provides a face image selection apparatus, and referring to fig. 5, the apparatus includes:
an obtaining module 501, configured to obtain a face image group, where the face image group includes a plurality of face images;
a first determining module 502, configured to determine a target level of each face image based on a first type face attribute value of each face image in the face image group, where the first type face attribute value is a value corresponding to a first type face attribute, and the first type face attribute includes at least one face attribute;
a constructing module 503, configured to construct a plurality of alternative image groups corresponding to the face image group, where the plurality of alternative image groups cover all face images in the face image group, and each alternative image group includes at least two face images;
a first obtaining module 504, configured to determine, after each candidate image group is formed, a screening rule for each candidate image group based on a target level of each face image in the candidate image group, and screen, according to the corresponding screening rule, the face images in the candidate image group to obtain a face image to be utilized;
a second determining module 505, configured to determine, based on all to-be-utilized face images corresponding to the face image group, a target face image corresponding to the face image group.
In the embodiment of the invention, the screening rule of the alternative image group is determined according to the target grade of the face image in the alternative image group, the face image to be utilized is determined according to the screening rule, and then the target image is determined. Compared with the prior art, the selection is more targeted, different screening rules can be utilized to screen different alternative image groups, the face images with better quality are screened, and the accuracy of face recognition is further improved.
As an embodiment of the present invention, the building module 503 may include:
the first construction submodule is used for selecting a first preset number of face images from the face image group and constructing a first alternative image group;
the second construction sub-module is used for determining the target number of the face images which are not selected currently in the face image group for any non-first alternative image group, if the target number is larger than a second preset number, selecting the face images with the second preset number from the face images which are not selected currently in the face image group, and constructing an alternative image group by using the selected face images with the second preset number and the face images to be utilized which are obtained by screening from the previous alternative image group; and if the number of the targets is not more than a second preset number, constructing an alternative image group by using all face images which are not selected currently in the face image group and face images to be utilized which are obtained by screening from the previous alternative image group.
As an embodiment of the present invention, the constructing module 503 is configured to divide the facial images in the facial image group into a plurality of alternative image groups;
alternatively,
and equally dividing the face image in the face image group into a plurality of alternative image groups.
Optionally, the first determining module 502 is configured to determine the target level of each face image in the face image group in the following manner:
the determining unit is used for determining a first face attribute value of the current face image; the first face attribute value is a value corresponding to a first face attribute, and the first face attribute is one of the first class of face attributes;
judging whether the first face attribute value meets a preset condition corresponding to the first face attribute, wherein the first face attribute corresponds to at least one preset condition, the preset condition corresponding to the first face attribute is an attribute value range set for one grade of the first face attribute, and one preset condition corresponds to one grade;
when the judgment result is yes, determining the grade corresponding to the met preset condition as the target grade of the current face image;
and when the judgment result is negative, selecting an unused face attribute from the first class of face attributes, replacing the first face attribute with the selected face attribute, and returning to the step of determining the first face attribute value of the current face image.
As an embodiment of the present invention, the first type of face attributes may include positive and negative faces, sharpness, brightness, degree of occlusion, deflection angle, and pitch angle.
As an embodiment of the present invention, each face attribute in the first class of face attributes has a correspondence with at least one level;
the first obtaining module 504 may include:
the judging submodule is used for judging whether the face images in the alternative image group meet a first preset screening condition, wherein the first preset screening condition is as follows: the target grades of the face images in the alternative image group are different and are not the grades corresponding to the second face attributes at the same time; the second face attribute is one of the first class of face attributes;
and the first determining submodule is used for determining a preset first screening rule as the screening rule aiming at the alternative image group under the condition that the judgment result of the judging submodule is satisfied, wherein the first screening rule is a rule for screening the face image with the highest target level.
As an embodiment of the present invention, the apparatus may further include:
a third determining module, configured to determine a third face attribute value of each face image in the candidate image group when the determination result of the determining sub-module is not satisfied; the third face attribute value is a value corresponding to a third face attribute, and the third face attribute is a face attribute except for a face attribute in the first class of face attributes;
the first judging module is used for judging whether the face images in the alternative image group meet a second preset screening condition, wherein the second preset screening condition is as follows: the target grade of the face images in the alternative image group is simultaneously the grade corresponding to the second face attribute, the maximum difference value of the difference values between the third face attribute values of every two face images is larger than a first preset threshold value, and the number of the third face images is smaller than a second preset threshold value; the third face image is a face image of which the third face attribute value is smaller than a third preset threshold value;
a fourth determining module, configured to determine a preset second screening rule as the screening rule for the candidate image group if the determination result of the first judging module is that the condition is satisfied; the second screening rule is a rule for screening the face image with the highest first comprehensive score;
a second obtaining module, configured to obtain, for each face image in the candidate image group, a reference value of each face attribute in second types of face attributes of the face image, where the second types of face attributes include a third face attribute, and a reference value of a face attribute is determined based on the face attribute value;
and the third obtaining module is used for carrying out weighted calculation on the reference value of each face attribute of the face image according to a first weight combination which is preset aiming at the third face attribute aiming at each face image in the alternative image group so as to obtain a first comprehensive score of the face image.
As an embodiment of the present invention, the apparatus may further include:
a second judging module, configured to, when the judgment result of the first judging module is not satisfied, judge whether the face image in the candidate image group satisfies a third preset screening condition, where the third preset screening condition is: the target grade of the face image in the alternative image group is the grade corresponding to the second face attribute, and the grade difference value between every two target grades is greater than a fourth preset threshold value;
a fifth determining module, configured to determine the first screening rule as the screening rule for the candidate image group if the determination result of the second judging module is yes.
As an embodiment of the present invention, the apparatus may further include:
a sixth determining module, configured to determine a preset third filtering rule as the filtering rule for the candidate image group when the determination result of the second determining module is not satisfied; the third screening rule is a rule for screening the face image with the highest second comprehensive score;
a seventh determining module, configured to determine a third type face attribute value of each face image in the candidate image group; the third type face attribute value is a value corresponding to a third type face attribute, and the third type face attribute is a face attribute except for a face attribute in the first type face attribute and the second type face attribute;
the eighth determining module is used for determining a second weight combination of each face image in the alternative image group according to the relation between the preset target level and the second weight combination;
a fourth obtaining module, configured to obtain, for each face image in the candidate image group, a second comprehensive score of the face image according to each face attribute value in a fourth type face attribute value and a second weight combination of the determined face image, where the fourth type face attribute value is a value corresponding to a fourth type face attribute, and the fourth type face attribute includes a face attribute in the first type face attribute, the second type face attribute, and the third type face attribute.
As an embodiment of the present invention, the fourth obtaining module may include:
the second determining submodule is used for determining the value of each face attribute in the fourth type face attributes of the face image according to the mapping relation between the value corresponding to each face attribute in the preset fourth type face attributes and the value, wherein each type of face attribute corresponds to the same value range;
and the obtaining submodule is used for carrying out weighted calculation on the value of each face attribute corresponding to the face image according to the determined second weight combination of the face image to obtain a second comprehensive score of the face image.
As an embodiment of the present invention, the second face attribute may be a deflection angle;
the third face attribute may be a pupil distance;
the second type of face attributes can comprise a pupil distance, a pitch angle, a shielding degree and a deflection angle;
the third type of facial attributes may include whether the face is yin-yang, open and closed eyes and open and closed mouth;
the fourth type of facial attributes may include sharpness, brightness, degree of occlusion, deflection angle, pitch angle, interpupillary distance, whether yin-yang face, open and close eyes, and open and close mouth.
The embodiment of the present invention further provides a computer device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete mutual communication through the communication bus 604,
a memory 603 for storing a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
acquiring a face image group, wherein the face image group comprises a plurality of face images;
determining a target grade of each face image based on a first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to a first type face attribute, and the first type face attribute comprises at least one face attribute;
constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images;
after each alternative image group is formed, determining a screening rule for the alternative image group based on the target grade of each face image in the alternative image group, and screening the face images in the alternative image group through the corresponding screening rule to obtain a face image to be utilized;
and determining a target face image corresponding to the face image group based on all face images to be utilized corresponding to the face image group.
The specific implementation of the face image selection method executed by the computer device is the same as the various implementations mentioned in the foregoing method embodiments, and details are not repeated here.
The communication bus mentioned in the above computer device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the computer device and other devices.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In the embodiment of the invention, the screening rule of the alternative image group is determined according to the target grade of the face image in the alternative image group, the face image to be utilized is determined according to the screening rule, and then the target image is determined. Compared with the prior art, the selection is more targeted, different screening rules can be utilized to screen different alternative image groups, the face images with better quality are screened, and the accuracy of face recognition is further improved.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program is executed by a processor to implement the face image selection method in any one of the above embodiments.
In the embodiment of the invention, the screening rule of the alternative image group is determined according to the target grade of the face image in the alternative image group, the face image to be utilized is determined according to the screening rule, and then the target image is determined. Compared with the prior art, the selection is more targeted, different screening rules can be utilized to screen different alternative image groups, the face images with better quality are screened, and the accuracy of face recognition is further improved.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (23)

1. A method for selecting a face image, the method comprising:
acquiring a face image group, wherein the face image group comprises a plurality of face images;
determining a target grade of each face image based on a first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to a first type face attribute, and the first type face attribute comprises at least one face attribute;
constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images;
after each alternative image group is formed, determining a screening rule for the alternative image group based on the target grade of each face image in the alternative image group, and screening the face images in the alternative image group through the corresponding screening rule to obtain a face image to be utilized;
and determining a target face image corresponding to the face image group based on all face images to be utilized corresponding to the face image group.
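For illustration only, a minimal Python sketch of the flow recited in claim 1 is given below; the data structures, function names, and the final max-by-level selection are assumptions, not the patented implementation.

```python
# Illustrative sketch only; all names and helpers are hypothetical, not the patented implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class FaceImage:
    image_id: str
    attributes: Dict[str, float]   # e.g. {"yaw": 12.0, "sharpness": 0.8}
    target_level: int = 0          # filled in by the level-determination step

def select_target_image(
    group: List[FaceImage],
    determine_level: Callable[[FaceImage], int],
    build_candidate_groups: Callable[[List[FaceImage]], List[List[FaceImage]]],
    pick_rule: Callable[[List[FaceImage]], Callable[[List[FaceImage]], FaceImage]],
) -> FaceImage:
    """Pick one representative face image from a face image group."""
    # 1. Determine a target level for every image from its first-class attribute values.
    for img in group:
        img.target_level = determine_level(img)

    # 2. Construct candidate image groups that together cover the whole group.
    candidate_groups = build_candidate_groups(group)

    # 3. For each candidate group, choose a screening rule from the target levels
    #    inside that group and apply it to obtain an image "to be utilized".
    to_be_utilized = []
    for cand in candidate_groups:
        rule = pick_rule(cand)
        to_be_utilized.append(rule(cand))

    # 4. Determine the final target image from all images to be utilized
    #    (here simply the one with the highest target level, as an assumption).
    return max(to_be_utilized, key=lambda img: img.target_level)
```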
2. The method according to claim 1, wherein the step of constructing a plurality of candidate image groups corresponding to the face image group comprises:
selecting a first preset number of face images from the face image group to construct a first alternative image group;
for any non-first alternative image group, determining the target number of face images which are not currently selected in the face image group; if the target number is larger than a second preset number, selecting the second preset number of face images from the face images which are not currently selected in the face image group, and constructing an alternative image group by using the selected second preset number of face images and the face images to be utilized obtained by screening from the previous alternative image group; and if the target number is not larger than the second preset number, constructing an alternative image group by using all face images which are not currently selected in the face image group and the face images to be utilized obtained by screening from the previous alternative image group.
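One possible reading of the grouping in claim 2 is a tournament-style pass in which each new candidate group contains the image screened from the previous group plus the next batch of unselected images. A hedged sketch follows; the function names and the interleaving of construction and screening are assumptions for illustration.

```python
# Illustrative only: a tournament-style grouping in the spirit of claim 2.
from typing import Callable, List, TypeVar

T = TypeVar("T")

def sliding_selection(
    images: List[T],
    first_batch: int,                  # "first preset number"
    batch: int,                        # "second preset number"
    screen: Callable[[List[T]], T],    # screening rule applied to one candidate group
) -> T:
    """Select a single image by repeatedly screening small candidate groups."""
    # First candidate group: the first `first_batch` images.
    pool = images[:first_batch]
    winner = screen(pool)
    index = first_batch

    # Each later group: the previous winner plus the next batch of unselected images.
    while index < len(images):
        remaining = len(images) - index
        take = batch if remaining > batch else remaining
        pool = [winner] + images[index:index + take]
        winner = screen(pool)
        index += take
    return winner

# Example usage with a trivial "rule": keep the numerically largest element.
if __name__ == "__main__":
    print(sliding_selection(list(range(10)), first_batch=3, batch=2, screen=max))
```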
3. The method according to claim 1, wherein the step of constructing a plurality of candidate image groups corresponding to the face image group comprises:
dividing the face images in the face image group into a plurality of alternative image groups;
or,
equally dividing the face images in the face image group into a plurality of alternative image groups.
4. The method of claim 1, wherein the step of determining the target level of each face image based on the first type of face attribute value of each face image in the face image group comprises:
determining the target grade of each face image in the face image group according to the following modes:
determining a first face attribute value of a current face image; the first face attribute value is a value corresponding to a first face attribute, and the first face attribute is one of the first class of face attributes;
judging whether the first face attribute value meets a preset condition corresponding to the first face attribute, wherein the first face attribute corresponds to at least one preset condition, the preset condition corresponding to the first face attribute is an attribute value range set for one grade of the first face attribute, and one preset condition corresponds to one grade;
when the judgment result is yes, determining the grade corresponding to the met preset condition as the target grade of the current face image;
and when the judgment result is negative, selecting an unused face attribute from the first class of face attributes, replacing the first face attribute with the selected face attribute, and returning to the step of determining the first face attribute value of the current face image.
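The level determination of claim 4 can be read as testing one first-class attribute at a time against per-level value ranges until a condition is satisfied. A minimal sketch follows; the attribute order, ranges, and the default level returned when no condition matches are illustrative assumptions.

```python
# Illustrative sketch: assign a target level by testing first-class attributes in turn.
from typing import Dict, List, Tuple

# Hypothetical configuration: for each first-class attribute, an ordered list of
# (level, (low, high)) ranges; a value falling inside a range yields that level.
LEVEL_RANGES: Dict[str, List[Tuple[int, Tuple[float, float]]]] = {
    "occlusion": [(1, (0.5, 1.0))],                    # heavily occluded -> lowest level
    "yaw":       [(2, (45.0, 90.0))],                  # strongly turned face
    "sharpness": [(4, (0.8, 1.0)), (3, (0.5, 0.8))],   # sharper images get higher levels
}

def determine_target_level(attributes: Dict[str, float], default_level: int = 5) -> int:
    """Return the level of the first satisfied preset condition, else a default."""
    for attr, ranges in LEVEL_RANGES.items():      # iterate over first-class attributes
        value = attributes.get(attr)
        if value is None:
            continue
        for level, (low, high) in ranges:          # one preset condition per level
            if low <= value <= high:
                return level                       # first satisfied condition wins
    return default_level                           # no condition satisfied (assumed fallback)

print(determine_target_level({"occlusion": 0.1, "yaw": 60.0, "sharpness": 0.9}))  # -> 2
```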
5. The method of claim 4, wherein the first type of face attributes comprise positive and negative faces, sharpness, brightness, degree of occlusion, yaw angle, and pitch angle.
6. The method of any of claims 1-5, wherein each of the first type of face attributes has a correspondence to at least one level;
the step of determining a screening rule for the candidate image group based on the target level of each face image in the candidate image group includes:
judging whether the face images in the alternative image group meet a first preset screening condition, wherein the first preset screening condition is as follows: the target grades of the face images in the alternative image group differ from one another and are not all grades corresponding to the second face attribute; the second face attribute is one of the first class of face attributes;
and if so, determining a preset first screening rule as a screening rule aiming at the alternative image group, wherein the first screening rule is a rule for screening the face image with the highest target grade.
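A hedged sketch of claim 6: when the target levels in a candidate group differ and are not all levels tied to the second face attribute, the first screening rule keeps the highest-level image. Representing the second face attribute's levels as a set is an assumption.

```python
# Illustrative only: the first preset screening condition and first screening rule (claim 6).
from typing import List, Optional, Set

def pick_by_first_rule(levels: List[int], second_attr_levels: Set[int]) -> Optional[int]:
    """Return the index of the face image to keep, or None if the condition is not met."""
    levels_differ = len(set(levels)) > 1
    all_second = all(lv in second_attr_levels for lv in levels)
    if levels_differ and not all_second:
        # First screening rule: keep the face image with the highest target level.
        return max(range(len(levels)), key=lambda i: levels[i])
    return None  # fall through to the other conditions (claims 7-9)

print(pick_by_first_rule([3, 5, 4], second_attr_levels={2}))  # -> 1
```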
7. The method according to claim 6, wherein in a case that the face image in the candidate image group is judged not to satisfy the first preset screening condition, the method further comprises:
determining a third face attribute value of each face image in the alternative image group; the third face attribute value is a value corresponding to a third face attribute, and the third face attribute is a face attribute except for a face attribute in the first class of face attributes;
judging whether the face images in the alternative image group meet a second preset screening condition, wherein the second preset screening condition is as follows: the target grades of all the face images in the alternative image group are the grade corresponding to the second face attribute, the maximum of the differences between the third face attribute values of every two face images is larger than a first preset threshold value, and the number of third face images is smaller than a second preset threshold value; a third face image is a face image whose third face attribute value is smaller than a third preset threshold value;
if yes, determining a preset second screening rule as a screening rule aiming at the alternative image group; the second screening rule is a rule for screening the face image with the highest first comprehensive score;
the first composite score is calculated by the following steps:
obtaining a reference value of each face attribute in second type face attributes of the face image aiming at each face image in the alternative image group, wherein the second type face attributes comprise third face attributes, and the reference value of the face attributes is determined based on a value corresponding to the face attributes;
and for each face image in the alternative image group, carrying out weighted calculation on the reference value of each face attribute of the face image according to a first weight combination which is preset for the third face attribute, and obtaining a first comprehensive score of the face image.
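The first comprehensive score of claim 7 is a weighted combination of reference values of the second-class attributes, using a weight combination preset for the third face attribute. A minimal sketch with made-up attribute names, weights, and reference values:

```python
# Illustrative weighted scoring in the spirit of claim 7; weights and attributes are assumptions.
from typing import Dict

# Hypothetical "first weight combination" preset for the third face attribute (pupil distance).
FIRST_WEIGHTS: Dict[str, float] = {
    "pupil_distance": 0.4,
    "pitch": 0.2,
    "occlusion": 0.2,
    "yaw": 0.2,
}

def first_composite_score(reference_values: Dict[str, float]) -> float:
    """Weighted sum of second-class attribute reference values (higher is better)."""
    return sum(FIRST_WEIGHTS[a] * reference_values.get(a, 0.0) for a in FIRST_WEIGHTS)

def pick_by_second_rule(images: Dict[str, Dict[str, float]]) -> str:
    """Second screening rule: keep the image with the highest first composite score."""
    return max(images, key=lambda img_id: first_composite_score(images[img_id]))

faces = {
    "a.jpg": {"pupil_distance": 0.9, "pitch": 0.7, "occlusion": 0.8, "yaw": 0.6},
    "b.jpg": {"pupil_distance": 0.5, "pitch": 0.9, "occlusion": 0.9, "yaw": 0.9},
}
print(pick_by_second_rule(faces))  # -> "a.jpg"
```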
8. The method according to claim 7, wherein in a case that the face image in the candidate image group is judged not to satisfy the second preset screening condition, the method further comprises:
judging whether the face images in the alternative image group meet a third preset screening condition, wherein the third preset screening condition is as follows: the target grade of the face image in the alternative image group is the grade corresponding to the second face attribute, and the grade difference value between every two target grades is greater than a fourth preset threshold value;
and if so, determining the first screening rule as the screening rule aiming at the alternative image group.
9. The method according to claim 8, wherein in a case that it is determined that the face image in the candidate image group does not satisfy the third preset filtering condition, the method further comprises:
determining a preset third screening rule as a screening rule aiming at the alternative image group; the third screening rule is a rule for screening the face image with the highest second comprehensive score;
the second composite score is calculated by the following steps:
determining a third type face attribute value of each face image in the alternative image group; the third type face attribute value is a value corresponding to a third type face attribute, and the third type face attribute is a face attribute except for a face attribute in the first type face attribute and the second type face attribute;
determining a second weight combination of each face image in the alternative image group according to the relation between a preset target level and the second weight combination;
and aiming at each face image in the alternative image group, obtaining a second comprehensive score of the face image according to the determined second weight combination of the face image and each face attribute value in a fourth type face attribute value, wherein the fourth type face attribute value is a value corresponding to a fourth type face attribute, and the fourth type face attribute comprises the first type face attribute, the second type face attribute and the face attribute in the third type face attribute.
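Taken together, claims 6 to 9 describe a cascade for choosing a candidate group's screening rule. One hedged reading, with the three condition tests abstracted into booleans:

```python
# Illustrative decision cascade over the three preset screening conditions (claims 6-9).
def choose_screening_rule(cond1: bool, cond2: bool, cond3: bool) -> str:
    """Return which screening rule applies to a candidate group (condition tests abstracted)."""
    if cond1:   # levels differ and are not all levels of the second face attribute
        return "first rule: highest target level"
    if cond2:   # all at second-attribute levels, large third-attribute spread, few low-value images
        return "second rule: highest first comprehensive score"
    if cond3:   # all at second-attribute levels, pairwise level gaps above the fourth threshold
        return "first rule: highest target level"
    return "third rule: highest second comprehensive score"

print(choose_screening_rule(False, False, True))  # -> first rule
```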
10. The method according to claim 9, wherein the step of performing a weighted calculation on each face image in the candidate image group according to the determined second weight combination of the face image and each face attribute value in the fourth class of face attribute values to obtain a second composite score of the face image comprises:
determining a score of each face attribute in the fourth type of face attributes of the face image according to a preset mapping relation, for each face attribute in the fourth type of face attributes, between the value corresponding to that face attribute and a score, wherein every face attribute corresponds to the same score range;
and according to the determined second weight combination of the face image, carrying out weighted calculation on the scores of the face attributes corresponding to the face image to obtain a second comprehensive score of the face image.
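Claims 9 and 10 compute the second comprehensive score by first mapping each fourth-class attribute value onto a common score range and then weighting the scores with a combination looked up from the image's target level. The sketch below is one possible reading; the mapping functions, level-to-weight table, and attribute names are assumptions.

```python
# Illustrative only: normalize fourth-class attribute values to [0, 1], then apply
# a weight combination looked up from the image's target level (claims 9-10).
from typing import Callable, Dict

# Hypothetical mappings from raw attribute values onto a shared [0, 1] score range.
MAPPINGS: Dict[str, Callable[[float], float]] = {
    "sharpness": lambda v: v,                              # already in [0, 1]
    "yaw":       lambda v: max(0.0, 1.0 - abs(v) / 90.0),  # 0 deg best, 90 deg worst
    "occlusion": lambda v: 1.0 - v,                        # less occlusion is better
}

# Hypothetical "second weight combinations", keyed by target level.
LEVEL_WEIGHTS: Dict[int, Dict[str, float]] = {
    5: {"sharpness": 0.5, "yaw": 0.3, "occlusion": 0.2},
    3: {"sharpness": 0.2, "yaw": 0.3, "occlusion": 0.5},
}

def second_composite_score(raw: Dict[str, float], target_level: int) -> float:
    """Map raw values to the common score range, then apply the level's weight combination."""
    weights = LEVEL_WEIGHTS[target_level]
    scores = {a: MAPPINGS[a](raw[a]) for a in weights}   # map each raw value to [0, 1]
    return sum(weights[a] * scores[a] for a in weights)  # weighted calculation

print(second_composite_score({"sharpness": 0.9, "yaw": 20.0, "occlusion": 0.1}, 5))
```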
11. The method of claim 10, wherein the second face attribute is a yaw angle;
the third face attribute is a pupil distance;
the second type of face attributes comprise a pupil distance, a pitch angle, a shielding degree and a deflection angle;
the third type of face attributes comprises whether the face is a yin-yang face, whether the eyes are open or closed, and whether the mouth is open or closed;
the fourth type of face attributes comprises definition, brightness, shielding degree, deflection angle, pitch angle, pupil distance, whether the face is a yin-yang face, whether the eyes are open or closed, and whether the mouth is open or closed.
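Collected as a configuration, the attribute classes fixed by claim 11 could look like the following; the dictionary layout and identifier names are illustrative only.

```python
# Attribute taxonomy as recited in claim 11 (the dict layout and names are illustrative).
ATTRIBUTE_CLASSES = {
    "second_face_attribute": "yaw_angle",
    "third_face_attribute": "pupil_distance",
    "second_class": ["pupil_distance", "pitch_angle", "occlusion_degree", "yaw_angle"],
    "third_class": ["yin_yang_face", "eyes_open_closed", "mouth_open_closed"],
    "fourth_class": [
        "sharpness", "brightness", "occlusion_degree", "yaw_angle", "pitch_angle",
        "pupil_distance", "yin_yang_face", "eyes_open_closed", "mouth_open_closed",
    ],
}
```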
12. A face image selection apparatus, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a face image group, and the face image group comprises a plurality of face images;
the first determining module is used for determining the target grade of each face image based on the first type face attribute value of each face image in the face image group, wherein the first type face attribute value is a value corresponding to the first type face attribute, and the first type face attribute comprises at least one face attribute;
the construction module is used for constructing a plurality of alternative image groups corresponding to the face image group, wherein the plurality of alternative image groups cover all face images in the face image group, and each alternative image group comprises at least two face images;
the first obtaining module is used for, after each alternative image group is formed, determining a screening rule for the alternative image group based on the target grade of each face image in the alternative image group, and screening the face images in the alternative image group through the corresponding screening rule to obtain the face images to be utilized;
and the second determining module is used for determining a target face image corresponding to the face image group based on all the face images to be utilized corresponding to the face image group.
13. The apparatus of claim 12, wherein the building block comprises:
the first construction submodule is used for selecting a first preset number of face images from the face image group and constructing a first alternative image group;
the second construction sub-module is used for, for any non-first alternative image group, determining the target number of face images which are not currently selected in the face image group; if the target number is larger than a second preset number, selecting the second preset number of face images from the face images which are not currently selected in the face image group, and constructing an alternative image group by using the selected second preset number of face images and the face images to be utilized obtained by screening from the previous alternative image group; and if the target number is not larger than the second preset number, constructing an alternative image group by using all face images which are not currently selected in the face image group and the face images to be utilized obtained by screening from the previous alternative image group.
14. The apparatus according to claim 12, wherein the construction module is specifically configured to divide the facial images in the facial image group into a plurality of alternative image groups;
or,
to equally divide the face images in the face image group into a plurality of alternative image groups.
15. The apparatus of claim 12, wherein the first determining module is configured to determine the target level of each face image in the face image group as follows:
determining a first face attribute value of a current face image; the first face attribute value is a value corresponding to a first face attribute, and the first face attribute is one of the first class of face attributes;
judging whether the first face attribute value meets a preset condition corresponding to the first face attribute, wherein the first face attribute corresponds to at least one preset condition, the preset condition corresponding to the first face attribute is an attribute value range set for one grade of the first face attribute, and one preset condition corresponds to one grade;
when the judgment result is yes, determining the grade corresponding to the met preset condition as the target grade of the current face image;
and when the judgment result is negative, selecting an unused face attribute from the first class of face attributes, replacing the first face attribute with the selected face attribute, and returning to the step of determining the first face attribute value of the current face image.
16. The apparatus of claim 15, wherein the first type of face attributes comprise positive and negative faces, sharpness, brightness, degree of occlusion, yaw angle, and pitch angle.
17. The apparatus according to any one of claims 12-16, wherein each of the first type of facial attributes has a correspondence to at least one level;
the first obtaining module includes:
the judging submodule is used for judging whether the face images in the alternative image group meet a first preset screening condition, wherein the first preset screening condition is as follows: the target grades of the face images in the alternative image group differ from one another and are not all grades corresponding to the second face attribute; the second face attribute is one of the first class of face attributes;
and the first determining submodule is used for determining a preset first screening rule as the screening rule aiming at the alternative image group under the condition that the judgment result of the judging submodule is satisfied, wherein the first screening rule is a rule for screening the face image with the highest target level.
18. The apparatus of claim 17, further comprising:
a third determining module, configured to determine a third face attribute value of each face image in the candidate image group when the determination result of the determining sub-module is not satisfied; the third face attribute value is a value corresponding to a third face attribute, and the third face attribute is a face attribute except for a face attribute in the first class of face attributes;
the first judging module is used for judging whether the face images in the alternative image group meet a second preset screening condition, wherein the second preset screening condition is as follows: the target grades of all the face images in the alternative image group are the grade corresponding to the second face attribute, the maximum of the differences between the third face attribute values of every two face images is larger than a first preset threshold value, and the number of third face images is smaller than a second preset threshold value; a third face image is a face image whose third face attribute value is smaller than a third preset threshold value;
the fourth determining module is used for determining a preset second screening rule as the screening rule aiming at the alternative image group under the condition that the judgment result of the first judging module is satisfied; the second screening rule is a rule for screening the face image with the highest first comprehensive score;
a second obtaining module, configured to obtain, for each face image in the candidate image group, a reference value of each face attribute in second types of face attributes of the face image, where the second types of face attributes include a third face attribute, and the reference value of the face attribute is determined based on a value corresponding to the face attribute;
and the third obtaining module is used for carrying out weighted calculation on the reference value of each face attribute of the face image according to a first weight combination which is preset aiming at the third face attribute aiming at each face image in the alternative image group so as to obtain a first comprehensive score of the face image.
19. The apparatus of claim 18, further comprising:
a second judging module, configured to, when the judgment result of the first judging module is not satisfied, judge whether the face image in the candidate image group satisfies a third preset screening condition, where the third preset screening condition is: the target grade of the face image in the alternative image group is the grade corresponding to the second face attribute, and the grade difference value between every two target grades is greater than a fourth preset threshold value;
and a fifth determining module, configured to determine the first filtering rule as the filtering rule for the candidate image group if the determination result of the second determining module is yes.
20. The apparatus of claim 19, further comprising:
a sixth determining module, configured to determine a preset third filtering rule as the filtering rule for the candidate image group when the determination result of the second determining module is not satisfied; the third screening rule is a rule for screening the face image with the highest second comprehensive score;
a seventh determining module, configured to determine a third type face attribute value of each face image in the candidate image group; the third type face attribute value is a value corresponding to a third type face attribute, and the third type face attribute is a face attribute except for a face attribute in the first type face attribute and the second type face attribute;
the eighth determining module is used for determining a second weight combination of each face image in the alternative image group according to the relation between the preset target level and the second weight combination;
a fourth obtaining module, configured to obtain, for each face image in the candidate image group, a second comprehensive score of the face image according to each face attribute value in a fourth type face attribute value and a second weight combination of the determined face image, where the fourth type face attribute value is a value corresponding to a fourth type face attribute, and the fourth type face attribute includes a face attribute in the first type face attribute, the second type face attribute, and the third type face attribute.
21. The apparatus of claim 20, wherein the fourth obtaining module comprises:
the second determining submodule is used for determining a score of each face attribute in the fourth type of face attributes of the face image according to a preset mapping relation, for each face attribute in the fourth type of face attributes, between the value corresponding to that face attribute and a score, wherein every face attribute corresponds to the same score range;
and the obtaining submodule is used for carrying out weighted calculation on the scores of the face attributes corresponding to the face image according to the determined second weight combination of the face image to obtain a second comprehensive score of the face image.
22. The apparatus of claim 21, wherein the second face attribute is a yaw angle;
the third face attribute is a pupil distance;
the second type of face attributes comprise a pupil distance, a pitch angle, a shielding degree and a deflection angle;
the third type of face attributes comprises whether the face is a yin-yang face, whether the eyes are open or closed, and whether the mouth is open or closed;
the fourth type of face attributes comprises definition, brightness, shielding degree, deflection angle, pitch angle, pupil distance, whether the face is a yin-yang face, whether the eyes are open or closed, and whether the mouth is open or closed.
23. A computer device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor, when executing the program stored in the memory, implementing the method steps of any of claims 1-11.
CN201710692300.4A 2017-08-14 2017-08-14 Face image selection method and device and computer equipment Active CN109389019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710692300.4A CN109389019B (en) 2017-08-14 2017-08-14 Face image selection method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710692300.4A CN109389019B (en) 2017-08-14 2017-08-14 Face image selection method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN109389019A CN109389019A (en) 2019-02-26
CN109389019B true CN109389019B (en) 2021-11-05

Family

ID=65415682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710692300.4A Active CN109389019B (en) 2017-08-14 2017-08-14 Face image selection method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN109389019B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767757B (en) * 2019-03-29 2023-11-17 杭州海康威视数字技术股份有限公司 Identity information determining method and device
CN110287361B (en) * 2019-06-28 2021-06-22 北京奇艺世纪科技有限公司 Figure picture screening method and device
CN110321843B (en) * 2019-07-04 2021-11-09 杭州视洞科技有限公司 Face optimization method based on deep learning
CN110807767A (en) * 2019-10-24 2020-02-18 北京旷视科技有限公司 Target image screening method and target image screening device
CN110807486B (en) * 2019-10-31 2022-09-02 北京达佳互联信息技术有限公司 Method and device for generating category label, electronic equipment and storage medium
CN111382681B (en) * 2020-02-28 2023-11-14 浙江大华技术股份有限公司 Face registration method, device and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011100240A (en) * 2009-11-05 2011-05-19 Nippon Telegr & Teleph Corp <Ntt> Representative image extraction method, representative image extraction device, and representative image extraction program
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN103942525A (en) * 2013-12-27 2014-07-23 高新兴科技集团股份有限公司 Real-time face optimal selection method based on video sequence
CN104185981A (en) * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal selecting image from continuous captured image
CN105138962A (en) * 2015-07-28 2015-12-09 小米科技有限责任公司 Image display method and image display device
CN105224921A (en) * 2015-09-17 2016-01-06 桂林远望智能通信科技有限公司 A kind of facial image preferentially system and disposal route
CN105472239A (en) * 2015-11-17 2016-04-06 小米科技有限责任公司 Photo processing method and photo processing device
CN106446851A (en) * 2016-09-30 2017-02-22 厦门大图智能科技有限公司 Visible light based human face optimal selection method and system
CN106528879A (en) * 2016-12-14 2017-03-22 北京小米移动软件有限公司 Picture processing method and device
CN106815575A (en) * 2017-01-22 2017-06-09 上海银晨智能识别科技有限公司 The optimum decision system and its method of Face datection result set

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271567A (en) * 2007-03-20 2008-09-24 凌阳科技股份有限公司 Image comparison method and system
CN101398832A (en) * 2007-09-30 2009-04-01 国际商业机器公司 Image searching method and system by utilizing human face detection
CN103136533B (en) * 2011-11-28 2015-11-25 汉王科技股份有限公司 Based on face identification method and the device of dynamic threshold
US8861804B1 (en) * 2012-06-15 2014-10-14 Shutterfly, Inc. Assisted photo-tagging with facial recognition models
CN104168378B (en) * 2014-08-19 2018-06-05 上海卓易科技股份有限公司 A kind of picture group technology and device based on recognition of face
CN104299001B (en) * 2014-10-11 2018-08-07 小米科技有限责任公司 Generate the method and device of photograph album
CN105243098B (en) * 2015-09-16 2018-10-26 小米科技有限责任公司 The clustering method and device of facial image
CN106611151B (en) * 2015-10-23 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of face identification method and device
CN105335714B (en) * 2015-10-28 2019-06-14 小米科技有限责任公司 Photo processing method, device and equipment
CN105631408B (en) * 2015-12-21 2019-12-27 小米科技有限责任公司 Face photo album processing method and device based on video
CN106156749A (en) * 2016-07-25 2016-11-23 福建星网锐捷安防科技有限公司 Method for detecting human face based on selective search and device

Also Published As

Publication number Publication date
CN109389019A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN109389019B (en) Face image selection method and device and computer equipment
US20200364443A1 (en) Method for acquiring motion track and device thereof, storage medium, and terminal
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
CN109492577B (en) Gesture recognition method and device and electronic equipment
JP6330385B2 (en) Image processing apparatus, image processing method, and program
CN110807385A (en) Target detection method and device, electronic equipment and storage medium
CN110706261A (en) Vehicle violation detection method and device, computer equipment and storage medium
CN110941594A (en) Splitting method and device of video file, electronic equipment and storage medium
CN110390229B (en) Face picture screening method and device, electronic equipment and storage medium
CN109426785B (en) Human body target identity recognition method and device
CN110858286A (en) Image processing method and device for target recognition
US8842889B1 (en) System and method for automatic face recognition
JP2019057815A (en) Monitoring system
CN109255802B (en) Pedestrian tracking method, device, computer equipment and storage medium
CN111428552B (en) Black eye recognition method and device, computer equipment and storage medium
CN112418009A (en) Image quality detection method, terminal device and storage medium
CN113793336A (en) Method, device and equipment for detecting blood cells and readable storage medium
CN109389105A (en) A kind of iris detection and viewpoint classification method based on multitask
CN109146923B (en) Processing method and system for target tracking broken frame
CN110991231B (en) Living body detection method and device, server and face recognition equipment
CN112257567B (en) Training of behavior recognition network, behavior recognition method and related equipment
CN112464765B (en) Safety helmet detection method based on single-pixel characteristic amplification and application thereof
CN111382638A (en) Image detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant