US20220044005A1 - Methods and apparatuses for determining face image quality, electronic devices, and computer storage media - Google Patents

Methods and apparatuses for determining face image quality, electronic devices, and computer storage media

Info

Publication number
US20220044005A1
Authority
US
United States
Prior art keywords
face
score
obtaining
size
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/452,062
Inventor
Lifei XU
Chendi YU
Wenzhi Liu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to US17/452,062 priority Critical patent/US20220044005A1/en
Assigned to SHENZHEN SENSETIME TECHNOLOGY CO., LTD. reassignment SHENZHEN SENSETIME TECHNOLOGY CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LIU, WENZHI, XU, Lifei, YU, Chendi
Publication of US20220044005A1 publication Critical patent/US20220044005A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06K9/00228
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/12 Bounding box

Definitions

  • Face recognition technologies have improved greatly in performance in recent years. Face recognition in non-extreme scenes may reach a level close to that of manual recognition, and face recognition technologies are increasingly applied to various scenes in life.
  • the present disclosure relates to computer vision technologies, and in particular, to methods and apparatuses for determining face image quality, electronic devices, and computer storage media.
  • Embodiments of the present disclosure provide technical solutions for determining face image quality.
  • a first obtaining module configured to obtain at least one of pose angle information of a face in an image or size information of the face
  • a second obtaining module configured to obtain quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face.
  • An electronic device provided according to still another aspect of the embodiments of the present disclosure includes the apparatus for determining face image quality according to any one of the foregoing embodiments of the present disclosure.
  • Another electronic device includes a processor and a memory for storing instructions executable by the processor; where execution of the instructions by the processor causes the processor to perform: obtaining at least one of pose angle information of a face in an image or size information of the face; and obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face.
  • a non-transitory computer storage medium configured to store computer-readable instructions, where execution of the instructions by the processor causes the processor to perform: obtaining at least one of pose angle information of a face in an image or size information of the face; and obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face.
  • A computer program provided according to yet another aspect of the embodiments of the present disclosure includes a computer instruction, where when the computer instruction runs in a processor of a device, the processor executes operations corresponding to the method for determining face image quality according to any one of the foregoing embodiments of the present disclosure.
  • pose angle information and/or size information of a face in an image are obtained, and quality of the face in the image is obtained on the basis of the pose angle information and/or the size information of the face.
  • FIG. 1 illustrates a flowchart of one embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 2 illustrates a flowchart of another embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 3 illustrates a flowchart of still another embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 4 illustrates a flowchart of one specific application embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 5 illustrates a schematic structural diagram of one embodiment of an apparatus for determining face image quality according to the present disclosure.
  • FIG. 6 illustrates a schematic structural diagram of another embodiment of an apparatus for determining face image quality according to the present disclosure.
  • FIG. 7 illustrates a schematic structural diagram of still another embodiment of an apparatus for determining face image quality according to the present disclosure.
  • FIG. 8 illustrates a schematic structural diagram of one embodiment of an electronic device of the present disclosure.
  • the embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, and servers, which may operate with numerous other general-purpose or special-purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use together with the electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, Personal Computer (PC) systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any one of the foregoing systems.
  • the electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer system-executable instructions (for example, program modules) executed by the computer system.
  • the program modules may include routines, programs, target programs, components, logics, data structures, and the like for performing specific tasks or implementing specific abstract data types.
  • the computer systems/servers may be implemented in the distributed cloud computing environments in which tasks are performed by remote processing devices linked via a communication network.
  • the program modules may be located in local or remote computing system storage media including storage devices.
  • FIG. 1 is a flowchart of one embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 1 , the method for determining face image quality of this embodiment includes:
  • Pose angle information and/or size information of a face in an image are obtained.
  • A pose angle of the face is a head pose angle of a human, including a yaw angle and a pitch angle of the face in head-normalized spherical coordinates (i.e., an image acquisition coordinate system), where the yaw angle indicates the side-face angle of the face in the horizontal direction, and the pitch angle indicates the head-down or head-up angle of the face in the vertical direction.
  • When the face size is fixed, the smaller the yaw angle and the pitch angle are, the more front-facing the face is, the easier the face recognition is, and the higher the face recognition accuracy is.
  • When both the yaw angle and the pitch angle are zero, the face is the most front-facing and the face recognition accuracy is the highest.
  • The face size is also the face pixel size: the bigger the face is, the higher the definition is, the easier the face recognition is, and the higher the face recognition accuracy is.
  • Quality information of the face in the image is obtained on the basis of the pose angle information and/or the size information of the face.
  • The false recognition rate is also related to the quality of the face image. If the quality of the face image is poor, for example, if the side-face angle is excessively large or the face pixel size is excessively small, the face recognition accuracy generally decreases and the false recognition rate is relatively high. In actual scenes, most false recognitions and missed recognitions are caused by face image quality that is not high enough. Therefore, paying enough attention to methods for determining face image quality helps improve the face recognition rate and is very important.
  • The existing image quality evaluation methods may be divided into two categories: subjective evaluation methods and objective evaluation methods.
  • With the improvement of the automation level, subjective evaluation methods requiring manual participation have many inconveniences, high costs, and long cycles in many fields, and therefore objective evaluation methods have gradually been developed.
  • However, methods for determining face image quality have not attracted enough attention, objective evaluation methods for face image quality are not yet mature, and the evaluation results for face image quality are not accurate enough.
  • Evaluation criteria for good face image quality should ensure that the face can be easily recognized; for example, the face is easier to recognize when conditions such as high definition, a large face, and a front-facing face are met.
  • The definition of the face image is influenced by two aspects: one is that the image captured by the camera is blurred, and the other is that the face image itself is excessively small. Since face images need to be uniformly scaled to a standard size before face recognition, when a small face image is enlarged to the standard size, blurriness caused by pixel interpolation may occur.
  • Assuming the camera works normally, the image captured thereby is clear. Therefore, disregarding the case in which the image captured by the camera is not sharp, the definition of the face image and the face size are positively related: the bigger the face is, the higher the definition is.
  • Accordingly, the face definition may be evaluated by using the face size.
  • The face image quality is evaluated on the basis of key factors affecting the face recognition result (for example, face definition, face size, and whether the face is front-facing), and indexes for evaluating these key factors are obtained: a pose angle of the face and a face size.
  • The front-facing degree of the face is determined based on the pose angle of the face, the face definition is determined based on the face size, and the face image quality is evaluated according to the pose angle information and the size information of the face.
  • In this way, the face image quality is objectively evaluated, and the accuracy of the evaluation result of the face image quality is high. In addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result, instead of directly obtaining the face definition in the image, the embodiments of the present disclosure improve the operation efficiency and increase the real-time performance of the face quality evaluation.
  • Obtaining the pose angle information of the face in the image may be implemented in the following mode:
  • A face detection bounding box in the image and key point coordinates of the face determined according to the face detection bounding box are obtained.
  • Face detection may be performed on the image to obtain the face detection bounding box, and key point (for example, corners of the eyes and mouth) positioning is performed on the face in the face detection bounding box to obtain the key point coordinates of the face.
  • The pose angle information of the face is obtained according to the key point coordinates of the face, where the pose angle information of the face includes a yaw angle and a pitch angle of the face.
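
The disclosure derives the yaw and pitch angles from the key point coordinates (in some embodiments through a second neural network, described later). Purely as an illustration, and not as the disclosure's neural-network approach, the sketch below shows one common classical way to obtain yaw and pitch from facial key points using OpenCV's solvePnP with a generic 3D face model; the landmark set, the 3D model coordinates, and the pinhole-camera approximation are all assumptions.

```python
# Illustrative sketch only: yaw/pitch from six facial key points via solvePnP.
# This is NOT the disclosure's neural-network method; the generic 3D model
# points, landmark order, and camera approximation are assumptions.
import numpy as np
import cv2

# Generic 3D reference points (arbitrary units) for: nose tip, chin,
# left eye outer corner, right eye outer corner, left mouth corner, right mouth corner.
MODEL_POINTS_3D = np.array([
    (0.0, 0.0, 0.0),
    (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0),
    (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0),
    (150.0, -150.0, -125.0),
], dtype=np.float64)

def estimate_yaw_pitch(image_points_2d, image_width, image_height):
    """Estimate (yaw, pitch) in degrees from six 2D key points (same order as above)."""
    focal_length = float(image_width)  # rough pinhole approximation
    center = (image_width / 2.0, image_height / 2.0)
    camera_matrix = np.array([
        [focal_length, 0.0, center[0]],
        [0.0, focal_length, center[1]],
        [0.0, 0.0, 1.0],
    ], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

    ok, rvec, _tvec = cv2.solvePnP(
        MODEL_POINTS_3D,
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")

    rotation_matrix, _ = cv2.Rodrigues(rvec)
    # RQDecomp3x3 returns three Euler angles in degrees; by the usual convention
    # they are read as (pitch, yaw, roll).
    euler_angles, *_ = cv2.RQDecomp3x3(rotation_matrix)
    pitch, yaw, _roll = euler_angles
    return yaw, pitch
```

In a typical pipeline, image_points_2d would be the key point coordinates returned for the face inside the detection bounding box.
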
  • Obtaining the size information of the face in the image may be implemented in the following mode: obtaining the size information of the face according to the size of the face detection bounding box, where the size of the face detection bounding box includes the length and/or width of the face detection bounding box.
  • In this way, indexes for evaluating the key factors affecting the face recognition result are obtained: a pose angle of the face for reflecting whether the face is front-facing, and a face size for reflecting the face definition and the face size; and a method for evaluating face image quality according to pose angle information and size information of a face is obtained.
  • The face image quality may thus be objectively evaluated, and the accuracy of the evaluation result of the face image quality is high. In addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result, instead of directly obtaining the face definition in the image, the method facilitates improving the operation efficiency and increasing the real-time performance of face quality evaluation.
  • operation 104 may include: obtaining the score of the pose angle of the face according to the pose angle information of the face; obtaining the score of the face size according to the size information of the face; and obtaining a quality score of the face in the image according to the score of the pose angle of the face and the score of the face size.
  • FIG. 2 is a flowchart of another embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 2 , the method for determining face image quality of this embodiment includes:
  • the face detection bounding box includes a face image detected from the image.
  • face detection may be performed on the image by means of a face detection algorithm to obtain the face detection bounding box;
  • key point positioning is performed on the face in the face detection bounding box by means of a key point detection algorithm to obtain the key point coordinates of the face.
  • the pose angle information of the face is obtained according to the key point coordinates of the face, and the size information of the face is obtained according to the size of the face detection bounding box.
  • the size of the face detection bounding box includes length and/or width of the face detection bounding box.
  • the size of the face detection bounding box is the face size.
  • A pose angle of the face is a head pose angle of a human, including a yaw angle and a pitch angle of the face in head-normalized spherical coordinates (i.e., an image acquisition coordinate system), where the yaw angle indicates the side-face angle of the face in the horizontal direction, and the pitch angle indicates the head-down or head-up angle of the face in the vertical direction.
  • The face size is also the face pixel size: the bigger the face is, the higher the definition is, the easier the face recognition is, and the higher the face recognition accuracy is.
  • the score of the pose angle of the face is obtained according to the pose angle information of the face, and the score of the face size is obtained according to the size information of the face.
  • The score of the pose angle of the face may be obtained in the following mode: according to the yaw angle and the pitch angle of the face, obtaining the score Q_yaw of the yaw angle (yaw) of the face by calculation.
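
The exact scoring formulas for Q_yaw and Q_pitch appear as equations in the original publication and are not reproduced in this text. The minimal sketch below therefore assumes a simple linear mapping in which an angle of 0 degrees scores 1.0 and the score decreases to 0 at 90 degrees; both the functional form and the 90-degree bound are assumptions, not the disclosure's formulas.

```python
# Minimal sketch of pose-angle scoring under an assumed linear angle-to-score mapping.
# The disclosure's actual formulas for Q_yaw and Q_pitch may differ.
def pose_angle_scores(yaw_deg, pitch_deg, max_angle_deg=90.0):
    """Map yaw and pitch (in degrees) to scores in [0, 1]; 0 degrees scores highest."""
    q_yaw = max(0.0, 1.0 - abs(yaw_deg) / max_angle_deg)
    q_pitch = max(0.0, 1.0 - abs(pitch_deg) / max_angle_deg)
    return q_yaw, q_pitch
```
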
  • The score of the face size may be obtained in the following mode: obtaining the score of the face size on the basis of at least one of the length, width, or area of the face detection bounding box, where the area of the face detection bounding box is obtained as the product of the length and the width of the face detection bounding box.
  • The length, width, and area of the face detection bounding box correspond to the size of the face image. Therefore, the score of the face size may be determined on the basis of at least one of the length, width, or area of the face detection bounding box.
  • Obtaining the score of the face size on the basis of at least one of the length, width, or area of the face detection bounding box may be: selecting the smaller value min of the length and the width of the face detection bounding box; and obtaining the score Q_rect of the face size by calculation based on the smaller value min.
  • The face size may be better determined by means of the smaller value of the length and the width of the face detection bounding box, so obtaining the score of the face size by calculation on the basis of this smaller value reflects the face size more objectively.
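
A corresponding sketch for the face-size score, based on the smaller of the bounding-box length and width as described above. The normalization against a reference side length (224 pixels here) and the clipping to [0, 1] are assumptions, since the disclosure's exact formula for Q_rect is not reproduced in this text.

```python
# Minimal sketch of the face-size score Q_rect from the detection bounding box.
# The reference side length and the clipping are illustrative assumptions.
def face_size_score(bbox_length, bbox_width, reference_side=224.0):
    """Score the face size from the smaller side of the face detection bounding box."""
    min_side = min(bbox_length, bbox_width)
    return min(1.0, min_side / reference_side)
```
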
  • the quality score of the face in the image is obtained according to the score of the pose angle of the face and the score of the face size.
  • operation 208 may be implemented in the following mode:
  • the quality score of the face in the image is obtained by calculation according to the score of the yaw angle and its weight, the score of the pitch angle and its weight, and the score of the face size and its weight.
  • the weight of the score of the yaw angle, the weight of the score of the pitch angle, and the weight of the score of the face size may be preset, and may be adjusted according to actual requirements. In general, the yaw angle has the greatest influence on the accuracy of the face recognition result. In a specific application, the weight of the score of the yaw angle may be set to be greater than the weight of the score of the pitch angle and the weight of the score of the size of the face, so that the obtained quality score of the face in the image may more accurately and objectively reflect the quality of the face in one image.
  • the face image quality is evaluated on the basis of key factors affecting the face recognition result (face definition, face size, and whether the face is front-facing), and indexes for evaluating the key factors affecting the face recognition result are obtained: a pose angle of the face and a face size.
  • the front-facing degree of the face is determined based on the pose angle of the face
  • the face definition is determined based on the face size
  • the score of the pose angle of the face and the score of the face size are further obtained.
  • The quality score of the face in the image is obtained according to the score of the pose angle of the face and the score of the face size, so as to evaluate the quality of the face in the image more accurately and objectively, and the accuracy of the evaluation result of the face image quality is high. In addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result, instead of directly obtaining the face definition in the image, the method improves the operation efficiency and increases the real-time performance of the face quality evaluation.
  • FIG. 3 is a flowchart of still another embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 3 , the method for determining face image quality of this embodiment includes:
  • A face detection bounding box in the image, key point coordinates of the face determined according to the face detection bounding box, and a confidence score of the key point coordinates are obtained.
  • the confidence score of the key point coordinates is used for indicating the accuracy of the key point coordinates of the face, and the greater the numerical value of the confidence score is, the more accurate the key point coordinates of the face are.
  • operation 302 may be implemented through a pre-trained first neural network.
  • the first neural network After receiving an input image, the first neural network outputs the face detection bounding box, the key point coordinates of the face determined according to the face detection bounding box, and the confidence score of the key point coordinates by performing face detection and key point detection on the image.
  • the confidence score of the key point coordinates may be determined by the first neural network on the basis of the performance of the first neural network and the size of the face detection bounding box according to a preset mode. The better the performance of the first neural network is, and the larger the face detection bounding box is (i.e., the face image is relatively large, and the face is relatively clear), the higher the accuracy of the determined key point coordinates of the face is.
  • the pose angle information of the face is obtained according to the key point coordinates of the face, and the size information of the face is obtained according to the size of the face detection bounding box, where the pose angle information of the face includes a yaw angle and a pitch angle of the face.
  • the score of the pose angle of the face is obtained according to the pose angle information of the face, and the score of the face size is obtained according to the size information of the face.
  • the score of the pose angle of the face may be obtained in the following mode:
  • The score of the face size may be obtained in the following mode: obtaining the score of the face size on the basis of at least one of the length, width, or area of the face detection bounding box, where the area of the face detection bounding box is obtained as the product of the length and the width of the face detection bounding box.
  • The length, width, and area of the face detection bounding box correspond to the size of the face image. Therefore, the score of the face size may be determined on the basis of at least one of the length, width, or area of the face detection bounding box.
  • Obtaining the score of the face size on the basis of at least one of the length, width, or area of the face detection bounding box may be: selecting the smaller value min of the length and the width of the face detection bounding box; and obtaining the score Q_rect of the face size by calculation based on the smaller value min.
  • The face size may be better determined by means of the smaller value of the length and the width of the face detection bounding box, so obtaining the score of the face size by calculation on the basis of this smaller value reflects the face size more objectively.
  • The score of the pose angle of the face is corrected by using the confidence score of the key point coordinates.
  • The corrected score Q_yaw of the yaw angle and the corrected score Q_pitch of the pitch angle are obtained by calculation using the confidence score of the key point coordinates.
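
The disclosure's correction formulas are likewise not reproduced in this text. The minimal sketch below assumes the correction simply scales each pose-angle score by the key point confidence (a value in [0, 1]); this multiplicative form is an assumption.

```python
# Minimal sketch of correcting pose-angle scores with the key point confidence.
# The multiplicative form is an assumption; the disclosure's formulas may differ.
def correct_pose_scores(q_yaw, q_pitch, keypoint_confidence):
    """Down-weight pose-angle scores when the key point coordinates are less reliable."""
    return q_yaw * keypoint_confidence, q_pitch * keypoint_confidence
```
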
  • Operation 308 may be performed simultaneously with, before, or after the operation of obtaining the score of the face size; there is no limitation on the execution order of the two operations.
  • the quality score of the face in the image is obtained according to the corrected score of the pose angle of the face and the score of the face size.
  • The calculated score of the pose angle of the face is correspondingly corrected according to the confidence score of the key point coordinates of the face, so as to eliminate both the inaccurate evaluation of the pose angle information caused by inaccurate key point coordinates and its influence on the final determination of the face image quality, thereby improving the accuracy and reliability of the determined face image quality result.
  • FIG. 4 is a flowchart of a specific application embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 4 , the method for determining face image quality of this embodiment includes:
  • face detection is performed on the image to obtain a face detection bounding box.
  • Key point positioning is performed on the face in the face detection bounding box to obtain key point coordinates of the face and a confidence score of the key point coordinates.
  • the confidence score of the key point coordinates is used for indicating the accuracy of the key point coordinates of the face.
  • operations 402 - 404 may be implemented through a pre-trained first neural network.
  • the first neural network After receiving an input image, the first neural network outputs the face detection bounding box, the key point coordinates of the face, and the confidence score of the key point coordinates by performing face detection and key point detection on the image.
  • the confidence score of the key point coordinates may be determined by the first neural network on the basis of the performance of the first neural network and the size of the face detection bounding box according to a preset mode. The better the performance of the first neural network is, and the larger the face detection bounding box is (i.e., the face image is relatively large, and the face is relatively clear), the higher the accuracy of the determined key point coordinates of the face is.
  • the pose angle information of the face is obtained according to the key point coordinates of the face, where the pose angle information of the face includes a yaw angle and a pitch angle of the face.
  • operation 406 may be implemented by a pre-trained second neural network. After receiving the key point coordinates of the face, the second neural network outputs the yaw angle and the pitch angle of the face by calculating the key point coordinates of the face.
  • The score Q_yaw of the yaw angle (yaw) of the face is obtained by calculation.
  • the score of the pose angle of the face is corrected by using the confidence score of the key point coordinates.
  • The corrected score Q_yaw of the yaw angle and the corrected score Q_pitch of the pitch angle are obtained by calculation using the confidence score of the key point coordinates.
  • the size of the face detection bounding box is obtained, including length and width of the face detection bounding box.
  • the quality of the face in the image is obtained by calculation according to the corrected score of the yaw angle and its weight, the corrected score of the pitch angle and its weight, and the score of the face size and its weight.
  • Q is the quality of the face in the image
  • Q_yaw represents the corrected score of the yaw angle ("yaw")
  • Q_pitch represents the corrected score of the pitch angle ("pitch")
  • Q_rect represents the score of the face size
  • w1, w2, and w3 respectively represent the weight of the score of the yaw angle, the weight of the score of the pitch angle, and the weight of the score of the face size.
  • the yaw angle has the greatest influence on the accuracy of the face recognition result
  • The value of w1 may be set to 0.6; both the weight w2 of the score of the pitch angle and the weight w3 of the score of the face size may be set to 0.2, and these values may also be adjusted according to actual conditions.
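
Putting the pieces together, the sketch below combines the corrected pose-angle scores and the size score using the weights described above (w1 = 0.6, w2 = w3 = 0.2). The weighted-sum form is the natural reading of the text but is an assumption, since the combining formula itself is not reproduced here.

```python
# Sketch of the final quality score as a weighted combination of the corrected
# yaw score, corrected pitch score, and face-size score. The weighted-sum form
# is an assumption; the default weights follow the values given in the text.
def face_quality_score(q_yaw_corrected, q_pitch_corrected, q_rect,
                       w1=0.6, w2=0.2, w3=0.2):
    """Combine the three scores into a single face image quality value."""
    return w1 * q_yaw_corrected + w2 * q_pitch_corrected + w3 * q_rect

# Example with already-computed scores (e.g. from the earlier sketches):
print(face_quality_score(q_yaw_corrected=0.85, q_pitch_corrected=0.90, q_rect=0.70))
# roughly 0.83
```
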
  • The foregoing embodiments of the method for determining face image quality according to the present disclosure may be executed for each of multiple images of the same face respectively, so as to obtain the quality scores of the face in the multiple images.
  • Yet another embodiment of the method for determining face image quality according to the present disclosure may further include: selecting, according to the quality information of the face in the multiple images, at least one image with high face quality for face detection.
  • the images with poor face quality are removed, and the images with high face quality are selected for face detection and recognition. Because the selected images with high face quality have a high face recognition rate, the accuracy of face recognition may be improved, the operation data volume of the face recognition may be reduced, and the face recognition speed of a valid image may be improved.
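
For the selection step described above, a minimal sketch follows; the score threshold and the number of images kept are illustrative parameters, not values from the disclosure.

```python
# Minimal sketch of selecting high-quality face images from multiple scored images.
# The min_quality threshold and top_k count are illustrative assumptions.
def select_best_faces(scored_images, top_k=1, min_quality=0.5):
    """scored_images: iterable of (image_id, quality_score) pairs.

    Returns up to top_k pairs with the highest quality scores at or above min_quality.
    """
    kept = [pair for pair in scored_images if pair[1] >= min_quality]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:top_k]

# Example: keep the best two of four candidate frames of the same face.
print(select_best_faces([("f1", 0.83), ("f2", 0.41), ("f3", 0.77), ("f4", 0.62)], top_k=2))
# [('f1', 0.83), ('f3', 0.77)]
```
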
  • FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for determining face image quality according to the present disclosure.
  • the apparatus for determining face image quality of this embodiment may be configured to implement the foregoing embodiments of the method for determining face image quality according to the present disclosure.
  • the apparatus for determining face image quality of this embodiment includes: a first obtaining module 502 and a second obtaining module 504 .
  • the first obtaining module 502 is configured to obtain pose angle information and size information of a face in an image
  • the second obtaining module 504 is configured to obtain quality information of the face in the image on the basis of the pose angle information and the size information of the face.
  • the face image quality is evaluated on the basis of key factors affecting the face recognition result (for example, face definition, face size, and whether the face is front-facing), and indexes for evaluating the key factors affecting the face recognition result are obtained: a pose angle of the face and a face size.
  • the quality of the face in the image is determined according to the pose angle of the face and the face size.
  • The face image quality may thus be objectively evaluated, and the accuracy of the evaluation result of the face image quality is high. In addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result, instead of directly obtaining the face definition in the image, the apparatus facilitates improving the operation efficiency and increasing the real-time performance of the face quality evaluation.
  • FIG. 6 is a schematic structural diagram of another embodiment of an apparatus for determining face image quality according to the present disclosure.
  • the first obtaining module 502 specifically includes: a face detection sub-module 602 , a key point detection sub-module 604 , and a first obtaining sub-module 606 .
  • the face detection sub-module 602 is configured to obtain a face detection bounding box in the image, where the face detection bounding box is configured to determine the face in the image. According to one or more embodiments of the present disclosure, the face detection sub-module 602 may be configured to perform face detection on the image to obtain the face detection bounding box.
  • the key point detection sub-module 604 is configured to obtain key point coordinates of the face determined according to the face detection bounding box. According to one or more embodiments of the present disclosure, the key point detection sub-module 604 may be configured to perform key point positioning on the face image determined according to the face detection bounding box to obtain the key point coordinates of the face.
  • the first obtaining sub-module 606 is configured to obtain pose angle information of the face according to the key point coordinates of the face, where the pose angle information of the face includes a yaw angle and a pitch angle of the face, and to obtain size information of the face according to the size of the face detection bounding box, where the size of the face detection bounding box includes length and/or width of the face detection bounding box.
  • The face detection sub-module 602 is configured to perform face detection on an image to obtain the face detection bounding box, where the face detection bounding box contains the image of the face, referred to as a face image.
  • the key point detection sub-module 604 is configured to perform key point positioning on the face image determined according to the face detection bounding box to obtain the key point coordinates of the face.
  • the second obtaining module 504 may include a second obtaining sub-module 608 , a third obtaining sub-module 610 , and a fourth obtaining sub-module 612 .
  • the second obtaining sub-module 608 is configured to obtain the score of the pose angle of the face according to the pose angle information of the face.
  • The score of the face size may be obtained on the basis of at least one of the length, width, or area of the face detection bounding box in the following mode: selecting the smaller value min of the length and the width of the face detection bounding box; and obtaining the score Q_rect of the face size by calculation based on the smaller value min.
  • the third obtaining sub-module 610 is configured to obtain the score of the face size according to the size information of the face.
  • the third obtaining sub-module 610 is configured to obtain the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, where the area of the face detection bounding box is obtained by the product of the length and the width of face detection bounding box.
  • the fourth obtaining sub-module 612 is configured to obtain the quality score of the face in the image according to the score of the pose angle of the face and the score of the face size. According to one or more embodiments of the present disclosure, the fourth obtaining sub-module 612 is configured to obtain the quality of the face in the image by calculation according to the score of the yaw angle and its weight, the score of the pitch angle and its weight, and the score of the face size and its weight. In an actual application, because the yaw angle of the face has the greatest influence on the accuracy of the face recognition result, the weight of the score of the yaw angle may be set to be greater than the weight of the score of the pitch angle and the weight of the score of the face size.
  • FIG. 7 is a schematic structural diagram of still another embodiment of an apparatus for determining face image quality according to the present disclosure. As shown in FIG. 7 , compared with the apparatus for determining face image quality according to the foregoing embodiments of the present disclosure, the apparatus for determining face image quality in this embodiment further includes: a fourth obtaining module 506 and a correction module 508 .
  • the fourth obtaining module 506 is configured to obtain the confidence score of the key point coordinates, where the confidence score of the key point coordinates is configured to represent an accuracy rate of the key point coordinates of the face.
  • the fourth obtaining module 506 may be integrated with the key point detection sub-module 604 , and the two may be implemented by means of a neural network.
  • the correction module 508 is configured to correct, by using the confidence score of the key point coordinates, the score of pose angle of the face obtained by the second obtaining sub-module 608 .
  • The correction module 508 is configured to obtain, by using the confidence score of the key point coordinates, the corrected score Q_yaw of the yaw angle and the corrected score Q_pitch of the pitch angle by calculation.
  • the fourth obtaining sub-module 612 is configured to obtain the quality of the face in the image according to the corrected score of the pose angle of the face and the score of the face size.
  • the embodiments of the present disclosure further provide an electronic device, including the apparatus for determining face image quality according to any one of the foregoing embodiments of the present disclosure.
  • the embodiments of the electronic device further include a selection module and a face detection module.
  • the selection module is configured to select, according to quality information of a face in multiple images output by the apparatus for determining face image quality, at least one image with high face quality;
  • the face detection module is configured to perform face detection on the selected at least one image.
  • the images with poor face quality are removed, and the images with high face quality are selected for face detection and recognition. Because the selected images with high face quality have a high face recognition rate, the accuracy rate of face recognition may be improved, the operation data volume of face recognition may be reduced, and the face recognition speed of a valid image may be improved.
  • the embodiments of the present disclosure further provide another electronic device, including: a memory, configured to store executable instructions; and a processor, configured to communicate with the memory to execute the executable instructions so as to complete operations of the method for determining face image quality according to any one of the foregoing embodiments of the present disclosure.
  • the electronic device may be a mobile terminal, a PC, a tablet computer, a server, and the like.
  • the embodiments of the present disclosure further provide a computer storage medium, which is configured to store computer-readable instructions.
  • the instructions When the instructions are executed, the operations of the method for determining face image quality according to any one of the foregoing embodiments of the present disclosure are implemented.
  • FIG. 8 is a schematic structural diagram of one embodiment of an electronic device according to the present disclosure.
  • the electronic device includes one or more processors, and a communication part, etc.
  • the one or more processors are, for example, one or more Central Processing Units (CPUs) 801 and/or one or more Graphic Processing Units (GPUs) 813 , and the processors may execute appropriate actions and processing according to executable instructions stored in a Read-Only Memory (ROM) 802 or executable instructions loaded from a storage section 808 to a Random Access Memory (RAM) 803 .
  • the communication part 812 may include, but is not limited to, a network card.
  • the network card may include, but is not limited to, an InfiniBand (IB) network card.
  • the processor may communicate with the ROM 802 and/or the RAM 803 to execute executable instructions, is connected to the communication part 812 by means of a bus 804 , and communicates with other target devices by means of the communication part 812 , so as to complete corresponding operations of any of the methods provided by the embodiments of the present disclosure, for example, obtaining pose angle information and size information of a face in an image, and obtaining quality information of the face in the image on the basis of the pose angle information and the size information of the face.
  • the RAM 803 may further store various programs and data required during an operation of the apparatus.
  • the CPU 801 , the ROM 802 , and the RAM 803 are connected to each other via the bus 804 .
  • the ROM 802 is an optional module.
  • the RAM 803 stores executable instructions, or writes executable instructions to the ROM 802 during running.
  • the executable instructions cause the CPU 801 to execute the operations of the communication method.
  • An Input/Output (I/O) interface 805 is also connected to the bus 804 .
  • the communication part 812 may be integrated, or may be set as having multiple sub-modules (for example, multiple IB network cards) connected to the bus.
  • The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a Cathode-Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem.
  • the communication section 809 executes communication processing through a network such as the Internet.
  • a drive 810 is also connected to the I/O interface 805 according to requirements.
  • a removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive 810 according to requirements, so that a computer program read from the removable medium may be installed on the storage section 808 according to requirements.
  • FIG. 8 is merely an optional implementation mode.
  • the number and types of the components in FIG. 8 may be selected, decreased, increased, or replaced according to actual needs.
  • Different functional components may be separated or integrated or the like.
  • the GPU 813 and the CPU 801 may be separated, or the GPU 813 may be integrated on the CPU 801 , and the communication part may be separated from or integrated on the CPU 801 or the GPU 813 or the like.
  • a process described above with reference to a flowchart according to the embodiments of the present disclosure may be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product.
  • the computer program product includes a computer program tangibly included in a machine-readable medium.
  • the computer program includes a program code for executing a method shown in the flowchart.
  • the program code may include corresponding instructions for correspondingly executing the operations of the method provided by the embodiments of the present disclosure, for example, an instruction for obtaining pose angle information and size information of a face in an image, and an instruction for obtaining quality information of the face in the image on the basis of the pose angle information and the size information of the face.
  • the computer program may be downloaded and installed from the network through the communication part 809 , and/or is installed from the removable medium 811 .
  • the computer program executes the foregoing functions defined in the method of the present disclosure.
  • The embodiments of the present disclosure may be optionally applied to residential area monitoring or security monitoring fields, capture cameras, or products related to face recognition. Face detection is performed on an image collected by a camera (i.e., the image in the embodiments of the present disclosure), and a face image is recognized. In order to improve the accuracy of face recognition, reduce the false recognition rate and the missing recognition rate, and avoid unnecessary recognition, it is necessary to first provide images to an apparatus or a device for determining face image quality, where the images are screened and filtered so as to obtain high-quality face images. By evaluating the face image quality, images having a large side-face angle, a deep head-down pose, or an extremely low face pixel count (i.e., an extremely small face size) may be screened out because they are difficult to recognize accurately.
  • In this way, the quality of a face in various images may be obtained, and images having low face quality and unsuitable for face recognition are effectively filtered out, so as to reduce the number of face recognition operations and improve the face recognition efficiency.
  • When the embodiments of the present disclosure are applied to an embedded device in which face recognition is time-consuming, the effect is more obvious.
  • The embodiments of the present disclosure have at least the following beneficial technical effects: face image requirements facilitating face recognition are fully considered; evaluation indexes are designed by evaluating the pose angle of a face in combination with the face size; the face image quality is comprehensively evaluated based on the combination of the yaw angle and the pitch angle of the face and the face size; and conditions that may cause inaccurate evaluation of the pose angle of the face are corrected.
  • the method is high in real-time performance and easy to apply, and the accuracy and reliability of the evaluation method are ensured.
  • the method facilitates improving the operation efficiency and increasing the real-time performance of the face quality evaluation.
  • the accuracy rate of face recognition may be improved, the operation data volume of face recognition may be reduced, and the face recognition speed of a valid image may be improved.
  • the foregoing program may be stored in a computer-readable storage medium; when the program is executed, operations including the foregoing method embodiments are executed.
  • the foregoing storage medium includes various media capable of storing program codes, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • the methods and the apparatuses of the present disclosure may be implemented in many manners.
  • the methods and apparatuses of the present disclosure may be implemented by using software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the foregoing sequences of operations of the methods are merely for description, and are not intended to limit the operations of the methods of the present disclosure.
  • the present disclosure may be implemented as programs recorded in a recording medium.
  • the programs include machine-readable instructions for implementing the methods according to the present disclosure. Therefore, the present disclosure further covers the recording medium storing the programs for performing the methods according to the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for determining face image quality includes: obtaining pose angle information and/or size information of a face in an image; and obtaining quality information of the face in the image on the basis of the pose angle information and/or the size information of the face.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a continuation of U.S. patent application Ser. No. 16/655,235 filed on Oct. 17, 2019, which is a continuation of International Application No. PCT/CN2018/087915 filed on May 22, 2018, which claims priority to Chinese Patent Application No. 201710405232.9 filed May 31, 2017. The disclosures of the above-referenced applications are incorporated herein by reference in their entirety.
  • BACKGROUND
  • With the development of computer vision technologies, face recognition technologies have improved greatly in performance in recent years. Face recognition in non-extreme scenes may reach a level close to that of manual recognition, and face recognition technologies are increasingly applied to various scenes in life.
  • SUMMARY
  • The present disclosure relates to computer vision technologies, and in particular, to methods and apparatuses for determining face image quality, electronic devices, and computer storage media.
  • Embodiments of the present disclosure provide technical solutions for determining face image quality.
  • A method for determining face image quality provided according to one aspect of the embodiments of the present disclosure includes:
  • obtaining at least one of pose angle information of a face in an image or size information of the face; and
  • obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face.
  • An apparatus for determining face image quality provided according to another aspect of the embodiments of the present disclosure includes:
  • a first obtaining module, configured to obtain at least one of pose angle information of a face in an image or size information of the face; and
  • a second obtaining module, configured to obtain quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face.
  • An electronic device provided according to still another aspect of the embodiments of the present disclosure includes the apparatus for determining face image quality according to any one of the foregoing embodiments of the present disclosure.
  • Another electronic device provided according to still another aspect of the embodiments of the present disclosure includes a processor and a memory for storing instructions executable by the processor; where execution of the instructions by the processor causes the processor to perform: obtaining at least one of pose angle information of a face in an image or size information of the face; and obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face.
  • A non-transitory computer storage medium provided according to yet another aspect of the embodiments of the present disclosure is configured to store computer-readable instructions, where execution of the instructions by the processor causes the processor to perform: obtaining at least one of pose angle information of a face in an image or size information of the face; and obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face.
  • A computer program provided according to yet another aspect of the embodiments of the present disclosure includes a computer instruction, where when the computer instruction runs in a processor of a device, the processor executes operations corresponding to the method for determining face image quality according to any one of the foregoing embodiments of the present disclosure.
  • On the basis of methods and apparatuses for determining face image quality, the electronic devices, and the computer storage media provided according to the foregoing embodiments of the present disclosure, pose angle information and/or size information of a face in an image are obtained, and quality of the face in the image is obtained on the basis of the pose angle information and/or the size information of the face.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings constituting a part of the specification describe the embodiments of the present disclosure and are intended to explain the principles of the present disclosure together with the descriptions.
  • According to the following detailed descriptions, the present disclosure may be understood more clearly with reference to the accompanying drawings.
  • FIG. 1 illustrates a flowchart of one embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 2 illustrates a flowchart of another embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 3 illustrates a flowchart of still another embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 4 illustrates a flowchart of one specific application embodiment of a method for determining face image quality according to the present disclosure.
  • FIG. 5 illustrates a schematic structural diagram of one embodiment of an apparatus for determining face image quality according to the present disclosure.
  • FIG. 6 illustrates a schematic structural diagram of another embodiment of an apparatus for determining face image quality according to the present disclosure.
  • FIG. 7 illustrates a schematic structural diagram of still another embodiment of an apparatus for determining face image quality according to the present disclosure.
  • FIG. 8 illustrates a schematic structural diagram of one embodiment of an electronic device of the present disclosure.
  • DETAILED DESCRIPTION
  • Various exemplary embodiments of the present disclosure are now described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise stated specifically, relative arrangement of the components and operations, the numerical expressions, and the values set forth in the embodiments are not intended to limit the scope of the present disclosure.
  • In addition, it should be understood that, for ease of description, the size of each part shown in the accompanying drawings is not drawn in actual proportion.
  • The following descriptions of at least one exemplary embodiment are merely illustrative, and are not intended to limit the present disclosure or the applications or uses thereof.
  • Technologies, methods and devices known to a person of ordinary skill in the related art may not be discussed in detail, but such technologies, methods and devices should be considered as a part of the specification in appropriate situations.
  • It should be noted that similar reference numerals and letters in the following accompanying drawings represent similar items. Therefore, once an item is defined in an accompanying drawing, the item does not need to be further discussed in the subsequent accompanying drawings.
  • The embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, and servers, which may operate with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use together with the electronic devices such as terminal devices, computer systems, and servers include, but are not limited to, Personal Computer (PC) systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any one of the foregoing systems.
  • The electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer system-executable instructions (for example, program modules) executed by the computer system. Generally, the program modules may include routines, programs, target programs, components, logics, data structures, and the like for performing specific tasks or implementing specific abstract data types. The computer systems/servers may be implemented in the distributed cloud computing environments in which tasks are performed by remote processing devices linked via a communication network. In the distributed computing environments, the program modules may be located in local or remote computing system storage media including storage devices.
  • FIG. 1 is a flowchart of one embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 1, the method for determining face image quality of this embodiment includes:
  • 102: pose angle information and/or size information of a face in an image are obtained.
  • A pose angle of the face is a head pose angle of a person, including a yaw angle and a pitch angle of the face in head normalized spherical coordinates (i.e., an image acquisition coordinate system), where the yaw angle is used for indicating the side face angle of the face in a horizontal direction, and the pitch angle is used for indicating the head-down or head-up angle of the face in a vertical direction. Under the condition that the face size is fixed, the smaller the yaw angle and the pitch angle are, the more front-facing the face is, the easier the face recognition is, and the higher the face recognition accuracy is. When both the yaw angle and the pitch angle are zero, the face is the most front-facing and the face recognition accuracy is the highest.
  • The face size is the pixel size of the face, and the bigger the face is, the higher the definition is, the easier the face recognition is, and the higher the face recognition accuracy is.
  • 104: Quality information of the face in the image is obtained on the basis of the pose angle information and/or the size information of the face.
  • The higher the obtained quality information of the face in the image is, the better the quality of the face in the image is; conversely, the lower the obtained quality information is, the worse the quality of the face in the image is.
  • At present, false recognition easily occurs in the face recognition process. In addition to the recognition algorithm model, the false recognition rate is also related to the quality of the face image. If the quality of the face image is poor, for example, the side face angle is excessively large or the face pixel size is excessively small, the face recognition accuracy generally decreases, and the false recognition rate is relatively high. In actual scenes, most false recognitions and missed recognitions are caused by the fact that the quality of the face image is not high enough. Therefore, a method for determining face image quality deserves sufficient attention; it is helpful for improving the face recognition rate and is very important.
  • The existing image quality evaluation methods may be divided into two categories: subjective evaluation methods and objective evaluation methods. With the improvement of the automation level, the subjective evaluation methods requiring manual participation are inconvenient, costly, and slow in many fields, and therefore the objective evaluation methods have gradually been developed. Currently, the determination of face image quality has not attracted enough attention, objective evaluation methods for face image quality are not yet mature, and their evaluation results are not accurate enough.
  • In order to evaluate face image quality, face image quality evaluation indexes need to be established, and evaluation criteria for good face image quality need to be defined. In order to improve the face recognition rate, the evaluation criteria for good face image quality should be defined such that a face satisfying them is easy to recognize, for example, the face has high definition, is large, and is front-facing. In actual application scenes, the definition of the face image is influenced by two aspects: one is that the image captured by a camera is blurred, and the other is that the face image itself is excessively small. Since the face image needs to be uniformly scaled to a standard size before face recognition, when a small face image is enlarged to the standard size, fuzziness caused by pixel interpolation may appear. In general, after a proper camera is selected for the application scene, the image captured by it is clear. Therefore, disregarding the case that the image captured by the camera is blurred, the definition of the face image and the face size are positively related: the bigger the face is, the higher the definition is. The face definition may thus be evaluated by using the face size.
  • According to the method for determining face image quality in the embodiments of the present disclosure, from the perspective of facilitating face recognition, the face image quality is evaluated on the basis of key factors affecting the face recognition result (for example, face definition, face size, and whether the face is front-facing), and indexes for evaluating these key factors are obtained: a pose angle of the face and a face size. The front-facing degree of the face is determined based on the pose angle of the face, the face definition is determined based on the face size, and the face image quality is evaluated according to the pose angle information and the size information of the face. According to the technical solutions for determining face image quality in the embodiments of the present disclosure, the face image quality is objectively evaluated, and the accuracy rate of the evaluation result is high; in addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result instead of directly obtaining the face definition in the image, the method improves the operation efficiency and increases the real-time performance of the face quality evaluation.
  • According to one or more embodiments of the present disclosure, in operation 102, the obtaining the pose angle information of the face in the image may be implemented in the following mode:
  • a face detection bounding box in the image and key point coordinates of the face determined according to the face detection bounding box are obtained. For example, face detection may be performed on the image to obtain the face detection bounding box, and key point (for example, corners of eyes and mouth) positioning is performed on the face in the face detection bounding box to obtain key point coordinates of the face; the pose angle information of the face is obtained according to the key point coordinates of the face, where the pose angle information of the face includes a yaw angle and a pitch angle of the face.
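  • By way of illustration only, one possible way to derive the yaw angle and the pitch angle from the key point coordinates is a geometric pose fit; the present disclosure does not prescribe a particular estimator (a pre-trained neural network is described in a later embodiment). The following sketch uses OpenCV's solvePnP with an approximate 3D face model; the model coordinates, the camera approximation, and the axis conventions are assumptions, and signs may need adjustment for a particular landmark layout.

```python
# Illustrative sketch only; not the patent's method. Estimates yaw/pitch (degrees)
# from five 2D face key points (left eye, right eye, nose tip, left/right mouth
# corner) using a generic, approximate 3D face model. All numeric model values
# and conventions below are assumptions.
import cv2
import numpy as np

MODEL_3D = np.array([              # rough landmark positions on a generic face (mm)
    [-30.0,  30.0, -30.0],         # left eye
    [ 30.0,  30.0, -30.0],         # right eye
    [  0.0,   0.0,   0.0],         # nose tip
    [-25.0, -30.0, -30.0],         # left mouth corner
    [ 25.0, -30.0, -30.0],         # right mouth corner
], dtype=np.float64)

def estimate_yaw_pitch(keypoints_2d, image_w, image_h):
    """keypoints_2d: (5, 2) pixel coordinates ordered as MODEL_3D."""
    focal = float(image_w)                              # crude pinhole approximation
    camera = np.array([[focal, 0.0, image_w / 2.0],
                       [0.0, focal, image_h / 2.0],
                       [0.0, 0.0, 1.0]])
    ok, rvec, _tvec = cv2.solvePnP(MODEL_3D, np.asarray(keypoints_2d, np.float64),
                                   camera, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)                        # rotation vector -> matrix
    # Standard Euler extraction; which axis is "yaw" vs "pitch" depends on the
    # model/camera conventions assumed above.
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arctan2(-rot[2, 0],
                                np.sqrt(rot[2, 1] ** 2 + rot[2, 2] ** 2)))
    return yaw, pitch
```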
  • According to one or more embodiments of the present disclosure, in operation 102, the obtaining the size information of the face in the image may be implemented in the following mode: obtaining the size information of the face according to the size of the face detection bounding box, where the size of the face detection bounding box includes length and/or width of the face detection bounding box.
  • The technical solutions provided according to the embodiments of the present disclosure include the following beneficial effects:
  • Through the method for evaluating face image quality based on key factors affecting a face recognition result (face definition, face size, and whether the face is front-facing), indexes for evaluating these key factors are obtained: a pose angle of the face for reflecting whether the face is front-facing, and a face size for reflecting the face definition; and a method for evaluating face image quality according to pose angle information and size information of a face is obtained. According to the technical solutions for determining face image quality in the embodiments of the present disclosure, the face image quality may be objectively evaluated, and the accuracy rate of the evaluation result is high; in addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result instead of directly obtaining the face definition in the image, the method facilitates improving the operation efficiency and increasing the real-time performance of face quality evaluation.
  • It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and are not intended to limit the present disclosure. The following further describes in detail the technical solutions of the present disclosure with reference to the accompanying drawings and embodiments.
  • According to one or more embodiments of the present disclosure, operation 104 may include: obtaining the score of the pose angle of the face according to the pose angle information of the face; obtaining the score of the face size according to the size information of the face; and obtaining a quality score of the face in the image according to the score of the pose angle of the face and the score of the face size.
  • FIG. 2 is a flowchart of another embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 2, the method for determining face image quality of this embodiment includes:
  • 202: a face detection bounding box in the image and key point coordinates of the face determined according to the face detection bounding box are obtained.
  • The face detection bounding box includes a face image detected from the image.
  • For example, face detection may be performed on the image by means of a face detection algorithm to obtain the face detection bounding box;
  • for example, key point positioning is performed on the face in the face detection bounding box by means of a key point detection algorithm to obtain the key point coordinates of the face.
  • 204: The pose angle information of the face is obtained according to the key point coordinates of the face, and the size information of the face is obtained according to the size of the face detection bounding box.
  • The size of the face detection bounding box includes length and/or width of the face detection bounding box. In a specific example, the size of the face detection bounding box is the face size. A pose angle of the face is a head pose angle of a person, including a yaw angle and a pitch angle of the face in head normalized spherical coordinates (i.e., an image acquisition coordinate system), where the yaw angle is used for indicating the side face angle of the face in a horizontal direction, and the pitch angle is used for indicating the head-down or head-up angle of the face in a vertical direction. Under the condition that the face size is fixed, the smaller the yaw angle and the pitch angle are, the more front-facing the face is, the easier the face recognition is, and the higher the face recognition accuracy is. When both the yaw angle and the pitch angle are zero, the face is the most front-facing and the face recognition accuracy is the highest.
  • The face size is also the face pixel size, and the bigger the face is, the higher the definition is, the easier the face recognition is, and the higher the face recognition accuracy is.
  • 206: The score of the pose angle of the face is obtained according to the pose angle information of the face, and the score of the face size is obtained according to the size information of the face.
  • According to one or more embodiments of the present disclosure, the score of the pose angle of the face may be obtained in the following mode: according to the yaw angle and the pitch angle of the face, obtaining the score Qyaw of the yaw angle yaw of the face by calculation based on
  • Qyaw = e^(-10*yaw^2/(90*90)),
  • and obtaining the score Qpitch of the pitch angle (“pitch”) of the face by calculation based on
  • Qpitch = e^(-10*pitch^2/(90*90)).
  • According to one or more embodiments of the present disclosure, the score of the face size may be obtained in the following mode: obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, where the area of the face detection bounding box is obtained by the product of the length and the width of the face detection bounding box. The length, width, and area of the face detection bounding box correspond to the size of the face image. Therefore, the score of the face size may be determined on the basis of at least one of length, width, or area of the face detection bounding box.
  • Further exemplarily, the obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, for example, may be: selecting a smaller value min in the length and the width of the face detection bounding box;
  • and obtaining the score Qrect of the face size by calculation based on
  • Qrect = 1/(1 + e^(-2*(min-50)/75))
  • according to the smaller value min in the length and the width.
  • The face size may be better determined by means of the smaller value in the length and the width of the face detection bounding box, so that the score of the face size is obtained by calculation on the basis of the smaller value in the length and the width of the face detection bounding box, and the face size may be reflected more objectively.
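  • As a concrete reading of the above formulas, the following minimal sketch computes the three scores; angles are in degrees, and the bounding box dimensions are in pixels. It is an illustration of the equations above, not an implementation mandated by the present disclosure.

```python
# Minimal sketch of the score formulas given above.
import math

def yaw_score(yaw):
    # Qyaw = e^(-10*yaw^2/(90*90)); 1.0 for a fully frontal face, decaying as the face turns.
    return math.exp(-10.0 * yaw ** 2 / (90.0 * 90.0))

def pitch_score(pitch):
    # Qpitch = e^(-10*pitch^2/(90*90))
    return math.exp(-10.0 * pitch ** 2 / (90.0 * 90.0))

def size_score(box_length, box_width):
    # Qrect = 1/(1 + e^(-2*(min-50)/75)), using the smaller side of the bounding box.
    m = min(box_length, box_width)
    return 1.0 / (1.0 + math.exp(-2.0 * (m - 50.0) / 75.0))
```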
  • 208: The quality score of the face in the image is obtained according to the score of the pose angle of the face and the score of the face size.
  • The higher the quality score of the face in the image is, the better the quality of the face in the image is; on the contrary, the lower the quality score of the face in the image is, the worse the quality of the face in the image is.
  • According to one or more embodiments of the present disclosure, operation 208 may be implemented in the following mode:
  • The quality score of the face in the image is obtained by calculation according to the score of the yaw angle and its weight, the score of the pitch angle and its weight, and the score of the face size and its weight.
  • The weight of the score of the yaw angle, the weight of the score of the pitch angle, and the weight of the score of the face size may be preset, and may be adjusted according to actual requirements. In general, the yaw angle has the greatest influence on the accuracy of the face recognition result. In a specific application, the weight of the score of the yaw angle may be set to be greater than the weight of the score of the pitch angle and the weight of the score of the size of the face, so that the obtained quality score of the face in the image may more accurately and objectively reflect the quality of the face in one image.
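  • A hedged sketch of this weighting follows; the default weight values are taken from the FIG. 4 example described later (w1 = 0.6, w2 = w3 = 0.2) and may be adjusted according to actual requirements.

```python
# Weighted combination of the three scores into a face quality score.
def face_quality(q_yaw, q_pitch, q_rect, w_yaw=0.6, w_pitch=0.2, w_rect=0.2):
    # The yaw weight is set larger than the pitch and size weights, since the yaw
    # angle has the greatest influence on recognition accuracy.
    return w_yaw * q_yaw + w_pitch * q_pitch + w_rect * q_rect
```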
  • According to the method for determining face image quality in the embodiments of the present disclosure, from the perspective of facilitating face recognition, the face image quality is evaluated on the basis of key factors affecting the face recognition result (face definition, face size, and whether the face is front-facing), and indexes for evaluating these key factors are obtained: a pose angle of the face and a face size. The front-facing degree of the face is determined based on the pose angle of the face, the face definition is determined based on the face size, and the score of the pose angle of the face and the score of the face size are further obtained. The quality score of the face in the image is obtained according to the score of the pose angle of the face and the score of the face size, so as to more accurately and objectively evaluate the quality of the face in the image, and the accuracy rate of the evaluation result is high; in addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result instead of directly obtaining the face definition in the image, the method improves the operation efficiency and increases the real-time performance of the face quality evaluation.
  • FIG. 3 is a flowchart of still another embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 3, the method for determining face image quality of this embodiment includes:
  • 302: a face detection bounding box in the image, key point coordinates of the face determined according to the face detection bounding box, and a confidence score of the key point coordinates are obtained.
  • The confidence score of the key point coordinates is used for indicating the accuracy of the key point coordinates of the face, and the greater the numerical value of the confidence score is, the more accurate the key point coordinates of the face are.
  • According to one or more embodiments of the present disclosure, operation 302 may be implemented through a pre-trained first neural network. After receiving an input image, the first neural network outputs the face detection bounding box, the key point coordinates of the face determined according to the face detection bounding box, and the confidence score of the key point coordinates by performing face detection and key point detection on the image. The confidence score of the key point coordinates may be determined by the first neural network on the basis of the performance of the first neural network and the size of the face detection bounding box according to a preset mode. The better the performance of the first neural network is, and the larger the face detection bounding box is (i.e., the face image is relatively large, and the face is relatively clear), the higher the accuracy of the determined key point coordinates of the face is.
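  • For illustration, the output of such a first neural network could be represented as follows; the field names and the five-point layout are assumptions, not an interface defined by the present disclosure.

```python
# Hypothetical container for the first network's outputs described above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FaceDetection:
    box: Tuple[float, float, float, float]   # face detection bounding box (x, y, length, width)
    keypoints: List[Tuple[float, float]]     # key point coordinates of the face
    keypoint_confidence: float               # Qalign: confidence score of the key point coordinates
```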
  • 304: The pose angle information of the face is obtained according to the key point coordinates of the face, and the size information of the face is obtained according to the size of the face detection bounding box, where the pose angle information of the face includes a yaw angle and a pitch angle of the face.
  • 306: The score of the pose angle of the face is obtained according to the pose angle information of the face, and the score of the face size is obtained according to the size information of the face.
  • According to one or more embodiments of the present disclosure, the score of the pose angle of the face may be obtained in the following mode:
  • according to the yaw angle and the pitch angle of the face, obtaining the score Qyaw of the yaw angle yaw of the face by calculation based on
  • Qyaw = e^(-10*yaw^2/(90*90)),
  • and obtaining the score Qpitch of the pitch angle (“pitch”) of the face by calculation based on
  • Qpitch = e^(-10*pitch^2/(90*90)).
  • According to one or more embodiments of the present disclosure, the score of the face size may be obtained in the following mode: obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, where the area of the face detection bounding box is obtained by the product of the length and the width of the face detection bounding box.
  • The length, width, and area of the face detection bounding box correspond to the size of the face image. Therefore, the score of the face size may be determined on the basis of at least one of length, width, or area of the face detection bounding box.
  • Further exemplarily, the obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, for example, may be: selecting a smaller value min in the length and the width of the face detection bounding box; and obtaining the score Qrect of the face size by calculation based on
  • Qrect = 1/(1 + e^(-2*(min-50)/75))
  • according to the smaller value min in the length and the width.
  • The face size may be better determined by means of the smaller value in the length and the width of the face detection bounding box, so that the score of the face size is obtained by calculation on the basis of the smaller value in the length and the width of the face detection bounding box, and the face size may be reflected more objectively.
  • 308: The score of the pose angle of the face is corrected by using the confidence score of the key point coordinates.
  • Exemplarily, by using the confidence score of the key point coordinates, the corrected score Qyaw of the yaw angle and the corrected score Qpitch of the pitch angle are obtained by calculation based on
  • Qyaw = a*e^(-10*yaw^2/(90*90)) and Qpitch = a*e^(-10*pitch^2/(90*90)),
  • respectively, where
  • a = Qalign, if Qalign < 0.4; a = 1, if Qalign ≥ 0.4,
  • and Qalign represents the confidence score of the key point coordinates. According to one or more embodiments of the present disclosure, operation 308 may be performed before, after, or simultaneously with the operation of obtaining the score of the face size; no execution order is imposed between the two operations.
  • 310: The quality score of the face in the image is obtained according to the corrected score of the pose angle of the face and the score of the face size.
  • When the key point coordinates of the face are inaccurate, the pose angle information of the face obtained based on the key point coordinates is also inaccurate. In order to solve the problem of the inaccuracy of evaluation of the pose angle information of the face due to the inaccuracy of the key point coordinates, in the embodiments of the present disclosure, the score of the pose angle of the face obtained by calculation is correspondingly corrected according to the confidence score of the key point coordinates of the face, so as to eliminate the inaccuracy of evaluation of the pose angle information of the face due to the inaccuracy of the key point coordinates, and the influence on the final determination of the result of the face image quality, and improve the accuracy and reliability of determining the result of the face image quality.
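  • A minimal sketch of this correction, assuming the 0.4 threshold given above, is:

```python
# Correct the pose-angle scores with the key-point confidence Qalign (operation 308).
def corrected_pose_scores(q_yaw, q_pitch, q_align, threshold=0.4):
    # Low-confidence key points shrink the pose-angle scores; otherwise they are unchanged.
    a = q_align if q_align < threshold else 1.0
    return a * q_yaw, a * q_pitch
```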
  • FIG. 4 is a flowchart of a specific application embodiment of a method for determining face image quality according to the present disclosure. As shown in FIG. 4, the method for determining face image quality of this embodiment includes:
  • 402: face detection is performed on the image to obtain a face detection bounding box.
  • 404: Key point positioning is performed on the face in the face detection bounding box to obtain key point coordinates of the face and a confidence score of the key point coordinates. The confidence score of the key point coordinates is used for indicating the accuracy of the key point coordinates of the face.
  • According to one or more embodiments of the present disclosure, operations 402-404 may be implemented through a pre-trained first neural network. After receiving an input image, the first neural network outputs the face detection bounding box, the key point coordinates of the face, and the confidence score of the key point coordinates by performing face detection and key point detection on the image. The confidence score of the key point coordinates may be determined by the first neural network on the basis of the performance of the first neural network and the size of the face detection bounding box according to a preset mode. The better the performance of the first neural network is, and the larger the face detection bounding box is (i.e., the face image is relatively large, and the face is relatively clear), the higher the accuracy of the determined key point coordinates of the face is.
  • Then, operations 406 and 406′ are executed respectively.
  • 406: The pose angle information of the face is obtained according to the key point coordinates of the face, where the pose angle information of the face includes a yaw angle and a pitch angle of the face. According to one or more embodiments of the present disclosure, operation 406 may be implemented by a pre-trained second neural network. After receiving the key point coordinates of the face, the second neural network outputs the yaw angle and the pitch angle of the face by calculating the key point coordinates of the face.
  • 408: According to the yaw angle and the pitch angle of the face, the score Qyaw of the yaw angle yaw of the face is obtained by calculation based on
  • Qyaw = e^(-10*yaw^2/(90*90)),
  • and the score Qpitch of the pitch angle (“pitch”) of the face is obtained by calculation based on
  • Qpitch = e^(-10*pitch^2/(90*90)).
  • 410: The score of the pose angle of the face is corrected by using the confidence score of the key point coordinates.
  • Exemplarily, by using the confidence score of the key point coordinates, the corrected score Qyaw of the yaw angle and the corrected score Qpitch of the pitch angle are obtained by calculation based on
  • Qyaw = a*e^(-10*yaw^2/(90*90)) and Qpitch = a*e^(-10*pitch^2/(90*90)),
  • respectively, where
  • a = Qalign, if Qalign < 0.4; a = 1, if Qalign ≥ 0.4,
  • and Qalign represents the confidence score of the key point coordinates.
  • Then, operation 412 is executed.
  • 406′: The size of the face detection bounding box is obtained, including length and width of the face detection bounding box.
  • 408′: A smaller value min in the length and the width of the face detection bounding box is selected.
  • 410′: The score Qrect of the face size is obtained by calculation based on
  • Qrect = 1/(1 + e^(-2*(min-50)/75))
  • according to the smaller value min in the length and the width.
  • No required execution order exists between operations 406-410 and operations 406′-410′; they may be executed at any time and in any sequence.
  • 412: The quality of the face in the image is obtained by calculation according to the corrected score of the yaw angle and its weight, the corrected score of the pitch angle and its weight, and the score of the face size and its weight.
  • For example, the quality of the face in the image may be obtained by calculation based on Q = w1*Qyaw + w2*Qpitch + w3*Qrect.
  • Q is the quality of the face in the image, Qyaw represents the corrected score of the yaw angle (“yaw”), Qpitch represents the corrected score of the pitch angle (“pitch”), and Qrect represents the score of the face size. w1, w2, and w3 respectively represent the weight of the score of the yaw angle, the weight of the score of the pitch angle, and the weight of the score of the face size. Generally, the yaw angle has the greatest influence on the accuracy of the face recognition result, and the value of w1 may be set to 0.6; both the weight w2 of the score of the pitch angle and the weight w3 of the score of the face size may be set to 0.2, and they may also be adjusted according to actual conditions.
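  • As a worked numeric example of this calculation (the input values are hypothetical and chosen only for illustration):

```python
# Worked example of the FIG. 4 pipeline with w1 = 0.6, w2 = w3 = 0.2.
import math

yaw, pitch = 15.0, 10.0             # pose angles in degrees
box_length, box_width = 120, 100    # face detection bounding box in pixels
q_align = 0.9                       # confidence score of the key point coordinates

a = q_align if q_align < 0.4 else 1.0
q_yaw = a * math.exp(-10 * yaw ** 2 / (90 * 90))       # ~0.757
q_pitch = a * math.exp(-10 * pitch ** 2 / (90 * 90))   # ~0.884
q_rect = 1 / (1 + math.exp(-2 * (min(box_length, box_width) - 50) / 75))  # ~0.791

quality = 0.6 * q_yaw + 0.2 * q_pitch + 0.2 * q_rect
print(round(quality, 2))            # approximately 0.79
```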
  • Further, the foregoing embodiments of the method for determining face image quality according to the present disclosure may be executed for each of multiple images of the same face respectively, so as to obtain the quality score of the face in the multiple images. Yet another embodiment of the method for determining face image quality according to the present disclosure may further include: selecting, according to the quality information of the face in the multiple images, at least one image with high face quality for face detection.
  • On the basis of the embodiments, the images with poor face quality are removed, and the images with high face quality are selected for face detection and recognition. Because the selected images with high face quality have a high face recognition rate, the accuracy of face recognition may be improved, the operation data volume of the face recognition may be reduced, and the face recognition speed of a valid image may be improved.
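  • A minimal sketch of this selection step, assuming each candidate image has already been scored by the method above, is:

```python
# Keep only the highest-quality face images for subsequent detection/recognition.
def select_best_images(scored_images, keep=1):
    """scored_images: iterable of (image, quality_score) pairs for the same face."""
    ranked = sorted(scored_images, key=lambda item: item[1], reverse=True)
    return [image for image, _score in ranked[:keep]]
```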
  • FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for determining face image quality according to the present disclosure. The apparatus for determining face image quality of this embodiment may be configured to implement the foregoing embodiments of the method for determining face image quality according to the present disclosure. As shown in FIG. 5, the apparatus for determining face image quality of this embodiment includes: a first obtaining module 502 and a second obtaining module 504.
  • The first obtaining module 502 is configured to obtain pose angle information and size information of a face in an image;
  • The second obtaining module 504 is configured to obtain quality information of the face in the image on the basis of the pose angle information and the size information of the face.
  • On the basis of the apparatus for determining face image quality provided according to the foregoing embodiments of the present disclosure, the face image quality is evaluated on the basis of key factors affecting the face recognition result (for example, face definition, face size, and whether the face is front-facing), and indexes for evaluating these key factors are obtained: a pose angle of the face and a face size. The quality of the face in the image is determined according to the pose angle of the face and the face size. According to the technical solutions for determining face image quality in the embodiments of the present disclosure, the face image quality may be objectively evaluated, and the accuracy rate of the evaluation result is high; in addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result instead of directly obtaining the face definition in the image, the apparatus facilitates improving the operation efficiency and increasing the real-time performance of the face quality evaluation.
  • FIG. 6 is a schematic structural diagram of another embodiment of an apparatus for determining face image quality according to the present disclosure. As shown in FIG. 6, in this embodiment, the first obtaining module 502 specifically includes: a face detection sub-module 602, a key point detection sub-module 604, and a first obtaining sub-module 606.
  • The face detection sub-module 602 is configured to obtain a face detection bounding box in the image, where the face detection bounding box is configured to determine the face in the image. According to one or more embodiments of the present disclosure, the face detection sub-module 602 may be configured to perform face detection on the image to obtain the face detection bounding box.
  • The key point detection sub-module 604 is configured to obtain key point coordinates of the face determined according to the face detection bounding box. According to one or more embodiments of the present disclosure, the key point detection sub-module 604 may be configured to perform key point positioning on the face image determined according to the face detection bounding box to obtain the key point coordinates of the face.
  • The first obtaining sub-module 606 is configured to obtain pose angle information of the face according to the key point coordinates of the face, where the pose angle information of the face includes a yaw angle and a pitch angle of the face, and to obtain size information of the face according to the size of the face detection bounding box, where the size of the face detection bounding box includes length and/or width of the face detection bounding box.
  • In addition, in the embodiments of the apparatus for determining face image quality, the face detection sub-module 602 is configured to perform face detection on an image to obtain the face detection bounding box, where the face detection bounding box includes the image of the face, referred to as a face image. Accordingly, the key point detection sub-module 604 is configured to perform key point positioning on the face image determined according to the face detection bounding box to obtain the key point coordinates of the face.
  • In addition, with reference to FIG. 6 again, in still another embodiment of the apparatus for determining face image quality according to the present disclosure, the second obtaining module 504 may include a second obtaining sub-module 608, a third obtaining sub-module 610, and a fourth obtaining sub-module 612.
  • The second obtaining sub-module 608 is configured to obtain the score of the pose angle of the face according to the pose angle information of the face.
  • According to one or more embodiments of the present disclosure, the second obtaining sub-module 608 is configured to obtain, according to the yaw angle and the pitch angle of the face, the score Qyaw of the yaw angle (“yaw”) of the face by calculation based on
  • Qyaw = e^(-10*yaw^2/(90*90)),
  • and the score Qpitch of the pitch angle (“pitch”) of the face by calculation based on
  • Qpitch = e^(-10*pitch^2/(90*90)).
  • Further exemplarily, the third obtaining sub-module 610 may obtain the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box in the following mode: selecting a smaller value min in the length and the width of the face detection bounding box; and obtaining the score Qrect of the face size by calculation based on
  • Qrect = 1/(1 + e^(-2*(min-50)/75))
  • according to the smaller value min in the length and the width.
  • The third obtaining sub-module 610 is configured to obtain the score of the face size according to the size information of the face.
  • According to one or more embodiments of the present disclosure, the third obtaining sub-module 610 is configured to obtain the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, where the area of the face detection bounding box is obtained by the product of the length and the width of the face detection bounding box.
  • The fourth obtaining sub-module 612 is configured to obtain the quality score of the face in the image according to the score of the pose angle of the face and the score of the face size. According to one or more embodiments of the present disclosure, the fourth obtaining sub-module 612 is configured to obtain the quality of the face in the image by calculation according to the score of the yaw angle and its weight, the score of the pitch angle and its weight, and the score of the face size and its weight. In an actual application, because the yaw angle of the face has the greatest influence on the accuracy of the face recognition result, the weight of the score of the yaw angle may be set to be greater than the weight of the score of the pitch angle and the weight of the score of the face size.
  • FIG. 7 is a schematic structural diagram of still another embodiment of an apparatus for determining face image quality according to the present disclosure. As shown in FIG. 7, compared with the apparatus for determining face image quality according to the foregoing embodiments of the present disclosure, the apparatus for determining face image quality in this embodiment further includes: a fourth obtaining module 506 and a correction module 508.
  • The fourth obtaining module 506 is configured to obtain the confidence score of the key point coordinates, where the confidence score of the key point coordinates is configured to represent an accuracy rate of the key point coordinates of the face.
  • Exemplarily, the fourth obtaining module 506 may be integrated with the key point detection sub-module 604, and the two may be implemented by means of a neural network.
  • The correction module 508 is configured to correct, by using the confidence score of the key point coordinates, the score of pose angle of the face obtained by the second obtaining sub-module 608.
  • According to one or more embodiments of the present disclosure, the correction module 508 is configured to obtain, by using the confidence score of the key point coordinates, the corrected score Qyaw of the yaw angle and the corrected score Qpitch of the pitch angle by calculation based on
  • Qyaw = a*e^(-10*yaw^2/(90*90)) and Qpitch = a*e^(-10*pitch^2/(90*90)),
  • respectively, where
  • a = Qalign, if Qalign < 0.4; a = 1, if Qalign ≥ 0.4,
  • and Qalign represents the confidence score of the key point coordinates.
  • Accordingly, in the embodiments, the fourth obtaining sub-module 612 is configured to obtain the quality of the face in the image according to the corrected score of the pose angle of the face and the score of the face size.
  • The embodiments of the present disclosure further provide an electronic device, including the apparatus for determining face image quality according to any one of the foregoing embodiments of the present disclosure. By obtaining indexes for evaluating the key factors affecting the face recognition result, namely a pose angle of the face and a face size, and evaluating the face image quality according to the pose angle information and the size information of the face, the face image quality is objectively evaluated, and the accuracy rate of the evaluation result is high; in addition, by obtaining the size information of the face to reflect the face definition affecting the face recognition result instead of directly obtaining the face definition in the image, the device facilitates improving the operation efficiency and increasing the real-time performance of the face quality evaluation.
  • Further, some embodiments of the electronic device include a selection module and a face detection module.
  • The selection module is configured to select, according to quality information of a face in multiple images output by the apparatus for determining face image quality, at least one image with high face quality;
  • the face detection module is configured to perform face detection on the selected at least one image.
  • On the basis of the embodiments, the images with poor face quality are removed, and the images with high face quality are selected for face detection and recognition. Because the selected images with high face quality have a high face recognition rate, the accuracy rate of face recognition may be improved, the operation data volume of face recognition may be reduced, and the face recognition speed of a valid image may be improved.
  • The embodiments of the present disclosure further provide another electronic device, including: a memory, configured to store executable instructions; and a processor, configured to communicate with the memory to execute the executable instructions so as to complete operations of the method for determining face image quality according to any one of the foregoing embodiments of the present disclosure.
  • The electronic device according to the foregoing embodiments of the present disclosure, for example, may be a mobile terminal, a PC, a tablet computer, a server, and the like.
  • The embodiments of the present disclosure further provide a computer storage medium, which is configured to store computer-readable instructions. When the instructions are executed, the operations of the method for determining face image quality according to any one of the foregoing embodiments of the present disclosure are implemented.
  • FIG. 8 is a schematic structural diagram of one embodiment of an electronic device according to the present disclosure. With reference to FIG. 8 below, a schematic structural diagram of an electronic device suitable for implementing the terminal device or the server of the embodiments of the present disclosure is shown. As shown in FIG. 8, the electronic device includes one or more processors, and a communication part, etc. The one or more processors are, for example, one or more Central Processing Units (CPUs) 801 and/or one or more Graphic Processing Units (GPUs) 813, and the processors may execute appropriate actions and processing according to executable instructions stored in a Read-Only Memory (ROM) 802 or executable instructions loaded from a storage section 808 to a Random Access Memory (RAM) 803. The communication part 812 may include, but is not limited to, a network card. The network card may include, but is not limited to, an InfiniBand (IB) network card.
  • The processor may communicate with the ROM 802 and/or the RAM 803 to execute executable instructions, is connected to the communication part 812 by means of a bus 804, and communicates with other target devices by means of the communication part 812, so as to complete corresponding operations of any of the methods provided by the embodiments of the present disclosure, for example, obtaining pose angle information and size information of a face in an image, and obtaining quality information of the face in the image on the basis of the pose angle information and the size information of the face.
  • In addition, the RAM 803 may further store various programs and data required during an operation of the apparatus. The CPU 801, the ROM 802, and the RAM 803 are connected to each other via the bus 804. In the presence of the RAM 803, the ROM 802 is an optional module. The RAM 803 stores executable instructions, or writes executable instructions to the ROM 802 during running. The executable instructions cause the CPU 801 to execute the operations of the communication method. An Input/Output (I/O) interface 805 is also connected to the bus 804. The communication part 812 may be integrated, or may be set as having multiple sub-modules (for example, multiple IB network cards) connected to the bus.
  • The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a Cathode-Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, and the like. The communication section 809 executes communication processing through a network such as the Internet. A drive 810 is also connected to the I/O interface 805 according to requirements. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 810 according to requirements, so that a computer program read from the removable medium may be installed on the storage section 808 according to requirements.
  • It should be noted that, the architecture shown in FIG. 8 is merely an optional implementation mode. During specific practice, the number and types of the components in FIG. 8 may be selected, decreased, increased, or replaced according to actual needs. Different functional components may be separated or integrated or the like. For example, the GPU 813 and the CPU 801 may be separated, or the GPU 813 may be integrated on the CPU 801, and the communication part may be separated from or integrated on the CPU 801 or the GPU 813 or the like. These alternative implementations all fall within the scope of protection of the present disclosure.
  • Particularly, a process described above with reference to a flowchart according to the embodiments of the present disclosure may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product. The computer program product includes a computer program tangibly included in a machine-readable medium. The computer program includes a program code for executing a method shown in the flowchart. The program code may include corresponding instructions for correspondingly executing the operations of the method provided by the embodiments of the present disclosure, for example, an instruction for obtaining pose angle information and size information of a face in an image, and an instruction for obtaining quality information of the face in the image on the basis of the pose angle information and the size information of the face. In such embodiments, the computer program may be downloaded and installed from the network through the communication part 809, and/or is installed from the removable medium 811. When executed by the CPU 801, the computer program executes the foregoing functions defined in the method of the present disclosure.
  • The embodiments of the present disclosure may be optionally applied to residential area monitoring or security monitoring fields, capture machines, or products related to face recognition. Face detection is performed on an image collected by a camera (i.e., the image in the embodiments of the present disclosure), and a face image is recognized. In order to improve the accuracy rate of face recognition, reduce the false recognition rate and the missed recognition rate, and avoid unnecessary recognition, the images are first provided to an apparatus or a device for determining face image quality and are screened and filtered, so as to obtain high-quality face images. By evaluating the face image quality, the images having a large side-face angle, a deeply lowered head, or an extremely low face pixel count (i.e., an extremely small face size) may be screened out because they are difficult to recognize accurately. Through the methods, apparatuses, or devices for determining face image quality according to the embodiments of the present disclosure, the quality of a face in various images may be obtained, and the images having low face quality and unsuitable for face recognition are effectively filtered out, so as to reduce the number of face recognitions and improve the face recognition efficiency. In a scene where the embodiments of the present disclosure are applied to an embedded device, for which face recognition is time-consuming, the effect is more obvious.
  • The embodiments of the present disclosure have at least the following beneficial technical effects: according to the embodiments of the present disclosure, face image requirements facilitating face recognition are fully considered, a pose angle of a face is evaluated and evaluation indexes are designed in combination with the face size, the face image quality is comprehensively evaluated based on the combination of a yaw angle and a pitch angle of the face and the face size, and conditions which may cause inaccurate evaluation of the pose angle of the face are corrected. The method is high in real-time performance, easy to apply, and accurate and reliable. By obtaining the size information of the face to reflect the face definition affecting the face recognition result instead of directly obtaining the face definition in the image, the method facilitates improving the operation efficiency and increasing the real-time performance of the face quality evaluation. By removing the images with poor face quality and selecting the images with high face quality for face detection and recognition, the accuracy rate of face recognition may be improved, the operation data volume of face recognition may be reduced, and the face recognition speed of a valid image may be improved.
  • A person of ordinary skill in the art may understand that all or some operations for implementing the foregoing method embodiments may be achieved by a program instructing related hardware; the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the operations of the foregoing method embodiments are executed. Moreover, the foregoing storage medium includes various media capable of storing program codes, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • The various embodiments in this description are all described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. The system embodiments correspond to the method embodiments substantially and therefore are only described briefly; for the associated parts, refer to the descriptions of the method embodiments.
  • The methods and the apparatuses of the present disclosure may be implemented in many manners. For example, the methods and apparatuses of the present disclosure may be implemented by using software, hardware, firmware, or any combination of software, hardware, and firmware. Unless otherwise specially stated, the foregoing sequences of operations of the methods are merely for description, and are not intended to limit the operations of the methods of the present disclosure. In addition, in some embodiments, the present disclosure may be implemented as programs recorded in a recording medium. The programs include machine-readable instructions for implementing the methods according to the present disclosure. Therefore, the present disclosure further covers the recording medium storing the programs for performing the methods according to the present disclosure.
  • The descriptions of the present disclosure are provided for the purposes of example and description, and are not intended to be exhaustive or to limit the present disclosure to the disclosed form. Many modifications and changes are obvious to a person of ordinary skill in the art. The embodiments are selected and described to better explain the principles and practical applications of the present disclosure, and to enable a person of ordinary skill in the art to understand the present disclosure, so as to design various embodiments with various modifications suited to particular uses.

Claims (20)

What is claimed is:
1. A method for determining face image quality, comprising:
obtaining at least one of pose angle information of a face in an image or size information of the face; and
obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face,
wherein the obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face comprises:
obtaining, according to the pose angle information of the face, a score of a pose angle of the face;
obtaining, according to the size information of the face, a score of a face size; and
obtaining, according to the score of the pose angle of the face and the score of the face size, a quality score of the face in the image,
wherein the obtaining, according to the size information of the face, a score of a face size comprises:
obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, the area of the face detection bounding box being obtained by a product of the length and width of face detection bounding box,
wherein the obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box comprises:
selecting a smaller value min in the length and width of the face detection bounding box; and
obtaining, according to the smaller value min, the score Qrect of the face size by calculation.
2. The method according to claim 1, wherein the obtaining the pose angle information of the face in the image comprises:
obtaining a face detection bounding box in the image and key point coordinates of the face determined according to the face detection bounding box; and
obtaining, according to the key point coordinates of the face, the pose angle information of the face, the pose angle information of the face comprising a yaw angle of the face and a pitch angle of the face.
3. The method according to claim 2, wherein the obtaining a face detection bounding box in the image and key point coordinates of the face determined according to the face detection bounding box comprises:
performing face detection on the image;
obtaining the face detection bounding box; and
performing key point positioning on the face in the face detection bounding box;
obtaining the key point coordinates of the face.
4. The method according to claim 2, wherein the obtaining the size information of the face comprises:
obtaining, according to a size of the face detection bounding box, the size information of the face, the size of the face detection bounding box comprising at least one of length of the face detection bounding box or width of the face detection bounding box.
5. The method according to claim 1, wherein the obtaining a score of a pose angle of the face according to the pose angle information of the face comprises:
obtaining, according to the yaw angle and pitch angle of the face, a score Qyaw of the yaw angle yaw of the face and a score Qpitch of the pitch angle (“pitch”) of the face by calculation.
6. The method according to claim 1, wherein the obtaining, according to the score of the pose angle of the face and the score of the face size, a quality score of the face in the image comprises:
obtaining, according to a score of the yaw angle and a weight of the score of the yaw angle, a score of the pitch angle and a weight of the score of the pitch angle, and the score of the face size and a weight of the score of the face size, the quality score of the face in the image by calculation.
7. The method according to claim 6, wherein the weight of the score of the yaw angle is greater than the weight of the score of the pitch angle and the weight of the score of the face size.
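A brief Python sketch of the weighted combination described in claims 5 to 7 follows; the angle-to-score mapping and the specific weight values are assumptions for illustration only, chosen so that the yaw weight exceeds the pitch and size weights as claim 7 requires.

    def angle_score(angle_deg, max_angle=90.0):
        """Map an absolute pose angle (degrees) to a score in [0, 1]; the
        linear fall-off and the 90-degree cut-off are illustrative choices."""
        return max(0.0, 1.0 - abs(angle_deg) / max_angle)

    def face_quality_score(yaw_deg, pitch_deg, q_rect,
                           w_yaw=0.5, w_pitch=0.3, w_rect=0.2):
        """Weighted sum of the yaw-angle, pitch-angle and face-size scores.

        The default weights are hypothetical; they simply satisfy the
        constraint of claim 7 that the weight of the yaw-angle score exceeds
        the weights of the pitch-angle and face-size scores.
        """
        q_yaw = angle_score(yaw_deg)
        q_pitch = angle_score(pitch_deg)
        return w_yaw * q_yaw + w_pitch * q_pitch + w_rect * q_rect

Under these assumed weights, a frontal face (yaw and pitch both 0 degrees) with a size score of 1.0 yields a quality score of 1.0, while a yaw of 45 degrees alone lowers it to 0.75.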
8. The method according to claim 2, wherein the method further comprises:
obtaining a confidence score of the key point coordinates of the face, the confidence score of the key point coordinates being used for indicating accuracy of the key point coordinates of the face;
after obtaining the score of the pose angle of the face, the method further comprises:
correcting the score of the pose angle of the face by using the confidence score of the key point coordinates; and
the obtaining, according to the score of the pose angle of the face and the score of the face size, a quality score of the face in the image comprises:
obtaining, according to the corrected score of the pose angle of the face and the score of the face size, the quality score of the face in the image.
9. The method according to claim 8, wherein the correcting the score of the pose angle of the face by using the confidence score of the key point coordinates comprises:
determining a correction parameter a for the score Qyaw of the yaw angle (“yaw”) of the face and the score Qpitch of the pitch angle (“pitch”) of the face by using the confidence score of the key point coordinates, and
respectively calculating a product of the correction parameter a and the Qyaw as well as a product of the correction parameter a and the Qpitch, the product of the correction parameter a and the Qyaw being used as a corrected score of the yaw angle and the product of the correction parameter a and the Qpitch being used as a corrected score of the pitch angle;
wherein in the case that the confidence score of the key point coordinates is smaller than a predetermined value, the value of the correction parameter a is a first value; in the case that the confidence score of the key point coordinates is greater than or equal to the predetermined value, the value of the correction parameter a is a second value; and the first value is smaller than the second value.
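Claims 8 and 9 describe correcting the pose-angle scores with a parameter a chosen from the key point confidence; the sketch below uses placeholder values for the predetermined threshold and for the first and second values, none of which are fixed by the claims.

    def correct_pose_scores(q_yaw, q_pitch, kp_confidence,
                            threshold=0.6, first_value=0.5, second_value=1.0):
        """Scale the yaw and pitch scores by the correction parameter a.

        a takes the smaller first value when the key point confidence is
        below the predetermined threshold, and the larger second value
        otherwise; 0.6, 0.5 and 1.0 here are placeholder assumptions.
        """
        a = first_value if kp_confidence < threshold else second_value
        return a * q_yaw, a * q_pitch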
10. The method according to claim 1, wherein the obtaining at least one of pose angle information of a face in an image or size information of the face and obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face comprises:
obtaining at least one of: pose angle information of a face in at least one image in multiple images or size information of the face; and
obtaining quality information of the face in the at least one image in multiple images on the basis of at least one of the pose angle information of the face or the size information of the face,
wherein the method further comprises: selecting, according to the quality information of the face in the at least one image in multiple images, at least one image with high face quality for face detection.
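For the multi-image case of claim 10, a straightforward way to keep the highest-quality frames is sketched below; the per-image quality function and the number of frames retained are assumptions.

    def select_best_faces(images, quality_fn, top_k=1):
        """Rank images by the quality score of the face they contain and keep
        the top_k; quality_fn is assumed to return that score, e.g. a weighted
        combination of pose-angle and face-size scores."""
        ranked = sorted(images, key=quality_fn, reverse=True)
        return ranked[:top_k]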
11. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein execution of the instructions by the processor causes the processor to perform:
obtaining at least one of pose angle information of a face in an image or size information of the face; and
obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face,
wherein the obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face comprises:
obtaining, according to the pose angle information of the face, a score of a pose angle of the face;
obtaining, according to the size information of the face, a score of a face size; and
obtaining, according to the score of the pose angle of the face and the score of the face size, a quality score of the face in the image,
wherein the obtaining, according to the size information of the face, a score of a face size comprises:
obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, the area of the face detection bounding box being obtained as a product of the length and the width of the face detection bounding box,
wherein the obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box comprises:
selecting a smaller value min from the length and the width of the face detection bounding box; and
obtaining, according to the smaller value min, the score Qrect of the face size by calculation.
12. The electronic device according to claim 11, wherein the obtaining the pose angle information of the face in the image comprises:
obtaining a face detection bounding box in the image and key point coordinates of the face determined according to the face detection bounding box; and
obtaining, according to the key point coordinates of the face, the pose angle information of the face, the pose angle information of the face comprising a yaw angle of the face and a pitch angle of the face;
wherein the obtaining the size information of the face comprises:
obtaining, according to a size of the face detection bounding box, the size information of the face, the size of the face detection bounding box comprising at least one of length of the face detection bounding box or width of the face detection bounding box.
13. The electronic device according to claim 12, wherein the obtaining a face detection bounding box in the image and key point coordinates of the face determined according to the face detection bounding box comprises:
performing face detection on the image;
obtaining the face detection bounding box;
performing key point positioning on the face in the face detection bounding box; and
obtaining the key point coordinates of the face.
14. The electronic device according to claim 11, wherein the obtaining a score of a pose angle of the face according to the pose angle information of the face comprises:
obtaining, according to the yaw angle and pitch angle of the face, a score Qyaw of the yaw angle (“yaw”) of the face and a score Qpitch of the pitch angle (“pitch”) of the face by calculation.
15. The electronic device according to claim 11, wherein the obtaining, according to the score of the pose angle of the face and the score of the face size, a quality score of the face in the image comprises:
obtaining, according to a score of the yaw angle and a weight of the score of the yaw angle, a score of the pitch angle and a weight of the score of the pitch angle, and the score of the face size and a weight of the score of the face size, the quality score of the face in the image by calculation.
16. The electronic device according to claim 15, wherein the weight of the score of the yaw angle is greater than the weight of the score of the pitch angle and the weight of the score of the face size.
17. The electronic device according to claim 11, wherein execution of the instructions by the processor further causes the processor to perform:
obtaining a confidence score of key point coordinates of the face, the confidence score of the key point coordinates being used for indicating accuracy of the key point coordinates of the face;
after obtaining the score of the pose angle of the face, execution of the instructions by the processor causes the processor to further perform:
correcting the score of the pose angle of the face by using the confidence score of the key point coordinates; and
the obtaining, according to the score of the pose angle of the face and the score of the face size, a quality score of the face in the image comprises:
obtaining, according to the corrected score of the pose angle of the face and the score of the face size, the quality score of the face in the image.
18. The electronic device according to claim 17, wherein the correcting the score of the pose angle of the face by using the confidence score of the key point coordinates comprises:
determining a correction parameter a for the score Qyaw of the yaw angle (“yaw”) of the face and the score Qpitch of the pitch angle (“pitch”) of the face by using the confidence score of the key point coordinates, and
respectively calculating a product of the correction parameter a and the Qyaw as well as a product of the correction parameter a and the Qpitch, the product of the correction parameter a and the Qyaw being used as a corrected score of the yaw angle and the product of the correction parameter a and the Qpitch being used as a corrected score of the pitch angle;
wherein in the case that the confidence score of the key point coordinates is smaller than a predetermined value, the value of the correction parameter a is a first value; in the case that the confidence score of the key point coordinates is greater than or equal to the predetermined value, the value of the correction parameter a is a second value; and the first value is smaller than the second value.
19. The electronic device according to claim 11, wherein the obtaining at least one of pose angle information of a face in an image or size information of the face and obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face comprises:
obtaining at least one of: pose angle information of a face in at least one image in multiple images or size information of the face; and
obtaining quality information of the face in the at least one image in multiple images on the basis of at least one of the pose angle information of the face or the size information of the face,
wherein execution of the instructions by the processor causes the processor to further perform: selecting, according to the quality information of the face in the at least one image in multiple images, at least one image with high face quality for face detection.
20. A non-transitory computer storage medium, configured to store computer-readable instructions, wherein execution of the instructions by a processor causes the processor to perform:
obtaining at least one of pose angle information of a face in an image or size information of the face; and
obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face,
wherein the obtaining quality information of the face in the image on the basis of at least one of the pose angle information of the face or the size information of the face comprises:
obtaining, according to the pose angle information of the face, a score of a pose angle of the face;
obtaining, according to the size information of the face, a score of a face size; and
obtaining, according to the score of the pose angle of the face and the score of the face size, a quality score of the face in the image,
wherein the obtaining, according to the size information of the face, a score of a face size comprises:
obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box, the area of the face detection bounding box being obtained as a product of the length and the width of the face detection bounding box,
wherein the obtaining the score of the face size on the basis of at least one of length, width, or area of the face detection bounding box comprises:
selecting a smaller value min from the length and the width of the face detection bounding box; and
obtaining, according to the smaller value min, the score Qrect of the face size by calculation.
US17/452,062 2017-05-31 2021-10-23 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media Abandoned US20220044005A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/452,062 US20220044005A1 (en) 2017-05-31 2021-10-23 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
CN201710405232.9 2017-05-31
CN201710405232.9A CN108230293A (en) 2017-05-31 2017-05-31 Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image
PCT/CN2018/087915 WO2018219180A1 (en) 2017-05-31 2018-05-22 Method and apparatus for determining facial image quality, as well as electronic device and computer storage medium
US16/655,235 US11182589B2 (en) 2017-05-31 2019-10-17 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media
US17/452,062 US20220044005A1 (en) 2017-05-31 2021-10-23 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/655,235 Continuation US11182589B2 (en) 2017-05-31 2019-10-17 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media

Publications (1)

Publication Number Publication Date
US20220044005A1 true US20220044005A1 (en) 2022-02-10

Family

ID=62656626

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/655,235 Active 2038-09-02 US11182589B2 (en) 2017-05-31 2019-10-17 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media
US17/452,062 Abandoned US20220044005A1 (en) 2017-05-31 2021-10-23 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/655,235 Active 2038-09-02 US11182589B2 (en) 2017-05-31 2019-10-17 Methods and apparatuses for determining face image quality, electronic devices, and computer storage media

Country Status (6)

Country Link
US (2) US11182589B2 (en)
JP (1) JP6871416B2 (en)
KR (1) KR102320649B1 (en)
CN (1) CN108230293A (en)
SG (1) SG11201909737VA (en)
WO (1) WO2018219180A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837750B (en) * 2018-08-15 2023-11-03 华为技术有限公司 Face quality evaluation method and device
CN109101646B (en) * 2018-08-21 2020-12-18 北京深瞐科技有限公司 Data processing method, device, system and computer readable medium
CN110909568A (en) * 2018-09-17 2020-03-24 北京京东尚科信息技术有限公司 Image detection method, apparatus, electronic device, and medium for face recognition
CN111199165B (en) * 2018-10-31 2024-02-06 浙江宇视科技有限公司 Image processing method and device
CN109409962A (en) * 2018-11-08 2019-03-01 北京旷视科技有限公司 Image processing method, device, electronic equipment, computer readable storage medium
CN109376684B (en) * 2018-11-13 2021-04-06 广州市百果园信息技术有限公司 Face key point detection method and device, computer equipment and storage medium
CN109614910B (en) * 2018-12-04 2020-11-20 青岛小鸟看看科技有限公司 Face recognition method and device
CN109753886B (en) * 2018-12-17 2024-03-08 北京爱奇艺科技有限公司 Face image evaluation method, device and equipment
CN109816628B (en) * 2018-12-20 2021-09-14 深圳云天励飞技术有限公司 Face evaluation method and related product
CN109754461A (en) * 2018-12-29 2019-05-14 深圳云天励飞技术有限公司 Image processing method and related product
CN111739084B (en) * 2019-03-25 2023-12-05 上海幻电信息科技有限公司 Picture processing method, atlas processing method, computer device, and storage medium
CN110147744B (en) * 2019-05-09 2024-05-14 腾讯科技(深圳)有限公司 Face image quality assessment method, device and terminal
CN110796108B (en) * 2019-11-04 2022-05-17 北京锐安科技有限公司 Method, device and equipment for detecting face quality and storage medium
CN111028216A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image scoring method and device, storage medium and electronic equipment
CN111104909A (en) * 2019-12-20 2020-05-05 深圳市商汤科技有限公司 Image processing method, image processing device, computer equipment and storage medium
KR20210153826A (en) 2020-06-11 2021-12-20 삼성전자주식회사 Display apparatus and control method thereof
CN112307900A (en) * 2020-09-27 2021-02-02 北京迈格威科技有限公司 Method and device for evaluating facial image quality and electronic equipment
CN112200804A (en) * 2020-11-09 2021-01-08 北京地平线信息技术有限公司 Image detection method and device, computer readable storage medium and electronic equipment
CN112651321A (en) * 2020-12-21 2021-04-13 浙江商汤科技开发有限公司 File processing method and device and server
CN113297423A (en) * 2021-05-24 2021-08-24 深圳市优必选科技股份有限公司 Pushing method, pushing device and electronic equipment
CN113409287A (en) * 2021-06-28 2021-09-17 北大方正集团有限公司 Method, device and equipment for evaluating quality of face image and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060193494A1 (en) * 2001-12-31 2006-08-31 Microsoft Corporation Machine vision system and method for estimating and tracking facial pose
US20060233422A1 (en) * 2005-04-16 2006-10-19 Microsoft Corporation Machine vision system and method for estimating and tracking facial pose
US20080298637A1 (en) * 2003-05-30 2008-12-04 Microsoft Corporation Head Pose Assessment Methods and Systems
US20090303342A1 (en) * 2006-08-11 2009-12-10 Fotonation Ireland Limited Face tracking for controlling imaging parameters
US8254647B1 (en) * 2012-04-16 2012-08-28 Google Inc. Facial image quality assessment
US20130251244A1 (en) * 2012-03-20 2013-09-26 Microsoft Corporation Real time head pose estimation
US20150049910A1 (en) * 2013-08-19 2015-02-19 Kodak Alaris Inc. Imaging workflow using facial and non-facial features
US20150110351A1 (en) * 2013-10-23 2015-04-23 Imagination Technologies Limited Face Detection
US20150117786A1 (en) * 2013-10-28 2015-04-30 Google Inc. Image cache for replacing portions of images
US20170124385A1 (en) * 2007-12-31 2017-05-04 Applied Recognition Inc. Face authentication to mitigate spoofing

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007028555A (en) * 2005-07-21 2007-02-01 Sony Corp Camera system, information processing device, information processing method, and computer program
JP2007206898A (en) * 2006-01-31 2007-08-16 Toshiba Corp Face authentication device and access management device
WO2008058253A2 (en) 2006-11-08 2008-05-15 Cryptometrics, Inc. System and method for parallel image processing
JP5127531B2 (en) * 2008-03-27 2013-01-23 セコム株式会社 Image monitoring device
JP5225317B2 (en) * 2010-04-06 2013-07-03 キヤノン株式会社 IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
TWI439951B (en) * 2010-11-08 2014-06-01 Inst Information Industry Facial gender identification system and method and computer program products thereof
JP5766564B2 (en) * 2011-09-15 2015-08-19 株式会社東芝 Face authentication apparatus and face authentication method
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN103824068B (en) * 2014-03-19 2018-06-01 上海看看智能科技有限公司 Face payment authentication system and method
CN105046227B (en) * 2015-07-24 2018-07-31 上海依图网络科技有限公司 A kind of key frame acquisition methods for portrait video system
CN105631439B (en) * 2016-02-18 2019-11-08 北京旷视科技有限公司 Face image processing process and device
CN106295585B (en) * 2016-08-16 2019-01-11 深圳云天励飞技术有限公司 A kind of filtering selection method and system for taking into account real-time and face quality
CN106446851A (en) * 2016-09-30 2017-02-22 厦门大图智能科技有限公司 Visible light based human face optimal selection method and system

Also Published As

Publication number Publication date
KR102320649B1 (en) 2021-11-03
KR20190129947A (en) 2019-11-20
SG11201909737VA (en) 2019-11-28
WO2018219180A1 (en) 2018-12-06
JP2020517027A (en) 2020-06-11
CN108230293A (en) 2018-06-29
JP6871416B2 (en) 2021-05-12
US20200050835A1 (en) 2020-02-13
US11182589B2 (en) 2021-11-23

Similar Documents

Publication Publication Date Title
US20220044005A1 (en) Methods and apparatuses for determining face image quality, electronic devices, and computer storage media
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
US8718321B2 (en) Method of image processing
US20230030267A1 (en) Method and apparatus for selecting face image, device, and storage medium
WO2020019765A1 (en) Depth estimation method and apparatus for binocular image, and device, program and medium
US10121089B2 (en) Object information extraction apparatus, object information extraction program, and object information extraction method
US10254831B2 (en) System and method for detecting a gaze of a viewer
US11232586B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
US11386710B2 (en) Eye state detection method, electronic device, detecting apparatus and computer readable storage medium
US20190066311A1 (en) Object tracking
CN108875474A (en) Assess the method, apparatus and computer storage medium of face recognition algorithms
CN113962306A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US11462052B2 (en) Image processing device, image processing method, and recording medium
CN112183356A (en) Driving behavior detection method and device and readable storage medium
CN110658918A (en) Method, apparatus, and medium for positioning an eye-tracking camera of video glasses
CN111738085B (en) System construction method and device for realizing automatic driving simultaneous positioning and mapping
JP2007026308A (en) Image processing method and image processor
CN111784660A (en) Method and system for analyzing face correcting degree of face image
CN115937950A (en) Multi-angle face data acquisition method, device, equipment and storage medium
CN113284115B (en) Steel coil tower shape identification method, system, medium and terminal
CN114066980A (en) Object detection method and device, electronic equipment and automatic driving vehicle
CN114119990A (en) Method, apparatus and computer program product for image feature point matching
KR20130081126A (en) Method for hand-gesture recognition and apparatus thereof
US20240153274A1 (en) Artificial intelligence enabled distance event detection using image analysis
CN114973365A (en) Method, device and medium for correcting portrait deflection

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: SHENZHEN SENSETIME TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XU, LIFEI;YU, CHENDI;LIU, WENZHI;REEL/FRAME:058466/0759

Effective date: 20190924

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION