CN110619628B - Face image quality assessment method - Google Patents

Face image quality assessment method

Info

Publication number
CN110619628B
Authority
CN
China
Prior art keywords
face
score
portrait picture
calculating
illumination
Prior art date
Legal status
Active
Application number
CN201910847009.9A
Other languages
Chinese (zh)
Other versions
CN110619628A (en)
Inventor
张振斌
楼燚航
白燕
张永祥
陈杰
Current Assignee
Boyun Vision Beijing Technology Co ltd
Original Assignee
Boyun Vision Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Boyun Vision Beijing Technology Co ltd
Priority to CN201910847009.9A
Publication of CN110619628A
Application granted
Publication of CN110619628B
Legal status: Active

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/13 — Image analysis; segmentation; edge detection
    • G06T 7/90 — Image analysis; determination of colour characteristics
    • G06V 40/161 — Human faces; detection; localisation; normalisation
    • G06V 40/168 — Human faces; feature extraction; face representation
    • G06T 2207/30168 — Indexing scheme for image analysis; image quality inspection
    • G06T 2207/30201 — Indexing scheme for image analysis; human face


Abstract

The invention discloses a face image quality assessment method, comprising the following steps: S1, acquiring a portrait picture to be evaluated; S2, performing blur judgment on the portrait picture, and calculating a blur score from the pixel mean square error of the picture; S3, judging the illumination quality of the portrait picture; S31, converting the picture from the RGB color space to the HSV color space; S32, extracting the face region from the picture with a face detection algorithm; S33, calculating the brightness mean within the face region as the quantized value of face illumination; S34, calculating an illumination quality score from the quantized value of face illumination; S4, judging the face pose of the portrait picture, and calculating a face pose score from the face pose angle values; and S5, assigning different weights to the blur score, the illumination quality score and the face pose score of the portrait picture, and summing them to obtain the quality assessment score of the face picture.

Description

Face image quality assessment method
Technical Field
The invention relates to the field of image processing, in particular to a face image quality assessment method.
Background
Face recognition technology has broad application prospects in video surveillance, public security, access control and other fields. It draws on knowledge from multiple disciplines and has received close attention and extensive study from governments and companies. Face recognition is nevertheless an inherently challenging problem. The quality of the face picture has a great influence on face detection performance, and because face recognition is a non-contact biometric technique, many factors can degrade face picture quality.
There are already many algorithms for face image quality assessment. Face image quality evaluation based on texture feature fusion: selecting high-quality face images of the same person from a video sequence is a key step in face recognition technology. To improve the reliability of face image evaluation, texture features such as HOG, GIST, GABOR and LBP are first extracted from the face image; a classifier is then trained on labelled data to score each single feature; the multi-feature scores are fused into a feature vector; finally, a new feature vector is obtained by raising the dimension with a polynomial kernel function, and an SVM classifier is trained on this feature to regress the face image quality score. This feature-fusion approach can effectively improve face image quality evaluation; in particular, the HOG-GIST feature combination is efficient, and the target face obtains reliable evaluation results under different poses and occlusion conditions. However, the indices are too few to cover all cases.
Face image quality evaluation fusing two-level evaluation indices: to counter the interference of pose and illumination, the symmetry of the face is evaluated from the histogram distance of sub-regions, which measures the influence of asymmetric illumination and pose on face quality. With an evaluation strategy that combines a first-level evaluation of the quality of the original image containing the face and a second-level evaluation of the effective face region, the feedback from the first-level evaluation can effectively guide the construction and improvement of the image acquisition environment and provide high-quality image sources for later face detection and recognition. The main evaluation indices include physical parameters such as contrast, suitability, symmetry, sharpness and effective face area. However, the individual index weights are difficult to determine and must be tuned on a large database.
Face image quality evaluation based on convolutional neural networks: to address the low recognition rate caused by low-quality face images in face recognition, a convolutional neural network model is first built to extract deep semantic information about face image quality; face images are then collected in an unconstrained environment and filtered by traditional image processing methods and manual screening to obtain a data set for training the model parameters; next, the mapping from face image to quality category is fitted through accelerated training on a graphics processing unit (GPU); finally, the predicted probability of the high-quality category is taken as the quality score of the input image, establishing a quality scoring mechanism for face images. Compared with the VGG-16 network, the model's accuracy drops by 0.21 percent, but its parameter count drops by 98 percent, greatly improving operational efficiency; the model also discriminates well among face blur, illumination, pose and occlusion. Its drawback is that the accuracy and adaptability of the model still need improvement.
Disclosure of Invention
In view of the above problems, the invention aims to provide a face image quality assessment method with higher accuracy.
In order to achieve the above object, the technical scheme of the present invention is as follows:
a face image quality assessment method comprises the following steps:
s1, acquiring a portrait picture to be evaluated;
s2, carrying out fuzzy judgment on the portrait picture;
s21, detecting edge points of a portrait picture;
s22, calculating the mean square error of pixels in the portrait picture, and calculating the fuzzy amount score of the portrait picture through the mean square error of the pixels;
s3, judging the illumination quality of the portrait picture;
s31, converting an RGB color space of the portrait picture into an HSV color space;
s32, intercepting a face area in the portrait picture through a face detection algorithm;
s33, calculating a brightness average value in the face area as a quantized value of face illumination;
s34, calculating an illumination quality score through a quantized value of face illumination;
s4, judging the face gesture of the portrait picture;
s41, detecting face key point information in a portrait picture by adopting a deep learning algorithm;
s42, calculating a face attitude angle value through the face key point information;
s43, calculating a face gesture score through a face gesture angle value;
and S5, giving different weights to the fuzzy quantity score, the illumination quality score and the face posture score of the portrait picture, and summing to obtain the quality evaluation score value of the face picture.
Further, detecting the edge points of the portrait picture in step S21 comprises the following steps:
s211, calculating the horizontal absolute difference of the current pixel point, wherein the formula is as follows:
D_h(x,y) = |f(x,y+1) − f(x,y−1)|;
wherein f(x,y) denotes the current pixel point, x ∈ [1, M], y ∈ [1, N];
s212, calculating the average value of the horizontal absolute differences of all pixel points in the portrait picture, wherein the formula is as follows:
D_h-mean = (1 / (M·N)) · Σ_{x=1}^{M} Σ_{y=1}^{N} D_h(x,y);
s213, when the current pixel point is D h (x, y) is greater than D h-mean The current pixel point is a candidate edge point, and D is the current pixel point h (x, y) two adjacent points { C in the horizontal direction thereof h (x,y-1),C h (x, y+1) } is large, the current pixel point is confirmed as an edge point; the judgment formula of the edge points is as follows:
Figure BDA0002195582680000042
Figure BDA0002195582680000043
further, the calculation formula of the illumination quality score in step S34 is as follows:
[equation image not reproduced: the illumination quality score attains its maximum at V_f = 0.5 and decreases as V_f moves away from 0.5]
wherein V_f is the quantized value of face illumination,
[equation image not reproduced: V_f is the normalized brightness mean of the face region, adjusted by coefficient weighting]
further, the step S42 of calculating the face pose angle value according to the face key point information includes the following steps:
s421, calculating a rotation vector of the face gesture through the face key point information;
s422, converting the rotation vector of the face gesture into a quaternion;
s423, converting the quaternion into an Euler angle, wherein the Euler angle is a face gesture angle value.
Further, in step S43, the formula for calculating the face pose score from the face pose angle values is as follows:
[equation image not reproduced: the face pose score is computed from the weighted absolute yaw, pitch and roll angles with weights ω1, ω2, ω3 and bias parameter b]
wherein the weights are ω1 = 0.5, ω2 = 1.2 and ω3 = 1.1 respectively, and the bias parameter is b = 15; yaw is the head-shake angle value; pitch is the head-nod angle value; roll is the head-tilt angle value.
Further, the calculation formula of the quality assessment score in step S5 is as follows:
score = ω1·score_blur + ω2·score_light + ω3·score_pose;
wherein the weights are ω1 = 0.3, ω2 = 0.3 and ω3 = 0.4 respectively; score_blur is the blur score, score_light is the illumination quality score, and score_pose is the face pose score.
Compared with the prior art, the invention has the following advantages and positive effects:
By combining blur judgment, illumination quality judgment and face pose judgment, the invention makes the quality assessment result more objective and effectively improves the accuracy of image assessment, so that a computer can assess image quality in a way that reflects human subjective perception of it, providing a basis for evaluating the performance of an image processing system; on the other hand, the invention improves the overall performance of a face recognition system, effectively reduces errors caused by low-quality input images, saves system matching time, and thereby improves the working efficiency of the face recognition system.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a face key point;
FIG. 2 is a schematic view of a face pose;
fig. 3 is a flowchart of an application of picture quality assessment in a face recognition system.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, embodiments of the invention. All other embodiments, modifications, equivalents and improvements obtained by those skilled in the art without inventive effort on the basis of this disclosure fall within the scope of the invention.
As shown in figs. 1, 2 and 3, the invention provides a method and a system for evaluating the quality of a portrait picture, comprising blur judgment, illumination quality judgment and face pose judgment.
(1) Blur judgment: quality detection can be performed through the face detection interface against a corresponding threshold value, ensuring that the face quality meets the requirements of subsequent business operations.
To handle the problem that some methods cannot adapt to gray-scale images, blur evaluation is divided into two steps: color image edge detection is performed first, followed by blur determination. Blur is estimated here by computing the difference between the current pixel and the mean of the pixels in its neighborhood. We use f(x,y) to denote the current pixel point, where x ∈ [1, M], y ∈ [1, N]. The horizontal absolute difference is defined as follows:
D_h(x,y) = |f(x,y+1) − f(x,y−1)|
the average value of the horizontal absolute differences of the whole picture is:
D_h-mean = (1 / (M·N)) · Σ_{x=1}^{M} Σ_{y=1}^{N} D_h(x,y)
If D_h(x,y) of the current pixel point is greater than D_h-mean, the pixel point is a candidate edge point C_h(x,y); if C_h(x,y) is greater than both of its two horizontal neighbors {C_h(x,y−1), C_h(x,y+1)}, the pixel is confirmed as an edge point. The judgment of the edge point E_h(x,y) is summarized as follows:
C_h(x,y) = f(x,y) if D_h(x,y) > D_h-mean, otherwise C_h(x,y) = 0
E_h(x,y) = 1 if C_h(x,y) > C_h(x,y−1) and C_h(x,y) > C_h(x,y+1), otherwise E_h(x,y) = 0
Next we detect whether the edges are blurred, calculating the blur amount using the mean square error. If a picture has high variance, it has a wide frequency response range, indicating a normal, correctly focused picture. If a picture has low variance, it has a narrow frequency response range, meaning the picture contains few edges; the more blurred the picture, the fewer its edges. A sketch of this computation is given below.
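As a minimal illustrative sketch (not the patent's reference implementation), the blur check above can be written as follows, assuming a grayscale uint8 input; the function names and the variance normalization constant are assumptions for illustration:

```python
import numpy as np

def horizontal_abs_diff(img: np.ndarray) -> np.ndarray:
    # D_h(x, y) = |f(x, y+1) - f(x, y-1)| for interior columns
    f = img.astype(np.float64)
    d = np.zeros_like(f)
    d[:, 1:-1] = np.abs(f[:, 2:] - f[:, :-2])
    return d

def edge_points(img: np.ndarray) -> np.ndarray:
    # Candidate edge points exceed the mean horizontal difference; confirmed
    # edge points are also larger than both horizontal neighbors.
    d = horizontal_abs_diff(img)
    d_mean = d[:, 1:-1].mean()               # D_h-mean over valid columns
    c = np.where(d > d_mean, d, 0.0)
    e = np.zeros(img.shape, dtype=bool)
    e[:, 1:-1] = (c[:, 1:-1] > c[:, :-2]) & (c[:, 1:-1] > c[:, 2:])
    return e

def blur_score(img: np.ndarray) -> float:
    # Low pixel variance -> narrow frequency response -> few edges -> blurry.
    # The normalization constant 1000.0 is an assumed choice, not from the patent.
    var = img.astype(np.float64).var()
    return float(min(1.0, var / 1000.0))
```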
(2) Illumination quality judgment: in real scenes (particularly unconstrained scenes), the captured images may differ greatly in illumination. In image-based object recognition, illumination has a large influence on recognition accuracy, so illumination quality judgment is an important component of image quality evaluation. The invention considers that good illumination lies within a range: illumination outside a reasonable range, such as over-exposure or over-darkness, negatively affects subsequent processing. To better extract brightness information, the image is converted from the common RGB color space to the HSV color space, where H, S and V represent hue, saturation and value (brightness) respectively; the brightness parameter accurately measures the illumination condition of an image. In addition, to remove background interference, the face region is extracted with a face detection algorithm, and the brightness mean within this region alone is computed as the quantized value V_f of face illumination. Quality assessment requires determining whether the illumination of the target image is within a reasonable range and assigning it a quantized score. This patent considers the image illumination to be in the optimal balanced state when the normalized V_f = 0.5; the farther V_f is from this balanced state, the worse the illumination quality, i.e. the smaller the score. The illumination quality scoring formula is therefore as follows:
[equation image not reproduced: the illumination quality score attains its maximum at V_f = 0.5 and decreases as V_f moves away from 0.5]
in practical evaluation, the light can be considered as balanced illumination within a certain range of 0.5. Thus, here the brightness mean and the mapping rationality of the quality score are adjusted by coefficient weighting, where
[equation image not reproduced: V_f is the normalized brightness mean of the face region, adjusted by coefficient weighting]
Thus, a reasonable quantitative assessment of the brightness of the image can be obtained.
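A minimal sketch of this illumination check, assuming an OpenCV BGR input and a face box obtained from any face detector; the linear mapping around V_f = 0.5 is an assumed reconstruction, since the patent's exact scoring formula is only available as an equation image:

```python
import cv2
import numpy as np

def illumination_score(img_bgr: np.ndarray, face_box: tuple) -> float:
    # face_box = (x, y, w, h) from a face detector (assumed available)
    x, y, w, h = face_box
    face = img_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(face, cv2.COLOR_BGR2HSV)
    v_f = hsv[:, :, 2].mean() / 255.0            # normalized brightness mean V_f
    # Assumed mapping: peak score at V_f = 0.5, falling off linearly.
    return max(0.0, 1.0 - 2.0 * abs(v_f - 0.5))
```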
(3) Face pose estimation: face pose estimation mainly obtains the angle information of the face orientation. The face pose information in the invention is represented by three Euler angles (Yaw: head shake; Pitch: head nod; Roll: head tilt). The 68 face key points (as shown in fig. 1) are detected by a deep learning algorithm, and the model then outputs the three predicted angle values.
Any rotation in three-dimensional space can be represented as a rotation about some axis by some angle, i.e. the Axis-Angle representation. The axis can be represented by a three-dimensional vector (x, y, z) and the angle by a value θ, so intuitively a four-dimensional vector (θ, x, y, z) can represent any rotation in three dimensions. Note that the three-dimensional vector (x, y, z) is used here only to represent the direction of the axis, so a more compact representation uses a unit vector for the axis direction and encodes the angle θ as the length of the three-dimensional vector. Thus, provided (x, y, z) is a unit vector, any rotation in three dimensions can be represented by a three-dimensional vector (θx, θy, θz). This is the Rotation Vector representation.
The quaternion is also a common rotation representation. Assuming (x, y, z) is the unit vector of the axis direction and θ is the angle rotated about the axis, the quaternion can be expressed as
q = (cos(θ/2), x·sin(θ/2), y·sin(θ/2), z·sin(θ/2))
We first convert the rotation vector into a quaternion, because the conversion from a quaternion to Euler angles is simpler. In 3D space, the rotation of an object can be represented by three Euler angles: pitch rotates about the X axis and is called the pitch angle; yaw rotates about the Y axis and is called the yaw angle; roll rotates about the Z axis and is called the roll angle (as shown in fig. 2). The order of these three angles affects the rotation result.
1) Definition of quaternion
q = [w x y z]^T
|q|² = w² + x² + y² + z² = 1
A quaternion can be constructed by the rotation axis and the angle of rotation about that axis:
w = cos(α/2)
x = sin(α/2)·cos(β_x)
y = sin(α/2)·cos(β_y)
z = sin(α/2)·cos(β_z)
where α is the angle of rotation about the rotation axis, and cos(β_x), cos(β_y), cos(β_z) are the components of the rotation axis along the X, Y and Z axes (thereby defining the rotation axis).
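A minimal sketch of this construction for a rotation vector (the axis-angle form with the angle encoded as the vector's length); the function name is illustrative:

```python
import numpy as np

def rotvec_to_quaternion(rvec: np.ndarray) -> np.ndarray:
    # Rotation vector (theta*x, theta*y, theta*z) -> quaternion [w, x, y, z].
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])   # identity rotation
    axis = np.asarray(rvec, dtype=np.float64).ravel() / theta  # unit axis
    w = np.cos(theta / 2.0)
    return np.concatenate(([w], axis * np.sin(theta / 2.0)))
```

In practice such a rotation vector is commonly obtained by solving a PnP problem on the detected key points (e.g. with OpenCV's solvePnP), though the patent does not name a specific solver.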
2) Conversion of Euler angles to quaternions
w = cos(φ/2)·cos(θ/2)·cos(ψ/2) + sin(φ/2)·sin(θ/2)·sin(ψ/2)
x = sin(φ/2)·cos(θ/2)·cos(ψ/2) − cos(φ/2)·sin(θ/2)·sin(ψ/2)
y = cos(φ/2)·sin(θ/2)·cos(ψ/2) + sin(φ/2)·cos(θ/2)·sin(ψ/2)
z = cos(φ/2)·cos(θ/2)·sin(ψ/2) − sin(φ/2)·sin(θ/2)·cos(ψ/2)
where φ, θ and ψ denote the rotations about the X (pitch), Y (yaw) and Z (roll) axes respectively.
3) Conversion of quaternion to euler angle
φ = arctan( 2(w·x + y·z) / (1 − 2(x² + y²)) )
θ = arcsin( 2(w·y − z·x) )
ψ = arctan( 2(w·z + x·y) / (1 − 2(y² + z²)) )
The ranges of arctan and arcsin are
arctan ∈ (−π/2, π/2), arcsin ∈ [−π/2, π/2]
These do not cover all orientations (only the range of the angle θ is satisfied), so arctan must be replaced with atan2:
φ = atan2( 2(w·x + y·z), 1 − 2(x² + y²) )
θ = arcsin( 2(w·y − z·x) )
ψ = atan2( 2(w·z + x·y), 1 − 2(y² + z²) )
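A minimal sketch of this conversion under the axis convention used above (pitch about X, yaw about Y, roll about Z); the clipping of the arcsin argument is a standard numerical guard added here:

```python
import numpy as np

def quaternion_to_euler(q: np.ndarray) -> tuple:
    # q = [w, x, y, z] -> (pitch, yaw, roll) in degrees, using atan2 so the
    # full range of orientations is covered.
    w, x, y, z = q
    pitch = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    yaw = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    roll = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return tuple(np.degrees([pitch, yaw, roll]))
```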
The key point detection algorithm outputs the key point positions together with the Yaw (head shake), Pitch (head nod) and Roll (head tilt) angles. We calculate the face pose score according to the following formula, with the weights taken as ω1 = 0.5, ω2 = 1.2, ω3 = 1.1 and bias b = 15.
[equation image not reproduced: the face pose score is computed from the weighted absolute yaw, pitch and roll angles with weights ω1, ω2, ω3 and bias parameter b]
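Since the scoring formula itself survives only as an equation image, the following sketch assumes a simple linear-penalty form using the quoted weights and bias; the functional form and the 0–1 normalization are assumptions, not the patent's formula:

```python
def pose_score(yaw: float, pitch: float, roll: float) -> float:
    # Weights and bias quoted in the description; the combination below
    # (weighted absolute angles plus bias, mapped to [0, 1]) is assumed.
    w1, w2, w3, b = 0.5, 1.2, 1.1, 15.0
    penalty = w1 * abs(yaw) + w2 * abs(pitch) + w3 * abs(roll) + b
    return max(0.0, 1.0 - penalty / 100.0)
```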
The blur, illumination and pose scores are given different weights and combined to obtain the face image quality score. In actual use ω1 = 0.3, ω2 = 0.3 and ω3 = 0.4, placing the emphasis on face pose, so that:
score = ω1·score_blur + ω2·score_light + ω3·score_pose
wherein score_blur is the blur score, score_light is the illumination quality score, and score_pose is the face pose score.
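Putting the pieces together, a minimal sketch of the final weighted sum (assuming the three component scores have already been normalized to a common range):

```python
def quality_score(score_blur: float, score_light: float,
                  score_pose: float) -> float:
    # Weights from the description above: emphasis on face pose.
    w1, w2, w3 = 0.3, 0.3, 0.4
    return w1 * score_blur + w2 * score_light + w3 * score_pose
```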
By combining blur judgment, illumination quality judgment and face pose judgment, the invention makes the quality assessment result more objective and effectively improves the accuracy of image assessment, so that a computer can assess image quality in a way that reflects human subjective perception of it, providing a basis for evaluating the performance of an image processing system; on the other hand, as shown in fig. 3, the invention improves the overall performance of a face recognition system, effectively reduces errors caused by low-quality input images, saves system matching time, and thereby improves the working efficiency of the face recognition system.

Claims (5)

1. A face image quality assessment method, characterized in that it comprises the following steps:
s1, acquiring a portrait picture to be evaluated;
s2, carrying out fuzzy judgment on the portrait picture;
s21, detecting edge points of a portrait picture;
s22, calculating the mean square error of pixels in the portrait picture, and calculating the fuzzy amount score of the portrait picture through the mean square error of the pixels;
s3, judging the illumination quality of the portrait picture;
s31, converting an RGB color space of the portrait picture into an HSV color space;
s32, intercepting a face area in the portrait picture through a face detection algorithm;
s33, calculating a brightness average value in the face area as a quantized value of face illumination;
s34, calculating an illumination quality score through a quantized value of face illumination;
s4, judging the face gesture of the portrait picture;
s41, detecting face key point information in a portrait picture by adopting a deep learning algorithm;
s42, calculating a face attitude angle value through the face key point information;
s43, calculating a face gesture score through a face gesture angle value;
s5, giving different weights to the fuzzy quantity score, the illumination quality score and the face posture score of the portrait picture, and summing to obtain a quality evaluation score value of the face picture;
the edge points of the portrait picture in step S21 include the following steps:
s211, calculating the horizontal absolute difference of the current pixel point, wherein the formula is as follows:
D_h(x,y) = |f(x,y+1) − f(x,y−1)|;
wherein f(x,y) denotes the current pixel point, x ∈ [1, M], y ∈ [1, N];
s212, calculating the average value of the horizontal absolute differences of all pixel points in the portrait picture, wherein the formula is as follows:
D_h-mean = (1 / (M·N)) · Σ_{x=1}^{M} Σ_{y=1}^{N} D_h(x,y);
s213, when the current pixel point is D h (x, y) is greater than D h-mean The current pixel point is a candidate edge point, and D is the current pixel point h (x, y) two adjacent points { C in the horizontal direction thereof h (x,y-1),C h (x, y+1) } is large, the current pixelThe point is identified as an edge point; the judgment formula of the edge points is as follows:
Figure FDA0004052322060000022
Figure FDA0004052322060000023
2. The face image quality assessment method according to claim 1, characterized in that the calculation formula of the illumination quality score in step S34 is as follows:
[equation image not reproduced: the illumination quality score attains its maximum at V_f = 0.5 and decreases as V_f moves away from 0.5]
wherein V_f is the quantized value of face illumination,
[equation image not reproduced: V_f is the normalized brightness mean of the face region, adjusted by coefficient weighting]
3. The face image quality assessment method according to claim 2, characterized in that calculating the face pose angle values from the face key point information in step S42 comprises the following steps:
S421, calculating the rotation vector of the face pose from the face key point information;
S422, converting the rotation vector of the face pose into a quaternion;
S423, converting the quaternion into Euler angles, the Euler angles being the face pose angle values.
4. The face image quality assessment method according to claim 3, characterized in that in step S43 the formula for calculating the face pose score from the face pose angle values is as follows:
[equation image not reproduced: the face pose score is computed from the weighted absolute yaw, pitch and roll angles with weights ω1, ω2, ω3 and bias parameter b]
wherein the weights are ω1 = 0.5, ω2 = 1.2 and ω3 = 1.1 respectively, and the bias parameter is b = 15; yaw is the head-shake angle value; pitch is the head-nod angle value; roll is the head-tilt angle value.
5. The face image quality assessment method according to claim 4, characterized in that the calculation formula of the quality assessment score in step S5 is as follows:
score = ω1·score_blur + ω2·score_light + ω3·score_pose;
wherein the weights are ω1 = 0.3, ω2 = 0.3 and ω3 = 0.4 respectively; score_blur is the blur score, score_light is the illumination quality score, and score_pose is the face pose score.
CN201910847009.9A 2019-09-09 2019-09-09 Face image quality assessment method Active CN110619628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910847009.9A CN110619628B (en) 2019-09-09 2019-09-09 Face image quality assessment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910847009.9A CN110619628B (en) 2019-09-09 2019-09-09 Face image quality assessment method

Publications (2)

Publication Number Publication Date
CN110619628A CN110619628A (en) 2019-12-27
CN110619628B true CN110619628B (en) 2023-05-09

Family

ID=68922947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910847009.9A Active CN110619628B (en) 2019-09-09 2019-09-09 Face image quality assessment method

Country Status (1)

Country Link
CN (1) CN110619628B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144366A (en) * 2019-12-31 2020-05-12 中国电子科技集团公司信息科学研究院 Strange face clustering method based on joint face quality assessment
CN111402229A (en) * 2020-03-16 2020-07-10 焦点科技股份有限公司 Image scoring method and system based on deep learning
CN111696090B (en) * 2020-06-08 2022-07-29 电子科技大学 Method for evaluating quality of face image in unconstrained environment
CN111754492A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Image quality evaluation method and device, electronic equipment and storage medium
CN111862040B (en) * 2020-07-20 2023-10-31 中移(杭州)信息技术有限公司 Portrait picture quality evaluation method, device, equipment and storage medium
CN112200010A (en) * 2020-09-15 2021-01-08 青岛邃智信息科技有限公司 Face acquisition quality evaluation strategy in community monitoring scene
CN112149553A (en) * 2020-09-21 2020-12-29 西安工程大学 Examination cheating behavior identification method
CN112199530B (en) * 2020-10-22 2023-04-07 天津众颐科技有限责任公司 Multi-dimensional face library picture automatic updating method, system, equipment and medium
CN112597909A (en) * 2020-12-25 2021-04-02 北京芯翌智能信息技术有限公司 Method and equipment for evaluating quality of face picture
CN113040757B (en) * 2021-03-02 2022-12-20 江西台德智慧科技有限公司 Head posture monitoring method and device, head intelligent wearable device and storage medium
CN113283319A (en) * 2021-05-13 2021-08-20 Oppo广东移动通信有限公司 Method and device for evaluating face ambiguity, medium and electronic equipment
CN113435248A (en) * 2021-05-18 2021-09-24 武汉天喻信息产业股份有限公司 Mask face recognition base enhancement method, device, equipment and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9582853B1 (en) * 2015-08-03 2017-02-28 Intel Corporation Method and system of demosaicing bayer-type image data for image processing
CN109086734B (en) * 2018-08-16 2021-04-02 新智数字科技有限公司 Method and device for positioning pupil image in human eye image
CN109544523B (en) * 2018-11-14 2021-01-01 北京智芯原动科技有限公司 Method and device for evaluating quality of face image based on multi-attribute face comparison
CN110147744B (en) * 2019-05-09 2024-05-14 腾讯科技(深圳)有限公司 Face image quality assessment method, device and terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Hanping. Extraction and recognition of 3D face laser ranging features. Laser Journal, 2018, No. 07, full text. *
Li Wenjie. A multi-angle face feature recognition method. Computer Products and Circulation, 2018, No. 09, full text. *

Also Published As

Publication number Publication date
CN110619628A (en) 2019-12-27

Similar Documents

Publication Publication Date Title
CN110619628B (en) Face image quality assessment method
CN109684924B (en) Face living body detection method and device
CN109684925B (en) Depth image-based human face living body detection method and device
JP6411510B2 (en) System and method for identifying faces in unconstrained media
Li et al. Robust visual tracking based on convolutional features with illumination and occlusion handing
US9098760B2 (en) Face recognizing apparatus and face recognizing method
Chang et al. Tracking Multiple People Under Occlusion Using Multiple Cameras.
US20070242856A1 (en) Object Recognition Method and Apparatus Therefor
CN108776983A (en) Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
US20210064851A1 (en) Age recognition method, storage medium and electronic device
CN110175530A (en) A kind of image methods of marking and system based on face
CN108416291B (en) Face detection and recognition method, device and system
KR20110064117A (en) Method for determining frontal pose of face
Zhang et al. Hybrid support vector machines for robust object tracking
CN111291701B (en) Sight tracking method based on image gradient and ellipse fitting algorithm
Lu et al. Learning attention map from images
CN111460976B (en) Data-driven real-time hand motion assessment method based on RGB video
CN111832405A (en) Face recognition method based on HOG and depth residual error network
WO2024045350A1 (en) Eye movement based liveness detection method and system based on deep learning
Xu et al. Identity-constrained noise modeling with metric learning for face anti-spoofing
CN112633217A (en) Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
JP2005309765A (en) Image recognition device, image extraction device, image extraction method and program
Chen et al. Exploring depth information for head detection with depth images
JP2008003749A (en) Feature point detection device, method, and program
TWI427545B (en) Face recognition method based on sift features and head pose estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant