CN111582150A - Method and device for evaluating face quality and computer storage medium - Google Patents
- Publication number: CN111582150A (application number CN202010376484.5A)
- Authority: CN (China)
- Prior art keywords: feature data, face, feature, training set, quality evaluation
- Prior art date: 2020-05-07
- Legal status: Granted
Classifications
- G06V40/168: Feature extraction; Face representation (G: Physics; G06: Computing, calculating or counting; G06V: Image or video recognition or understanding; G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10: Human or animal bodies; G06V40/16: Human faces)
- G06N20/00: Machine learning (G06N: Computing arrangements based on specific computational models)
- Y02P90/30: Computing systems specially adapted for manufacturing (Y02P: Climate change mitigation technologies in the production or processing of goods; Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation)
Abstract
The invention discloses a method and a device for evaluating face quality and a computer storage medium. The evaluation method comprises the following steps: acquiring a plurality of face images; extracting features of the obtained face images by using a convolutional neural network algorithm to serve as a feature training set; inputting the feature data in the feature training set into an image quality evaluation model and outputting a quality evaluation value; and performing face recognition on the feature data whose quality evaluation value meets a preset condition. The loss function is modified on the basis of face recognition, so that the face recognition network also has a face quality evaluation function.
Description
Technical Field
The present invention relates to the field of computer vision technologies, and in particular, to a method and an apparatus for face quality assessment, and a computer storage medium.
Background
The process of face recognition is to map a face image to a feature vector and to measure similarity by the cosine distance between feature vectors; during training, the intra-class distance is reduced and the inter-class distance is increased. For a given face recognition network, inputting two similar pictures, even non-face pictures, can yield two feature vectors with high similarity. For example, two blurred face pictures may have a high similarity, yet a person would not conclude that they show the same individual, because a person can judge that blurred pictures are not suitable for face recognition.
Factors that influence face recognition performance include illumination, sharpness, profile pose, expression, occlusion and the like, and these factors can be summarized as face quality. Face quality estimation predicts whether a face picture is suitable for face recognition. For training a face quality model, training data is a major bottleneck: many factors affect face quality, and it is difficult for people to weigh the relative influence of different factors, so face image quality is hard to label. Existing methods compute similarity based on a face recognition model, for example by selecting the best-quality image of each person, computing the similarity between that person's other face images and the best-quality image with a pre-trained face recognition model, using the similarity as a quality label, and training the face quality model on these data. However, the similarity computed by an existing face recognition model does not necessarily represent quality: two face pictures may both be sharp frontal faces, so their quality is high, yet because they were taken a long time apart their similarity is not necessarily high. This introduces errors into the face quality labels.
Disclosure of Invention
In view of the above technical problems, the present invention provides a method, an apparatus and a computer storage medium for face quality assessment, which estimate quality simultaneously with face recognition and do not require face image quality labels during training, making the method easy to implement.
The embodiment of the invention provides a method for evaluating the quality of a human face, which comprises the following steps: acquiring a plurality of face images; extracting features of the obtained face images by using a convolutional neural network algorithm to be used as a feature training set; inputting the feature data in the feature training set into an image quality evaluation model, and outputting a quality evaluation value, wherein the image quality evaluation model is obtained by adopting a machine learning method, utilizing the feature data of a preset sample image and the feature data in the feature training set, and training based on a loss function according to Gaussian distribution and probability density; and performing face recognition on the feature data of which the quality evaluation value meets the preset condition.
Optionally, the image quality evaluation model is obtained by a machine learning method through training based on a loss function according to the Gaussian distribution and probability density of the feature data, which includes: if feature data x_i in the training set has identity ID j, obtaining the probability density of x_i according to a Gaussian distribution; calculating the probability that x_i belongs to identity ID j; and obtaining a loss function according to the probability density and the probability.
Optionally, the value of the loss function is used to characterize the difference between the feature data of a sample image and the feature data in the feature training set that belong to the same person.
Optionally, the step of obtaining a loss function according to the probability density and the probability further includes: calculating cosine values between the feature data of each sample image and the corresponding feature data in the feature training set.
The invention also provides a device for evaluating the face quality, which comprises: the acquisition unit is used for acquiring a plurality of face images; the feature extraction unit is used for extracting features of the obtained face image by using a convolutional neural network algorithm to be used as a feature training set; the evaluation unit is used for inputting the feature data in the feature training set into an image quality evaluation model and outputting a quality evaluation value, wherein the image quality evaluation model is obtained by adopting a machine learning method, utilizing the feature data of a preset sample image and the feature data in the feature training set and training based on a loss function according to Gaussian distribution and probability density; and the identification unit is used for carrying out face identification on the characteristic data of which the quality evaluation value meets the preset condition.
Preferably, the apparatus further comprises: a training unit, configured to train the image quality evaluation model based on the loss function according to Gaussian distribution and probability density by a machine learning method, using the feature data of preset sample images and the feature data in the feature training set. The training unit obtains the image quality evaluation model by training based on the following steps: if feature data x_i in the training set has identity ID j, obtaining the probability density of x_i according to a Gaussian distribution; calculating the probability that x_i belongs to identity ID j; and obtaining a loss function according to the probability density and the probability.
Preferably, the value of the loss function is used to characterize the difference between the feature data of a sample image and the feature data in the feature training set that belong to the same person.
The invention also provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of any one of the methods described above.
In the technical scheme provided by the embodiment of the invention, a convolutional neural network algorithm is utilized to extract a feature training set, feature data in the feature training set is input into an image quality evaluation model, a quality evaluation value is output, and face recognition is carried out on the feature data of which the quality evaluation value meets a preset condition.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a method for face quality assessment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a method for face quality assessment according to another embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an apparatus for face quality evaluation according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present invention provides a method for evaluating face quality that runs on an electronic device, such as a terminal device or a server. Referring to FIG. 1, the method includes the following steps:
In step S10, a plurality of face images are acquired. In this embodiment, the face images may be input directly to the terminal device or the server. A face image to be evaluated may be acquired in various environments: it may be a visible-light or non-visible-light face image, a sharp face image or one degraded by motion blur or defocus blur, an image captured while the photographic subject is cooperative or non-cooperative, an image containing noise, and so on.
In step S20, features of the obtained face images are extracted with a convolutional neural network algorithm and used as a feature training set. The face image to be evaluated is input into a convolutional neural network, its features are extracted by the computer, and the extracted features are taken as the feature training set.
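As a minimal sketch only (the patent does not specify a particular network), feature extraction with a generic convolutional backbone could look like the following; the ResNet-18 backbone, the 112×112 input size and the 512-dimensional embedding are illustrative assumptions:

```python
import torch
import torchvision

# A generic CNN backbone used as a face feature extractor (illustrative choice, not the patent's network).
backbone = torchvision.models.resnet18()
backbone.fc = torch.nn.Identity()   # drop the classifier head; keep the embedding
backbone.eval()

# Suppose `faces` is a batch of aligned face crops of shape (B, 3, 112, 112).
faces = torch.rand(4, 3, 112, 112)

with torch.no_grad():
    features = backbone(faces)                                   # (B, 512) feature data
    features = torch.nn.functional.normalize(features, dim=1)    # unit-length feature vectors

# `features` now forms the feature training set fed to the quality evaluation model.
print(features.shape)
```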
And step S30, inputting the characteristic data in the characteristic training set into an image quality evaluation model and outputting a quality evaluation value, wherein the image quality evaluation model is obtained by adopting a machine learning method, utilizing the characteristic data of a preset sample image and the characteristic data in the characteristic training set and training based on a loss function according to Gaussian distribution and probability density.
In one embodiment of the present invention, referring to FIG. 2, training the image quality evaluation model in step S30 based on the loss function, by a machine learning method and according to the Gaussian distribution and probability density of the feature data, specifically includes:
Step S31: if feature data x_i in the training set has identity ID j, obtain the probability density of x_i according to a Gaussian distribution;
Step S32: calculate the probability that x_i belongs to identity ID j;
Step S33: obtain a loss function according to the probability density and the probability.
Specifically, the feature data of a face image is modeled as a Gaussian random variable. Assume the training set contains data of N persons and that y ∈ {1, 2, ..., N} denotes the identity ID of each piece of feature data. Let the feature data corresponding to one input image be x_i with identity ID j, and let the corresponding feature obey a Gaussian distribution N(f_i, σ²I). Let w_j be the feature vector of the sample image whose identity ID is j. The probability density of the sample-image feature for identity ID j under this distribution is given by equation (1):
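A plausible form of equation (1), reconstructed from the definitions above rather than reproduced from the original formula image, is the Gaussian density of the identity-j sample vector w_j under N(f_i, σ²I):

$$p(w_j \mid x_i) = \frac{1}{(2\pi\sigma^2)^{D/2}} \exp\!\left(-\frac{\lVert w_j - f_i \rVert^2}{2\sigma^2}\right) \qquad (1)$$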
where D is the feature dimension of the feature vector. In addition, assuming that the prior probability of each identity ID is equal, that is, for a randomly drawn face image to be evaluated the probability of belonging to each identity ID is the same, the probability that the feature data x_i belongs to identity ID j is:
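A plausible reconstruction of this probability, obtained from equation (1) by Bayes' rule with equal priors over the N identity IDs, is:

$$p(y_i = j \mid x_i) = \frac{\exp\!\left(-\lVert w_j - f_i \rVert^2 / (2\sigma^2)\right)}{\sum_{k=1}^{N} \exp\!\left(-\lVert w_k - f_i \rVert^2 / (2\sigma^2)\right)}$$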
The above formula can be expressed as:
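Assuming the identity vectors w_k are L2-normalized, so that the terms of the squared distance that do not depend on k cancel in the softmax, and writing s_i for the per-image scale (for example s_i = ‖f_i‖/σ²; this rewriting is an assumption consistent with the surrounding text, not reproduced from the original), the probability becomes:

$$p(y_i = j \mid x_i) = \frac{\exp(s_i \cos\theta_{ij})}{\sum_{k=1}^{N} \exp(s_i \cos\theta_{ik})}$$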
In the above formula, s_i can be regarded as reflecting the quality of image x_i; however, the value of s_i is unbounded, so it is not well suited as an evaluation index of face quality. We therefore take the maximum value of s_i as S, so that
s_i = S · d_i, with d_i ∈ [0, 1].
the above formula can thus be expressed as:
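That is, under the substitution s_i = S · d_i (a reconstruction consistent with the surrounding text):

$$p(y_i = j \mid x_i) = \frac{\exp(S\, d_i \cos\theta_{ij})}{\sum_{k=1}^{N} \exp(S\, d_i \cos\theta_{ik})}$$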
where d_i represents the face quality: the higher the face quality, the closer d_i is to 1. Here f_i · w_j = cos(θ_ij), where θ_ij is the angle between the vectors f_i and w_j. During training, a margin penalty is added following ArcFace, and the penalty function is as follows:
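A plausible form of this penalty function, following the standard ArcFace angular margin m with the per-image quality d_i introduced as a scale (reconstructed from the surrounding text, not reproduced from the original equation image), is:

$$L_i = -\log \frac{\exp\!\big(S\, d_i \cos(\theta_{ij} + m)\big)}{\exp\!\big(S\, d_i \cos(\theta_{ij} + m)\big) + \sum_{k \neq j} \exp\!\big(S\, d_i \cos\theta_{ik}\big)} \qquad (6)$$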
the loss function in the above equation (6) introduces the quality estimation value d on the basis of ArcFaceiWhen training, the high-quality face is closer to the characteristic vector w of the sample image of the corresponding personjAnd thus the angle theta between the vectorsijSmaller, in order to lower the loss value, diThe value of (c) increases.
Features w for low quality faces and correspondents in trainingjThe degree of similarity is low, so the angle theta between the vectorsijLarger, in order to lower the loss value, diThe value of (c) is decreased.
Therefore, whatever the cause of low quality (occlusion, blurring, profile pose, and so on), as long as the face is difficult to recognize, the angle between its features and the feature data of the same person's sample image will be large during face recognition training, and it will receive a small quality evaluation value.
During training, as d_i decreases for a low-quality face, its training weight also decreases; conversely, a high-quality face receives a higher weight. As a result, each person's feature w_j automatically moves closer to that person's high-quality faces.
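A minimal PyTorch-style sketch of such a quality-aware margin loss, assuming the reconstruction of equation (6) above (the class name QualityArcFaceLoss and the values S=64 and m=0.5 are illustrative assumptions, not values specified by the patent):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityArcFaceLoss(nn.Module):
    """Illustrative quality-aware ArcFace-style loss: the per-image quality d_i
    scales the logits, so low-quality faces receive a smaller training weight."""
    def __init__(self, num_ids, feat_dim, S=64.0, m=0.5):
        super().__init__()
        self.w = nn.Parameter(torch.randn(num_ids, feat_dim))  # one prototype w_j per identity
        self.S, self.m = S, m

    def forward(self, feat, quality, labels):
        # feat: (B, feat_dim) features f_i; quality: (B,) values d_i in [0, 1]; labels: (B,) identity IDs
        cos = F.normalize(feat) @ F.normalize(self.w).t()              # cos(theta_ik) for every identity k
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, num_classes=cos.size(1)).bool()
        cos_m = torch.where(target, torch.cos(theta + self.m), cos)    # angular margin on the true class only
        logits = self.S * quality.unsqueeze(1) * cos_m                 # scale by S * d_i
        return F.cross_entropy(logits, labels)

# Usage sketch: the backbone predicts both a feature f_i and a quality d_i in [0, 1].
loss_fn = QualityArcFaceLoss(num_ids=1000, feat_dim=512)
feat = torch.randn(8, 512, requires_grad=True)
quality = torch.sigmoid(torch.randn(8))        # d_i, e.g. from a small quality head
labels = torch.randint(0, 1000, (8,))
loss_fn(feat, quality, labels).backward()
```

In this sketch the quality value only rescales the logits, which reproduces the described behavior: lowering d_i reduces both the loss contribution and the gradient weight of a hard-to-recognize face.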
In step S40, face recognition is performed on the feature data whose quality evaluation value satisfies a preset condition. The feature data whose quality evaluation value, as output by the image quality evaluation model, meets the preset condition are input into a face recognition model; that is, face recognition is carried out with the convolutional neural network algorithm.
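A minimal sketch of this filtering step is given below; the helper name, the 0.5 threshold and the cosine gallery matching are assumptions for illustration, not details fixed by the patent:

```python
import torch
import torch.nn.functional as F

def recognize_if_good_enough(feature, quality, gallery, threshold=0.5):
    """Run face recognition (cosine matching against a gallery) only when the
    quality evaluation value meets the preset condition; otherwise reject."""
    if quality < threshold:             # preset condition on the quality evaluation value
        return None                     # skip recognition for low-quality feature data
    feature = F.normalize(feature, dim=0)
    gallery = F.normalize(gallery, dim=1)
    sims = gallery @ feature            # cosine similarities to enrolled identities
    return int(torch.argmax(sims))      # index of the best-matching identity

# Example with random data standing in for real features.
gallery = torch.randn(100, 512)         # enrolled sample-image features w_j
feature = torch.randn(512)              # feature data of a probe face
print(recognize_if_good_enough(feature, quality=0.8, gallery=gallery))
```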
In the invention, the image quality evaluation model is used to evaluate the quality of a face image and obtain a quality evaluation result for it. The image quality evaluation model is obtained by a machine learning method, using a preset set of face images to be evaluated (namely the feature training set) together with the set of feature data of the corresponding sample images, and training based on a loss function; that is, the value of the loss function is used to characterize the difference between the feature data of a sample image and the feature data in the feature training set that belong to the same person. Here, a sample image is typically a well-focused, high-resolution, frontal visible-light image captured under uniform lighting.
The image quality evaluation model is based on a neural network; a logistic regression model, a hidden Markov model, or the like may also be used.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart.
Referring to FIG. 3, an embodiment of the present invention further provides an apparatus 100 for face quality assessment, the apparatus comprising an acquisition unit 101, a feature extraction unit 102, an evaluation unit 103 and a recognition unit 104. The acquisition unit 101 is used for acquiring a plurality of face images; the feature extraction unit 102 extracts features of the obtained face images with a convolutional neural network algorithm to serve as a feature training set; the evaluation unit 103 inputs the feature data in the feature training set into an image quality evaluation model and outputs a quality evaluation value, wherein the image quality evaluation model is obtained by a machine learning method, using the feature data of preset sample images and the feature data in the feature training set, and training based on a loss function according to Gaussian distribution and probability density; and the recognition unit 104 performs face recognition on the feature data whose quality evaluation value satisfies a preset condition.
In one embodiment of the present invention, the apparatus 100 for face quality assessment further includes a training unit, configured to train the image quality evaluation model based on the loss function according to Gaussian distribution and probability density by a machine learning method, using the feature data of preset sample images and the feature data in the feature training set. The training unit obtains the image quality evaluation model by training based on the following steps:
if feature data x_i in the training set has identity ID j, obtaining the probability density of x_i according to a Gaussian distribution;
calculating the probability that x_i belongs to identity ID j;
and obtaining a loss function according to the probability density and the probability.
In an embodiment of the present invention, a computer storage medium is further provided, storing a computer program which, when executed by a processor, implements the steps of the method for face quality assessment according to any of the above embodiments. Specifically, the computer-readable medium may be contained in the device described in the above embodiment, or may exist separately without being assembled into the device.
According to the method and the device for evaluating the face quality, the loss function is modified on the basis of face recognition, so that a face recognition network has a face quality evaluation function.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. A method for evaluating the quality of human faces is characterized by comprising the following steps:
acquiring a plurality of face images;
extracting features of the obtained face images by using a convolutional neural network algorithm to be used as a feature training set;
inputting the feature data in the feature training set into an image quality evaluation model, and outputting a quality evaluation value, wherein the image quality evaluation model is obtained by adopting a machine learning method, utilizing the feature data of a preset sample image and the feature data in the feature training set, and training based on a loss function according to Gaussian distribution and probability density;
and performing face recognition on the feature data of which the quality evaluation value meets the preset condition.
2. The method for evaluating the quality of the human face according to claim 1, wherein the image quality evaluation model is obtained by adopting a machine learning method and training based on a loss function according to Gaussian distribution and probability density of characteristic data, and the method comprises the following steps:
if feature data x_i in the training set has identity ID j, obtaining the probability density of x_i according to a Gaussian distribution;
calculating the probability that x_i belongs to identity ID j;
and obtaining a loss function according to the probability density and the probability.
3. The method of face quality assessment according to claim 2, wherein the value of the loss function is used to characterize the difference between the quality assessment values of the feature data of the sample image and the feature data in the feature training set of the same person.
4. The method of claim 3, wherein the step of obtaining a loss function according to the probability density and the probability further comprises:
and calculating cosine values between the feature data of each sample image and the feature data in the corresponding feature training set.
5. An apparatus for face quality assessment, the apparatus comprising:
the acquisition unit is used for acquiring a plurality of face images;
the feature extraction unit is used for extracting features of the obtained face image by using a convolutional neural network algorithm to be used as a feature training set;
the evaluation unit is used for inputting the feature data in the feature training set into an image quality evaluation model and outputting a quality evaluation value, wherein the image quality evaluation model is obtained by adopting a machine learning method, utilizing the feature data of a preset sample image and the feature data in the feature training set and training based on a loss function according to Gaussian distribution and probability density;
and the identification unit is used for carrying out face identification on the characteristic data of which the quality evaluation value meets the preset condition.
6. The apparatus for face quality assessment according to claim 5, further comprising:
the training unit is used for training the obtained image quality evaluation model based on the loss function by using the characteristic data of the preset sample image and the characteristic data in the characteristic training set by adopting a machine learning method according to Gaussian distribution and probability density;
the training unit is used for training to obtain the image quality evaluation model based on the following steps:
if feature data x_i in the training set has identity ID j, obtaining the probability density of x_i according to a Gaussian distribution;
calculating the probability that x_i belongs to identity ID j;
and obtaining a loss function according to the probability density and the probability.
7. The apparatus for human face quality assessment according to claim 5, wherein the value of said loss function is used to characterize the difference between the quality assessment values of the feature data of the sample image and the feature data in the feature training set of the same person.
8. A computer storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-4.
Priority Applications (1)
- CN202010376484.5A, priority date 2020-05-07, filing date 2020-05-07: "Face quality assessment method, device and computer storage medium" (granted as CN111582150B)

Publications (2)
- CN111582150A, published 2020-08-25
- CN111582150B, granted 2023-09-05

Family
- ID=72113311
- Family application CN202010376484.5A (filed 2020-05-07), status: Active
- Country: CN, granted publication CN111582150B
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant