WO2022073453A1 - Identity-recognition-oriented face quality perception method and system - Google Patents

Identity-recognition-oriented face quality perception method and system Download PDF

Info

Publication number
WO2022073453A1
WO2022073453A1 PCT/CN2021/121776 CN2021121776W WO2022073453A1 WO 2022073453 A1 WO2022073453 A1 WO 2022073453A1 CN 2021121776 W CN2021121776 W CN 2021121776W WO 2022073453 A1 WO2022073453 A1 WO 2022073453A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
quality
model
occluded
image
Prior art date
Application number
PCT/CN2021/121776
Other languages
English (en)
French (fr)
Inventor
王中元
王光成
黄宝金
韩镇
曾康利
Original Assignee
武汉大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武汉大学 filed Critical 武汉大学
Publication of WO2022073453A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993Evaluation of the quality of the acquired pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • the invention belongs to the technical field of identity recognition, relates to a method and system for automatically perceiving face image quality, and in particular relates to an identity-recognition-oriented face quality perception method and system.
  • face image quality evaluation is still in the exploratory stage.
  • the popular approaches can be mainly divided into two types: (1) predicting face image quality with existing image quality evaluation algorithms; (2) starting from the factors affecting face image quality and from the characteristics of face images, studying the relationship between face image quality and the accuracy of face recognition algorithms under the influence of single or multiple factors.
  • the face image quality standards ISO/IEC 19794-5 and ISO/IEC 29794-5 published by the International Organization for Standardization describe standard face images in detail from multiple perspectives such as illumination, face pose, image focus and face occlusion, and point out that defocus, non-frontal pose and asymmetric lighting are the most important causes of face image quality degradation.
  • current general-purpose face recognition systems directly reject occluded face images as low-quality images. Therefore, if the face quality evaluation model can automatically perceive the distortion type, it will support the selection of a matching dedicated face recognition algorithm and thereby indirectly improve the efficiency of the face recognition system.
  • the quality of face images is affected by factors such as brightness, sharpness, contrast, and occlusion.
  • Existing face recognition systems regard occluded face images as low-quality face images and remove them.
  • in many practical scenarios, only face images with occlusion can be collected.
  • the present invention proposes a face quality evaluation method and system for face recognition in actual scenes.
  • the technical scheme adopted by the method of the present invention is: an identity recognition-oriented face quality perception method, characterized in that it includes the following steps:
  • Step 1: Perform face detection on the input image and mark the face bounding box;
  • Step 2: Based on an optimization-based meta-learning strategy, learn from the natural-image evaluation task the prior knowledge of how the human visual system evaluates image quality, and further fine-tune the quality prior model with face samples to obtain a face recognizability evaluation model;
  • Step 3: Establish an occluded face classification model based on a data-driven strategy;
  • Step 4: Input the selected recognizable normal faces and occluded faces into the normal face recognizer and the occluded face recognizer, respectively, for recognition.
  • the technical scheme adopted by the system of the present invention is: an identity recognition-oriented face quality perception system, which is characterized in that: it includes a first module, a second module, a third module and a fourth module;
  • the first module is used to perform face detection on the input image and mark the face bounding box;
  • the second module is used to learn, based on an optimization-based meta-learning strategy, the prior knowledge of how the human visual system evaluates image quality from the natural-image evaluation task, and to further fine-tune the quality prior model with face samples to obtain the face recognizability evaluation model;
  • the third module is used to establish an occluded face classification model based on a data-driven strategy;
  • the fourth module is used to input the selected recognizable normal faces and occluded faces into the normal face recognizer and the occluded face recognizer, respectively, for recognition.
  • the invention can automatically perceive the recognizability and distortion type of a human face without manual intervention, providing a basis for selecting high-quality faces or selecting a face recognition algorithm matched to the distortion type.
  • used in combination with a face recognition system, the technology of the present invention can help the system select highly recognizable candidate faces or activate a dedicated face recognition module, thereby indirectly improving the efficiency of the face recognition system.
  • FIG. 1 is a flowchart of a method according to an embodiment of the present invention.
  • an identity recognition-oriented face quality perception method includes the following steps:
  • Step 1: Perform face detection on the input image and mark the face bounding box;
  • the present invention uses the existing high-performance face detection model RetinaFace to detect faces in the input image.
  • Step 2: Based on an optimization-based meta-learning strategy, learn from the natural-image evaluation task the prior knowledge of how the human visual system evaluates image quality, and further fine-tune the quality prior model with a small number of face samples to obtain a face recognizability evaluation model;
  • Step 2.1: The face recognizability evaluation model of step 2 mainly targets the brightness, contrast and sharpness distortions that affect face recognizability. Since these distortions are also important distortions of natural image quality, the present invention adopts an optimization-based meta-learning strategy to learn the natural-image evaluation task and thereby obtain the prior knowledge of how the human visual system evaluates images.
  • the reason why the optimization-based meta-learning strategy is adopted is that the optimization-based meta-learning strategy can be applied to any network structure based on Stochastic Gradient Descent (SGD) optimization.
  • the natural-image evaluation task data (support set and query set) used in Figure 1 come from the commonly used natural image quality evaluation dataset TID2013, which serves as the meta-learning training task set for learning the prior knowledge of how the human visual system evaluates image quality.
  • the network structure adopted in the present invention is an ordinary convolutional neural network followed by fully connected layers. Specifically, global average pooling is applied to the output of the convolutional neural network to obtain a first fully connected layer of dimension 512, and an additional fully connected layer is added to generate the output of the deep regression network (a sketch of this architecture is given after this list).
  • for an input face image I, the present invention feeds I into the deep regression network to obtain the predicted quality score $\hat{y}_I$, specifically defined as $\hat{y}_I = f_\theta(I)$, where θ represents the initial parameters of the deep regression network.
  • the average Euclidean distance is used as a loss function to optimize the error between the predicted image quality scores and the true values; the loss function is defined as $L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left\lVert f_\theta(I_i) - y_i \right\rVert_2$, where $y_i$ is the ground-truth quality score of image $I_i$.
  • the present invention adopts the bilevel stochastic gradient descent method commonly used in the field of meta-learning to optimize the deep regression network, and uses the Adam optimizer to update the parameters of the deep regression network.
  • Step 2.2: After learning the quality prior model, use a small number of face image samples to fine-tune it (that is, further train and optimize the quality prior model with a small number of face samples) to obtain the final face recognizability evaluation model.
  • Step 3: Build an occluded face classification model based on a data-driven strategy. Use the trained VGGface model to extract face features, and train an SVM classifier on the extracted face features to classify normal and occluded faces;
  • Step 3.1 The normal and occluded face data sets shown in Figure 1 use the mainstream normal face recognition data set CASIA-Webface and the occluded face detection data set MAFA proposed by Ge Shiming of the Chinese Academy of Sciences, respectively, to retrain VGGface;
  • Step 3.2 Use the trained VGGface model to extract face features
  • Step 3.3 Use the SVM classifier based on the RBF kernel in the LIBSVM package to classify the face features, and output two types of normal and occluded.
  • Step 4 Input the selected identifiable normal face and occluded face into the normal face and occluded face recognizers respectively for recognition;
  • the ArcFace model proposed by Deng Jiankang et al. of Imperial College is used as the normal face recognizer
  • the occlusion face recognition model ID-GAN based on the repair idea proposed by Ge Shiming of the Chinese Academy of Sciences is used as the occlusion face recognizer.
  • the present invention also provides an identity recognition-oriented face quality perception system, comprising module one, module two, module three and module four;
  • Module 1 for performing face detection on the input image and marking the face frame
  • the second module is used to learn, based on an optimization-based meta-learning strategy, the prior knowledge of how the human visual system evaluates image quality from the natural-image evaluation task, and to further fine-tune the quality prior model with a small number of face samples to obtain the face recognizability evaluation model;
  • Module 3 is used to establish an occluded face classification model based on a data-driven strategy
  • Module 4 is used to input the selected identifiable normal face and occluded face into the normal face and occluded face recognizers respectively for recognition.
  • the present invention includes two parts: (1) apart from the occlusion factor, the other factors affecting face image quality are similar to those affecting natural images. Therefore, the present invention uses a meta-learning strategy to learn from the natural image quality evaluation task the prior knowledge of how the human visual system evaluates image quality, and uses face images to fine-tune the quality prior model to quickly obtain the face recognizability evaluation model. (2) The present invention further proposes a data-driven occluded face classification model to classify the images preprocessed by the first part.
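The deep regression network referred to above can be illustrated with the following minimal PyTorch sketch. It is provided for illustration only and is not the reference implementation of the invention: the backbone layer sizes, the class name QualityRegressor and the input resolution are assumptions; only the global-average-pooled output, the 512-dimensional fully connected layer and the scalar quality score follow the description.

```python
import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    """Deep regression network sketch: CNN backbone -> global average pooling ->
    512-d fully connected layer -> additional FC layer producing one quality score."""
    def __init__(self, in_channels: int = 3, feat_channels: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, feat_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.fc1 = nn.Linear(feat_channels, 512)    # first FC layer, dimension 512
        self.fc2 = nn.Linear(512, 1)                # additional FC layer -> quality score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.gap(self.backbone(x)).flatten(1)
        return self.fc2(torch.relu(self.fc1(feats))).squeeze(1)

if __name__ == "__main__":
    model = QualityRegressor()
    scores = model(torch.randn(4, 3, 224, 224))  # predicted quality scores, shape (4,)
    print(scores.shape)
```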

Abstract

An identity-recognition-oriented face quality perception method and system. A meta-learning strategy is used to learn, from natural image quality assessment tasks, the prior knowledge of how the human visual system evaluates image quality, and a small number of face image samples are used to fine-tune the obtained quality prior model so as to quickly obtain a face recognizability perception model. The method and system use a pre-trained VGGface model to extract face features, and use an SVM classifier trained on the extracted face features to classify occluded and normal faces. The method and system can automatically perceive face recognizability and distortion type, providing a basis for selecting high-quality faces or selecting a face recognition algorithm matched to the distortion type.

Description

Identity-recognition-oriented face quality perception method and system
Technical Field
The present invention belongs to the technical field of identity recognition and relates to a method and system for automatically perceiving face image quality, and in particular to an identity-recognition-oriented face quality perception method and system.
Technical Background
As face recognition technology matures and gains broader social acceptance, it has been widely applied in many fields such as public security, intelligent control, and identity verification. Under controlled conditions, face recognition algorithms achieve extremely high accuracy; ArcFace, recently proposed by Deng Jiankang et al. of Imperial College London, reaches 99.83% test accuracy on LFW. However, under uncontrolled or non-cooperative conditions in real scenes, factors such as illumination, shooting distance, shooting angle and deliberate occlusion by the subject (wearing hats, masks, sunglasses, etc.) lead to captured face images of low recognizability, exhibiting blur, low illumination, incomplete faces or improper pose, which are unfavorable for face recognition and severely constrain the performance of face recognition systems in practical environments.
As an important branch of image quality assessment, face image quality evaluation is still at an exploratory stage. Current popular approaches fall mainly into two types: (1) predicting face image quality with existing image quality assessment algorithms; (2) starting from the factors that affect face image quality and the characteristics of face images, studying the relationship between face image quality and the accuracy of face recognition algorithms under the influence of single or multiple factors. The face image quality standards ISO/IEC 19794-5 and ISO/IEC 29794-5 published by the International Organization for Standardization describe standard face images in detail from multiple perspectives such as illumination, face pose, image focus and face occlusion, and point out that defocus, non-frontal pose and asymmetric lighting are the most important causes of face image quality degradation. Most existing face image quality evaluation methods predict face quality from a single influencing factor, such as brightness, contrast, sharpness, face angle or degree of occlusion, or assess face image quality by combining several different factors. However, these algorithms merely evaluate face image quality without considering the needs of downstream tasks such as face recognition.
In many practical scenarios only occluded face images can be collected. For example, during the COVID-19 pandemic, face verification has to be performed while wearing a mask to prevent infection, and criminals hide facial information by wearing hats, masks and sunglasses to evade surveillance cameras. Occluded face detection and recognition technologies have also developed rapidly in recent years. Ge Shiming et al. of the Chinese Academy of Sciences constructed an occluded face dataset, MAFA, and further proposed a data-driven occluded face detection method, LLE-CNNs. Ge Shiming et al. also proposed ID-GAN, which recognizes occluded faces based on the idea of image inpainting and an existing face recognizer. Chen Jiansheng et al. of Tsinghua University proposed a trainable network, MaskNet, which can be easily integrated into existing CNNs and can effectively separate valid facial information from occluded regions, improving the robustness of existing face recognition algorithms.
Current general-purpose face recognition systems simply reject occluded face images as low-quality images. Therefore, if a face quality evaluation model can automatically perceive the distortion type, it will support the selection of a matching dedicated face recognition algorithm and thereby indirectly improve the efficiency of the face recognition system.
Summary of the Invention
Face image quality is affected by factors such as brightness, sharpness, contrast and occlusion, and existing face recognition systems treat occluded face images as low-quality face images and discard them. However, in many practical scenarios only occluded face images can be collected. To stay closer to practical scenarios, the present invention proposes a face quality evaluation method and system oriented to face recognition in real scenes.
The technical solution adopted by the method of the present invention is an identity-recognition-oriented face quality perception method, characterized by comprising the following steps:
Step 1: perform face detection on the input image and mark the face bounding box;
Step 2: based on an optimization-based meta-learning strategy, learn from natural-image evaluation tasks the prior knowledge of how the human visual system evaluates image quality, and further fine-tune the quality prior model with face samples to obtain a face recognizability evaluation model;
Step 3: establish an occluded face classification model based on a data-driven strategy;
use the trained VGGface model to extract face features, and use an SVM classifier trained on the extracted face features to classify normal and occluded faces;
Step 4: input the selected recognizable normal faces and occluded faces into the normal face recognizer and the occluded face recognizer, respectively, for recognition.
The technical solution adopted by the system of the present invention is an identity-recognition-oriented face quality perception system, characterized by comprising a first module, a second module, a third module and a fourth module;
the first module is configured to perform face detection on the input image and mark the face bounding box;
the second module is configured to learn, based on an optimization-based meta-learning strategy, the prior knowledge of how the human visual system evaluates image quality from natural-image evaluation tasks, and to further fine-tune the quality prior model with face samples to obtain a face recognizability evaluation model;
the third module is configured to establish an occluded face classification model based on a data-driven strategy;
using the trained VGGface model to extract face features, and an SVM classifier trained on the extracted face features to classify normal and occluded faces;
the fourth module is configured to input the selected recognizable normal faces and occluded faces into the normal face recognizer and the occluded face recognizer, respectively, for recognition.
The present invention has the following advantages and beneficial effects:
The present invention can automatically perceive face recognizability and distortion type without manual intervention, providing a basis for selecting high-quality faces or selecting a face recognition algorithm matched to the distortion type. Used in combination with a face recognition system, the technology of the present invention can help the system select highly recognizable candidate faces or activate a dedicated face recognition module, thereby indirectly improving the efficiency of the face recognition system.
Brief Description of the Drawings
Figure 1: flowchart of the method according to an embodiment of the present invention.
Detailed Description of the Embodiments
To help those of ordinary skill in the art understand and implement the present invention, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here are intended only to illustrate and explain the present invention, not to limit it.
Referring to Figure 1, the identity-recognition-oriented face quality perception method provided by the present invention comprises the following steps:
Step 1: perform face detection on the input image and mark the face bounding box;
The present invention uses the existing high-performance face detection model RetinaFace to detect faces in the input image.
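As an illustration of Step 1, the following sketch wraps a RetinaFace-based detector behind a single helper. It assumes the third-party retina-face Python package; the RetinaFace.detect_faces call and the facial_area field of its result are properties of that wrapper as commonly documented, not details specified by the patent, and the helper name detect_face_crops is hypothetical.

```python
# Sketch of Step 1: detect faces and crop face boxes from an input image.
import cv2
from retinaface import RetinaFace  # assumed: pip install retina-face

def detect_face_crops(image_path: str):
    image = cv2.imread(image_path)
    # Assumed wrapper API: {"face_1": {"facial_area": [x1, y1, x2, y2], ...}, ...}
    detections = RetinaFace.detect_faces(image_path)
    crops = []
    if isinstance(detections, dict):
        for face in detections.values():
            x1, y1, x2, y2 = face["facial_area"]
            crops.append(image[y1:y2, x1:x2])
    return crops

if __name__ == "__main__":
    faces = detect_face_crops("input.jpg")
    print(f"detected {len(faces)} face(s)")
```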
Step 2: based on an optimization-based meta-learning strategy, learn from natural-image evaluation tasks the prior knowledge of how the human visual system evaluates image quality, and further fine-tune the quality prior model with a small number of face samples to obtain a face recognizability evaluation model;
Step 2.1: The face recognizability evaluation model of Step 2 mainly targets the brightness, contrast and sharpness distortions that affect face recognizability. Since these distortions are also important distortions of natural image quality, the present invention adopts an optimization-based meta-learning strategy to learn the natural-image evaluation task and thereby obtain the prior knowledge of how the human visual system evaluates images. An optimization-based meta-learning strategy is chosen because it can be applied to any network structure optimized by stochastic gradient descent (SGD). The natural-image evaluation task data (support sets and query sets) used in Figure 1 come from the commonly used natural image quality evaluation dataset TID2013, which serves as the meta-learning training task set for learning the prior knowledge of how the human visual system evaluates image quality. The network structure adopted by the present invention is an ordinary convolutional neural network followed by fully connected layers: global average pooling is applied to the output of the convolutional neural network to obtain a first fully connected layer of dimension 512, and an additional fully connected layer is added to produce the output of the deep regression network. For an input face image I, the present invention feeds I into the deep regression network to obtain the predicted quality score
$\hat{y}_I$
which is defined as
$\hat{y}_I = f_\theta(I)$
where θ denotes the initial parameters of the deep regression network. The average Euclidean distance is used as the loss function to optimize the error between the predicted image quality scores and the ground truth; the loss function is defined as
$L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left\lVert f_\theta(I_i) - y_i \right\rVert_2$
where $y_i$ denotes the ground-truth quality score of image $I_i$. To better learn generalization across different tasks, the present invention adopts the bilevel stochastic gradient descent method commonly used in meta-learning to optimize the deep regression network, and uses the Adam optimizer to update the parameters of the deep regression network.
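The bilevel optimization described above can be sketched as a first-order, MAML-style loop over image-quality tasks: an inner SGD step on each task's support set, followed by an outer Adam update computed on the query set. The sketch below is a simplified assumption of that procedure (first-order gradients only, a tiny stand-in regressor, and a generic task format in place of TID2013 loading); it is not the patent's exact training code.

```python
# Sketch of Step 2.1: bilevel SGD over natural-image quality tasks.
import copy
import torch
import torch.nn as nn

def make_regressor() -> nn.Module:
    # Tiny stand-in for the deep regression network (CNN -> GAP -> FC-512 -> FC-1).
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 512, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1),
    )

def euclidean_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Average Euclidean distance between predicted and ground-truth quality scores.
    return torch.sqrt((pred.squeeze(-1) - target) ** 2 + 1e-12).mean()

def meta_train(tasks, meta_lr: float = 1e-4, inner_lr: float = 1e-2, epochs: int = 10):
    """tasks: iterable of (support_x, support_y, query_x, query_y) tensors,
    e.g. distortion-specific subsets sampled from TID2013 (assumed format)."""
    model = make_regressor()
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)  # outer-loop optimizer
    for _ in range(epochs):
        for sx, sy, qx, qy in tasks:
            # Inner loop: one SGD step on the task's support set, on a copy of the model.
            fast = copy.deepcopy(model)
            inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
            inner_opt.zero_grad()
            euclidean_loss(fast(sx), sy).backward()
            inner_opt.step()
            # Outer loop: evaluate the adapted copy on the query set and update the
            # original parameters with its gradients (first-order approximation).
            fast.zero_grad()
            euclidean_loss(fast(qx), qy).backward()
            meta_opt.zero_grad()
            for p, fp in zip(model.parameters(), fast.parameters()):
                p.grad = fp.grad.clone()
            meta_opt.step()
    return model
```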
Step 2.2: After the quality prior model has been learned, a small number of face image samples are used to fine-tune it (that is, the quality prior model is further trained and optimized with a small number of face samples) to obtain the final face recognizability evaluation model.
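A short fine-tuning pass then adapts the quality prior into the face recognizability evaluator. In the sketch below, face_images and face_scores are assumed to be a small tensor batch of face crops with recognizability labels, and the learning rate and epoch count are illustrative values, not values taken from the patent.

```python
# Sketch of Step 2.2: fine-tune the meta-learned quality prior on a few face samples.
import torch

def finetune_quality_prior(model, face_images, face_scores, lr: float = 1e-4, epochs: int = 5):
    """model: the quality prior network; face_images: (N, 3, H, W) tensor;
    face_scores: (N,) tensor of recognizability ground truth."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        pred = model(face_images).squeeze(-1)
        # Average Euclidean distance between prediction and ground truth.
        loss = torch.sqrt((pred - face_scores) ** 2 + 1e-12).mean()
        loss.backward()
        opt.step()
    model.eval()
    return model
```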
Step 3: Establish an occluded face classification model based on a data-driven strategy. The trained VGGface model is used to extract face features, and an SVM classifier trained on the extracted face features classifies normal and occluded faces;
Step 3.1: The normal and occluded face datasets shown in Figure 1 are, respectively, the mainstream normal face recognition dataset CASIA-Webface and the occluded face detection dataset MAFA proposed by Ge Shiming et al. of the Chinese Academy of Sciences, on which VGGface is retrained;
Step 3.2: The retrained VGGface model is used to extract face features;
Step 3.3: The RBF-kernel SVM classifier from the LIBSVM package is used to classify the face features, outputting two classes: normal and occluded.
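The occlusion classifier of Step 3 can be sketched as follows. scikit-learn's SVC with an RBF kernel is used here as a stand-in for the LIBSVM package (SVC itself wraps libsvm), and the VGGface embeddings are represented by plain arrays; the 512-dimensional feature size and the helper name train_occlusion_classifier are assumptions for illustration.

```python
# Sketch of Step 3: classify VGGface features into normal vs. occluded faces
# with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_occlusion_classifier(features: np.ndarray, labels: np.ndarray):
    """features: (N, D) VGGface embeddings; labels: (N,) with 0 = normal, 1 = occluded."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(features, labels)
    return clf

if __name__ == "__main__":
    # Toy stand-in data; in practice the features come from VGGface applied to
    # CASIA-Webface (normal) and MAFA (occluded) face crops.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 512)).astype(np.float32)
    y = rng.integers(0, 2, size=200)
    clf = train_occlusion_classifier(X, y)
    print(clf.predict(X[:5]))
```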
Step 4: The selected recognizable normal faces and occluded faces are fed into the normal face recognizer and the occluded face recognizer, respectively, for recognition;
Specifically, the ArcFace model proposed by Deng Jiankang et al. of Imperial College is used as the normal face recognizer, and the inpainting-based occluded face recognition model ID-GAN proposed by Ge Shiming et al. of the Chinese Academy of Sciences is used as the occluded face recognizer.
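Step 4 then reduces to a small routing function: faces whose predicted recognizability falls below a threshold are rejected, and the remaining faces are dispatched to the normal or occluded-face recognizer according to the occlusion classifier. In the sketch below the threshold value and the recognizer callables arcface_recognize and idgan_recognize are hypothetical placeholders; the patent specifies only that ArcFace and ID-GAN serve as the two recognizers.

```python
# Sketch of Step 4: route each detected face to the matching recognizer.
# quality_model and occlusion_clf come from the earlier sketches;
# arcface_recognize / idgan_recognize are placeholder callables.
import numpy as np
import torch

def perceive_and_recognize(face_tensor: torch.Tensor, feature_vec: np.ndarray,
                           quality_model, occlusion_clf,
                           arcface_recognize, idgan_recognize,
                           min_quality: float = 0.5):
    # 1) Quality gate: discard faces judged not recognizable.
    with torch.no_grad():
        quality = quality_model(face_tensor.unsqueeze(0)).item()
    if quality < min_quality:
        return {"quality": quality, "identity": None, "reason": "not recognizable"}
    # 2) Occlusion-aware routing to the matching recognizer.
    occluded = bool(occlusion_clf.predict(feature_vec.reshape(1, -1))[0])
    identity = idgan_recognize(face_tensor) if occluded else arcface_recognize(face_tensor)
    return {"quality": quality, "occluded": occluded, "identity": identity}
```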
The present invention also provides an identity-recognition-oriented face quality perception system, comprising a first module, a second module, a third module and a fourth module;
the first module is configured to perform face detection on the input image and mark the face bounding box;
the second module is configured to learn, based on an optimization-based meta-learning strategy, the prior knowledge of how the human visual system evaluates image quality from natural-image evaluation tasks, and to further fine-tune the quality prior model with a small number of face samples to obtain a face recognizability evaluation model;
the third module is configured to establish an occluded face classification model based on a data-driven strategy;
using the trained VGGface model to extract face features, and an SVM classifier trained on the extracted face features to classify normal and occluded faces;
the fourth module is configured to input the selected recognizable normal faces and occluded faces into the normal face recognizer and the occluded face recognizer, respectively, for recognition.
The present invention comprises two parts: (1) apart from occlusion, the other factors that affect face image quality are similar to those affecting natural images; the present invention therefore uses a meta-learning strategy to learn from natural image quality assessment tasks the prior knowledge of how the human visual system evaluates image quality, and fine-tunes the quality prior model with face images to quickly obtain the face recognizability evaluation model. (2) The present invention further proposes a data-driven occluded face classification model to classify the images preprocessed by the first part.
It should be understood that the parts not described in detail in this specification belong to the prior art.
It should be understood that the above description of the preferred embodiments is relatively detailed and should not therefore be regarded as limiting the scope of patent protection of the present invention. Under the inspiration of the present invention, and without departing from the scope protected by the claims, those of ordinary skill in the art may also make substitutions or modifications, all of which fall within the protection scope of the present invention; the scope of protection claimed shall be determined by the appended claims.

Claims (6)

  1. An identity-recognition-oriented face quality perception method, characterized by comprising the following steps:
    Step 1: performing face detection on an input image and marking the face bounding box;
    Step 2: based on an optimization-based meta-learning strategy, learning from natural-image evaluation tasks the prior knowledge of how the human visual system evaluates image quality, and further fine-tuning the quality prior model with face samples to obtain a face recognizability evaluation model;
    Step 3: establishing an occluded face classification model based on a data-driven strategy;
    using the trained VGGface model to extract face features, and using an SVM classifier trained on the extracted face features to classify normal and occluded faces;
    Step 4: inputting the selected recognizable normal faces and occluded faces into the normal face recognizer and the occluded face recognizer, respectively, for recognition.
  2. The identity-recognition-oriented face quality perception method according to claim 1, characterized in that: in Step 1, the high-performance face detection model RetinaFace is used to detect the face bounding box from the input image.
  3. The identity-recognition-oriented face quality perception method according to claim 1, characterized in that: in Step 2, the network structure adopted is a convolutional neural network followed by fully connected layers; global average pooling is applied to the output of the convolutional neural network to obtain a first fully connected layer of dimension 512, and an additional fully connected layer is added to produce the output of the deep regression network; for an input face image I, the image I is fed into the deep regression network to obtain the predicted quality score
    $\hat{y}_I$
    $\hat{y}_I = f_\theta(I)$
    where θ denotes the initial parameters of the deep regression network;
    the average Euclidean distance is used as the loss function to optimize the error between the predicted image quality scores and the ground truth, the loss function being defined as:
    $L(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left\lVert f_\theta(I_i) - y_i \right\rVert_2$
    where $y_i$ denotes the ground-truth quality score of image $I_i$;
    the bilevel stochastic gradient descent method is used to optimize the deep regression network, and the Adam optimizer is used to update the parameters of the deep regression network;
    after the quality prior model has been learned, a small number of face image samples are used to fine-tune it to obtain the final face recognizability evaluation model.
  4. The identity-recognition-oriented face quality perception method according to claim 1, characterized in that establishing an occluded face classification model based on a data-driven strategy in Step 3 specifically comprises the following sub-steps:
    Step 3.1: retraining the VGGface model on the normal face recognition dataset CASIA-Webface and the occluded face detection dataset MAFA;
    Step 3.2: extracting face features with the retrained VGGface model;
    Step 3.3: performing binary classification on the extracted face feature vectors with an SVM classifier, outputting two classes, normal and occluded.
  5. The identity-recognition-oriented face quality perception method according to any one of claims 1 to 4, characterized in that: in Step 4, the ArcFace and ID-GAN models are used as the recognizers for normal faces and occluded faces, respectively.
  6. An identity-recognition-oriented face quality perception system, characterized by comprising a first module, a second module, a third module and a fourth module;
    the first module being configured to perform face detection on an input image and mark the face bounding box;
    the second module being configured to learn, based on an optimization-based meta-learning strategy, the prior knowledge of how the human visual system evaluates image quality from natural-image evaluation tasks, and to further fine-tune the quality prior model with face samples to obtain a face recognizability evaluation model;
    the third module being configured to establish an occluded face classification model based on a data-driven strategy;
    using the trained VGGface model to extract face features, and an SVM classifier trained on the extracted face features to classify normal and occluded faces;
    the fourth module being configured to input the selected recognizable normal faces and occluded faces into the normal face recognizer and the occluded face recognizer, respectively, for recognition.
PCT/CN2021/121776 2020-10-10 2021-09-29 Identity-recognition-oriented face quality perception method and system WO2022073453A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011077049.9 2020-10-10
CN202011077049.9A CN112287770B (zh) 2020-10-10 2020-10-10 Identity-recognition-oriented face quality perception method and system

Publications (1)

Publication Number Publication Date
WO2022073453A1 (zh)

Family

ID=74422407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121776 WO2022073453A1 (zh) 2020-10-10 2021-09-29 Identity-recognition-oriented face quality perception method and system

Country Status (2)

Country Link
CN (1) CN112287770B (zh)
WO (1) WO2022073453A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008971A (zh) * 2019-12-24 2020-04-14 天津工业大学 Aesthetic quality evaluation method for group photo images and real-time shooting guidance system
CN116740452A (zh) * 2023-06-19 2023-09-12 北京数美时代科技有限公司 Image classification method, system and storage medium based on image inpainting
CN116938611A (zh) * 2023-09-19 2023-10-24 苏州宏存芯捷科技有限公司 Information verification method and system based on artificial intelligence
CN116977220A (zh) * 2023-08-07 2023-10-31 中国矿业大学 Blind image motion deblurring algorithm guided by image quality

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287770B (zh) * 2020-10-10 2022-06-07 武汉大学 Identity-recognition-oriented face quality perception method and system
CN116721304B (zh) * 2023-08-10 2023-10-20 武汉大学 Image quality perception method, system and device guided by distorted image restoration

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679450A (zh) * 2017-08-25 2018-02-09 珠海多智科技有限公司 Face recognition method under occlusion conditions based on deep learning
CN108052932A (zh) * 2018-01-10 2018-05-18 重庆邮电大学 Occlusion-adaptive face recognition method
US20180293429A1 (en) * 2017-03-30 2018-10-11 George Mason University Age invariant face recognition using convolutional neural networks and set distances
CN110334615A (zh) * 2019-06-20 2019-10-15 湖北亮诚光电科技有限公司 Method for recognizing faces with occlusion
CN112287770A (zh) * 2020-10-10 2021-01-29 武汉大学 Identity-recognition-oriented face quality perception method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590452A (zh) * 2017-09-04 2018-01-16 武汉神目信息技术有限公司 Identity recognition method and device based on fusion of gait and face
CN108960087A (zh) * 2018-06-20 2018-12-07 中国科学院重庆绿色智能技术研究院 Face image quality assessment method and system based on multi-dimensional evaluation criteria
CN110070010B (zh) * 2019-04-10 2022-06-14 武汉大学 Face attribute association method based on person re-identification

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180293429A1 (en) * 2017-03-30 2018-10-11 George Mason University Age invariant face recognition using convolutional neural networks and set distances
CN107679450A (zh) * 2017-08-25 2018-02-09 珠海多智科技有限公司 Face recognition method under occlusion conditions based on deep learning
CN108052932A (zh) * 2018-01-10 2018-05-18 重庆邮电大学 Occlusion-adaptive face recognition method
CN110334615A (zh) * 2019-06-20 2019-10-15 湖北亮诚光电科技有限公司 Method for recognizing faces with occlusion
CN112287770A (zh) * 2020-10-10 2021-01-29 武汉大学 Identity-recognition-oriented face quality perception method and system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008971A (zh) * 2019-12-24 2020-04-14 天津工业大学 Aesthetic quality evaluation method for group photo images and real-time shooting guidance system
CN116740452A (zh) * 2023-06-19 2023-09-12 北京数美时代科技有限公司 Image classification method, system and storage medium based on image inpainting
CN116740452B (zh) * 2023-06-19 2023-12-22 北京数美时代科技有限公司 Image classification method, system and storage medium based on image inpainting
CN116977220A (zh) * 2023-08-07 2023-10-31 中国矿业大学 Blind image motion deblurring algorithm guided by image quality
CN116977220B (zh) * 2023-08-07 2024-02-13 中国矿业大学 Blind image motion deblurring algorithm guided by image quality
CN116938611A (zh) * 2023-09-19 2023-10-24 苏州宏存芯捷科技有限公司 Information verification method and system based on artificial intelligence
CN116938611B (zh) * 2023-09-19 2023-12-12 苏州宏存芯捷科技有限公司 Information verification method and system based on artificial intelligence

Also Published As

Publication number Publication date
CN112287770B (zh) 2022-06-07
CN112287770A (zh) 2021-01-29

Similar Documents

Publication Publication Date Title
WO2022073453A1 (zh) Identity-recognition-oriented face quality perception method and system
CN106096538B (zh) Face recognition method and device based on an ordinal neural network model
KR101254181B1 (ko) Face recognition method using a hybrid image data preprocessing technique and a radial basis function neural network
CN108268859A (zh) Facial expression recognition method based on deep learning
US20160078319A1 (en) Method, Apparatus and Computer Readable Recording Medium for Detecting a Location of a Face Feature Point Using an Adaboost Learning Algorithm
CN106250877A (zh) Near-infrared face recognition method and device
CN111209907B (zh) Artificial intelligence recognition method for product feature images in a complex light-pollution environment
KR20130037734A (ko) Real-time face recognition system using a radial basis function neural network algorithm
WO2007089027A1 (en) Method and apparatus for automatic eyeglasses detection using a nose ridge mask
CN106650574A (zh) Face recognition method based on PCANet
Mady et al. Efficient real time attendance system based on face detection case study “MEDIU staff”
CN111967592A (zh) Method for generating adversarial images against machine recognition based on separated positive and negative perturbations
CN111126307A (zh) Small-sample face recognition method combining sparse representation and neural networks
Sanyal et al. Two stream deep convolutional neural network for eye state recognition and blink detection
CN111666845A (zh) Small-sample deep learning multimodal sign language recognition method based on key-frame sampling
Chen et al. Robust gender recognition for uncontrolled environment of real-life images
WO2015037973A1 (en) A face identification method
CN112200008A (zh) Face attribute recognition method in community surveillance scenarios
Jacob Comparison of popular face detection and recognition techniques
WO2023160075A1 (zh) Image restoration method and apparatus, device, and medium
Pathak et al. Entropy based CNN for segmentation of noisy color eye images using color, texture and brightness contour features
CN114677749A (zh) Adversarial example generation method for face recognition based on a restricted search space
Vivekanandam et al. Face recognition from video frames using hidden markov model classification model based on modified random feature extraction
Yew et al. A study on face recognition in video surveillance system using multi-class support vector machines
Huang et al. Driver fatigue expression recognition research based on convolutional neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21876971

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21876971

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.09.2023)