WO2020113563A1 - Face image quality evaluation method, apparatus, device and storage medium - Google Patents

Face image quality evaluation method, apparatus, device and storage medium Download PDF

Info

Publication number
WO2020113563A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
value
face
feature vector
evaluated
Prior art date
Application number
PCT/CN2018/119812
Other languages
English (en)
French (fr)
Inventor
吴晓民
Original Assignee
北京比特大陆科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京比特大陆科技有限公司 filed Critical 北京比特大陆科技有限公司
Priority to PCT/CN2018/119812 priority Critical patent/WO2020113563A1/zh
Priority to CN201880098339.6A priority patent/CN112889061A/zh
Publication of WO2020113563A1 publication Critical patent/WO2020113563A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the present application relates to the field of face recognition, for example, to a face image quality evaluation method, apparatus, device, and storage medium.
  • Face image quality can be evaluated at different levels, such as the evaluation of the image’s global characteristics such as illumination and contrast.
  • existing face image quality assessment techniques usually rely on traditional methods such as grayscale histograms, image color, and edge detection.
  • the factors these methods measure reflect overall picture quality rather than the quality of the face image itself.
  • as a result, the judged face image quality is inaccurate, and the methods have high complexity and low efficiency.
  • An embodiment of the present disclosure provides a method for evaluating the quality of face images, including: extracting the face feature vector of a picture to be evaluated using a face recognition model; calculating the L2norm value of the face feature vector of the picture to be evaluated; and
  • determining the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and quality level boundary information.
  • An embodiment of the present disclosure also provides a face image quality evaluation device, including:
  • the feature extraction module is configured to extract the face feature vector of the picture to be evaluated using the face recognition model;
  • the calculation module is configured to calculate the L2norm value of the face feature vector of the picture to be evaluated; and
  • the quality evaluation module is configured to determine the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and the quality level boundary information.
  • An embodiment of the present disclosure also provides a computer device, including the above-mentioned face image quality evaluation device.
  • An embodiment of the present disclosure also provides a computer-readable storage medium that stores computer-executable instructions that are configured to perform the above-described method for evaluating the quality of face images.
  • An embodiment of the present disclosure also provides a computer program product.
  • the computer program product includes a computer program stored on a computer-readable storage medium.
  • the computer program includes program instructions. When the program instructions are executed by a computer, the computer is caused to execute the above-mentioned face image quality evaluation method.
  • An embodiment of the present disclosure also provides an electronic device, including:
  • at least one processor; and
  • a memory communicatively connected to the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, the at least one processor is caused to execute the above-mentioned face image quality evaluation method.
  • FIG. 1 is a flowchart of a method for evaluating the quality of a face image provided by an embodiment of the present disclosure
  • FIG. 2 is a flowchart of another method for evaluating the quality of a face image provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a face image quality evaluation device provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of another face image quality evaluation device provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a method for evaluating the quality of a face image according to an embodiment of the present disclosure. As shown in FIG. 1, the method in this embodiment has the following specific steps:
  • Step S101 Use the face recognition model to extract the face feature vector of the picture to be evaluated.
  • the picture to be evaluated refers to a picture whose face image quality is to be evaluated.
  • the face recognition model may be any face recognition model based on deep neural networks in the prior art, which is not specifically limited here in this embodiment.
  • the face recognition model used to extract the face feature vector of the picture to be evaluated can be the same face recognition model used when performing face recognition on the picture to be evaluated.
  • in this way, the evaluated face image quality of the picture to be evaluated better reflects how well face recognition will perform on it.
  • Step S102 Calculate the L2norm value of the face feature vector of the picture to be evaluated.
  • the L2norm value of the face feature vector of the picture to be evaluated is calculated, and this L2norm value is used as the index data for evaluating the face image quality of the picture to be evaluated. The higher the face image quality in a picture, the larger the feature values in each dimension of its face feature vector, and therefore the larger the L2norm value of the face feature vector.
  • different dimensions in the face feature vector correspond to different face features, such as wrinkles, nose, eyes and other features.
  • calculating the L2norm value of the face feature vector of the picture to be evaluated may be implemented by any method of calculating the L2norm value of the feature vector of the picture in the prior art, which is not specifically limited here in this embodiment.
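  • As an illustration of this step, the following is a minimal Python sketch (not taken from the patent) showing how the L2norm value of an extracted face feature vector could be computed with NumPy; the 512-dimensional random vector is only a stand-in for a real extracted feature vector.

```python
import numpy as np

def l2norm(face_feature_vector: np.ndarray) -> float:
    """Return the L2norm (Euclidean norm) of a face feature vector."""
    # Equivalent to the square root of the sum of squared components.
    return float(np.linalg.norm(face_feature_vector, ord=2))

# Example: a 512-dimensional feature vector standing in for a model's output.
features = np.random.rand(512).astype(np.float32)
print(l2norm(features))
```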
  • Step S103 Determine the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and the quality level boundary information.
  • the L2norm value of the face feature vector of the picture is used as the index data for evaluating the image quality of the face of the picture. According to the size of the L2norm value of the face feature vector of the picture, the quality of the face image of the picture is divided into multiple face image quality levels.
  • Each face image quality level corresponds to a set of quality level boundary information, which is used to define the range of the L2norm value of the face feature vector corresponding to the face image quality level.
  • a large number of sample pictures can be obtained, and the L2norm values of the face feature vectors of all sample pictures can be calculated.
  • the maximum and minimum of the L2norm values of the face feature vectors of all sample pictures form an L2norm value interval. This interval is divided into a preset number of sub-intervals, which yields (preset number - 1) division points.
  • taking each division point as a level boundary value gives (preset number - 1) level boundary values, and these (preset number - 1) level boundary values determine the preset number of face image quality levels.
  • the quality level boundary information includes boundary values of a plurality of face image quality levels.
  • after the L2norm value of the face feature vector of the picture to be evaluated has been calculated, the face image quality level corresponding to this L2norm value can be determined by comparing it with the boundary values of each face image quality level.
  • the face image quality level corresponding to the L2norm value of the face feature vector of the picture to be evaluated is then taken as the face image quality level of the picture to be evaluated.
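  • The two steps just described (deriving boundary values from sample pictures, then comparing a picture's L2norm value against them) can be sketched as follows. This is an illustration only, assuming the sample pictures' L2norm values are already available as a list of floats; the function names, the example values, and the tie-breaking choice are not taken from the patent.

```python
import numpy as np

def build_boundary_values(sample_l2norms, num_levels):
    """Divide the [min, max] interval of the sample L2norm values into
    `num_levels` equal sub-intervals and return the (num_levels - 1)
    boundary values, in increasing order."""
    lo, hi = min(sample_l2norms), max(sample_l2norms)
    step = (hi - lo) / num_levels
    return [lo + i * step for i in range(1, num_levels)]

def quality_level(l2norm_value, boundary_values):
    """Return the quality level index of a picture: 0 for the lowest-quality
    level, len(boundary_values) for the highest-quality level."""
    # side="left" means a value equal to a boundary falls into the lower level,
    # matching the "less than or equal to" convention used later in the text.
    return int(np.searchsorted(boundary_values, l2norm_value, side="left"))

# Illustrative usage: three levels -> two boundary values.
sample_l2norms = [8.2, 9.5, 11.3, 14.8, 17.6, 20.1]
boundaries = build_boundary_values(sample_l2norms, num_levels=3)
print(boundaries, quality_level(15.0, boundaries))
```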
  • in the embodiments of the present disclosure, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information.
  • this makes it possible to accurately evaluate the face image quality level of a picture based on its face features, reduces the complexity of face image quality evaluation, and improves the efficiency of face image quality evaluation.
  • FIG. 2 is a flowchart of another method for evaluating the quality of a face image provided by an embodiment of the present disclosure.
  • on the basis of the embodiment shown in FIG. 1, in this embodiment an initial face recognition model can also be trained to obtain the face recognition model, and the face recognition model together with a picture set containing a large number of sample pictures of the current application scene can be used to determine the quality level boundary information.
  • Step S201 Perform model training on the initial face recognition model to obtain a face recognition model.
  • during the evaluation of the face image quality of the picture to be evaluated, the face recognition model used to extract the face feature vector of the picture to be evaluated can be the same face recognition model used when performing face recognition on that picture, so that the evaluated face image quality of the picture better reflects how well face recognition will perform on it.
  • any face recognition model based on deep neural networks in the prior art may be selected as the initial face recognition model, which is not specifically limited herein in this embodiment.
  • to make the face recognition model more suitable for face image quality evaluation, L2 regularization of the face feature vectors extracted by the initial face recognition model is added during its model training.
  • specifically, L2 regularization is applied at the feature layer, where the feature layer refers to the layer or layers used to extract a picture's feature vector when the face recognition model is deployed.
  • apart from the feature layer, the training process is consistent with the prior art and is not repeated here.
  • the structure and size of the deep neural network of the face recognition model can be adjusted according to different application scenarios to balance the efficiency and accuracy of the face recognition model.
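  • The patent does not spell out the exact form of the L2 regularization applied at the feature layer. The sketch below is one possible reading (an assumption, not the patent's definitive training recipe), in which an L2-norm penalty on the extracted feature vectors is added to the recognition loss during training; the model interface that returns both logits and features, and the weight value, are illustrative.

```python
import torch

def training_step(model, images, labels, criterion, l2_feature_weight=1e-4):
    """One training step with an extra L2 term on the extracted face features.

    Assumes `model(images)` returns (logits, features), where `features` is the
    output of the feature layer used at deployment time to extract the face
    feature vectors, and `criterion` is e.g. softmax cross-entropy.
    """
    logits, features = model(images)
    recognition_loss = criterion(logits, labels)
    # L2 regularization of the extracted face feature vectors: penalize the
    # mean L2norm of the features produced by the feature layer.
    feature_l2_term = features.norm(p=2, dim=1).mean()
    return recognition_loss + l2_feature_weight * feature_l2_term
```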
  • Step S202 Acquire a picture set of the current application scene.
  • the picture set of the current application scene includes a large number of sample pictures of the current application scene.
  • the number of sample pictures in the picture set of the current application scene is greater than the preset number of samples to ensure the accuracy of the quality level boundary information determined based on the picture set of the current application scene.
  • the preset sample number may be, for example, one thousand, several thousand, ten thousand, or tens of thousands.
  • the preset sample number can be set by a technician according to actual application scenarios and empirical values, which is not specifically limited here in this embodiment.
  • after the face recognition model is trained, the picture set of the current application scene, which contains a large number of pictures, can be used together with the face recognition model to calculate the quality level boundary information corresponding to the current application scene.
  • the quality level boundary information is determined through the picture collection of the actual application scene, so that the quality level boundary information is more accurate.
  • the quality level boundary information includes boundary values of a plurality of face image quality levels.
  • the determined quality level boundary information may also be based on the picture sets of multiple application scenes, so that the determined quality level boundary information is more widely applicable.
  • Step S203 Use the face recognition model to extract the face feature vector of the sample pictures in the picture set.
  • Step S204 Calculate the L2norm value of the face feature vector of the sample picture.
  • the L2norm value of the face feature vector of the picture is used as the index data for evaluating the image quality of the face of the picture. According to the size of the L2norm value of the face feature vector of the picture, the quality of the face image of the picture is divided into multiple face image quality levels.
  • Each face image quality level corresponds to a set of quality level boundary information, which is used to define the range of the L2norm value of the face feature vector corresponding to the face image quality level.
  • a face recognition model is used to extract the face feature vector of the sample pictures in the picture set, and the L2norm value of the face feature vector of each sample picture is calculated.
  • the quality level boundary information corresponding to the current application scene is determined according to the L2norm values of the face feature vectors of the sample pictures.
  • Step S205 Obtain the maximum value and the minimum value of the L2norm values of the face feature vector of the sample picture.
  • the L2norm values of the face feature vectors of all sample pictures can be sorted to obtain the maximum and minimum values of the L2norm values of the face feature vectors of the sample pictures.
  • Step S206 Determine the boundary value of each quality level according to the maximum and minimum values and the number of preset levels.
  • the boundary value of each quality level is determined according to the maximum and minimum values and the number of preset levels, which can be specifically implemented in the following manner:
  • the preset number of levels may be 2, 3, or 5, etc.
  • the preset number of levels may be set by a technician according to the current application scenario and experience value, which is not specifically limited here in this embodiment.
  • the number of preset levels can be 3.
  • the data interval between the minimum and maximum values can be expressed as [minimum value, maximum value], and the 2 points that divide this interval into 3 equal sub-intervals can be determined; these 2 bisection points are the boundary values between adjacent sub-intervals.
  • specifically, a variable is computed: margin = (maximum value - minimum value) ÷ 3.
  • the boundary values between adjacent sub-intervals, and thus between adjacent quality levels, are then, from small to large: (minimum value + margin) and (maximum value - margin).
  • the following three quality levels can be obtained:
  • the first quality level indicates that the face image quality is good: the L2norm value of the face feature vector of the picture is greater than (maximum value - margin).
  • the second quality level indicates that the face image quality is average: the L2norm value of the face feature vector of the picture is less than or equal to (maximum value - margin) and greater than (minimum value + margin).
  • the third quality level indicates that the face image quality is poor: the L2norm value of the face feature vector of the picture is less than or equal to (minimum value + margin).
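  • A minimal sketch of this three-level example follows; the level names returned are illustrative labels, not terms defined in the patent.

```python
def three_level_quality(l2norm_value, minimum, maximum):
    """Map a picture's L2norm value to one of the three quality levels above."""
    margin = (maximum - minimum) / 3
    if l2norm_value > maximum - margin:
        return "good"       # first quality level
    if l2norm_value > minimum + margin:
        return "average"    # second quality level
    return "poor"           # third quality level
```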
  • the face image quality evaluation can be performed on any one picture through the following steps S207-S209 to evaluate the face image quality level of the picture.
  • Step S207 Use the face recognition model to extract the face feature vector of the picture to be evaluated.
  • the picture to be evaluated refers to a picture whose face image quality is to be evaluated.
  • in this step, the face feature vector of the picture to be evaluated is extracted using the face recognition model trained in step S201 above.
  • Step S208 Calculate the L2norm value of the face feature vector of the picture to be evaluated.
  • the L2norm value of the face feature vector of the picture to be evaluated is calculated, and the L2norm value is used as the index data for evaluating the image quality of the face of the picture to be evaluated.
  • calculating the L2norm value of the face feature vector of the picture to be evaluated may be implemented by any method of calculating the L2norm value of the feature vector of the picture in the prior art, which is not specifically limited here in this embodiment.
  • Step S209 Determine the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and the quality level boundary information.
  • after the L2norm value of the face feature vector of the picture to be evaluated has been calculated, the face image quality level corresponding to this L2norm value can be determined by comparing it with the boundary values of each face image quality level.
  • the face image quality level corresponding to the L2norm value of the face feature vector of the picture to be evaluated is then taken as the face image quality level of the picture to be evaluated.
  • the embodiment of the present disclosure determines the quality level boundary information applicable to the current application scene based on the L2norm values of the face feature vectors of a large number of sample pictures in a picture set of the current actual application scene, so that, when applied to the current application scene, the face image quality of pictures is checked more accurately. Then, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information. The face image quality level of a picture can thus be evaluated accurately based on its face features, the complexity of face image quality evaluation is reduced, and the efficiency of face image quality evaluation is improved.
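  • Tying the steps of FIG. 2 together, the sketch below shows how the flow could look in code, building on the earlier boundary sketch. The feature-extraction function is only a placeholder standing in for the trained face recognition model's feature layer, and all names and values are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def extract_face_feature(picture) -> np.ndarray:
    """Placeholder for steps S203/S207: run the trained face recognition model
    on a picture and return the face feature vector from its feature layer."""
    return np.random.rand(512).astype(np.float32)  # dummy features, illustration only

def determine_boundaries(sample_pictures, num_levels=3):
    """Steps S203-S206: quality level boundary values for the current application scene."""
    norms = [float(np.linalg.norm(extract_face_feature(p))) for p in sample_pictures]
    lo, hi = min(norms), max(norms)
    return [lo + (hi - lo) * i / num_levels for i in range(1, num_levels)]

def evaluate_picture(picture, boundaries):
    """Steps S207-S209: index of the quality level of the picture to be evaluated
    (0 = lowest quality, len(boundaries) = highest quality)."""
    l2norm_value = float(np.linalg.norm(extract_face_feature(picture)))
    return int(np.searchsorted(boundaries, l2norm_value, side="left"))

# Illustrative usage with dummy "pictures".
boundaries = determine_boundaries(sample_pictures=[object()] * 1000)
print(evaluate_picture(object(), boundaries))
```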
  • the face image quality evaluation device 30 includes: a feature extraction module 301, a calculation module 302, and a quality evaluation module 303.
  • the feature extraction module 301 is used to extract the facial feature vector of the image to be evaluated using the facial recognition model.
  • the calculation module 302 is used to calculate the L2norm value of the face feature vector of the picture to be evaluated.
  • the quality evaluation module 303 is used to determine the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and the quality level boundary information.
  • the apparatus provided by the embodiment of the present disclosure may be specifically used to execute the method flow provided by the method embodiment shown in FIG. 1 above, and specific functions will not be repeated here.
  • in the embodiments of the present disclosure, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information.
  • this makes it possible to accurately evaluate the face image quality level of a picture based on its face features, reduces the complexity of face image quality evaluation, and improves the efficiency of face image quality evaluation.
  • the face image quality evaluation device 30 further includes: a model training module 304.
  • the model training module 304 is configured to: perform model training on an initial face recognition model to obtain the face recognition model, where L2 regularization of the face feature vectors extracted by the initial face recognition model is added during the model training of the initial face recognition model.
  • the model training module 304 is further configured to: acquire the picture set of the current application scene.
  • the quality evaluation module 303 is further configured to: determine the quality level boundary information according to the picture set of the current application scene and the face recognition model.
  • the quality evaluation module 303 is further configured to: extract the face feature vectors of the sample pictures in the picture set using the face recognition model; calculate the L2norm values of the face feature vectors of the sample pictures; and determine the quality level boundary information corresponding to the current application scene according to those L2norm values.
  • the quality evaluation module 303 is further configured to: obtain the maximum and minimum of the L2norm values of the face feature vectors of the sample pictures; and determine the boundary value of each quality level according to the maximum and minimum values and the preset number of levels.
  • the quality evaluation module 303 is further configured to: divide the data interval between the minimum value and the maximum value into the preset number of equal sub-intervals to obtain the boundary value between two adjacent sub-intervals, and use the boundary value between two adjacent sub-intervals as the boundary value between the two corresponding adjacent quality levels.
  • the apparatus provided by the embodiment of the present disclosure may be specifically used to execute the method flow provided by the method embodiment shown in FIG. 2, and specific functions will not be repeated here.
  • the embodiment of the present disclosure determines the quality level boundary information applicable to the current application scene based on the L2norm values of the face feature vectors of a large number of sample pictures in a picture set of the current actual application scene, so that, when applied to the current application scene, the face image quality of pictures is checked more accurately. Then, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information. The face image quality level of a picture can thus be evaluated accurately based on its face features, the complexity of face image quality evaluation is reduced, and the efficiency of face image quality evaluation is improved.
  • An embodiment of the present disclosure also provides a computer device, which includes the face image quality evaluation device provided by any of the foregoing embodiments.
  • An embodiment of the present disclosure also provides a computer-readable storage medium that stores computer-executable instructions that are configured to perform the above-mentioned face image quality assessment method.
  • An embodiment of the present disclosure also provides a computer program product.
  • the computer program product includes a computer program stored on a computer-readable storage medium.
  • the computer program includes program instructions. When the program instructions are executed by a computer, the computer is caused to execute the above-mentioned face image quality evaluation method.
  • the aforementioned computer-readable storage medium may be a transient computer-readable storage medium or a non-transitory computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, whose structure is shown in FIG. 5, the electronic device includes:
  • at least one processor (processor) 100 (one processor 100 is taken as an example in FIG. 5) and a memory (memory) 101; the electronic device may further include a communication interface (Communication Interface) 102 and a bus 103.
  • the processor 100, the communication interface 102, and the memory 101 can complete communication with each other through the bus 103.
  • the communication interface 102 can be used for information transmission.
  • the processor 100 may call logic instructions in the memory 101 to execute the face image quality evaluation method in the above embodiments.
  • logic instructions in the memory 101 described above can be implemented in the form of software functional units and sold or used as independent products, and can be stored in a computer-readable storage medium.
  • the memory 101 is a computer-readable storage medium and can be used to store software programs and computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure.
  • the processor 100 executes functional applications and data processing by running the software programs, instructions, and modules stored in the memory 101, that is, implements the face image quality evaluation method in the foregoing method embodiments.
  • the memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required for at least one function; the storage data area may store data created according to the use of a terminal device and the like.
  • the memory 101 may include a high-speed random access memory, and may also include a non-volatile memory.
  • the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present disclosure.
  • the aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or another medium that can store program code; it may also be a transitory storage medium.
  • although the terms first, second, etc. may be used in this application to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • the first element can be called the second element, and likewise the second element can be called the first element, as long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently.
  • the first element and the second element are both elements, but they may not be the same element.
  • the various aspects, implementations, or features in the described embodiments can be used alone or in any combination.
  • Various aspects in the described embodiments may be implemented by software, hardware, or a combination of software and hardware.
  • the described embodiments may also be embodied by a computer-readable medium that stores computer-readable code including instructions executable by at least one computing device.
  • the computer-readable medium can be associated with any data storage device capable of storing data, which can be read by a computer system.
  • examples of computer-readable media include read-only memory, random access memory, CD-ROMs, HDDs, DVDs, magnetic tape, optical data storage devices, and the like.
  • the computer-readable medium may also be distributed in computer systems connected through a network, so that computer-readable codes can be stored and executed in a distributed manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

A face image quality evaluation method, apparatus, device, and storage medium. The method extracts a face feature vector of a picture to be evaluated using a face recognition model (S101); calculates an L2norm value of the face feature vector of the picture to be evaluated (S102); and determines a face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and quality level boundary information (S103). The method can accurately evaluate the face image quality level of a picture based on the face features of the picture, can accurately evaluate the face image quality of the picture, reduces the complexity of face image quality evaluation, and improves the efficiency of face image quality evaluation.

Description

Face image quality evaluation method, apparatus, device and storage medium
Technical Field
The present application relates to the field of face recognition, and for example to a face image quality evaluation method, apparatus, device, and storage medium.
Background Art
In some application scenarios, pictures with high face image quality need to be selected from a large number of pictures. Face image quality can be evaluated at different levels, for example by evaluating global image characteristics such as illumination and contrast.
Existing face image quality evaluation techniques usually rely on traditional methods such as grayscale histograms, image color, and edge detection. The factors they measure reflect overall picture quality rather than the quality of the face image itself, so the resulting face image quality judgment is inaccurate, and the methods have high complexity and low efficiency.
Summary of the Invention
An embodiment of the present disclosure provides a face image quality evaluation method, including:
extracting a face feature vector of a picture to be evaluated using a face recognition model;
calculating an L2norm value of the face feature vector of the picture to be evaluated; and
determining a face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and quality level boundary information.
An embodiment of the present disclosure further provides a face image quality evaluation apparatus, including:
a feature extraction module, configured to extract a face feature vector of a picture to be evaluated using a face recognition model;
a calculation module, configured to calculate an L2norm value of the face feature vector of the picture to be evaluated; and
a quality evaluation module, configured to determine a face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and quality level boundary information.
An embodiment of the present disclosure further provides a computer device including the above face image quality evaluation apparatus.
An embodiment of the present disclosure further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the above face image quality evaluation method.
An embodiment of the present disclosure further provides a computer program product. The computer program product includes a computer program stored on a computer-readable storage medium, the computer program includes program instructions, and the program instructions, when executed by a computer, cause the computer to perform the above face image quality evaluation method.
An embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to perform the above face image quality evaluation method.
Brief Description of the Drawings
One or more embodiments are illustrated by the corresponding accompanying drawings. These illustrations and drawings do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, the drawings are not drawn to scale, and in the drawings:
FIG. 1 is a flowchart of a face image quality evaluation method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of another face image quality evaluation method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a face image quality evaluation apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another face image quality evaluation apparatus provided by an embodiment of the present disclosure;
and FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To provide a more complete understanding of the features and technical content of the embodiments of the present disclosure, the implementation of the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings. The drawings are for reference and illustration only and are not intended to limit the embodiments of the present disclosure. In the following technical description, for ease of explanation, numerous details are provided to give a full understanding of the disclosed embodiments. However, one or more embodiments may still be implemented without these details. In other cases, well-known structures and devices may be shown in simplified form to simplify the drawings.
An embodiment of the present disclosure provides a face image quality evaluation method. FIG. 1 is a flowchart of the face image quality evaluation method provided by this embodiment of the present disclosure. As shown in FIG. 1, the method in this embodiment includes the following steps:
Step S101: extract the face feature vector of the picture to be evaluated using the face recognition model.
Here, the picture to be evaluated refers to a picture whose face image quality is to be evaluated.
In this embodiment, the face recognition model may be any deep-neural-network-based face recognition model in the prior art, which is not specifically limited here in this embodiment.
In addition, during the evaluation of the face image quality of the picture to be evaluated, the face recognition model used to extract the face feature vector of the picture to be evaluated may be the same face recognition model used when performing face recognition on the picture to be evaluated, so that the evaluated face image quality of the picture better reflects how well face recognition will perform on it.
When evaluating the face image quality of the picture to be evaluated, the face feature vector of the picture is first extracted, and the face image quality of the picture is evaluated according to the quality of this face feature vector.
Step S102: calculate the L2norm value of the face feature vector of the picture to be evaluated.
After the face feature vector of the picture to be evaluated is obtained, its L2norm value is calculated and used as the index data for evaluating the face image quality of the picture to be evaluated. The higher the face image quality in a picture, the larger the feature values in each dimension of its face feature vector, and the larger the L2norm value of the face feature vector.
Different dimensions of the face feature vector correspond to different face features, for example wrinkles, the nose, and the eyes.
The larger the L2norm value of the face feature vector of the picture to be evaluated, the better the face image quality of the picture to be evaluated.
In this embodiment, the L2norm value of the face feature vector of the picture to be evaluated may be calculated by any method in the prior art for calculating the L2norm value of a picture's feature vector, which is not specifically limited here in this embodiment.
Step S103: determine the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and the quality level boundary information.
In this embodiment, the L2norm value of a picture's face feature vector is used as the index data for evaluating the picture's face image quality. According to the magnitude of this L2norm value, the face image quality of pictures is divided into multiple face image quality levels.
Each face image quality level corresponds to a set of quality level boundary information, which is used to define the range of L2norm values of face feature vectors corresponding to that face image quality level.
Specifically, a large number of sample pictures can be obtained and the L2norm values of the face feature vectors of all sample pictures calculated. The maximum and minimum of these L2norm values form an L2norm value interval, which is divided into a preset number of sub-intervals, yielding (preset number - 1) division points. Taking each division point as a level boundary value gives (preset number - 1) level boundary values, and these (preset number - 1) level boundary values determine the preset number of face image quality levels.
The quality level boundary information includes the boundary values of the multiple face image quality levels.
In this step, after the L2norm value of the face feature vector of the picture to be evaluated has been calculated, the face image quality level corresponding to this L2norm value can be determined by comparing it with the boundary values of each face image quality level, and this level is taken as the face image quality level of the picture to be evaluated.
In the embodiments of the present disclosure, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information. The face image quality level of a picture can thus be evaluated accurately based on its face features, the complexity of face image quality evaluation is reduced, and the efficiency of face image quality evaluation is improved.
FIG. 2 is a flowchart of another face image quality evaluation method provided by an embodiment of the present disclosure. On the basis of the embodiment shown in FIG. 1, in this embodiment, an initial face recognition model may also be trained to obtain the face recognition model, and the face recognition model together with a picture set containing a large number of sample pictures of the current application scene may be used to determine the quality level boundary information.
As shown in FIG. 2, the method includes the following steps:
Step S201: perform model training on the initial face recognition model to obtain the face recognition model.
During the evaluation of the face image quality of the picture to be evaluated, the face recognition model used to extract the face feature vector of the picture to be evaluated may be the same face recognition model used when performing face recognition on the picture to be evaluated, so that the evaluated face image quality of the picture better reflects how well face recognition will perform on it.
To make the face recognition model more suitable for face image quality evaluation, L2 regularization of the face feature vectors extracted by the face recognition model is added during its model training; the rest of the training process is consistent with the prior art and is not repeated here.
In addition, any deep-neural-network-based face recognition model in the prior art may be selected as the initial face recognition model, which is not specifically limited here in this embodiment.
In this step, during the model training of the initial face recognition model, L2 regularization of the face feature vectors extracted by the initial face recognition model is added. Specifically, L2 regularization is applied at the feature layer, where the feature layer refers to the layer or layers used to extract a picture's feature vector when the face recognition model is deployed; apart from the feature layer, the training process is consistent with the prior art and is not repeated here.
Optionally, the structure and size of the deep neural network of the face recognition model may also be adjusted according to the application scene to balance the efficiency and accuracy of the face recognition model.
Step S202: acquire the picture set of the current application scene.
The picture set of the current application scene contains a large number of sample pictures of the current application scene.
The more sample pictures the picture set contains, the more accurate the quality level boundary information determined from the picture set of the current application scene.
In this embodiment, the number of sample pictures in the picture set of the current application scene is greater than a preset sample number, to ensure the accuracy of the quality level boundary information determined from the picture set of the current application scene.
The preset sample number may be one thousand, several thousand, ten thousand, tens of thousands, and so on. It may be set by a technician according to the actual application scene and empirical values, which is not specifically limited here in this embodiment.
After the face recognition model is trained, in steps S202-S206 the picture set of the current application scene containing a large number of pictures can be used together with the face recognition model to calculate the quality level boundary information corresponding to the current application scene. Determining the quality level boundary information from a picture set of the actual application scene makes the quality level boundary information more accurate.
The quality level boundary information includes the boundary values of the multiple face image quality levels.
Optionally, in another implementation of this embodiment, the quality level boundary information may also be determined based on picture sets of multiple application scenes, so that the determined quality level boundary information is more widely applicable.
Step S203: extract the face feature vectors of the sample pictures in the picture set using the face recognition model.
Step S204: calculate the L2norm values of the face feature vectors of the sample pictures.
In this embodiment, the L2norm value of a picture's face feature vector is used as the index data for evaluating the picture's face image quality. According to the magnitude of this L2norm value, the face image quality of pictures is divided into multiple face image quality levels.
Each face image quality level corresponds to a set of quality level boundary information, which is used to define the range of L2norm values of face feature vectors corresponding to that face image quality level.
After the picture set of the current application scene is obtained, in steps S203-S204 the face recognition model is used to extract the face feature vectors of the sample pictures in the picture set, and the L2norm value of the face feature vector of each sample picture is calculated.
After the L2norm values of the face feature vectors of all sample pictures are obtained, the quality level boundary information corresponding to the current application scene is determined from the L2norm values of the face feature vectors of the sample pictures in the following steps S205-S206.
Step S205: obtain the maximum and minimum of the L2norm values of the face feature vectors of the sample pictures.
After the L2norm values of the face feature vectors of all sample pictures are obtained, these L2norm values can be sorted to obtain the maximum and minimum of the L2norm values of the face feature vectors of the sample pictures.
Step S206: determine the boundary value of each quality level according to the maximum and minimum values and the preset number of levels.
In this embodiment, determining the boundary value of each quality level according to the maximum and minimum values and the preset number of levels may specifically be implemented as follows:
the data interval between the minimum value and the maximum value is divided into the preset number of equal sub-intervals to obtain the boundary value between two adjacent sub-intervals, and the boundary value between two adjacent sub-intervals is used as the boundary value between the two corresponding adjacent quality levels.
The preset number of levels may be 2, 3, 5, and so on, and may be set by a technician according to the current application scene and empirical values, which is not specifically limited here in this embodiment.
For example, assume the preset number of levels is 3. After the maximum and minimum of the L2norm values of the face feature vectors of the sample pictures are determined, the data interval between them can be expressed as [minimum value, maximum value], and the 2 points dividing this interval into 3 equal sub-intervals can be determined; these 2 bisection points are the boundary values between adjacent sub-intervals. Specifically, a variable is computed: margin = (maximum value - minimum value) ÷ 3. The 2 bisection points of the data interval are then (minimum value + margin) and (maximum value - margin); that is, the boundary values between adjacent sub-intervals, from small to large, are (minimum value + margin) and (maximum value - margin), and so the boundary values between adjacent quality levels, from small to large, are (minimum value + margin) and (maximum value - margin). In summary, the following three quality levels are obtained:
The first quality level indicates that the face image quality is good: the L2norm value of the face feature vector of the picture is greater than (maximum value - margin). The second quality level indicates that the face image quality is average: the L2norm value of the face feature vector of the picture is less than or equal to (maximum value - margin) and greater than (minimum value + margin). The third quality level indicates that the face image quality is poor: the L2norm value of the face feature vector of the picture is less than or equal to (minimum value + margin).
After the boundary value of each quality level is obtained, the face image quality of any picture can be evaluated through the following steps S207-S209 to determine the face image quality level of the picture.
Step S207: extract the face feature vector of the picture to be evaluated using the face recognition model.
Here, the picture to be evaluated refers to a picture whose face image quality is to be evaluated.
When evaluating the face image quality of the picture to be evaluated, the face feature vector of the picture is first extracted, and the face image quality of the picture is evaluated according to the quality of this face feature vector.
In this step, the face feature vector of the picture to be evaluated is extracted using the face recognition model trained in step S201 above.
Step S208: calculate the L2norm value of the face feature vector of the picture to be evaluated.
After the face feature vector of the picture to be evaluated is obtained, its L2norm value is calculated and used as the index data for evaluating the face image quality of the picture to be evaluated. The larger the L2norm value of the face feature vector of the picture to be evaluated, the better the face image quality of the picture to be evaluated.
In this embodiment, the L2norm value of the face feature vector of the picture to be evaluated may be calculated by any method in the prior art for calculating the L2norm value of a picture's feature vector, which is not specifically limited here in this embodiment.
Step S209: determine the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and the quality level boundary information.
In this step, after the L2norm value of the face feature vector of the picture to be evaluated has been calculated, the face image quality level corresponding to this L2norm value can be determined by comparing it with the boundary values of each face image quality level, and this level is taken as the face image quality level of the picture to be evaluated.
In the embodiments of the present disclosure, the quality level boundary information applicable to the current application scene is determined based on the L2norm values of the face feature vectors of a large number of sample pictures in a picture set of the current actual application scene, so that, when applied to the current application scene, the face image quality of pictures is checked more accurately. Then, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information. The face image quality level of a picture can thus be evaluated accurately based on its face features, the complexity of face image quality evaluation is reduced, and the efficiency of face image quality evaluation is improved.
An embodiment of the present disclosure further provides a face image quality evaluation apparatus. As shown in FIG. 3, the face image quality evaluation apparatus 30 includes: a feature extraction module 301, a calculation module 302, and a quality evaluation module 303.
Specifically, the feature extraction module 301 is configured to extract the face feature vector of the picture to be evaluated using the face recognition model.
The calculation module 302 is configured to calculate the L2norm value of the face feature vector of the picture to be evaluated.
The quality evaluation module 303 is configured to determine the face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and the quality level boundary information.
The apparatus provided by this embodiment of the present disclosure may be specifically used to execute the method flow provided by the method embodiment shown in FIG. 1 above, and its specific functions are not repeated here.
In the embodiments of the present disclosure, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information. The face image quality level of a picture can thus be evaluated accurately based on its face features, the complexity of face image quality evaluation is reduced, and the efficiency of face image quality evaluation is improved.
On the basis of the embodiment shown in FIG. 3 above, in this embodiment, as shown in FIG. 4, the face image quality evaluation apparatus 30 further includes a model training module 304.
Specifically, the model training module 304 is configured to:
perform model training on an initial face recognition model to obtain the face recognition model, where L2 regularization of the face feature vectors extracted by the initial face recognition model is added during the model training of the initial face recognition model.
Optionally, the model training module 304 is further configured to:
acquire the picture set of the current application scene.
Optionally, the quality evaluation module 303 is further configured to:
determine the quality level boundary information according to the picture set of the current application scene and the face recognition model.
Optionally, the quality evaluation module 303 is further configured to:
extract the face feature vectors of the sample pictures in the picture set using the face recognition model; calculate the L2norm values of the face feature vectors of the sample pictures; and determine the quality level boundary information corresponding to the current application scene according to the L2norm values of the face feature vectors of the sample pictures.
Optionally, the quality evaluation module 303 is further configured to:
obtain the maximum and minimum of the L2norm values of the face feature vectors of the sample pictures; and determine the boundary value of each quality level according to the maximum and minimum values and the preset number of levels.
Optionally, the quality evaluation module 303 is further configured to:
divide the data interval between the minimum value and the maximum value into the preset number of equal sub-intervals to obtain the boundary value between two adjacent sub-intervals, and use the boundary value between two adjacent sub-intervals as the boundary value between the two corresponding adjacent quality levels.
The apparatus provided by this embodiment of the present disclosure may be specifically used to execute the method flow provided by the method embodiment shown in FIG. 2 above, and its specific functions are not repeated here.
In the embodiments of the present disclosure, the quality level boundary information applicable to the current application scene is determined based on the L2norm values of the face feature vectors of a large number of sample pictures in a picture set of the current actual application scene, so that, when applied to the current application scene, the face image quality of pictures is checked more accurately. Then, the face feature vector of the picture to be evaluated is extracted using the face recognition model; the L2norm value of the face feature vector of the picture to be evaluated is calculated; and the face image quality level of the picture to be evaluated is determined according to this L2norm value and the quality level boundary information. The face image quality level of a picture can thus be evaluated accurately based on its face features, the complexity of face image quality evaluation is reduced, and the efficiency of face image quality evaluation is improved.
An embodiment of the present disclosure further provides a computer device including the face image quality evaluation apparatus provided by any of the above embodiments.
An embodiment of the present disclosure further provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the above face image quality evaluation method.
An embodiment of the present disclosure further provides a computer program product. The computer program product includes a computer program stored on a computer-readable storage medium, the computer program includes program instructions, and the program instructions, when executed by a computer, cause the computer to perform the above face image quality evaluation method.
The above computer-readable storage medium may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
An embodiment of the present disclosure further provides an electronic device whose structure is shown in FIG. 5. The electronic device includes:
at least one processor 100 (one processor 100 is taken as an example in FIG. 5) and a memory 101, and may further include a communication interface 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 can communicate with one another through the bus 103. The communication interface 102 may be used for information transmission. The processor 100 may call logic instructions in the memory 101 to execute the face image quality evaluation method of the above embodiments.
In addition, when the logic instructions in the memory 101 are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 101 may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing by running the software programs, instructions, and modules stored in the memory 101, that is, implements the face image quality evaluation method in the above method embodiments.
The memory 101 may include a program storage area and a data storage area. The program storage area may store the operating system and application programs required for at least one function; the data storage area may store data created according to the use of a terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory and may also include a non-volatile memory.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or another medium that can store program code, or it may be a transitory storage medium.
Although the terms "first", "second", etc. may be used in this application to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, without changing the meaning of the description, the first element could be called the second element, and likewise the second element could be called the first element, as long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently. The first element and the second element are both elements, but they may not be the same element.
The terms used in this application are only used to describe the embodiments and are not intended to limit the claims. As used in the description of the embodiments and in the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, when used in this application, the term "comprise" and its variants "comprises" and/or "comprising" specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups of these.
The aspects, implementations, or features in the described embodiments can be used alone or in any combination. The aspects in the described embodiments may be implemented by software, hardware, or a combination of software and hardware. The described embodiments may also be embodied by a computer-readable medium that stores computer-readable code including instructions executable by at least one computing device. The computer-readable medium may be associated with any data storage device capable of storing data that can be read by a computer system. Examples of computer-readable media include read-only memory, random access memory, CD-ROMs, HDDs, DVDs, magnetic tape, optical data storage devices, and the like. The computer-readable medium may also be distributed among computer systems connected through a network, so that the computer-readable code can be stored and executed in a distributed manner.
The above technical description may refer to the accompanying drawings, which form a part of this application and show, by way of description, implementations in accordance with the described embodiments. Although these embodiments are described in sufficient detail to enable those skilled in the art to implement them, they are non-limiting; other embodiments may be used, and changes may be made without departing from the scope of the described embodiments. For example, the order of operations described in the flowcharts is non-limiting, so the order of two or more operations illustrated in and described according to a flowchart may be changed according to several embodiments. As another example, in several embodiments, one or more operations illustrated in and described according to a flowchart are optional or may be deleted. In addition, certain steps or functions may be added to the disclosed embodiments, or the order of two or more steps may be permuted. All such variations are considered to be encompassed by the disclosed embodiments and the claims.
In addition, terminology is used in the above technical description to provide a thorough understanding of the described embodiments. However, excessive detail is not required to implement the described embodiments. Therefore, the above description of the embodiments is presented for explanation and description. The embodiments presented in the above description, and the examples disclosed according to these embodiments, are provided separately to add context and aid understanding of the described embodiments. The above description is not intended to be exhaustive or to limit the described embodiments to the precise form disclosed. In light of the above teachings, several modifications, adaptations, and variations are possible. In some cases, well-known processing steps have not been described in detail so as not to unnecessarily obscure the described embodiments.

Claims (17)

  1. A face image quality evaluation method, comprising:
    extracting a face feature vector of a picture to be evaluated using a face recognition model;
    calculating an L2norm value of the face feature vector of the picture to be evaluated; and
    determining a face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and quality level boundary information.
  2. The method according to claim 1, wherein before extracting the face feature vector of the picture to be evaluated using the face recognition model, the method further comprises:
    performing model training on an initial face recognition model to obtain the face recognition model;
    wherein, during the model training of the initial face recognition model, L2 regularization of the face feature vectors extracted by the initial face recognition model is added.
  3. The method according to claim 1 or 2, wherein before determining the face image quality level of the picture to be evaluated according to the L2norm value of the face feature of the picture to be evaluated and the quality level boundary information, the method further comprises:
    determining the quality level boundary information according to a picture set of a current application scene and the face recognition model.
  4. The method according to claim 3, wherein determining the quality level boundary information according to the picture set of the current application scene and the face recognition model comprises:
    extracting face feature vectors of sample pictures in the picture set using the face recognition model;
    calculating L2norm values of the face feature vectors of the sample pictures; and
    determining the quality level boundary information corresponding to the current application scene according to the L2norm values of the face feature vectors of the sample pictures.
  5. The method according to claim 4, wherein determining the quality level boundary information corresponding to the current application scene according to the L2norm values of the face feature vectors of the sample pictures comprises:
    obtaining a maximum value and a minimum value of the L2norm values of the face feature vectors of the sample pictures; and
    determining a boundary value of each quality level according to the maximum value, the minimum value, and a preset number of levels.
  6. The method according to claim 5, wherein determining the boundary value of each quality level according to the maximum value, the minimum value, and the preset number of levels comprises:
    dividing the data interval between the minimum value and the maximum value into the preset number of equal sub-intervals to obtain a boundary value between two adjacent sub-intervals; and
    using the boundary value between the two adjacent sub-intervals as the boundary value between the two corresponding adjacent quality levels.
  7. The method according to claim 3, wherein before determining the quality level boundary information according to the picture set of the current application scene and the face recognition model, the method further comprises:
    acquiring the picture set of the current application scene.
  8. A face image quality evaluation apparatus, comprising:
    a feature extraction module, configured to extract a face feature vector of a picture to be evaluated using a face recognition model;
    a calculation module, configured to calculate an L2norm value of the face feature vector of the picture to be evaluated; and
    a quality evaluation module, configured to determine a face image quality level of the picture to be evaluated according to the L2norm value of the face feature vector of the picture to be evaluated and quality level boundary information.
  9. The apparatus according to claim 8, further comprising:
    a model training module, configured to:
    perform model training on an initial face recognition model to obtain the face recognition model;
    wherein, during the model training of the initial face recognition model, L2 regularization of the face feature vectors extracted by the initial face recognition model is added.
  10. The apparatus according to claim 8 or 9, wherein the quality evaluation module is further configured to:
    determine the quality level boundary information according to a picture set of a current application scene and the face recognition model.
  11. The apparatus according to claim 10, wherein the quality evaluation module is further configured to:
    extract face feature vectors of sample pictures in the picture set using the face recognition model;
    calculate L2norm values of the face feature vectors of the sample pictures; and
    determine the quality level boundary information corresponding to the current application scene according to the L2norm values of the face feature vectors of the sample pictures.
  12. The apparatus according to claim 11, wherein the quality evaluation module is further configured to:
    obtain a maximum value and a minimum value of the L2norm values of the face feature vectors of the sample pictures; and
    determine a boundary value of each quality level according to the maximum value, the minimum value, and a preset number of levels.
  13. The apparatus according to claim 12, wherein the quality evaluation module is further configured to:
    divide the data interval between the minimum value and the maximum value into the preset number of equal sub-intervals to obtain a boundary value between two adjacent sub-intervals; and
    use the boundary value between the two adjacent sub-intervals as the boundary value between the two corresponding adjacent quality levels.
  14. A computer device, comprising the apparatus according to any one of claims 8-13.
  15. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to perform the method according to any one of claims 1-7.
  16. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to perform the method according to any one of claims 1-7.
  17. A computer program product, comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method according to any one of claims 1-7.
PCT/CN2018/119812 2018-12-07 2018-12-07 人脸图像质量评估方法、装置、设备及存储介质 WO2020113563A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/119812 WO2020113563A1 (zh) 2018-12-07 2018-12-07 人脸图像质量评估方法、装置、设备及存储介质
CN201880098339.6A CN112889061A (zh) 2018-12-07 2018-12-07 人脸图像质量评估方法、装置、设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/119812 WO2020113563A1 (zh) 2018-12-07 2018-12-07 人脸图像质量评估方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020113563A1 true WO2020113563A1 (zh) 2020-06-11

Family

ID=71081981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119812 WO2020113563A1 (zh) 2018-12-07 2018-12-07 人脸图像质量评估方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN112889061A (zh)
WO (1) WO2020113563A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523638A (zh) * 2023-11-28 2024-02-06 广州视声智能科技有限公司 基于优先级筛选的人脸识别方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130129159A1 (en) * 2011-11-22 2013-05-23 Ronald Huijgens Face recognition method and apparatus
CN106951825A (zh) * 2017-02-13 2017-07-14 北京飞搜科技有限公司 一种人脸图像质量评估系统以及实现方法
CN107341463A (zh) * 2017-06-28 2017-11-10 北京飞搜科技有限公司 一种结合图像质量分析与度量学习的人脸特征识别方法
CN108171256A (zh) * 2017-11-27 2018-06-15 深圳市深网视界科技有限公司 人脸图像质评模型构建、筛选、识别方法及设备和介质
CN108600744A (zh) * 2018-07-17 2018-09-28 中星技术股份有限公司 图像质量控制的方法、摄像机拍摄图像的方法和装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950415B (zh) * 2010-09-14 2011-11-16 武汉大学 一种基于形状语义模型约束的人脸超分辨率处理方法
US9576224B2 (en) * 2014-12-31 2017-02-21 TCL Research America Inc. Robust error correction with multi-model representation for face recognition
WO2017106996A1 (zh) * 2015-12-21 2017-06-29 厦门中控生物识别信息技术有限公司 一种人脸识别的方法以及人脸识别装置
CN107103592B (zh) * 2017-04-07 2020-04-28 南京邮电大学 一种基于双核范数正则的多姿态人脸图像质量增强方法
CN108235001B (zh) * 2018-01-29 2020-07-10 上海海洋大学 一种基于时空特征的深海视频质量客观评价方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130129159A1 (en) * 2011-11-22 2013-05-23 Ronald Huijgens Face recognition method and apparatus
CN106951825A (zh) * 2017-02-13 2017-07-14 北京飞搜科技有限公司 一种人脸图像质量评估系统以及实现方法
CN107341463A (zh) * 2017-06-28 2017-11-10 北京飞搜科技有限公司 一种结合图像质量分析与度量学习的人脸特征识别方法
CN108171256A (zh) * 2017-11-27 2018-06-15 深圳市深网视界科技有限公司 人脸图像质评模型构建、筛选、识别方法及设备和介质
CN108600744A (zh) * 2018-07-17 2018-09-28 中星技术股份有限公司 图像质量控制的方法、摄像机拍摄图像的方法和装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523638A (zh) * 2023-11-28 2024-02-06 广州视声智能科技有限公司 基于优先级筛选的人脸识别方法及系统
CN117523638B (zh) * 2023-11-28 2024-05-17 广州视声智能科技有限公司 基于优先级筛选的人脸识别方法及系统

Also Published As

Publication number Publication date
CN112889061A (zh) 2021-06-01

Similar Documents

Publication Publication Date Title
US20200160040A1 (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
CN111327945B (zh) 用于分割视频的方法和装置
US9418319B2 (en) Object detection using cascaded convolutional neural networks
US20170032224A1 (en) Method, device and computer-readable medium for sensitive picture recognition
US11080553B2 (en) Image search method and apparatus
CN106682906B (zh) 一种风险识别、业务处理方法和设备
CN110913243B (zh) 一种视频审核的方法、装置和设备
WO2017197620A1 (en) Detection of humans in images using depth information
CN110956255B (zh) 难样本挖掘方法、装置、电子设备及计算机可读存储介质
EP3001354A1 (en) Object detection method and device for online training
KR102094506B1 (ko) 피사체 추적 기법을 이용한 카메라와 피사체 사이의 거리 변화 측정방법 상기 방법을 기록한 컴퓨터 판독 가능 저장매체 및 거리 변화 측정 장치.
CN111753870B (zh) 目标检测模型的训练方法、装置和存储介质
WO2019120025A1 (zh) 照片的调整方法、装置、存储介质及电子设备
CN111241873A (zh) 图像翻拍检测方法及其模型的训练方法、支付方法及装置
CN110991412A (zh) 人脸识别的方法、装置、存储介质及电子设备
CN111862040A (zh) 人像图片质量评价方法、装置、设备及存储介质
CN110135428B (zh) 图像分割处理方法和装置
CN111783812A (zh) 违禁图像识别方法、装置和计算机可读存储介质
Chen et al. Learning to rank retargeted images
CN111435445A (zh) 字符识别模型的训练方法及装置、字符识别方法及装置
CN108764248B (zh) 图像特征点的提取方法和装置
CN113221842A (zh) 模型训练方法、图像识别方法、装置、设备及介质
WO2020113563A1 (zh) 人脸图像质量评估方法、装置、设备及存储介质
CN110796115B (zh) 图像检测方法、装置、电子设备及可读存储介质
CN113361567A (zh) 图像处理方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18942255

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18942255

Country of ref document: EP

Kind code of ref document: A1