CN110544233A - Depth Image Quality Evaluation Method Based on Face Recognition Application

Publication number
CN110544233A
CN110544233A (application CN201910693279.9A)
Authority
CN
China
Prior art keywords
test
mold
depth image
depth
normal
Prior art date
Legal status
Granted
Application number
CN201910693279.9A
Other languages
Chinese (zh)
Other versions
CN110544233B (en)
Inventor
户磊
王亚运
崔哲
薛远
李东阳
Current Assignee
Hefei Dilusense Technology Co Ltd
Original Assignee
Beijing Dilu Shenshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dilu Shenshi Technology Co Ltd
Priority to CN201910693279.9A
Publication of CN110544233A
Application granted
Publication of CN110544233B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention provides a depth image quality evaluation method based on face recognition applications. The method includes: acquiring depth images of one or more test molds based on a depth perception sensor, wherein each of the test molds has a different surface shape; for any of the test molds, analyzing the depth image of that test mold and extracting features of the depth image, wherein the features extracted from the depth image of each test mold are different; and evaluating the face depth images collected by the depth perception sensor according to the features of the depth images of all the test molds. The extracted features are more comprehensive and accurate, the evaluation of the face depth image based on the extracted features is more precise, and the method is highly practical and reproducible.

Description

Depth Image Quality Evaluation Method Based on Face Recognition Application

Technical Field

The invention belongs to the technical field of depth-sensing imaging, and in particular relates to a depth image quality evaluation method based on face recognition applications.

Background Art

Depth-sensing imaging has long been an important topic in machine vision. The current mainstream depth-sensing imaging technologies include monocular spatially encoded structured-light depth imaging, binocular structured-light texture-enhanced depth imaging, and time-of-flight (TOF) technology.

In face recognition applications, with the development of deep learning in recent years, systems have gradually moved from using color images or infrared images alone to combining color images with depth images, or infrared images with depth images, for recognition. Depth images are obtained mainly through the depth-sensing imaging technologies mentioned above, so the quality of the depth maps generated by the depth imaging algorithm has a critical impact on face recognition applications.

The algorithms behind depth-sensing imaging are now quite mature. The most commonly used depth imaging error measurement method is to measure the registration error between the point cloud of an object produced by the depth imaging algorithm and the object's true point cloud. This method has the following defects and problems: (1) the point cloud registration process introduces new errors that affect the assessment of the depth imaging algorithm's quality; (2) the registration error mixes various error sources together, so a more detailed assessment tailored to the actual application is impossible; (3) the point cloud registration error only reflects the algorithm's ability to obtain depth data close to the real object, and for many practical applications such fidelity is not the main requirement.

In summary, the industry currently has no unified and effective method for evaluating the quality of depth imaging algorithms. In particular, for face recognition applications the depth imaging algorithm and the face recognition algorithm jointly affect recognition accuracy, so a reasonable evaluation of depth imaging quality is needed to decouple the two. Commonly used depth imaging error measurements cannot meet the needs of practical applications, especially the quality assessment of depth imaging algorithms for specific applications such as face recognition. For face recognition, common methods such as point cloud registration error measurement alone cannot accurately gauge the quality of the depth imaging algorithm or its effect on recognition accuracy.

Summary of the Invention

To overcome the problem that the above existing depth imaging error measurement methods cannot effectively evaluate depth image quality in face recognition applications, or at least partially solve this problem, embodiments of the present invention provide a depth image quality evaluation method based on face recognition applications.

An embodiment of the present invention provides a depth image quality evaluation method based on face recognition applications, including:

acquiring depth images of one or more test molds based on a depth perception sensor, wherein each of the test molds has a different surface shape;

for any of the test molds, analyzing the depth image of that test mold and extracting features of the depth image, wherein the features extracted from the depth image of each test mold are different;

evaluating the face depth images collected by the depth perception sensor according to the features of the depth images of all the test molds.

The embodiment of the present invention provides a depth image quality evaluation method based on face recognition applications. The method first uses a depth sensor to capture depth images of the test molds, analyzes the depth images of the different surface shapes, extracts from each test mold's depth image the features appropriate to its surface shape, and uses the features of all the test mold depth images as evaluation indicators for the face depth image. The extracted features are more comprehensive and accurate, the evaluation of the face depth image based on the extracted features is more precise, and the method is highly practical and reproducible.

Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of the depth image quality evaluation method based on face recognition applications provided by an embodiment of the present invention;

Fig. 2 is a schematic diagram of the dimensions of the horizontal sinusoidal waveform of the sinusoidal mold in the method;

Fig. 3 is a schematic diagram of the dimensions of the vertical sinusoidal waveform of the sinusoidal mold in the method;

Fig. 4 is a schematic diagram of the dimensions of the folded-surface mold in the method;

Fig. 5 is a schematic diagram of the dimensions of the cylindrical-surface mold in the method;

Fig. 6 is a schematic diagram of the overall structure of an electronic device provided by an embodiment of the present invention.

Detailed Description

In one embodiment of the present invention, a depth image quality evaluation method based on face recognition applications is provided. Fig. 1 is a schematic flowchart of the overall method provided by an embodiment of the present invention. The method includes: S101, acquiring depth images of one or more test molds based on a depth perception sensor, wherein each of the test molds has a different surface shape.

Here, the depth perception sensor is a sensing system with a built-in depth imaging algorithm, such as a depth camera with an integrated depth imaging algorithm. The surface shape of each test mold is determined by the application scenario; when applied to face recognition, the surface shapes are set according to the surface characteristics of the human face. Since the surface of a human face is relatively complex, it can be represented by several simple surface features, and one test mold is prepared for each simple surface feature.

The test molds are prepared first, for example by 3D printing. A depth perception sensor is then used to capture a depth image of each test mold. The relative position of the depth perception sensor and the test mold is fixed so that the test mold appears at the center of the sensor's field of view without rotation and stays parallel to the sensor's imaging plane. Preferably, a fixture is used to adjust the relative position between the depth perception sensor and the test mold, and the rotation angle should not exceed 3°. One depth image is captured for each test mold. Alternatively, starting from a preset nearest evaluation distance, one depth image can be captured for each test mold at fixed distance increments up to a preset farthest evaluation distance, and the depth images captured at each evaluation distance are analyzed so that their quality can be evaluated per distance.

S102, for any of the test molds, analyzing the depth image of that test mold and extracting features of the depth image, wherein the features extracted from the depth image of each test mold are different.

Since each test mold has a different surface shape, the features extracted from its depth image also differ. This embodiment does not limit the types of features extracted from the depth image of each test mold.

S103, evaluating the face depth images collected by the depth perception sensor according to the features of the depth images of all the test molds.

Since the surface shape of each test mold represents one simple surface feature of the human face, the features of each test mold's depth image are taken as features of the face depth image, so that multiple features of the face depth image are obtained. These features are then used as evaluation indicators to evaluate the face depth image.

In this embodiment, a depth sensor first captures depth images of the test molds, the depth images of the different surface shapes are analyzed, features appropriate to each test mold's surface shape are extracted from its depth image, and the features of all the test mold depth images are used as evaluation indicators for the face depth image. The extracted features are more comprehensive and accurate, the evaluation of the face depth image based on the extracted features is more precise, and the method is highly practical and reproducible.

On the basis of the above embodiments, the test molds in this embodiment include a planar mold, a sinusoidal-surface mold, a folded-surface mold and a cylindrical-surface mold. The planar mold is a mold whose surface undulation error is smaller than a first preset threshold; the sinusoidal-surface mold is a mold whose surface has a horizontally and vertically interleaved sinusoidal shape; the folded-surface mold is a mold whose surface consists of continuous right-angle folds; the cylindrical-surface mold is a mold whose surface consists of continuous cylindrical curved surfaces.

The planar mold is a flat board or wall whose surface undulation error is smaller than the first preset threshold and is used to examine how precisely the depth imaging algorithm recovers planar features. The sinusoidal-surface mold has an interleaved sinusoidal profile and is used to simulate facial features such as the nose and mouth. The folded-surface mold has a profile of continuous right-angle folds and is used to examine the algorithm's ability to distinguish plane normal directions. The cylindrical-surface mold has a profile of continuous cylindrical curves and is used to examine how smoothly the algorithm recovers continuous curved-surface features.

The test molds can be fabricated by 3D printing, ensuring a machining accuracy of 1 mm or better. The planar mold measures 300 mm × 300 mm overall, and the first preset threshold is 1 mm. The sinusoidal mold measures 300 mm × 300 mm overall and has 8 peaks in each of the horizontal and vertical directions: as shown in Fig. 2, the horizontal sinusoidal waveform has an amplitude of 10 mm and a period of 40 mm; as shown in Fig. 3, the vertical sinusoidal waveform has an amplitude of 10 mm and a period of 30 mm. The folded-surface mold measures 300 mm × 300 mm overall and consists of 6 groups of folds with an included angle of 90 degrees, a peak-to-valley value of 20 mm and an adjacent peak-to-peak distance of 40 mm, as shown in Fig. 4. The cylindrical-surface mold measures 300 mm × 300 mm overall and consists of 3 groups of continuous cylindrical surfaces with a peak-to-valley value of 40 mm and an adjacent peak-to-peak distance of 80 mm, as shown in Fig. 5.

On the basis of the above embodiments, the features corresponding to the planar mold in this embodiment include precision, effective-area hole rate and dead-pixel ratio; the features corresponding to the sinusoidal-surface mold include sine fitting degree, amplitude relative error and period relative error; the feature corresponding to the folded-surface mold includes the right-angle folded-surface normal discrimination; the feature corresponding to the cylindrical-surface mold includes the cylindrical-surface normal smoothness.

Specifically, the features extracted from the depth image of the planar mold are precision, effective-area hole rate and dead-pixel ratio. The precision feature characterizes how precisely the depth imaging algorithm recovers planar features; the effective-area hole rate characterizes the probability of holes appearing when the algorithm recovers planar features; the dead-pixel ratio characterizes the proportion of points with large errors when the algorithm recovers planar features.

The features extracted from the depth image of the sinusoidal-surface mold are sine fitting degree, amplitude relative error and period relative error. The sine fitting degree indicates whether the surface profile is recovered faithfully and accurately during depth imaging; the amplitude relative error indicates whether the algorithm can recognize the undulations and whether the depth data are correct; the period relative error represents the frequency response of the algorithm, i.e., whether it can respond to the high-frequency features of the sinusoidal-surface mold.

The feature extracted from the depth image of the folded-surface mold is the right-angle folded-surface normal discrimination, which represents how well the depth imaging algorithm distinguishes the two planes forming a right angle. It is characterized by the weighted average distance between the measured unit normal vectors of the 3D points in the selected region and the ground-truth normals; the smaller the discrimination value, the better the algorithm distinguishes the right-angle planes.

The feature extracted from the depth image of the cylindrical-surface mold is the cylindrical-surface normal smoothness, which represents how smoothly the algorithm recovers the cylindrical-surface features. It is characterized by the proportion of the measured unit normal vectors of the 3D points in the selected region that lie within a certain range of the ground-truth normal curve; the smaller the smoothness value, the less smooth the recovered cylindrical surface.

On the basis of the above embodiments, for any test mold, analyzing the depth image of the test mold and extracting features of the depth image specifically includes the following when the test mold is a planar mold: sampling the depth image of the test mold at a preset sampling interval; performing a spatial plane fit on the coordinate values of the pixels in the neighborhood window of each sampling point based on the least squares method to obtain the fitting plane corresponding to each sampling point; taking the distance from any sampling point to its corresponding fitting plane as the residual of the objective function at that sampling point; taking the average of the residuals of the objective function at all sampling points as the precision of the test mold; converting the residual of the objective function at each sampling point into a pixel error; taking the proportion of sampling points in the depth image whose pixel error exceeds a second preset threshold as the dead-pixel ratio of the test mold; selecting a region of a preset proportion around the center of the effective area of the depth image as the region of interest; and counting the proportion of hole points in the region of interest relative to the total number of pixels in the region of interest, which is taken as the effective-area hole rate of the test mold.

Specifically, the precision feature is extracted as follows: on the depth image collected at each evaluation distance, take one sample every preset number of pixels, e.g., with a sampling interval of 10. Use the least squares method to fit a spatial plane to the coordinate values of the pixels in the neighborhood window of each sampling point, e.g., a 50×50 window. Obtain the residual of the objective function at each sampling point, i.e., the spatial distance from the sampling point to its fitting plane. The average of the spatial distances from all sampling points to their fitting planes is the precision feature at that evaluation distance.
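
As a concrete illustration of this step only, the following is a minimal sketch rather than the patent's implementation. It assumes the depth image is a NumPy array in millimeters with zeros marking invalid pixels, a sampling interval of 10 and a 50×50 window as in the example above, and that each neighborhood is back-projected to camera coordinates with hypothetical intrinsics fx, fy, cx, cy before the least-squares plane fit.

```python
import numpy as np

def plane_residuals(points):
    """Distances from each 3D point to the least-squares plane through the points."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return np.abs((points - centroid) @ normal)

def precision_feature(depth, fx, fy, cx, cy, step=10, half_win=25):
    """Average plane-fit residual over sampled neighborhoods (precision feature)."""
    h, w = depth.shape
    means = []
    for v in range(half_win, h - half_win, step):
        for u in range(half_win, w - half_win, step):
            z = depth[v - half_win:v + half_win, u - half_win:u + half_win].astype(float)
            vv, uu = np.mgrid[v - half_win:v + half_win, u - half_win:u + half_win]
            valid = z > 0
            if valid.sum() < 3:
                continue  # not enough valid depth pixels to fit a plane
            # Back-project valid pixels to camera coordinates (pinhole model, assumed here).
            x = (uu[valid] - cx) * z[valid] / fx
            y = (vv[valid] - cy) * z[valid] / fy
            pts = np.stack([x, y, z[valid]], axis=1)
            means.append(plane_residuals(pts).mean())
    return float(np.mean(means))
```

Fitting the plane through the neighborhood centroid with the smallest singular vector as its normal is equivalent to a least-squares minimization of the point-to-plane distances.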

The effective-area hole rate feature is extracted as follows: from the center of the effective area of the planar mold's depth image at each evaluation distance, select a region of a preset proportion as the region of interest, and count the percentage of hole points in the region of interest relative to the total number of pixels in the region of interest; this is the effective-area hole rate feature for that evaluation distance. Preferably, this embodiment uses the central 80% of the effective area as the region of interest.
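
A short sketch of this computation under the same assumptions (zero depth marks a hole); for simplicity the effective area is taken here to be the full frame, and the central 80% of it is used as the region of interest as preferred above.

```python
import numpy as np

def effective_area_hole_rate(depth, roi_ratio=0.8):
    """Fraction of zero-depth (hole) pixels inside the central region of interest."""
    h, w = depth.shape
    mh = int(round(h * (1 - roi_ratio) / 2))
    mw = int(round(w * (1 - roi_ratio) / 2))
    roi = depth[mh:h - mh, mw:w - mw]
    return float((roi == 0).sum() / roi.size)
```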

The dead-pixel ratio is extracted as follows: on the depth image at each evaluation distance, select one pixel every preset number of pixels, e.g., with a sampling interval of 10. Use the least squares method to fit a plane to the pixels in the neighborhood window of each sampling point, e.g., a 50×50 window. Obtain the residual of the objective function at each sampling point, and then convert the residual into a pixel error using the following formula:

E_p = R_e * T * F / d^2;

where E_p is the pixel error at a sampling point, R_e is the residual of the objective function at that sampling point, T is the baseline length of the depth perception sensor, F is the focal length of the depth perception sensor, and d is the acquisition distance of the depth image. The proportion of sampling points in the planar mold's depth image whose pixel error exceeds the second preset threshold is taken as the dead-pixel ratio of the planar mold; for example, the second preset threshold is set to 0.5.
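
The conversion and the dead-pixel count can be sketched directly from the formula above; baseline_mm, focal_px and distance_mm are sensor- and setup-specific values supplied by the caller, and 0.5 is the preferred threshold mentioned in the text.

```python
import numpy as np

def dead_pixel_ratio(residuals, baseline_mm, focal_px, distance_mm, threshold=0.5):
    """Convert plane-fit residuals R_e into pixel errors E_p = R_e*T*F/d^2 and
    return the fraction of sampling points whose pixel error exceeds the threshold."""
    residuals = np.asarray(residuals, dtype=np.float64)
    pixel_errors = residuals * baseline_mm * focal_px / (distance_mm ** 2)
    return float((pixel_errors > threshold).mean())
```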

On the basis of the above embodiments, for any test mold, analyzing the depth image of the test mold and extracting features of the depth image specifically includes the following when the test mold is a sinusoidal-surface mold: converting the depth image of the test mold into a pseudo-color image; selecting from the pseudo-color image the cross sections where several horizontal and several vertical peaks and valleys lie, connecting the peaks at the two ends of each cross section, and obtaining the intersection line of the peak connecting line with the imaging plane of the depth image; converting the pixel coordinates on the intersection line into plane coordinates; fitting a straight line to the converted coordinate points and rotating the converted coordinate points according to the slope of the fitted line; fitting a sinusoidal curve to the rotated coordinate points and calculating the amplitude relative error and period relative error of the test mold from the fitted sinusoidal curve and the parameter values of the test mold; and calculating the sine fitting degree of the test mold from the fitting values of the coordinate points on the peak connecting line on the fitted sinusoidal curve.

Specifically, feature extraction from the depth image of the sinusoidal-surface mold includes the following steps:

a. Convert the captured depth image of the sinusoidal mold into a pseudo-color image so that the sinusoidal peak points, i.e., the extrema, can be selected accurately afterwards. Select the cross sections where several horizontal and vertical peaks and valleys lie; by selecting the line connecting the peak points at the two ends of each cross section, obtain the intersection of this line with the imaging plane, and convert the image point coordinates (i, j, z) on the intersection line into plane coordinate points (d_ij, z) for the subsequent straight-line and curve fitting. The conversion formula is described as follows:

where i and j are the horizontal and vertical pixel coordinates, z is the depth value of the pixel, d_ij is the plane coordinate value, c_x and c_y are the principal point coordinates of the depth camera, and f_x and f_y are the focal lengths of the depth camera. Preferably, this embodiment selects the peak-to-peak connecting lines of two horizontal and two vertical cross sections in the central region of the sinusoidal-surface mold for testing.

b. Fit a straight line to the converted coordinate points on each connecting line first, and rotate the points in the opposite direction of the line's slope, so as to eliminate the effect of the mold being placed at a tilt during capture on the sine fitting.

c. Both the sinusoidal curve and the straight line are fitted with the least squares method. The sinusoidal fit requires good initial parameter values, including amplitude, period, phase shift and amplitude offset, which can either be given manually or taken as defaults.

d. From the parameter values of the fitted sinusoidal curve, i.e., the amplitude and the period, combined with the ground-truth parameters of the sinusoidal mold, calculate the relative error of the amplitude and the relative error of the period, and at the same time obtain the fitting value of each point on the peak-to-peak connecting line, which is used to calculate the sine fitting degree.

Preferably, the sine fitting degree can use the goodness of fit R^2, calculated as follows:

R^2 = 1 - Σ_i (Y_i - y_i)^2 / Σ_i (Y_i - Ȳ)^2

where R^2 is the sine fitting degree of the sinusoidal-surface mold, y_i is the fitting value of the i-th coordinate point on the peak connecting line, Y_i is the actual value of the i-th coordinate point on the peak connecting line, and Ȳ is the average of the actual values of all coordinate points on the peak connecting line.
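Steps b through d can be sketched as follows, assuming the peak-to-peak cross-section has already been converted to plane coordinates (d, z). It uses scipy.optimize.curve_fit for the least-squares sine fit; the mold's nominal amplitude (10 mm) and period (40 mm for the horizontal waveform) serve both as the initial values and as the ground truth for the relative errors, which is an assumption since the text does not specify how the default initial values are chosen.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(d, amp, period, phase, offset):
    return amp * np.sin(2 * np.pi * d / period + phase) + offset

def fit_sine_profile(d, z, amp0=10.0, period0=40.0):
    """Detilt the profile, fit a sine by least squares, and return the
    relative amplitude/period errors and the goodness of fit R^2."""
    d, z = np.asarray(d, float), np.asarray(z, float)
    # b. Remove placement tilt: fit a line and rotate the points by its slope.
    slope, intercept = np.polyfit(d, z, 1)
    theta = -np.arctan(slope)
    dr = d * np.cos(theta) - (z - intercept) * np.sin(theta)
    zr = d * np.sin(theta) + (z - intercept) * np.cos(theta)
    # c. Least-squares sine fit with initial values (amplitude, period, phase, offset).
    p0 = [amp0, period0, 0.0, zr.mean()]
    params, _ = curve_fit(sine, dr, zr, p0=p0, maxfev=10000)
    amp, period = abs(params[0]), abs(params[1])
    # d. Relative errors against the mold's nominal parameters, and R^2.
    fit = sine(dr, *params)
    ss_res = np.sum((zr - fit) ** 2)
    ss_tot = np.sum((zr - zr.mean()) ** 2)
    return {
        "amplitude_rel_error": abs(amp - amp0) / amp0,
        "period_rel_error": abs(period - period0) / period0,
        "sine_fit_R2": float(1.0 - ss_res / ss_tot),
    }
```
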

On the basis of the above embodiments, for any test mold, analyzing the depth image of the test mold and extracting features of the depth image specifically includes the following when the test mold is a folded-surface mold: converting the depth image of the test mold into a pseudo-color image; framing a test region in the pseudo-color image and converting the pixels with non-zero depth inside the framed test region into 3D coordinate points according to the parameters of the depth perception sensor; for any 3D coordinate point, obtaining a preset number of nearest neighbors in its neighborhood based on the KD-tree algorithm, fitting a plane to the nearest neighbors, obtaining the normal vector of the 3D coordinate point from the fitted plane, and normalizing the normal vector to obtain the normal measurement of the 3D coordinate point; selecting, on the unit normal sphere, the ground-truth normals of the two adjacent planes of the test mold that form a right angle; computing the Euclidean distances between the normal measurement of the 3D coordinate point and the two ground-truth normals and taking the smaller of the two; and sorting the minimum values of all 3D coordinate points in ascending order, computing from the sorted result the weighted average distance between the normal measurements of the 3D coordinate points and the ground-truth normals, and taking the weighted average distance as the right-angle folded-surface normal discrimination of the test mold.

Specifically, feature extraction from the depth image of the folded-surface mold includes the following steps:

a. Convert the captured depth image of the folded-surface mold into a pseudo-color image so that the test region can be framed accurately. Select the pixels with non-zero depth inside the framed test region and convert them into a 3D point cloud using the parameters of the depth perception sensor.

b. Obtain a preset number of nearest neighbors in the neighborhood of each 3D point based on the KD-tree algorithm, fit a plane to these nearest neighbors, and compute and normalize the normal vector of each 3D point. After traversing the 3D point cloud, remove outlier noise points from the obtained unit-normal point cloud by Euclidean clustering, and take the unit normals of the 3D points obtained after denoising as the normal measurements of the 3D coordinate points. Preferably, this embodiment uses the 50 nearest neighbors of each 3D point for the plane fitting.

c. On the unit normal sphere, select the ground-truth normals of the two perpendicular planes, i.e., the cluster centers of the normal point cloud. For each normal measurement, compute its Euclidean distances to the two ground-truth normals and take the smaller one. Sort these minimum distances in ascending order, assign weights by percentage, and compute the weighted average distance, i.e., sum the weighted distances and take the mean; this is the right-angle folded-surface normal discrimination feature. The calculation formula is as follows:

where D is the weighted average distance, a is the percentage used to split the sorted distances for weighting, α and β are the weights of the corresponding percentages, d_i is the smaller of the Euclidean distances between the normal measurement of the i-th 3D coordinate point in the sorted result and the two ground-truth normals, and d_b is the Euclidean distance between the two ground-truth normals. Preferably, in this embodiment the weight for the first 80% of the sorted distances is set to 0.2 and the weight for the last 20% is set to 0.8.
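
A sketch of steps b and c, assuming the framed region has already been converted to an N×3 point cloud in camera coordinates. scipy's cKDTree supplies the 50 nearest neighbors and the Euclidean-clustering denoising is omitted for brevity; because the exact weighted-average formula is not reproduced above, the weighting (0.2 for the first 80% of sorted distances, 0.8 for the rest) is implemented here as one plausible reading of the text, without the normalization by d_b that appears in claim 9.

```python
import numpy as np
from scipy.spatial import cKDTree

def unit_normals(points, k=50):
    """Estimate a unit normal per point from a plane fit to its k nearest neighbors."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs]
        _, _, vt = np.linalg.svd(nbr_pts - nbr_pts.mean(axis=0), full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] < 0 else -n  # orient normals consistently toward the camera
    return normals

def normal_discrimination(points, n_true_a, n_true_b, split=0.8, w_low=0.2, w_high=0.8):
    """Weighted average of each point's distance to the nearer ground-truth normal."""
    normals = unit_normals(points)
    dist_a = np.linalg.norm(normals - n_true_a, axis=1)
    dist_b = np.linalg.norm(normals - n_true_b, axis=1)
    d_min = np.sort(np.minimum(dist_a, dist_b))
    cut = int(len(d_min) * split)
    weights = np.concatenate([np.full(cut, w_low), np.full(len(d_min) - cut, w_high)])
    return float(np.average(d_min, weights=weights))
```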

On the basis of the above embodiments, for any test mold, analyzing the depth image of the test mold and extracting features of the depth image specifically includes the following when the test mold is a cylindrical-surface mold: converting the depth image of the test mold into a pseudo-color image; framing a test region in the pseudo-color image and converting the pixels with non-zero depth inside the framed test region into 3D coordinate points according to the parameters of the depth perception sensor; for any 3D coordinate point, obtaining a preset number of nearest neighbors in its neighborhood based on the KD-tree algorithm, fitting a plane to the nearest neighbors, obtaining the normal vector of the 3D coordinate point from the fitted plane, and normalizing the normal vector to obtain the normal measurement of the 3D coordinate point; projecting the normal measurement of the 3D coordinate point onto the XY coordinate plane of the unit normal sphere to obtain the projection point of the normal measurement; fitting a straight line to the projection points of the normal measurements of all 3D coordinate points and computing the distance from each projection point to the line; and computing the proportion of projection points whose distance is smaller than a third preset threshold among all projection points, which is taken as the cylindrical-surface normal smoothness of the test mold.

Specifically, feature extraction from the depth image of the cylindrical-surface mold includes the following steps:

a. Convert the captured depth image of the cylindrical-surface mold into a pseudo-color image so that the test region can be framed accurately. Select the pixels with non-zero depth inside the framed region and convert them into a 3D point cloud using the camera parameters.

b. Obtain a certain number of nearest neighbors in the neighborhood of each 3D point based on the KD-tree algorithm, fit a plane to these nearest neighbors, and compute and normalize the normal vector of each 3D point. After traversing the 3D point cloud, remove outlier noise points from the obtained unit-normal point cloud by Euclidean clustering, and take the unit normals of the 3D points obtained after denoising as the normal measurements of the 3D coordinate points. Preferably, this embodiment uses the 50 nearest neighbors of each 3D point for the plane fitting.

c. On the unit normal sphere, project the normal measurements onto the XY coordinate plane, i.e., take only the x and y coordinates. Fit a straight line to the projection points of all 3D points based on the least squares method, compute the distance from each projection point to the fitted line, and count the proportion of projection points whose distance is smaller than the distance threshold among all points; this is the cylindrical-surface normal smoothness. Preferably, the third preset threshold is 0.1.
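
A sketch of step c, reusing the unit_normals helper from the previous sketch. The line through the projected normals is fitted here by total least squares (a small variation on the ordinary least-squares fit named in the text), and 0.1 is the preferred third threshold.

```python
import numpy as np

def cylinder_normal_smoothness(points, threshold=0.1):
    """Fraction of projected unit normals lying within `threshold` of the fitted line."""
    xy = unit_normals(points)[:, :2]  # project the normals onto the XY plane
    # Total least-squares line through the projections: direction = first principal axis.
    centroid = xy.mean(axis=0)
    _, _, vt = np.linalg.svd(xy - centroid, full_matrices=False)
    direction = vt[0]
    # Point-to-line distance is the component orthogonal to the line direction.
    offsets = xy - centroid
    dist = np.abs(offsets[:, 0] * direction[1] - offsets[:, 1] * direction[0])
    return float((dist < threshold).mean())
```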

For face recognition applications, different features of different test molds affect the quality of the depth imaging algorithm to different degrees. According to the actual influence of each feature, the features are graded as follows:

(1) First-level feature: sinusoidal mold -> sine fitting degree. The larger this value the better, and it needs to stay above a certain threshold to guarantee a high face recognition rate; preferably, the fitting-degree threshold used in the present invention is 0.9.

(2) Second-level features: sinusoidal mold -> period relative error, which should be as close as possible to the same feature of the depth imaging algorithm used to generate the face recognition training data set; folded-surface mold -> right-angle folded-surface normal discrimination, the smaller the better; cylindrical-surface mold -> cylindrical-surface normal smoothness, the larger the better.

(3) Third-level features: planar mold -> dead-pixel ratio, the smaller the better; planar mold -> effective-area hole rate, the smaller the better; planar mold -> precision, the smaller the better.

(4) Fourth-level feature: sinusoidal mold -> amplitude relative error, the smaller the better.

According to how the features at each level affect the face recognition application, the respective requirements are as follows:

(1) The first-level feature must be satisfied; otherwise the face recognition rate may be low.

(2) The second-level features have a large influence on the face recognition rate and should be kept as close as possible to their target values.

(3) The influence of the third-level features on the face recognition rate is not obvious, but they are still very important; note in particular that some features, such as the hole rate, will seriously degrade the face recognition result once they exceed a certain threshold.

(4) The fourth-level feature currently has no obvious effect on the face recognition rate, but it should be kept within a controllable range to avoid affecting the recognition rate. A sketch of how these layered requirements might be checked is given below.
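
Purely as an illustration of how the four levels above might be collected into a report, the following sketch uses the 0.9 sine-fit threshold from the text; the hole-rate limit and the feature names are hypothetical placeholders to be set per application.

```python
def grade_depth_algorithm(features, sine_fit_threshold=0.9, hole_rate_limit=0.05):
    """Apply the four-level feature requirements and return a simple report.
    `features` maps feature names to measured values; limits other than the
    0.9 sine-fit threshold are illustrative placeholders."""
    report = {}
    # Level 1: the sine fitting degree must stay above the threshold.
    report["level1_sine_fit_ok"] = features["sine_fit_R2"] >= sine_fit_threshold
    # Level 2: values to be kept close to their targets.
    report["level2_period_rel_error"] = features["period_rel_error"]
    report["level2_normal_discrimination"] = features["normal_discrimination"]
    report["level2_normal_smoothness"] = features["normal_smoothness"]
    # Level 3: plane features; flag the hole rate if it exceeds a hard limit.
    report["level3_hole_rate_ok"] = features["hole_rate"] <= hole_rate_limit
    report["level3_dead_pixel_ratio"] = features["dead_pixel_ratio"]
    report["level3_precision"] = features["precision"]
    # Level 4: amplitude relative error, monitored but not gating.
    report["level4_amplitude_rel_error"] = features["amplitude_rel_error"]
    return report
```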

This embodiment provides an electronic device. Fig. 6 is a schematic diagram of the overall structure of the electronic device provided by an embodiment of the present invention. The device includes at least one processor 601, at least one memory 602 and a bus 603, wherein:

the processor 601 and the memory 602 communicate with each other through the bus 603;

the memory 602 stores program instructions executable by the processor 601, and the processor calls the program instructions to perform the methods provided by the above method embodiments, for example including: acquiring depth images of one or more test molds based on a depth perception sensor, wherein each of the test molds has a different surface shape; for any of the test molds, analyzing the depth image of that test mold and extracting features of the depth image, wherein the features extracted from the depth image of each test mold are different; and evaluating the face depth images collected by the depth perception sensor according to the features of the depth images of all the test molds.

This embodiment provides a non-transitory computer-readable storage medium storing computer instructions. The computer instructions cause a computer to perform the methods provided by the above method embodiments, for example including: acquiring depth images of one or more test molds based on a depth perception sensor, wherein each of the test molds has a different surface shape; for any of the test molds, analyzing the depth image of that test mold and extracting features of the depth image, wherein the features extracted from the depth image of each test mold are different; and evaluating the face depth images collected by the depth perception sensor according to the features of the depth images of all the test molds.

Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be implemented by hardware driven by program instructions. The aforementioned program can be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes ROM, RAM, magnetic disks, optical discs and other media that can store program code.

The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.

From the description of the above implementations, those skilled in the art can clearly understand that each implementation can be realized by software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the essence of the above technical solution, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various embodiments or in some parts of the embodiments.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A depth image quality evaluation method based on face recognition application, characterized by comprising the following steps:
acquiring depth images of one or more test molds based on a depth perception sensor, wherein the surface shape of each of the test molds is different;
for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image, wherein the features extracted from the depth image of each of the test molds are different;
and evaluating the face depth image collected by the depth perception sensor according to the features of the depth images of all the test molds.
2. The depth image quality evaluation method based on face recognition application of claim 1, wherein the test molds comprise a planar mold, a sinusoidal-surface mold, a folded-surface mold and a cylindrical-surface mold;
the planar mold is a mold whose surface undulation error is smaller than a first preset threshold;
the sinusoidal-surface mold is a mold whose surface has a transversely and longitudinally interleaved sinusoidal shape;
the folded-surface mold is a mold whose surface consists of continuous right-angle folds;
the cylindrical-surface mold is a mold whose surface consists of continuous cylindrical curved surfaces.
3. The depth image quality evaluation method based on face recognition application of claim 2, wherein the features corresponding to the planar mold comprise precision, effective-area hole rate and dead-pixel ratio;
the features corresponding to the sinusoidal-surface mold comprise sine fitting degree, amplitude relative error and period relative error;
the feature corresponding to the folded-surface mold comprises a right-angle folded-surface normal discrimination;
the feature corresponding to the cylindrical-surface mold comprises a cylindrical-surface normal smoothness.
4. The depth image quality evaluation method based on face recognition application according to claim 2, wherein for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image specifically comprises:
if the test mold is a planar mold, sampling the depth image of the test mold at a preset sampling interval;
performing a spatial plane fit on the coordinate values of the pixels in the neighborhood window of each sampling point based on the least squares method to obtain the fitting plane corresponding to each sampling point;
taking the distance from any sampling point to the fitting plane corresponding to that sampling point as the residual of the objective function at that sampling point;
taking the average of the residuals of the objective function at all sampling points as the precision corresponding to the test mold;
converting the residual of the objective function at each sampling point into a pixel error;
taking the proportion of sampling points whose pixel error is larger than a second preset threshold in the depth image of the test mold as the dead-pixel ratio corresponding to the test mold;
selecting a region of a preset proportion from the center of the effective area of the depth image of the test mold as a region of interest;
and counting the proportion of the number of hole points in the region of interest to the total number of pixels in the region of interest, and taking the proportion as the effective-area hole rate corresponding to the test mold.
5. the method of claim 4, wherein the residual error of the objective function at each sampling point is converted into a pixel error by the following formula:
E=R*T*F/d;
And the parameter is the pixel error of any sampling point, Re is the residual error of an objective function at any sampling point, T is the length of a base line of the depth perception sensor, F is the focal length of the depth perception sensor, and d is the acquisition distance of the depth image.
6. The depth image quality evaluation method based on face recognition application according to claim 2, wherein, for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image specifically comprises:
if the test mold is a sine surface mold, converting the depth image of the test mold into a pseudo-color image;
selecting, from the pseudo-color image, the cross sections where a plurality of transverse and longitudinal peaks and valleys are located, connecting the peaks at the two ends of each cross section, and obtaining the intersection line of the peak connecting line with the imaging plane of the depth image;
converting the pixel coordinates on the intersection line into plane coordinates;
performing linear fitting on the converted coordinate points, and rotating the converted coordinate points according to the slope of the fitted line;
performing sine curve fitting on the rotated coordinate points, and calculating the amplitude relative error and the period relative error corresponding to the test mold according to the fitted sine curve and the parameter values of the test mold; and
calculating the sine fitting degree corresponding to the test mold according to the fitting value, on the fitted sine curve, of each coordinate point on the peak connecting line.
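A rough sketch of the later steps of this claim follows, assuming the cross-section has already been extracted as plane coordinates xs, zs along the peak connecting line; the sine model parameterization and the initial guess are illustrative assumptions, and the pseudo-color conversion and peak selection are not reproduced.

import numpy as np
from scipy.optimize import curve_fit

def sine_mold_fit(xs, zs, true_amplitude, true_period):
    """De-tilt a cross-section profile, fit a sine curve, and compare it with the
    nominal mold parameters.

    xs, zs : 1-D arrays of plane coordinates along the peak connecting line.
    Returns the fitted parameters plus the amplitude and period relative errors.
    """
    xs = np.asarray(xs, dtype=float)
    zs = np.asarray(zs, dtype=float)
    # linear fit, then rotate the coordinate points by the slope of the fitted line
    slope, _ = np.polyfit(xs, zs, 1)
    theta = -np.arctan(slope)
    xr = xs * np.cos(theta) - zs * np.sin(theta)
    zr = xs * np.sin(theta) + zs * np.cos(theta)

    def sine(x, amp, period, phase, offset):
        return amp * np.sin(2.0 * np.pi * x / period + phase) + offset

    # initial guess taken from the nominal mold parameters (an assumption)
    p0 = [true_amplitude, true_period, 0.0, float(zr.mean())]
    (amp, period, phase, offset), _ = curve_fit(sine, xr, zr, p0=p0)

    amp_rel_err = abs(abs(amp) - true_amplitude) / true_amplitude
    period_rel_err = abs(abs(period) - true_period) / true_period
    return (amp, period, phase, offset), amp_rel_err, period_rel_err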
7. The method of claim 6, wherein the sine fitting degree corresponding to the test mold is calculated according to the fitting value of each coordinate point on the peak connecting line on the fitted sine curve by the following formula:
R² = 1 − Σi(Yi − Ŷi)² / Σi(Yi − Ȳ)²;
wherein R² is the sine fitting degree corresponding to the test mold, Ŷi is the fitting value of the ith coordinate point on the peak connecting line, Yi is the actual value of the ith coordinate point on the peak connecting line, and Ȳ is the average of the actual values of all coordinate points on the peak connecting line.
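Assuming the standard coefficient-of-determination form shown above, a NumPy version, with actual and fitted as hypothetical array names for the measured values on the peak connecting line and their values on the fitted sine curve, is simply:

import numpy as np

def sine_fitting_degree(actual, fitted):
    """Coefficient of determination between measured values on the peak connecting
    line and their fitted values on the sine curve."""
    actual = np.asarray(actual, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    ss_res = np.sum((actual - fitted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)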
8. The depth image quality evaluation method based on face recognition application according to claim 2, wherein, for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image specifically comprises:
if the test mold is a folded surface mold, converting the depth image of the test mold into a pseudo-color image;
selecting a test area from the pseudo-color image, and converting the depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor;
for any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, obtaining the normal vector of the three-dimensional coordinate point from the fitted plane, and normalizing the normal vector to obtain the normal measurement value of the three-dimensional coordinate point;
selecting, on the unit normal sphere, the normal truth values of the two adjacent planes of the test mold that meet at a right angle;
calculating the Euclidean distances between the normal measurement value of the three-dimensional coordinate point and each of the two normal truth values, and taking the minimum of the two Euclidean distances; and
sorting the minimum values corresponding to the three-dimensional coordinate points from small to large, calculating the weighted average distance between the normal measurement values and the normal truth values of the three-dimensional coordinate points according to the sorting result, and taking the weighted average distance as the right-angle folded surface normal discrimination corresponding to the test mold.
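A sketch of how these normal-based steps might look with SciPy's KD-tree is given below. The neighborhood size and the sign-disambiguation of the fitted normals are assumptions, and because the exact weighting of claim 9 (the proportion a and weights α, β) is not reproduced in this text, the final score uses a plain mean normalized by the distance between the two true normals as a simplified stand-in.

import numpy as np
from scipy.spatial import cKDTree

def point_normals(points, k=20):
    """Unit normals estimated by plane-fitting the k nearest neighbors of each point."""
    points = np.asarray(points, dtype=float)
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, nbr in enumerate(idx):
        centred = points[nbr] - points[nbr].mean(axis=0)
        # the fitted plane's normal is the singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        normals[i] = vt[-1] / np.linalg.norm(vt[-1])
    return normals

def folded_surface_normal_discrimination(points, n_true_a, n_true_b, k=20):
    """Sorted minimum distances from measured normals to the two true face normals,
    reduced to a single score; the plain normalized mean at the end is a simplified
    stand-in for the weighted average of claim 9, whose exact weights are not given here."""
    normals = point_normals(points, k)
    n_true_a = np.asarray(n_true_a, dtype=float)
    n_true_b = np.asarray(n_true_b, dtype=float)
    # resolve the sign ambiguity of the fitted normals against the fold bisector
    flip = normals @ (n_true_a + n_true_b) < 0
    normals[flip] *= -1
    d_min = np.minimum(np.linalg.norm(normals - n_true_a, axis=1),
                       np.linalg.norm(normals - n_true_b, axis=1))
    d_min = np.sort(d_min)                       # ascending, as required by the claim
    d_b = np.linalg.norm(n_true_a - n_true_b)    # distance between the two true normals
    return float(d_min.mean() / d_b)

Taking the smallest singular vector of the centered neighbors is equivalent to the least-squares plane fit named in the claim.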
9. The method of claim 8, wherein the weighted average distance between the normal measurement values and the normal truth values of the three-dimensional coordinate points is calculated according to the sorting result by the following formula:
wherein D is the weighted average distance, n is the number of three-dimensional coordinate points, a is a preset proportion, α and β are weights, di is the minimum Euclidean distance between the normal measurement value of the ith three-dimensional coordinate point in the sorting result and the two normal truth values, and db is the Euclidean distance between the two normal truth values.
10. The depth image quality evaluation method based on face recognition application according to claim 2, wherein, for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image specifically comprises:
if the test mold is a cylindrical surface mold, converting the depth image of the test mold into a pseudo-color image;
selecting a test area from the pseudo-color image, and converting the depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor;
for any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, obtaining the normal vector of the three-dimensional coordinate point from the fitted plane, and normalizing the normal vector to obtain the normal measurement value of the three-dimensional coordinate point;
projecting the normal measurement value of each three-dimensional coordinate point onto the XY coordinate plane of the unit normal sphere to obtain the projection point of the normal measurement value;
fitting the projection points of the normal measurement values of all the three-dimensional coordinate points to a straight line, and calculating the distance from each projection point to the straight line; and
calculating the proportion, among all the projection points, of the projection points whose distance is smaller than a third preset threshold, and taking the proportion as the cylindrical surface normal smoothness corresponding to the test mold.
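For the cylindrical mold, the projection and line-fit steps could be sketched as follows, taking as input an array of unit normal measurement values obtained as in claim 8 (for example with the point_normals sketch above); the total-least-squares line fit and the threshold value are illustrative assumptions.

import numpy as np

def cylinder_normal_smoothness(normals, dist_thresh=0.05):
    """Proportion of projected normals lying close to a fitted straight line.

    normals     : (N, 3) array of unit normal measurement values.
    dist_thresh : assumed value for the third preset threshold of the claim.
    """
    proj = np.asarray(normals, dtype=float)[:, :2]   # projection onto the XY plane
    centred = proj - proj.mean(axis=0)
    # total-least-squares line fit: principal direction of the 2-D projections
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    perpendicular = vt[1]                            # unit vector orthogonal to the fitted line
    dist = np.abs(centred @ perpendicular)           # point-to-line distances
    return float(np.mean(dist < dist_thresh))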
CN201910693279.9A 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application Active CN110544233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693279.9A CN110544233B (en) 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910693279.9A CN110544233B (en) 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application

Publications (2)

Publication Number Publication Date
CN110544233A true CN110544233A (en) 2019-12-06
CN110544233B CN110544233B (en) 2022-03-08

Family

ID=68709887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693279.9A Active CN110544233B (en) 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application

Country Status (1)

Country Link
CN (1) CN110544233B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5960097A (en) * 1997-01-21 1999-09-28 Raytheon Company Background adaptive target detection and tracking with multiple observation and processing stages
CN101681520A (en) * 2007-05-30 2010-03-24 皇家飞利浦电子股份有限公司 Pet local tomography
CN103763552A (en) * 2014-02-17 2014-04-30 福州大学 Stereoscopic image non-reference quality evaluation method based on visual perception characteristics
US20160371539A1 (en) * 2014-04-03 2016-12-22 Tencent Technology (Shenzhen) Company Limited Method and system for extracting characteristic of three-dimensional face image
CN105989591A (en) * 2015-02-11 2016-10-05 詹曙 Automatic teller machine imaging method capable of automatically acquiring remittee face stereo information
CN105956582A (en) * 2016-06-24 2016-09-21 深圳市唯特视科技有限公司 Face identifications system based on three-dimensional data
CN106127250A (en) * 2016-06-24 2016-11-16 深圳市唯特视科技有限公司 A kind of face method for evaluating quality based on three dimensional point cloud
CN106803952A (en) * 2017-01-20 2017-06-06 宁波大学 With reference to the cross validation depth map quality evaluating method of JND model
CN107462587A (en) * 2017-08-31 2017-12-12 华南理工大学 A kind of the precise vision detecting system and method for flexible IC substrates bump mark defect
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张万祯: "Research on three-dimensional measurement methods based on digital projection structured light", China Excellent Master's and Doctoral Dissertations Full-text Database (Doctoral), Information Science and Technology Series *
方程: "3D face recognition technology", Computer Programming Skills & Maintenance *
郝雯 et al.: "A survey of three-dimensional object recognition methods for point clouds", Computer Science *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063016A (en) * 2019-12-31 2020-04-24 螳螂慧视科技有限公司 Multi-depth lens face modeling method and system, storage medium and terminal
CN111353982A (en) * 2020-02-28 2020-06-30 贝壳技术有限公司 Depth camera image sequence screening method and device
CN111353982B (en) * 2020-02-28 2023-06-20 贝壳技术有限公司 Depth camera image sequence screening method and device
CN113836980A (en) * 2020-06-24 2021-12-24 中兴通讯股份有限公司 Face recognition method, electronic device and storage medium
CN114792436A (en) * 2021-01-25 2022-07-26 合肥的卢深视科技有限公司 Face depth image quality evaluation method and system, electronic device and storage medium
CN113126944A (en) * 2021-05-17 2021-07-16 北京的卢深视科技有限公司 Depth map display method, display device, electronic device, and storage medium
CN114299016A (en) * 2021-12-28 2022-04-08 北京的卢深视科技有限公司 Depth map detection device, method, system and storage medium
CN114283141A (en) * 2021-12-29 2022-04-05 上海肇观电子科技有限公司 Method, apparatus, electronic device and medium for assessing depth image quality
CN114862779A (en) * 2022-04-25 2022-08-05 合肥的卢深视科技有限公司 Image quality evaluation method, electronic device, and storage medium
CN115049658A (en) * 2022-08-15 2022-09-13 合肥的卢深视科技有限公司 RGB-D camera quality detection method, electronic device and storage medium
CN116576806A (en) * 2023-04-21 2023-08-11 深圳市磐锋精密技术有限公司 A precision control system for thickness detection equipment based on visual analysis
CN116576806B (en) * 2023-04-21 2024-01-26 深圳市磐锋精密技术有限公司 Precision control system for thickness detection equipment based on visual analysis
CN117058111A (en) * 2023-08-21 2023-11-14 大连亚明汽车部件股份有限公司 Quality inspection method and system for automobile aluminum alloy die casting die
CN117058111B (en) * 2023-08-21 2024-02-09 大连亚明汽车部件股份有限公司 Quality inspection method and system for automobile aluminum alloy die casting die

Also Published As

Publication number Publication date
CN110544233B (en) 2022-03-08

Similar Documents

Publication Publication Date Title
CN110544233A (en) Depth Image Quality Evaluation Method Based on Face Recognition Application
Tong et al. Convolutional neural network for asphalt pavement surface texture analysis
CN102657532B (en) Height measuring method and device based on body posture identification
CN110390696A (en) A visual detection method of circular hole pose based on image super-resolution reconstruction
CN109671174A (en) A kind of pylon method for inspecting and device
CN105300316A (en) Light stripe center rapid extraction method based on gray centroid method
Anusree et al. Characterization of sand particle morphology: state-of-the-art
CN108921864A (en) A kind of Light stripes center extraction method and device
CN111385558B (en) TOF camera module accuracy measurement method and system
CN113295142B (en) A terrain scanning analysis method and device based on FARO scanner and point cloud
CN112329726B (en) Face recognition method and device
CN114972153A (en) A method and system for visual measurement of bridge vibration and displacement based on deep learning
CN106780058A (en) The group dividing method and device of dynamic network
CN109060290A (en) The method that wind-tunnel density field is measured based on video and Sub-pixel Technique
CN109086350A (en) A kind of mixed image search method based on WiFi
CN115738219A (en) Pull-up evaluation method and device, electronic equipment and storage medium
CN106845535B (en) Typical Components recognition methods based on cloud
CN114065650B (en) Material crack tip multi-scale strain field measurement tracking method based on deep learning
CN106600616A (en) Image background clutter measurement method and system
CN113029103B (en) Inclination measuring method and system for foundation ring of wind turbine tower and storage medium
CN118898600A (en) Fully automatic analyzer sample status assessment method based on AI visual inspection
CN112924037A (en) Infrared body temperature detection system and detection method based on image registration
CN116228958A (en) A 3D Pavement Modeling Method
CN116612097A (en) Method and system for predicting internal section morphology of wood based on surface defect image
CN115619689A (en) An image processing method based on multi-source video acquisition terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230609

Address after: Room 611-217, R&D Center Building, China (Hefei) International Intelligent Voice Industrial Park, 3333 Xiyou Road, High-tech Zone, Hefei, Anhui 230001

Patentee after: Hefei lushenshi Technology Co.,Ltd.

Address before: Room 3032, Gate 6, Block B, 768 Creative Industry Park, 5 Xueyuan Road, Haidian District, Beijing 100083

Patentee before: BEIJING DILUSENSE TECHNOLOGY CO.,LTD.
