CN110544233B - Depth image quality evaluation method based on face recognition application

Depth image quality evaluation method based on face recognition application

Info

Publication number
CN110544233B
CN110544233B · CN201910693279.9A
Authority
CN
China
Prior art keywords
test
depth image
depth
mold
normal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910693279.9A
Other languages
Chinese (zh)
Other versions
CN110544233A (en)
Inventor
户磊
王亚运
崔哲
薛远
李东阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Dilusense Technology Co Ltd
Original Assignee
Beijing Dilusense Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dilusense Technology Co Ltd filed Critical Beijing Dilusense Technology Co Ltd
Priority to CN201910693279.9A priority Critical patent/CN110544233B/en
Publication of CN110544233A publication Critical patent/CN110544233A/en
Application granted
Publication of CN110544233B publication Critical patent/CN110544233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS · G06 COMPUTING; CALCULATING OR COUNTING · G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis · G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement · G06T 2207/10 Image acquisition modality · G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/30 Subject of image; Context of image processing · G06T 2207/30168 Image quality inspection
    • G06T 2207/30196 Human being; Person · G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a depth image quality evaluation method based on face recognition application, which comprises the following steps: acquiring depth images of one or more test molds with a depth perception sensor, wherein the surface shape of each test mold is different; for each test mold, analyzing its depth image and extracting features of the depth image, wherein the features extracted from the depth image of each test mold are different; and evaluating the face depth images collected by the depth perception sensor according to the features of the depth images of all the test molds. The extracted features are comprehensive and accurate, the evaluation of the face depth image based on them is correspondingly accurate, and the method is highly practical and reproducible.

Description

Depth image quality evaluation method based on face recognition application
Technical Field
The invention belongs to the technical field of depth perception imaging, and particularly relates to a depth image quality evaluation method based on face recognition application.
Background
Depth perception imaging has long been an important topic in machine vision. The current mainstream depth perception imaging technologies include monocular spatially-coded structured-light depth imaging, binocular structured-light texture-enhanced depth imaging, time-of-flight (TOF) imaging, and the like.
In the field of face recognition, with the development of deep learning in recent years, systems have gradually shifted from using color or infrared image data alone to combining color images with depth images, or infrared images with depth images. The depth image is mainly obtained through depth perception imaging, so the quality of the depth image generated by the depth imaging algorithm has a critical influence on face recognition applications.
At present, the algorithms behind depth perception imaging are quite mature. The commonly used depth imaging error measurement mainly evaluates the registration error between the real point cloud of an object and the point cloud of that object produced by the depth imaging algorithm. This approach has the following drawbacks: (1) the point cloud registration process itself introduces new errors that contaminate the evaluation of the depth imaging algorithm; (2) the registration error mixes several error sources, so no finer-grained evaluation can be made according to the actual application requirements; (3) the point cloud registration error only reflects how closely the depth imaging algorithm reproduces the real object, and for many practical applications such fidelity is not the main requirement.
In summary, the industry currently lacks a unified and effective quality evaluation method for depth imaging algorithms. In face recognition applications in particular, the depth imaging algorithm and the face recognition algorithm jointly determine the recognition accuracy, and the two need to be decoupled through a reasonable evaluation of the depth imaging algorithm's quality. Conventional depth imaging error measurements cannot meet the requirements of practical applications, especially the quality assessment of depth imaging algorithms for specific applications such as face recognition: point cloud registration error and similar generic measures cannot accurately quantify the quality of a depth imaging algorithm or its influence on face recognition accuracy.
Disclosure of Invention
In order to overcome the problem that the existing depth imaging error measurement method cannot effectively evaluate the quality of a depth image in face recognition application or at least partially solve the problem, the embodiment of the invention provides a depth image quality evaluation method based on face recognition application.
The embodiment of the invention provides a depth image quality evaluation method based on face recognition application, which comprises the following steps:
acquiring depth images of one or more test molds based on a depth perception sensor; wherein the surface shape of each of the test molds is different;
for any test mould, analyzing the depth image of the test mould, and extracting the characteristics of the depth image; wherein the features extracted from the depth image of each of the test molds are different;
and evaluating the face depth image collected by the depth perception sensor according to the characteristics of the depth images of all the test molds.
The embodiment of the invention provides a depth image quality evaluation method based on face recognition application. A depth sensor first acquires depth images of test molds with different surface shapes; each depth image is analyzed and features corresponding to the mold's surface shape are extracted; the features of the depth images of all test molds then serve as evaluation indices for the face depth image. The extracted features are comprehensive and accurate, so the evaluation of face depth images based on them is accurate, practical and reproducible.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic overall flow chart of a depth image quality evaluation method based on face recognition application according to an embodiment of the present invention;
fig. 2 is a schematic diagram of the transverse sine waveform dimensions of the sine mold in the depth image quality evaluation method based on face recognition application according to the embodiment of the present invention;
fig. 3 is a schematic diagram of the vertical sine waveform dimensions of the sine mold in the depth image quality evaluation method based on face recognition application according to the embodiment of the present invention;
fig. 4 is a schematic dimension diagram of the folded-surface mold in the depth image quality evaluation method based on face recognition application according to the embodiment of the present invention;
fig. 5 is a schematic dimension diagram of the cylindrical-surface mold in the depth image quality evaluation method based on face recognition application according to the embodiment of the present invention;
fig. 6 is a schematic view of an overall structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In an embodiment of the present invention, a depth image quality evaluation method based on face recognition application is provided, and fig. 1 is a schematic overall flow chart of the depth image quality evaluation method based on face recognition application provided in the embodiment of the present invention, where the method includes: s101, acquiring depth images of one or more test molds based on a depth perception sensor; wherein the surface shape of each of the test molds is different;
the depth perception sensor is a perception system with a built-in depth imaging algorithm, such as a depth camera integrated with the depth imaging algorithm. The surface shape of the test mold is determined according to an application scene, for example, when the test mold is applied to face recognition, the surface shape of the test mold is set according to the face surface characteristics. Because the surface characteristics of the human face are complex, the surface characteristics of the human face can be represented by a plurality of simple surface characteristics, and a test mold is prepared according to each simple surface characteristic.
The test mold is prepared first, for example by 3D printing. A depth image of each test mold is then acquired with the depth perception sensor. The relative position of the depth perception sensor and the test mold is fixed so that the test mold appears at the center of the sensor's field of view without rotation and stays parallel to the imaging sensor of the depth perception sensor. Preferably, the relative position between the depth perception sensor and the test mold is adjusted with a fixture, and the rotation angle should not exceed 3 degrees. Alternatively, starting from a preset nearest evaluation distance and moving out to a preset farthest evaluation distance, a depth image is collected for each test mold at each evaluation distance, and the depth image collected at each distance is analyzed so that the quality at every evaluation distance can be evaluated.
S102, analyzing the depth image of any test mold, and extracting the features of the depth image; wherein the features extracted from the depth image of each of the test molds are different;
since the surface shape of each test mold is different, the features extracted from the depth image of each test mold are also different. The present embodiment is not limited to the kind of feature extracted from the depth image of each test pattern.
And S103, evaluating the face depth image collected by the depth perception sensor according to the characteristics of the depth images of all the test molds.
The surface shape of each test mold is used for representing a simple surface feature of the human face, and the feature of the depth image of each test mold is used as the feature of the human face depth image, so that various features of the human face depth image are obtained. And evaluating the face depth image by taking various characteristics of the face depth image as evaluation indexes.
In this embodiment, the depth sensor acquires depth images of the test molds, the depth images of the different surface shapes are analyzed, features corresponding to each mold's surface shape are extracted, and the features of the depth images of all test molds are used as evaluation indices for the face depth image. The extracted features are comprehensive and accurate, so the evaluation of face depth images based on them is accurate, practical and reproducible.
On the basis of the above embodiment, the test mold in this embodiment includes a plane mold, a sine surface mold, a folded surface mold and a cylindrical surface mold; the plane mould is a mould with a surface undulation error smaller than a first preset threshold value; the sine surface die is a die with the surface in a transversely and longitudinally staggered sine shape; the surface folding mold is a mold with a continuous right-angle folding surface; the cylindrical surface mould is a mould with a continuous cylindrical curved surface on the surface.
The plane mould is a plane plate or a wall surface with a plane fluctuation error smaller than a first preset threshold value and is used for inspecting the accuracy of a depth imaging algorithm on plane feature recovery; the face shape of the sine face mold is a staggered sine shape and is used for simulating the characteristics of human face parts, such as nose and mouth; the surface type of the folded surface die is a continuous right-angle folded surface and is used for inspecting the distinguishing capability of a depth imaging algorithm on the plane normal direction; the surface type of the cylindrical surface mould is a continuous cylindrical curved surface and is used for inspecting the smoothness of the depth imaging algorithm on the continuous curved surface characteristic recovery.
The test molds can be manufactured by 3D printing, with a machining accuracy no worse than 1 mm. The overall size of the planar mold is 300 mm × 300 mm, and the first preset threshold is 1 mm. The overall size of the sine mold is 300 mm × 300 mm, with 8 peaks in each of the transverse and vertical directions; as shown in fig. 2, the transverse sine waveform has an amplitude of 10 mm and a period of 40 mm, and as shown in fig. 3, the vertical sine waveform has an amplitude of 10 mm and a period of 30 mm. The overall size of the folded-surface mold is 300 mm × 300 mm; it consists of 6 groups of folded surfaces with 90-degree included angles, each with a peak-to-valley value of 20 mm and an adjacent peak-to-peak distance of 40 mm, as shown in fig. 4. The overall size of the cylindrical-surface mold is 300 mm × 300 mm; it consists of 3 groups of continuous cylindrical surfaces with a peak-to-valley value of 40 mm and an adjacent peak-to-peak distance of 80 mm, as shown in fig. 5.
On the basis of the above embodiment, the characteristics corresponding to the planar mold in this embodiment include precision, void ratio of the effective area, and ratio of dead spots; the characteristics corresponding to the sine surface mold comprise sine fitting degree, amplitude relative error and period relative error; the characteristics corresponding to the folded surface mould comprise a right-angle folded surface normal line division degree; the corresponding features of the cylindrical surface mold include a cylindrical surface normal smoothness.
Specifically, the features extracted from the depth image of the planar mold are precision, effective area void rate, and dead pixel ratio. The precision feature characterizes how precisely the depth imaging algorithm recovers planar features, the effective area void rate characterizes the probability of holes when the algorithm recovers planar features, and the dead pixel ratio characterizes the proportion of points with large errors when the algorithm recovers planar features.
The features of the depth image of the sine mold are sine fitting degree, amplitude relative error, and period relative error. The sine fitting degree indicates whether the surface shape is recovered truly and accurately during depth imaging; the amplitude relative error indicates whether the depth imaging algorithm can identify the undulation and how accurate the depth data is; the period relative error reflects the frequency response of the depth imaging algorithm, i.e., whether it can respond to the high-frequency features of the sine mold.
The feature of the depth image of the folded-surface mold is the right-angle fold-surface normal discrimination, which characterizes how well the depth imaging algorithm distinguishes the two planes forming a right angle. It is expressed as the weighted average distance between the measured and true unit normal vectors of the three-dimensional points in the selected area; the smaller the value, the better the algorithm distinguishes the recovered right-angle planes.
The feature extracted from the depth image of the cylindrical-surface mold is the cylinder normal smoothness, which characterizes how smoothly the depth imaging algorithm recovers cylindrical features. It is expressed as the proportion of measured unit normal vectors of three-dimensional points in the selected area that fall within a certain range of the normal truth curve; the larger the smoothness value, the smoother the cylindrical surface recovered by the depth imaging algorithm (consistent with the computation and the feature ranking described below).
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold, and extracting the features of the depth image specifically includes: if the test mold is a plane mold, sampling the depth image of the test mold at a preset sampling interval; performing space plane fitting on coordinate values of pixel points in a neighborhood window of each sampling point based on a least square method to obtain a fitting plane corresponding to each sampling point; taking the distance from any sampling point to a fitting plane corresponding to the sampling point as the residual error of the target function at the sampling point; taking the average value of the residual errors of the target functions at all the sampling points as the precision corresponding to the test mold; converting the residual error of the objective function at each sampling point into a pixel error; taking the number proportion of sampling points with pixel errors larger than a second preset threshold value in the depth image of the test mold as the dead pixel rate corresponding to the test mold; selecting a region with a preset proportion from the center of the effective region of the depth image of the test mold as a region of interest; and counting the proportion of the number of the holes in the region of interest to the total number of the pixels in the region of interest, and taking the proportion as the effective region hole rate corresponding to the test mold.
Specifically, the precision feature is extracted as follows: the depth image acquired at each evaluation distance is sampled every preset number of pixels, with a sampling interval of 10 pixels here. A spatial plane is fitted by least squares to the coordinate values of the pixels in the neighborhood window of each sample point, the window size being 50 × 50. The residual of the objective function at each sample point, i.e., the spatial distance from the sample point to its fitted plane, is obtained, and the average of these distances over all sample points is taken as the precision feature at each sampling distance.
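By way of illustration, this precision computation can be sketched as follows (Python with NumPy; the function name, the zero-encoded hole handling and the back-projection of pixels to camera coordinates are assumptions, while the 10-pixel sampling interval and the 50 × 50 window follow this embodiment):

```python
import numpy as np

def plane_precision(depth, fx, fy, cx, cy, step=10, win=50):
    """Mean distance of sampled points to their locally fitted planes."""
    h, w = depth.shape
    half = win // 2
    residuals = []
    for v in range(half, h - half, step):
        for u in range(half, w - half, step):
            zc = depth[v, u]
            if zc <= 0:                      # skip hole pixels (assumed zero-encoded)
                continue
            vv, uu = np.mgrid[v - half:v + half, u - half:u + half]
            z = depth[v - half:v + half, u - half:u + half].astype(np.float64)
            m = z > 0
            if m.sum() < 3:
                continue
            # back-project window pixels to 3D camera coordinates
            X = (uu[m] - cx) * z[m] / fx
            Y = (vv[m] - cy) * z[m] / fy
            # least-squares plane Z = a*X + b*Y + c
            A = np.c_[X, Y, np.ones(m.sum())]
            (a, b, c), *_ = np.linalg.lstsq(A, z[m], rcond=None)
            # residual: distance from the sample point to the fitted plane
            xc, yc = (u - cx) * zc / fx, (v - cy) * zc / fy
            residuals.append(abs(a * xc + b * yc + c - zc) / np.sqrt(a * a + b * b + 1.0))
    return float(np.mean(residuals)) if residuals else float("nan")
```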
The effective area void rate feature is extracted as follows: a region of a preset proportion centered in the effective area of the depth image at each evaluation distance of the planar mold is selected as the region of interest, and the percentage of hole points in the region of interest relative to its total number of pixels is counted as the effective area void rate feature at that evaluation distance. Preferably, this embodiment uses a region covering 80% of the center of the effective area as the region of interest.
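A minimal sketch of this void-rate computation (the 80% centered region of interest follows this embodiment; treating the full image as the effective area and encoding holes as zero depth are assumptions):

```python
import numpy as np

def effective_area_void_rate(depth, roi_ratio=0.8):
    """Fraction of hole (zero-depth) pixels inside a centered ROI."""
    h, w = depth.shape
    mh = int(h * (1.0 - roi_ratio) / 2.0)
    mw = int(w * (1.0 - roi_ratio) / 2.0)
    roi = depth[mh:h - mh, mw:w - mw]
    return float(np.mean(roi == 0))
```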
The dead pixel ratio is extracted as follows: a pixel is sampled every preset number of pixels on the depth image at each evaluation distance, with a sampling interval of 10 pixels. A plane is fitted by least squares to the pixels in the neighborhood window of each sample point, the window size being 50 × 50. The residual of the objective function at each sample point is obtained and converted into a pixel error by the following formula:

E_p = R_e · T · F / d²

where E_p is the pixel error of a sample point, R_e is the residual of the objective function at that sample point, T is the baseline length of the depth perception sensor, F is the focal length of the depth perception sensor, and d is the acquisition distance of the depth image. The proportion of sample points in the depth image of the planar mold whose pixel error exceeds a second preset threshold is taken as the dead pixel ratio of the planar mold; here the second preset threshold is 0.5.
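The conversion and thresholding can be sketched directly from the formula (the residuals come from the plane fits described above; function and parameter names are illustrative):

```python
import numpy as np

def dead_pixel_ratio(residuals, baseline, focal, distance, thresh=0.5):
    """Share of sample points whose pixel error E_p = R_e*T*F/d^2 exceeds thresh."""
    pixel_err = np.asarray(residuals, dtype=np.float64) * baseline * focal / distance ** 2
    return float(np.mean(pixel_err > thresh))
```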
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold, and extracting the features of the depth image specifically includes: if the test mold is a sine-surface mold, converting the depth image of the test mold into a pseudo-color image; selecting cross sections where a plurality of transverse and longitudinal peak valleys are located from the pseudo-color image, connecting the peaks at two ends of each cross section, and obtaining an intersection line of a peak connecting line and an imaging plane of the depth image; converting the pixel coordinates on the intersection line into plane coordinates; performing linear fitting on the converted coordinate points, and rotating the converted coordinate points according to the slope of the fitted linear; carrying out sine curve fitting on the rotated coordinate points, and calculating amplitude relative errors and period relative errors corresponding to the test mould according to the fitted sine curve and the parameter values of the test mould; and calculating the corresponding sine fitting degree of the test mold according to the corresponding fitting value of each coordinate point on the peak connecting line on the fitted sine curve.
Specifically, the step of extracting the features of the depth image of the sine surface mold specifically includes:
a. and converting the acquired depth image of the sinusoidal mold into a pseudo-color image so as to conveniently and accurately select a sinusoidal peak point, namely an extreme point, subsequently. Selecting a plurality of cross sections with transverse and vertical peak-valley positions, selecting a connecting line of peak points at two ends of the cross section to obtain an intersection line of the connecting line and an imaging plane, and selecting image point coordinates (i, j, z) on the intersection line to be converted into a plane coordinate point (d)ijZ) for subsequent line and curve fitting, the conversion formula is described as follows:
Figure BDA0002148543320000081
wherein i and j are horizontal and vertical coordinates of the pixel points correspondingly, z is the depth value of the pixel point, and dijIs a plane coordinate value, cxAnd cyAs principal point coordinates of the depth camera, fxAnd fyIs the focal length of the depth camera; preferably, the embodiment selects the peak-to-peak connecting lines of two transverse and two vertical cross sections of the central area of the sine surface die for testing.
b. A straight line is fitted to the converted coordinate points on each connecting line, and the points are rotated in the opposite direction according to the slope of the line, eliminating the influence of the tilt at which the sine mold was placed during capture on the sine fitting;
c. Both the straight-line and the sine-curve fitting use the least squares method. The sine-curve fitting requires reasonable initial parameter values, namely the amplitude, period, phase shift and amplitude offset, which can either be set manually or left at default values;
d. From the parameter values of the fitted sine curve, namely its amplitude and period, combined with the true parameter values of the sine mold, the amplitude relative error and period relative error are calculated; at the same time the fitted value of each point on the peak-to-peak connecting line is obtained for calculating the sine fitting degree.
Preferably, the sine fitting degree feature may use the goodness of fit R², calculated as:

R² = 1 − Σ_i (Y_i − y_i)² / Σ_i (Y_i − Ȳ)²

where R² is the sine fitting degree of the sine mold, y_i is the fitted value of the i-th coordinate point on the peak connecting line, Y_i is the actual value of the i-th coordinate point on the peak connecting line, and Ȳ is the average of the actual values of all coordinate points on the peak connecting line.
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold, and extracting the features of the depth image specifically includes: if the test mould is a folded surface mould, converting the depth image of the test mould into a pseudo-color image; selecting a test area from the pseudo color image, and converting depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor; for any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, acquiring a normal vector of the three-dimensional coordinate point according to a fitted plane, normalizing the normal vector, and acquiring a normal measurement value of the three-dimensional coordinate point; selecting normal truth values of two adjacent planes of the test mold which are at right angles on a unit normal spherical surface; calculating Euclidean distances between the normal measurement value of the three-dimensional coordinate point and the two normal truth values respectively, and acquiring the minimum value of the two Euclidean distances; and sequencing the minimum values corresponding to the three-dimensional coordinate points from small to large, calculating the weighted average distance between the normal measured value and the normal true value of the three-dimensional coordinate points according to the sequencing result, and taking the weighted average distance as the normal discrimination of the right-angled folding surface corresponding to the test mold.
Specifically, the step of performing feature extraction on the depth image of the folded surface mold specifically includes:
a. The collected depth map of the folded-surface mold is converted into a pseudo-color map so that the test area can be selected accurately. The depth non-zero pixels in the framed test area are selected and converted into a three-dimensional point cloud using the parameters of the depth perception sensor;
b. A preset number of nearest neighbors in the neighborhood of each three-dimensional point are obtained with a KD-tree, the normal vector of each point is computed by fitting a plane to its neighbors and then normalized, and after the whole point cloud has been traversed, outlier noise points are removed from the resulting unit-normal point cloud by Euclidean clustering. The unit normal of each three-dimensional point obtained after denoising is taken as the normal measurement value of that point. Preferably, this embodiment fits the plane to the 50 nearest neighbors of each three-dimensional point;
c. On the unit normal sphere, the normal truth values of the two perpendicular planes, i.e., the aggregation points of the normal point cloud, are selected. For each normal measurement, its Euclidean distances to the two normal truth values are computed and the minimum is taken; these minima are sorted in ascending order, weights are assigned by percentile, and the weighted average distance (the weighted distances are summed and averaged) is computed as the right-angle fold-surface normal discrimination feature:

D = ( α · Σ_{i=1..a·n} d_i + β · Σ_{i=a·n+1..n} d_i ) / (n · d_b)

where D is the weighted average distance, n is the number of three-dimensional coordinate points, a is the sorting percentage used in weighting, α and β are the weights of the corresponding percentiles, d_i is the minimum of the Euclidean distances between the normal measurement of the i-th three-dimensional coordinate point in the sorted order and the two normal truth values, and d_b is the Euclidean distance between the two normal truth values. Preferably, the first 80% of the distances are weighted 0.2 and the last 20% are weighted 0.8.
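A sketch of the normal estimation and the discrimination statistic (Python with NumPy/SciPy; the Euclidean-clustering outlier removal is omitted, the camera-facing normal orientation is an assumption, and the normalization by n·d_b follows the formula above):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_unit_normals(points, k=50):
    """Per-point unit normals from least-squares planes over k nearest neighbors."""
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        normals[i] = np.linalg.svd(q, full_matrices=False)[2][-1]  # least-variance axis
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    normals[normals[:, 2] > 0] *= -1.0          # orient toward the camera (-Z)
    return normals

def fold_normal_discrimination(normals, n1, n2, a=0.8, alpha=0.2, beta=0.8):
    """Weighted average distance D to the nearer of the two normal truth values."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    d_min = np.sort(np.minimum(np.linalg.norm(normals - n1, axis=1),
                               np.linalg.norm(normals - n2, axis=1)))
    n, cut = len(d_min), int(a * len(d_min))
    d_b = np.linalg.norm(n1 - n2)
    return float((alpha * d_min[:cut].sum() + beta * d_min[cut:].sum()) / (n * d_b))
```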
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold, and extracting the features of the depth image specifically includes: if the test mould is a cylindrical surface mould, converting the depth image of the test mould into a pseudo-color image; selecting a test area from the pseudo color image, and converting depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor; for any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, acquiring a normal vector of the three-dimensional coordinate point according to a fitted plane, normalizing the normal vector, and acquiring a normal measurement value of the three-dimensional coordinate point; projecting the normal measurement value of the three-dimensional coordinate point to an XY coordinate plane on a unit normal spherical surface to obtain a projection point of the normal measurement value of the three-dimensional coordinate point; fitting projection points of normal measurement values of all the three-dimensional coordinate points into a straight line, and calculating the distance from each projection point to the straight line; and calculating the proportion of the projection points with the distance smaller than the third preset threshold value in all the projection points, and taking the proportion as the normal smoothness of the cylindrical surface corresponding to the test mould.
Specifically, the step of extracting the features of the depth image of the cylindrical surface mold specifically includes:
a. The acquired depth map of the cylindrical-surface mold is converted into a pseudo-color map so that the test area can be framed accurately; the depth non-zero pixels in the framed area are selected and converted into a three-dimensional point cloud using the camera parameters;
b. A preset number of nearest neighbors in the neighborhood of each three-dimensional point are obtained with a KD-tree, the normal vector of each point is computed by fitting a plane to its neighbors and then normalized, and after traversing the point cloud, outlier noise points are removed from the unit-normal point cloud by Euclidean clustering; the unit normal of each three-dimensional point obtained after denoising is taken as its normal measurement value. Preferably, this embodiment fits the plane to the 50 nearest neighbors of each three-dimensional point;
c. On the unit normal sphere, the normal measurements are projected onto the XY coordinate plane, i.e., only the x and y coordinates are kept. A straight line is fitted to the projection points of all three-dimensional points by least squares, the distance from each projection point to the fitted line is calculated, and the proportion of projection points whose distance is below a distance threshold is counted as the cylinder normal smoothness. Preferably, the third preset threshold is 0.1.
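The smoothness statistic can then be sketched as follows (reusing knn_unit_normals from the previous sketch; the total-least-squares line fit via SVD is a substitution for the plain least-squares fit and handles near-vertical projection lines):

```python
import numpy as np

def cylinder_normal_smoothness(normals, thresh=0.1):
    """Fraction of XY-projected unit normals within thresh of a fitted line."""
    pts = normals[:, :2]                       # keep only the x and y coordinates
    centered = pts - pts.mean(axis=0)
    line_n = np.linalg.svd(centered, full_matrices=False)[2][-1]   # unit line normal (a, b)
    c = -line_n @ pts.mean(axis=0)
    dist = np.abs(pts @ line_n + c)            # point-to-line distances |a*x + b*y + c|
    return float(np.mean(dist < thresh))
```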
For face recognition applications, the different features of the different test molds affect the quality of the depth imaging algorithm to different degrees; according to their actual influence, the features are grouped as follows:
(1) First-layer feature: sine mold -> sine fitting degree; the larger the value, the better, and it must stay above a certain threshold to ensure a high face recognition rate; preferably, the invention uses a fitting-degree threshold of 0.9;
(2) Second-layer features: sine mold -> period relative error, where the closer the value is to the same feature of the depth imaging algorithm used to generate the face recognition training data set, the better; folded-surface mold -> right-angle fold-surface normal discrimination, the smaller the better; cylindrical-surface mold -> cylinder normal smoothness, the larger the better;
(3) Third-layer features: planar mold -> dead pixel ratio, the smaller the better; planar mold -> effective area void rate, the smaller the better; planar mold -> precision, the smaller the better;
(4) Fourth-layer feature: sine mold -> amplitude relative error, the smaller the better.
According to the influence of each layer of features on face recognition applications, the respective requirements are:
(1) the first-layer feature must be satisfied, otherwise the face recognition rate may be low;
(2) the second-layer features have a large influence on the face recognition rate and should be as close to their target values as possible;
(3) the third-layer features have no obvious influence on the face recognition rate but are still important; in particular, some of them, such as the void rate, can seriously affect the face recognition result once they exceed a certain threshold;
(4) the fourth-layer feature currently has no obvious influence on the face recognition rate, but should be kept within a controllable range to avoid affecting it.
The embodiment provides an electronic device, and fig. 6 is a schematic view of an overall structure of the electronic device according to the embodiment of the present invention, where the electronic device includes: at least one processor 601, at least one memory 602, and a bus 603; wherein,
the processor 601 and the memory 602 communicate with each other via a bus 603;
the memory 602 stores program instructions executable by the processor 601, and the processor calls the program instructions to perform the methods provided by the above method embodiments, for example, the method includes: acquiring depth images of one or more test molds based on a depth perception sensor; wherein the surface shape of each of the test molds is different; for any test mould, analyzing the depth image of the test mould, and extracting the characteristics of the depth image; wherein the features extracted from the depth image of each of the test molds are different; and evaluating the face depth image collected by the depth perception sensor according to the characteristics of the depth images of all the test molds.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the methods provided by the above method embodiments, for example, including: acquiring depth images of one or more test molds based on a depth perception sensor; wherein the surface shape of each of the test molds is different; for any test mould, analyzing the depth image of the test mould, and extracting the characteristics of the depth image; wherein the features extracted from the depth image of each of the test molds are different; and evaluating the face depth image collected by the depth perception sensor according to the characteristics of the depth images of all the test molds.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A depth image quality evaluation method based on face recognition application is characterized by comprising the following steps:
acquiring depth images of one or more test molds based on a depth perception sensor; wherein the surface shape of each of the test molds is different;
for any test mould, analyzing the depth image of the test mould, and extracting the characteristics of the depth image; wherein the features extracted from the depth image of each of the test molds are different;
evaluating the face depth image collected by the depth perception sensor according to the characteristics of the depth images of all the test molds;
the test mould comprises a plane mould, a sine surface mould, a folded surface mould and a cylindrical surface mould;
the plane mould is a mould with a surface undulation error smaller than a first preset threshold value;
the sine surface die is a die with the surface in a transversely and longitudinally staggered sine shape;
the surface folding mold is a mold with a continuous right-angle folding surface;
the cylindrical surface mould is a mould with a continuous cylindrical curved surface on the surface.
2. The depth image quality evaluation method based on face recognition application of claim 1, wherein the characteristics corresponding to the planar mold comprise precision, effective area voidage and dead pixel ratio;
the characteristics corresponding to the sine surface mold comprise sine fitting degree, amplitude relative error and period relative error;
the characteristics corresponding to the folded surface mould comprise a right-angle folded surface normal line division degree;
the corresponding features of the cylindrical surface mold include a cylindrical surface normal smoothness.
3. The method for evaluating the quality of a depth image based on face recognition application according to claim 1, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
if the test mold is a plane mold, sampling the depth image of the test mold at a preset sampling interval;
performing space plane fitting on coordinate values of pixel points in a neighborhood window of each sampling point based on a least square method to obtain a fitting plane corresponding to each sampling point;
taking the distance from any sampling point to a fitting plane corresponding to the sampling point as the residual error of the target function at the sampling point;
taking the average value of the residual errors of the target functions at all the sampling points as the precision corresponding to the test mold;
converting the residual error of the objective function at each sampling point into a pixel error;
taking the number proportion of sampling points with pixel errors larger than a second preset threshold value in the depth image of the test mold as the dead pixel rate corresponding to the test mold;
selecting a region with a preset proportion from the center of the effective region of the depth image of the test mold as a region of interest;
and counting the proportion of the number of the holes in the region of interest to the total number of the pixels in the region of interest, and taking the proportion as the effective region hole rate corresponding to the test mold.
4. The method of claim 3, wherein the residual error of the objective function at each sampling point is converted into a pixel error by the following formula:
E_p = R_e · T · F / d²

wherein E_p is the pixel error of any one of the sampling points, R_e is the residual of the objective function at that sampling point, T is the baseline length of the depth perception sensor, F is the focal length of the depth perception sensor, and d is the acquisition distance of the depth image.
5. The method for evaluating the quality of a depth image based on face recognition application according to claim 1, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
if the test mold is a sine-surface mold, converting the depth image of the test mold into a pseudo-color image;
selecting cross sections where a plurality of transverse and longitudinal peak valleys are located from the pseudo-color image, connecting the peaks at two ends of each cross section, and obtaining an intersection line of a peak connecting line and an imaging plane of the depth image;
converting the pixel coordinates on the intersection line into plane coordinates;
performing linear fitting on the converted coordinate points, and rotating the converted coordinate points according to the slope of the fitted linear;
carrying out sine curve fitting on the rotated coordinate points, and calculating amplitude relative errors and period relative errors corresponding to the test mould according to the fitted sine curve and the parameter values of the test mould;
and calculating the corresponding sine fitting degree of the test mold according to the corresponding fitting value of each coordinate point on the peak connecting line on the fitted sine curve.
6. The method of claim 5, wherein the degree of fitting of the sine corresponding to the test mold is calculated according to the fitting value of each coordinate point on the peak connecting line on the fitted sine curve by the following formula:
R² = 1 − Σ_i (Y_i − y_i)² / Σ_i (Y_i − Ȳ)²

wherein R² is the sine fitting degree corresponding to the test mold, y_i is the fitted value of the i-th coordinate point on the peak connecting line, Y_i is the actual value of the i-th coordinate point on the peak connecting line, and Ȳ is the average of the actual values of all coordinate points on the peak connecting line.
7. The method for evaluating the quality of a depth image based on face recognition application according to claim 1, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
if the test mould is a folded surface mould, converting the depth image of the test mould into a pseudo-color image;
selecting a test area from the pseudo color image, and converting depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor;
for any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, acquiring a normal vector of the three-dimensional coordinate point according to a fitted plane, normalizing the normal vector, and acquiring a normal measurement value of the three-dimensional coordinate point;
selecting normal truth values of two adjacent planes of the test mold which are at right angles on a unit normal spherical surface;
calculating Euclidean distances between the normal measurement value of the three-dimensional coordinate point and the two normal truth values respectively, and acquiring the minimum value of the two Euclidean distances;
and sequencing the minimum values corresponding to the three-dimensional coordinate points from small to large, calculating the weighted average distance between the normal measured value and the normal true value of the three-dimensional coordinate points according to the sequencing result, and taking the weighted average distance as the normal discrimination of the right-angled folding surface corresponding to the test mold.
8. The method of claim 7, wherein the weighted average distance between the normal measured value and the normal true value of the three-dimensional coordinate point is calculated according to the ranking result by:
D = ( α · Σ_{i=1..a·n} d_i + β · Σ_{i=a·n+1..n} d_i ) / (n · d_b)

wherein D is the weighted average distance, n is the number of the three-dimensional coordinate points, a is a preset proportion, α and β are weights, d_i is the minimum of the Euclidean distances between the normal measurement value of the i-th three-dimensional coordinate point in the sorting result and the two normal truth values, and d_b is the Euclidean distance between the two normal truth values.
9. The method for evaluating the quality of a depth image based on face recognition application according to claim 1, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
if the test mould is a cylindrical surface mould, converting the depth image of the test mould into a pseudo-color image;
selecting a test area from the pseudo color image, and converting depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor;
for any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, acquiring a normal vector of the three-dimensional coordinate point according to a fitted plane, normalizing the normal vector, and acquiring a normal measurement value of the three-dimensional coordinate point;
projecting the normal measurement value of the three-dimensional coordinate point to an XY coordinate plane on a unit normal spherical surface to obtain a projection point of the normal measurement value of the three-dimensional coordinate point;
fitting projection points of normal measurement values of all the three-dimensional coordinate points into a straight line, and calculating the distance from each projection point to the straight line;
and calculating the proportion of the projection points with the distance smaller than the third preset threshold value in all the projection points, and taking the proportion as the normal smoothness of the cylindrical surface corresponding to the test mould.
CN201910693279.9A 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application Active CN110544233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693279.9A CN110544233B (en) 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910693279.9A CN110544233B (en) 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application

Publications (2)

Publication Number Publication Date
CN110544233A CN110544233A (en) 2019-12-06
CN110544233B true CN110544233B (en) 2022-03-08

Family

ID=68709887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693279.9A Active CN110544233B (en) 2019-07-30 2019-07-30 Depth image quality evaluation method based on face recognition application

Country Status (1)

Country Link
CN (1) CN110544233B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111063016A (en) * 2019-12-31 2020-04-24 螳螂慧视科技有限公司 Multi-depth lens face modeling method and system, storage medium and terminal
CN111353982B (en) * 2020-02-28 2023-06-20 贝壳技术有限公司 Depth camera image sequence screening method and device
CN113836980A (en) * 2020-06-24 2021-12-24 中兴通讯股份有限公司 Face recognition method, electronic device and storage medium
CN113126944B (en) * 2021-05-17 2021-11-09 北京的卢深视科技有限公司 Depth map display method, display device, electronic device, and storage medium
CN114299016B (en) * 2021-12-28 2023-01-10 合肥的卢深视科技有限公司 Depth map detection device, method, system and storage medium
CN115049658B (en) * 2022-08-15 2022-12-16 合肥的卢深视科技有限公司 RGB-D camera quality detection method, electronic device and storage medium
CN116576806B (en) * 2023-04-21 2024-01-26 深圳市磐锋精密技术有限公司 Precision control system for thickness detection equipment based on visual analysis
CN117058111B (en) * 2023-08-21 2024-02-09 大连亚明汽车部件股份有限公司 Quality inspection method and system for automobile aluminum alloy die casting die

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5960097A (en) * 1997-01-21 1999-09-28 Raytheon Company Background adaptive target detection and tracking with multiple observation and processing stages
US8457380B2 (en) * 2007-05-30 2013-06-04 Koninklijke Philips Electronics N.V. PET local tomography
CN103763552B (en) * 2014-02-17 2015-07-22 福州大学 Stereoscopic image non-reference quality evaluation method based on visual perception characteristics
CN104978549B (en) * 2014-04-03 2019-04-02 北京邮电大学 Three-dimensional face images feature extracting method and system
CN105989591A (en) * 2015-02-11 2016-10-05 詹曙 Automatic teller machine imaging method capable of automatically acquiring remittee face stereo information
CN105956582B (en) * 2016-06-24 2019-07-30 深圳市唯特视科技有限公司 A kind of face identification system based on three-dimensional data
CN106127250A (en) * 2016-06-24 2016-11-16 深圳市唯特视科技有限公司 A kind of face method for evaluating quality based on three dimensional point cloud
CN106803952B (en) * 2017-01-20 2018-09-14 宁波大学 In conjunction with the cross validation depth map quality evaluating method of JND model
CN107462587B (en) * 2017-08-31 2021-01-19 华南理工大学 Precise visual inspection system and method for concave-convex mark defects of flexible IC substrate
CN108171748B (en) * 2018-01-23 2021-12-07 哈工大机器人(合肥)国际创新研究院 Visual identification and positioning method for intelligent robot grabbing application

Also Published As

Publication number Publication date
CN110544233A (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN110544233B (en) Depth image quality evaluation method based on face recognition application
CN105550678B (en) Human action feature extracting method based on global prominent edge region
CN108303037B (en) Method and device for detecting workpiece surface shape difference based on point cloud analysis
CN109272524B (en) Small-scale point cloud noise denoising method based on threshold segmentation
CN107462204B (en) A kind of three-dimensional pavement nominal contour extracting method and system
CN108288271A (en) Image detecting system and method based on three-dimensional residual error network
CN110807781B (en) Point cloud simplifying method for retaining details and boundary characteristics
CN108921864A (en) A kind of Light stripes center extraction method and device
CN111223133A (en) Registration method of heterogeneous images
CN113435282B (en) Unmanned aerial vehicle image ear recognition method based on deep learning
CN115330958A (en) Real-time three-dimensional reconstruction method and device based on laser radar
CN116539619B (en) Product defect detection method, system, device and storage medium
CN112329726B (en) Face recognition method and device
CN109584206B (en) Method for synthesizing training sample of neural network in part surface flaw detection
CN110349209A (en) Vibrating spear localization method based on binocular vision
CN116152697A (en) Three-dimensional model measuring method and related device for concrete structure cracks
CN115035164A (en) Moving target identification method and device
CN105740859B (en) A kind of three-dimensional interest point detecting method based on geometric measures and sparse optimization
CN117706577A (en) Ship size measurement method based on laser radar three-dimensional point cloud algorithm
CN117152344B (en) Tunnel surrounding rock structural surface analysis method and system based on photo reconstruction point cloud
CN116612097A (en) Method and system for predicting internal section morphology of wood based on surface defect image
CN112164044A (en) Wear analysis method of rigid contact net based on binocular vision
CN116721410A (en) Three-dimensional instance segmentation method and system for dense parts of aeroengine
CN116934678A (en) Method for detecting pit defects of aircraft skin under different scales based on point cloud data
CN116051808A (en) YOLOv 5-based lightweight part identification and positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230609

Address after: Room 611-217, R & D center building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, high tech Zone, Hefei, Anhui 230001

Patentee after: Hefei lushenshi Technology Co.,Ltd.

Address before: Room 3032, gate 6, block B, 768 Creative Industry Park, 5 Xueyuan Road, Haidian District, Beijing 100083

Patentee before: BEIJING DILUSENSE TECHNOLOGY CO.,LTD.
