CN110544233A - Depth image quality evaluation method based on face recognition application
- Publication number: CN110544233A (application CN201910693279.9A)
- Authority: CN (China)
- Prior art keywords: test, depth image, mold, depth, normal
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06T2207/10028: Image acquisition modality; range image; depth image; 3D point clouds
- G06T2207/30168: Subject of image; image quality inspection
- G06T2207/30196, G06T2207/30201: Subject of image; human being, person; face
Abstract
The invention provides a depth image quality evaluation method based on face recognition application, which comprises the following steps: acquiring depth images of one or more test molds based on a depth perception sensor, wherein the surface shape of each test mold is different; for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image, wherein the features extracted from the depth image of each test mold are different; and evaluating the face depth image collected by the depth perception sensor according to the features of the depth images of all the test molds. The extracted features are more comprehensive and accurate, the evaluation of the face depth image based on them is more accurate, and the method has high practicability and reproducibility.
Description
Technical Field
The invention belongs to the technical field of depth perception imaging, and particularly relates to a depth image quality evaluation method based on face recognition application.
Background
Depth perception imaging has long been an important topic in machine vision. The mainstream depth perception imaging technologies currently include monocular spatially coded structured-light depth imaging, binocular structured-light texture-enhancement depth imaging, Time of Flight (TOF), and others.
In the field of face recognition, with the development of deep learning in recent years, systems have gradually transitioned from using color or infrared image data alone to using color together with depth image data, or infrared together with depth image data. The depth image is mainly obtained through depth perception imaging, so the quality of the depth image generated by the depth imaging algorithm has a critical influence on face recognition applications.
At present, the algorithms underlying depth perception imaging are quite mature. The commonly used depth imaging error measurement method mainly measures the registration error between the actual point cloud of an object and the point cloud of the object obtained by the depth imaging algorithm. This method has the following defects: (1) the point cloud registration process introduces a new error that distorts the evaluation of the depth imaging algorithm's quality; (2) the registration error mixes various error sources, so no finer-grained evaluation can be made according to the actual application requirements; (3) the point cloud registration error only reflects the ability of the depth imaging algorithm to acquire depth data close to the real object, and for many practical applications such fidelity is not the main requirement.
In summary, the industry currently has no unified and effective quality evaluation method for depth imaging algorithms. In face recognition applications in particular, the depth imaging algorithm and the face recognition algorithm jointly affect the face recognition accuracy, so the two need to be decoupled through a reasonable evaluation of the depth imaging algorithm's quality. The conventional depth imaging error measurement method cannot meet the requirements of practical applications, especially quality assessment of a depth imaging algorithm for specific applications such as face recognition, where common methods such as point cloud registration error measurement cannot accurately measure the quality of the depth imaging algorithm or its influence on face recognition accuracy.
Disclosure of the Invention
In order to overcome, or at least partially solve, the problem that existing depth imaging error measurement methods cannot effectively evaluate the quality of a depth image in face recognition applications, an embodiment of the invention provides a depth image quality evaluation method based on face recognition application.
The embodiment of the invention provides a depth image quality evaluation method based on face recognition application, which comprises the following steps:
acquiring depth images of one or more test molds based on a depth perception sensor; wherein the surface shape of each of the test molds is different;
for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image; wherein the features extracted from the depth image of each of the test molds are different;
and evaluating the face depth image collected by the depth perception sensor according to the characteristics of the depth images of all the test molds.
The embodiment of the invention provides a depth image quality evaluation method based on face recognition application. A depth sensor first acquires a depth image of each test mold; the depth images of the differently shaped molds are analyzed, and features corresponding to each mold's surface shape are extracted from its depth image; the features of the depth images of all the test molds are then used as evaluation indexes to evaluate the face depth image. The extracted features are more comprehensive and accurate, the face depth image evaluation based on them is more accurate, and the method has high practicability and reproducibility.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic overall flow chart of the depth image quality evaluation method based on face recognition application according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the transverse sine waveform dimensions of the sine surface mold in the method according to the embodiment of the present invention;
Fig. 3 is a schematic diagram of the vertical sine waveform dimensions of the sine surface mold in the method according to the embodiment of the present invention;
Fig. 4 is a schematic dimension diagram of the folded surface mold in the method according to the embodiment of the present invention;
Fig. 5 is a schematic dimension diagram of the cylindrical surface mold in the method according to the embodiment of the present invention;
Fig. 6 is a schematic view of the overall structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In an embodiment of the present invention, a depth image quality evaluation method based on face recognition application is provided. Fig. 1 is a schematic overall flow chart of the method, which includes: S101, acquiring depth images of one or more test molds based on a depth perception sensor, wherein the surface shape of each test mold is different.
The depth perception sensor is a perception system with a built-in depth imaging algorithm, such as a depth camera integrating the depth imaging algorithm. The surface shape of each test mold is determined by the application scenario; for face recognition, the surface shapes are set according to the surface characteristics of the human face. Because the surface of the human face is complex, it can be represented by a plurality of simple surface features, and a test mold is prepared for each simple surface feature.
The test molds are prepared first, for example by 3D printing. A depth image of each test mold is then acquired using the depth perception sensor. The relative position of the depth perception sensor and the test mold is fixed, ensuring that the test mold appears in the center of the sensor's field of view without rotation and remains parallel to the sensor's imaging plane. Preferably, the relative position is adjusted with a fixing tool, and the rotation angle should not exceed 3 degrees. Alternatively, starting from a preset nearest evaluation distance, a depth image of each test mold is collected at fixed distance increments up to a preset farthest evaluation distance, and the depth image collected at each evaluation distance is analyzed so that the quality at each distance can be evaluated.
S102, analyzing the depth image of any test mold, and extracting the features of the depth image; wherein the features extracted from the depth image of each of the test molds are different;
Since the surface shape of each test mold is different, the features extracted from the depth image of each test mold also differ. The present embodiment does not limit the kind of feature extracted from the depth image of each test mold.
S103, evaluating the face depth image collected by the depth perception sensor according to the features of the depth images of all the test molds.
The surface shape of each test mold is used for representing a simple surface feature of the human face, and the feature of the depth image of each test mold is used as the feature of the human face depth image, so that various features of the human face depth image are obtained. And evaluating the face depth image by taking various characteristics of the face depth image as evaluation indexes.
In this embodiment, the depth sensor acquires a depth image of each test mold, the depth images of the differently shaped molds are analyzed, features corresponding to each mold's surface shape are extracted, and the features of the depth images of all the test molds are used as evaluation indexes to evaluate the face depth image. The extracted features are more comprehensive and accurate, the face depth image evaluation based on them is more accurate, and the method has high practicability and reproducibility.
On the basis of the above embodiment, the test molds in this embodiment include a plane mold, a sine surface mold, a folded surface mold and a cylindrical surface mold. The plane mold is a mold whose surface undulation error is smaller than a first preset threshold; the sine surface mold is a mold whose surface is a transversely and vertically interlaced sinusoid; the folded surface mold is a mold whose surface is a continuous right-angle folded surface; and the cylindrical surface mold is a mold whose surface is a continuous cylindrical curved surface.
The plane mold is a flat plate or wall surface with a surface undulation error smaller than the first preset threshold, used to inspect the accuracy with which the depth imaging algorithm recovers planar features; the sine surface mold's interlaced sinusoidal surface simulates features of face parts such as the nose and mouth; the folded surface mold's continuous right-angle folded surface is used to inspect the depth imaging algorithm's ability to distinguish plane normal directions; and the cylindrical surface mold's continuous cylindrical curved surface is used to inspect the smoothness with which the depth imaging algorithm recovers continuous curved-surface features.
The test molds can be manufactured by 3D printing, with a machining precision no worse than 1 mm. The overall size of the plane mold is 300 mm x 300 mm, and the first preset threshold is 1 mm. The overall size of the sine surface mold is 300 mm x 300 mm, with 8 peaks in each of the transverse and vertical directions; as shown in Fig. 2, the transverse sine waveform has an amplitude of 10 mm and a period of 40 mm, and as shown in Fig. 3, the vertical sine waveform has an amplitude of 10 mm and a period of 30 mm. The overall size of the folded surface mold is 300 mm x 300 mm; it consists of 6 groups of folded surfaces with 90-degree included angles, where each folded surface's peak-to-valley value is 20 mm and the distance between adjacent peaks is 40 mm, as shown in Fig. 4. The overall size of the cylindrical surface mold is 300 mm x 300 mm, consisting of 3 groups of continuous cylinders with a peak-to-valley value of 40 mm and an adjacent peak-to-peak distance of 80 mm, as shown in Fig. 5.
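To make the sine surface mold's geometry concrete, the following sketch (Python/NumPy, used for all examples in this description) generates a heightmap from the stated parameters. The additive superposition of the two waveforms and the 0.5 mm grid resolution are assumptions for illustration; the patent specifies the two waveforms but not how they are combined.

```python
import numpy as np

SIZE_MM = 300.0          # overall mold size: 300 mm x 300 mm
RES_MM = 0.5             # grid resolution (assumed), finer than the 1 mm machining precision
A_T, T_T = 10.0, 40.0    # transverse sine: amplitude 10 mm, period 40 mm (Fig. 2)
A_V, T_V = 10.0, 30.0    # vertical sine: amplitude 10 mm, period 30 mm (Fig. 3)

n = int(SIZE_MM / RES_MM) + 1
x = np.linspace(0.0, SIZE_MM, n)
y = np.linspace(0.0, SIZE_MM, n)
X, Y = np.meshgrid(x, y)

# Assumed additive superposition of the transverse and vertical sinusoids.
Z = A_T * np.sin(2.0 * np.pi * X / T_T) + A_V * np.sin(2.0 * np.pi * Y / T_V)
np.save("sine_mold_heightmap.npy", Z)  # e.g. as input to a 3D-printing pipeline
```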
On the basis of the above embodiment, the features corresponding to the plane mold in this embodiment include precision, effective area void rate, and dead pixel rate; the features corresponding to the sine surface mold include sine fitting degree, amplitude relative error and period relative error; the feature corresponding to the folded surface mold is the right-angle folded surface normal discrimination; and the feature corresponding to the cylindrical surface mold is the cylindrical surface normal smoothness.
Specifically, the features extracted from the depth image of the plane mold are precision, effective area void rate, and dead pixel rate. The precision feature characterizes how precisely the depth imaging algorithm recovers planar features; the effective area void rate characterizes the probability of voids when the algorithm recovers planar features; and the dead pixel rate characterizes the proportion of points with large errors when the algorithm recovers planar features.
The features of the depth image of the sine surface mold are sine fitting degree, amplitude relative error, and period relative error. The sine fitting degree indicates whether the surface shape is recovered truly and accurately during depth imaging; the amplitude relative error indicates whether the depth imaging algorithm can identify the undulation and how accurate the depth data is; and the period relative error reflects the frequency response of the depth imaging algorithm, i.e. whether it can respond to the high-frequency features of the sine surface mold.
The feature of the depth image of the folded surface mold is the right-angle folded surface normal discrimination, which represents how well the two planes forming a right angle, as recovered by the depth imaging algorithm, are distinguished. It is expressed as the weighted average distance between the measured values and the truth values of the unit normal vectors of the three-dimensional points in the selected area; the smaller the value, the better the depth imaging algorithm distinguishes the recovered right-angle planes.
The feature extracted from the depth image of the cylindrical surface mold is the cylindrical surface normal smoothness, which represents how smoothly the depth imaging algorithm recovers the cylindrical surface features. It is expressed as the proportion of the measured unit normal vectors of the three-dimensional points in the selected area that lie within a certain range of the normal truth curve; the larger the smoothness value, the smoother the cylindrical surface recovered by the depth imaging algorithm.
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold, and extracting the features of the depth image specifically includes: if the test mold is a plane mold, sampling the depth image of the test mold at a preset sampling interval; performing space plane fitting on coordinate values of pixel points in a neighborhood window of each sampling point based on a least square method to obtain a fitting plane corresponding to each sampling point; taking the distance from any sampling point to a fitting plane corresponding to the sampling point as the residual error of the target function at the sampling point; taking the average value of the residual errors of the target functions at all the sampling points as the precision corresponding to the test mold; converting the residual error of the objective function at each sampling point into a pixel error; taking the number proportion of sampling points with pixel errors larger than a second preset threshold value in the depth image of the test mold as the dead pixel rate corresponding to the test mold; selecting a region with a preset proportion from the center of the effective region of the depth image of the test mold as a region of interest; and counting the proportion of the number of the holes in the region of interest to the total number of the pixels in the region of interest, and taking the proportion as the effective region hole rate corresponding to the test mold.
Specifically, the precision feature is extracted as follows. The depth image acquired at each evaluation distance is sampled every preset number of pixels, for example with a sampling interval of 10. A spatial plane is fitted by least squares to the coordinate values of the pixels in each sampling point's neighborhood window, with a window size of 50 x 50. The residual of the objective function at each sampling point, i.e. the spatial distance from the sampling point to its fitted plane, is obtained, and the average of these distances over all sampling points is taken as the precision feature at each evaluation distance.
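A minimal sketch of this step is given below, under stated assumptions: the plane is fitted in pixel-index/depth space rather than full 3D camera coordinates, zero depth marks a hole, and the function name is illustrative.

```python
import numpy as np

def plane_precision(depth, step=10, win=50):
    """Mean distance from each sampling point to its locally fitted plane."""
    half = win // 2
    rows, cols = np.mgrid[0:win, 0:win]
    A_full = np.column_stack([rows.ravel(), cols.ravel(), np.ones(win * win)])
    residuals = []
    h, w = depth.shape
    for r in range(half, h - half, step):
        for c in range(half, w - half, step):
            z = depth[r - half:r + half, c - half:c + half].ravel().astype(float)
            valid = z > 0                          # zero depth marks a hole
            if valid.sum() < 3 or depth[r, c] == 0:
                continue
            # least-squares plane z = a*i + b*j + d over the 50x50 window
            a, b, d = np.linalg.lstsq(A_full[valid], z[valid], rcond=None)[0]
            # distance from the central sampling point to that plane
            num = abs(a * half + b * half + d - depth[r, c])
            residuals.append(num / np.sqrt(a * a + b * b + 1.0))
    return float(np.mean(residuals))
```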
The effective area void rate is extracted as follows: from the center of the effective area of the plane mold's depth image at each evaluation distance, a region of a preset proportion is selected as the region of interest, and the percentage of void points in the region of interest relative to its total number of pixels is taken as the effective area void rate feature at that evaluation distance. Preferably, this embodiment uses the central 80% of the effective area as the region of interest.
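A sketch of this feature follows; the bounding-box construction of the effective area and the zero-depth convention for voids are assumptions for illustration.

```python
import numpy as np

def effective_area_void_rate(depth, roi_ratio=0.8):
    """Share of zero-depth (void) pixels inside the central 80% ROI."""
    rows, cols = np.nonzero(depth)                 # effective (non-zero) area
    r0, r1, c0, c1 = rows.min(), rows.max(), cols.min(), cols.max()
    mr = int((r1 - r0) * (1.0 - roi_ratio) / 2.0)  # margins shrinking the box
    mc = int((c1 - c0) * (1.0 - roi_ratio) / 2.0)
    roi = depth[r0 + mr:r1 - mr + 1, c0 + mc:c1 - mc + 1]
    return float((roi == 0).sum()) / roi.size
```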
The dead pixel rate is extracted as follows. On the depth image at each evaluation distance, a pixel is sampled every preset number of pixels, with a sampling interval of 10. A plane is fitted by least squares to the pixels in each sampling point's neighborhood window, with a window size of 50 x 50. The residual of the objective function at each sampling point is obtained and then converted into a pixel error by the following formula:
E=R*T*F/d;
where E is the pixel error at a sampling point, R is the residual of the objective function at that sampling point, T is the baseline length of the depth perception sensor, F is the focal length of the depth perception sensor, and d is the acquisition distance of the depth image. The proportion of sampling points in the plane mold's depth image whose pixel error exceeds a second preset threshold is taken as the dead pixel rate corresponding to the plane mold; the second preset threshold is 0.5.
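A sketch of this step, reusing the per-point residuals from the precision sketch above. The sensor values in the example call are illustrative placeholders, not parameters given by the patent.

```python
import numpy as np

def dead_pixel_rate(residuals_mm, baseline_mm, focal_px, distance_mm, thresh=0.5):
    """Share of sampling points whose pixel error exceeds the threshold."""
    r = np.asarray(residuals_mm, dtype=float)
    e = r * baseline_mm * focal_px / distance_mm   # E = R*T*F/d, as in the text
    return float((e > thresh).sum()) / e.size

# Example with assumed sensor values (50 mm baseline, 580 px focal length,
# images captured at 600 mm):
# rate = dead_pixel_rate(residuals, 50.0, 580.0, 600.0)
```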
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold and extracting the features of the depth image specifically includes: if the test mold is a sine surface mold, converting the depth image of the test mold into a pseudo-color image; selecting the cross sections where several transverse and vertical peak-valleys lie, connecting the peaks at the two ends of each cross section, and obtaining the intersection line of the peak connecting line with the imaging plane of the depth image; converting the pixel coordinates on the intersection line into plane coordinates; performing straight-line fitting on the converted coordinate points and rotating them according to the slope of the fitted line; performing sinusoid fitting on the rotated coordinate points and calculating the amplitude relative error and period relative error corresponding to the test mold from the fitted sinusoid and the mold's true parameter values; and calculating the sine fitting degree corresponding to the test mold from the fitted values of the coordinate points on the peak connecting line.
Specifically, the step of extracting the features of the depth image of the sine surface mold includes:
a. Convert the acquired depth image of the sine surface mold into a pseudo-color image so that the sinusoidal peak points (extreme points) can be selected accurately later. Select the cross sections where several transverse and vertical peak-valleys lie, connect the peak points at the two ends of each cross section to obtain the intersection line of the connecting line with the imaging plane, and convert the image point coordinates (i, j, z) on the intersection line into plane coordinate points (dij, z) for subsequent line and curve fitting. Under the pinhole camera model the conversion is:
dij = (i - cx) * z / fx for a transverse section, and dij = (j - cy) * z / fy for a vertical section;
where i and j are the pixel's horizontal and vertical coordinates, z is the pixel's depth value, dij is the plane coordinate value, cx and cy are the principal point coordinates of the depth camera, and fx and fy are the focal lengths of the depth camera. Preferably, this embodiment tests the peak-to-peak connecting lines of two transverse and two vertical cross sections in the central area of the sine surface mold.
b. Fit a straight line to the converted coordinate points on each connecting line and rotate them in the opposite direction according to the line's slope, eliminating the effect of the mold's placement tilt during shooting on the sine fitting;
c. Least squares is used for both the straight-line and sinusoid fits. Sinusoid fitting requires good initial parameter values, including amplitude, period, phase shift and vertical offset, which can be set manually or left at default values;
d. and calculating the relative error of the amplitude and the relative error of the period according to the parameter values, namely the amplitude and the period, of the fitted sinusoidal curve and by combining the true parameter value of the sinusoidal mold, and simultaneously obtaining the fitting value corresponding to each point on the peak-peak connecting line for calculating the fitting degree of the sine.
Preferably, the sine fitting degree may use the goodness of fit R2, calculated as follows:
R2 = 1 - Σ(ŷi - yi)^2 / Σ(yi - ȳ)^2;
where R2 is the sine fitting degree corresponding to the sine surface mold, ŷi is the fitted value of the i-th coordinate point on the peak connecting line, yi is the actual value of the i-th coordinate point on the peak connecting line, and ȳ is the average of the actual values of all coordinate points on the peak connecting line.
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold and extracting the features of the depth image specifically includes: if the test mold is a folded surface mold, converting the depth image of the test mold into a pseudo-color image; selecting a test area from the pseudo-color image, and converting the depth-non-zero pixels in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor; for any three-dimensional coordinate point, acquiring a preset number of nearest neighbors in its neighborhood based on a KD-tree, fitting a plane to the nearest neighbors, obtaining the point's normal vector from the fitted plane, and normalizing the normal vector to obtain the point's normal measurement value; selecting the normal truth values of the two adjacent, mutually perpendicular planes of the test mold on the unit normal sphere; calculating the Euclidean distances between each point's normal measurement value and the two normal truth values and taking the minimum of the two; and sorting the minima of all points from small to large, calculating the weighted average distance between the normal measurement values and the normal truth values according to the sorted result, and taking this weighted average distance as the right-angle folded surface normal discrimination corresponding to the test mold.
Specifically, the step of extracting the features of the depth image of the folded surface mold includes:
a. Convert the collected depth map of the folded surface mold into a pseudo-color map so that the test area can be selected accurately; select the depth-non-zero pixels in the framed test area and convert them into a three-dimensional point cloud using the parameters of the depth perception sensor;
b. Acquire a preset number of nearest neighbors in each three-dimensional point's neighborhood based on a KD-tree, compute and normalize each point's normal vector by fitting a plane to its nearest neighbors, and, after traversing the point cloud, remove outlier noise points from the resulting unit-normal point cloud by Euclidean clustering. The unit normal of each three-dimensional point obtained after denoising is taken as that point's normal measurement value. Preferably, this embodiment fits the plane using the 50 nearest neighbors in each point's neighborhood;
c. On the unit normal sphere, select the normal truth values of the two mutually perpendicular planes, i.e. the aggregation points of the normal point cloud. For each normal measurement value, compute its Euclidean distances to the two normal truth values and take the minimum; sort these minima from small to large, assign weights by percentage, and compute the weighted average distance, i.e. the mean of the weighted distances, as the right-angle folded surface normal discrimination feature:
D = (α·Σ(i=1..a·n) di + β·Σ(i=a·n+1..n) di) / (n·db);
where D is the weighted average distance, n is the number of three-dimensional coordinate points, a is the distance-sorting percentage used in weighting, α and β are the weights of the corresponding percentages, di is the minimum of the Euclidean distances between the normal measurement value of the i-th point in the sorted result and the two normal truth values, and db is the Euclidean distance between the two normal truth values. Preferably, the distance weight of the first 80% is set to 0.2, and the distance weight of the last 20% is set to 0.8.
On the basis of the foregoing embodiment, in this embodiment, for any one of the test molds, analyzing the depth image of the test mold, and extracting the features of the depth image specifically includes: if the test mold is a cylindrical surface mold, converting the depth image of the test mold into a pseudo-color image; selecting a test area from the pseudo-color image, and converting depth-non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor; for any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, acquiring a normal vector of the three-dimensional coordinate point according to the fitted plane, and normalizing the normal vector to acquire the normal measurement value of the three-dimensional coordinate point; projecting the normal measurement value of each three-dimensional coordinate point onto the XY coordinate plane on the unit normal sphere to obtain its projection point; fitting the projection points of the normal measurement values of all the three-dimensional coordinate points into a straight line, and calculating the distance from each projection point to the straight line; and calculating the proportion of projection points whose distance is smaller than a third preset threshold among all the projection points, taking this proportion as the cylindrical surface normal smoothness corresponding to the test mold.
Specifically, the step of extracting the features of the depth image of the cylindrical surface mold includes:
a. Convert the acquired depth map of the cylindrical surface mold into a pseudo-color map so that the test area can be framed accurately; select the depth-non-zero pixels in the framed area and convert them into a three-dimensional point cloud using the camera parameters;
b. Acquire a certain number of nearest neighbors in each three-dimensional point's neighborhood based on a KD-tree, compute and normalize each point's normal vector by fitting a plane to its nearest neighbors, and, after traversing the point cloud, remove outlier noise points from the resulting unit-normal point cloud by Euclidean clustering; the unit normal of each three-dimensional point obtained after denoising is taken as that point's normal measurement value. Preferably, this embodiment fits the plane using the 50 nearest neighbors in each point's neighborhood;
c. On the unit normal sphere, project each normal measurement value onto the XY coordinate plane, i.e. keep only its x and y coordinates; fit a straight line to the projection points of all three-dimensional points by least squares, calculate the distance from each projection point to the fitted line, and count the proportion of projection points whose distance is below the distance threshold, which is the cylindrical surface normal smoothness. Preferably, the third preset threshold is 0.1.
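A sketch of this feature, reusing unit_normals() from the folded-surface sketch above; the denoising step is again omitted and the function name is illustrative.

```python
import numpy as np

def cylinder_smoothness(points, dist_thresh=0.1):
    """Share of normal projections within dist_thresh of the fitted line."""
    n = unit_normals(points)          # from the folded-surface sketch above
    x, y = n[:, 0], n[:, 1]           # projection onto XY = drop the z coordinate
    k, b = np.polyfit(x, y, 1)        # least-squares line y = k*x + b
    dist = np.abs(k * x - y + b) / np.sqrt(k * k + 1.0)
    return float((dist < dist_thresh).sum()) / dist.size
```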
For face recognition applications, the different features of the different test molds reflect the quality of the depth imaging algorithm to different degrees. According to each feature's actual influence, the features are classified as follows:
(1) First-layer feature: sine surface mold -> sine fitting degree. The larger this value the better, and it must stay above a certain threshold to guarantee a high face recognition rate; preferably, the invention uses a fitting-degree threshold of 0.9;
(2) Second-layer features: sine surface mold -> period relative error, where the closer the value is to the same feature of the depth imaging algorithm that generated the face recognition training data set, the better; folded surface mold -> right-angle folded surface normal discrimination, where smaller is better; cylindrical surface mold -> cylindrical surface normal smoothness, where larger is better;
(3) Third-layer features: plane mold -> dead pixel rate, where smaller is better; plane mold -> effective area void rate, where smaller is better; plane mold -> precision, where smaller is better;
(4) Fourth-layer feature: sine surface mold -> amplitude relative error, where smaller is better.
According to the influence of each layer of features on face recognition applications, the respective requirements are:
(1) The first-layer feature must be satisfied, otherwise the face recognition rate may be low;
(2) The second-layer features strongly influence the face recognition rate and should be as close to their target values as possible;
(3) The third-layer features have no obvious influence on the face recognition rate but are still very important; in particular, some of them, such as the void rate, seriously affect the face recognition result once they exceed a certain threshold;
(4) The fourth-layer features currently have no obvious influence on the face recognition rate, but should be kept within a controllable range to avoid affecting it.
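A sketch of how the four layers could drive an automated check is shown below. Only the 0.9 first-layer threshold comes from the text; all other ceilings are illustrative placeholders to be tuned per application, and the feature-dictionary keys are assumptions.

```python
def evaluate_layers(f, target_period_rel_err):
    """Apply the four-layer rules to a dict of extracted features."""
    return {
        # Layer 1: hard requirement on the sine fitting degree (threshold 0.9).
        "layer1_ok": f["r2"] >= 0.9,
        # Layer 2: closeness to the training data's imaging characteristics.
        "layer2_period_gap": abs(f["period_rel_err"] - target_period_rel_err),
        "layer2_fold_discrimination": f["fold_discrimination"],  # smaller is better
        "layer2_cyl_smoothness": f["cyl_smoothness"],            # larger is better
        # Layer 3: keep under application-specific ceilings (values assumed).
        "layer3_ok": (f["dead_pixel_rate"] < 0.05 and
                      f["void_rate"] < 0.05 and
                      f["precision_mm"] < 2.0),
        # Layer 4: keep within a controllable range (value assumed).
        "layer4_ok": f["amp_rel_err"] < 0.2,
    }
```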
The embodiment provides an electronic device. Fig. 6 is a schematic view of the overall structure of the electronic device according to the embodiment of the present invention, which includes: at least one processor 601, at least one memory 602, and a bus 603; wherein,
The processor 601 and the memory 602 communicate with each other via a bus 603;
The memory 602 stores program instructions executable by the processor 601, and the processor calls these instructions to perform the method provided by the above method embodiments, which for example includes: acquiring depth images of one or more test molds based on a depth perception sensor, wherein the surface shape of each test mold is different; for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image, wherein the features extracted from the depth image of each test mold are different; and evaluating the face depth image collected by the depth perception sensor according to the features of the depth images of all the test molds.
The present embodiment provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method provided by the above method embodiments, which for example includes: acquiring depth images of one or more test molds based on a depth perception sensor, wherein the surface shape of each test mold is different; for any test mold, analyzing the depth image of the test mold and extracting the features of the depth image, wherein the features extracted from the depth image of each test mold are different; and evaluating the face depth image collected by the depth perception sensor according to the features of the depth images of all the test molds.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions. The program may be stored in a computer-readable storage medium, and when executed it performs the steps of the method embodiments; the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
The above-described apparatus embodiments are merely illustrative; units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly also by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or parts of the embodiments.
Finally, it should be noted that the above examples are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A depth image quality evaluation method based on face recognition application is characterized by comprising the following steps:
Acquiring depth images of one or more test molds based on a depth perception sensor; wherein the surface shape of each of the test molds is different;
for any test mold, analyzing the depth image of the test mold, and extracting the characteristics of the depth image; wherein the features extracted from the depth image of each of the test molds are different;
and evaluating the face depth image collected by the depth perception sensor according to the characteristics of the depth images of all the test molds.
2. The depth image quality evaluation method based on face recognition application of claim 1, wherein the test mold comprises a plane mold, a sine surface mold, a folded surface mold and a cylindrical surface mold;
the plane mold is a mold with a surface undulation error smaller than a first preset threshold value;
the sine surface mold is a mold with the surface in a transversely and longitudinally staggered sine shape;
the folded surface mold is a mold with a continuous right-angle folded surface;
the cylindrical surface mold is a mold with a continuous cylindrical curved surface on the surface.
3. The depth image quality evaluation method based on face recognition application of claim 2, wherein the characteristics corresponding to the planar mold comprise precision, effective area voidage and dead pixel ratio;
the characteristics corresponding to the sine surface mold comprise sine fitting degree, amplitude relative error and period relative error;
the characteristics corresponding to the folded surface mold comprise a right-angle folded surface normal discrimination degree;
the corresponding features of the cylindrical surface mold include a cylindrical surface normal smoothness.
4. The depth image quality evaluation method based on face recognition application according to claim 2, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
If the test mold is a plane mold, sampling the depth image of the test mold at a preset sampling interval;
Performing space plane fitting on coordinate values of pixel points in a neighborhood window of each sampling point based on a least square method to obtain a fitting plane corresponding to each sampling point;
Taking the distance from any sampling point to a fitting plane corresponding to the sampling point as the residual error of the target function at the sampling point;
Taking the average value of the residual errors of the target functions at all the sampling points as the precision corresponding to the test mold;
converting the residual error of the objective function at each sampling point into a pixel error;
taking the number proportion of sampling points with pixel errors larger than a second preset threshold value in the depth image of the test mold as the dead pixel rate corresponding to the test mold;
selecting a region with a preset proportion from the center of the effective region of the depth image of the test mold as a region of interest;
and counting the proportion of the number of the holes in the region of interest to the total number of the pixels in the region of interest, and taking the proportion as the effective region hole rate corresponding to the test mold.
5. The method of claim 4, wherein the residual error of the objective function at each sampling point is converted into a pixel error by the following formula:
E=R*T*F/d;
wherein E is the pixel error at any sampling point, R is the residual error of the objective function at that sampling point, T is the length of the base line of the depth perception sensor, F is the focal length of the depth perception sensor, and d is the acquisition distance of the depth image.
6. The depth image quality evaluation method based on face recognition application according to claim 2, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
If the test mold is a sine-surface mold, converting the depth image of the test mold into a pseudo-color image;
Selecting cross sections where a plurality of transverse and longitudinal peak valleys are located from the pseudo-color image, connecting the peaks at two ends of each cross section, and obtaining an intersection line of a peak connecting line and an imaging plane of the depth image;
Converting the pixel coordinates on the intersection line into plane coordinates;
performing straight-line fitting on the converted coordinate points, and rotating the converted coordinate points according to the slope of the fitted line;
carrying out sine curve fitting on the rotated coordinate points, and calculating amplitude relative errors and period relative errors corresponding to the test mould according to the fitted sine curve and the parameter values of the test mould;
And calculating the corresponding sine fitting degree of the test mold according to the corresponding fitting value of each coordinate point on the peak connecting line on the fitted sine curve.
7. The method of claim 6, wherein the sine fitting degree corresponding to the test mold is calculated from the fitted value of each coordinate point on the peak connecting line on the fitted sine curve by the following formula:
R2 = 1 - Σ(ŷi - yi)^2 / Σ(yi - ȳ)^2;
wherein R2 is the sine fitting degree corresponding to the test mold, ŷi is the fitted value of the i-th coordinate point on the peak connecting line, yi is the actual value of the i-th coordinate point on the peak connecting line, and ȳ is the average of the actual values of all coordinate points on the peak connecting line.
8. The depth image quality evaluation method based on face recognition application according to claim 2, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
if the test mold is a folded surface mold, converting the depth image of the test mold into a pseudo-color image;
selecting a test area from the pseudo color image, and converting depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor;
For any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, acquiring a normal vector of the three-dimensional coordinate point according to a fitted plane, normalizing the normal vector, and acquiring a normal measurement value of the three-dimensional coordinate point;
selecting normal truth values of two adjacent planes of the test mold which are at right angles on a unit normal spherical surface;
Calculating Euclidean distances between the normal measurement value of the three-dimensional coordinate point and the two normal truth values respectively, and acquiring the minimum value of the two Euclidean distances;
And sequencing the minimum values corresponding to the three-dimensional coordinate points from small to large, calculating the weighted average distance between the normal measured value and the normal true value of the three-dimensional coordinate points according to the sequencing result, and taking the weighted average distance as the normal discrimination of the right-angled folding surface corresponding to the test mold.
9. The method of claim 8, wherein the weighted average distance between the normal measured values and the normal truth values of the three-dimensional coordinate points is calculated from the sorted result by the following formula:
D = (α·Σ(i=1..a·n) di + β·Σ(i=a·n+1..n) di) / (n·db);
wherein D is the weighted average distance, n is the number of three-dimensional coordinate points, a is the preset proportion, α and β are the weights, di is the minimum of the Euclidean distances between the normal measured value of the i-th three-dimensional coordinate point in the sorted result and the two normal truth values, and db is the Euclidean distance between the two normal truth values.
10. The depth image quality evaluation method based on face recognition application according to claim 2, wherein for any test mold, analyzing the depth image of the test mold, and extracting the features of the depth image specifically comprises:
if the test mold is a cylindrical surface mold, converting the depth image of the test mold into a pseudo-color image;
Selecting a test area from the pseudo color image, and converting depth non-zero pixel points in the selected test area into three-dimensional coordinate points according to the parameters of the depth perception sensor;
For any three-dimensional coordinate point, acquiring a preset number of nearest neighbor points in the neighborhood of the three-dimensional coordinate point based on a KD-tree algorithm, performing plane fitting on the nearest neighbor points, acquiring a normal vector of the three-dimensional coordinate point according to a fitted plane, normalizing the normal vector, and acquiring a normal measurement value of the three-dimensional coordinate point;
Projecting the normal measurement value of the three-dimensional coordinate point to an XY coordinate plane on a unit normal spherical surface to obtain a projection point of the normal measurement value of the three-dimensional coordinate point;
fitting projection points of normal measurement values of all the three-dimensional coordinate points into a straight line, and calculating the distance from each projection point to the straight line;
and calculating the proportion of the projection points with the distance smaller than the third preset threshold value in all the projection points, and taking the proportion as the normal smoothness of the cylindrical surface corresponding to the test mold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910693279.9A CN110544233B (en) | 2019-07-30 | 2019-07-30 | Depth image quality evaluation method based on face recognition application |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910693279.9A CN110544233B (en) | 2019-07-30 | 2019-07-30 | Depth image quality evaluation method based on face recognition application |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110544233A true CN110544233A (en) | 2019-12-06 |
CN110544233B CN110544233B (en) | 2022-03-08 |
Family
ID=68709887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910693279.9A Active CN110544233B (en) | 2019-07-30 | 2019-07-30 | Depth image quality evaluation method based on face recognition application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110544233B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111063016A (en) * | 2019-12-31 | 2020-04-24 | 螳螂慧视科技有限公司 | Multi-depth lens face modeling method and system, storage medium and terminal |
CN111353982A (en) * | 2020-02-28 | 2020-06-30 | 贝壳技术有限公司 | Depth camera image sequence screening method and device |
CN113126944A (en) * | 2021-05-17 | 2021-07-16 | 北京的卢深视科技有限公司 | Depth map display method, display device, electronic device, and storage medium |
CN113836980A (en) * | 2020-06-24 | 2021-12-24 | 中兴通讯股份有限公司 | Face recognition method, electronic device and storage medium |
CN114299016A (en) * | 2021-12-28 | 2022-04-08 | 北京的卢深视科技有限公司 | Depth map detection device, method, system and storage medium |
CN115049658A (en) * | 2022-08-15 | 2022-09-13 | 合肥的卢深视科技有限公司 | RGB-D camera quality detection method, electronic device and storage medium |
CN116576806A (en) * | 2023-04-21 | 2023-08-11 | 深圳市磐锋精密技术有限公司 | Precision control system for thickness detection equipment based on visual analysis |
CN117058111A (en) * | 2023-08-21 | 2023-11-14 | 大连亚明汽车部件股份有限公司 | Quality inspection method and system for automobile aluminum alloy die casting die |
-
2019
- 2019-07-30 CN CN201910693279.9A patent/CN110544233B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5960097A (en) * | 1997-01-21 | 1999-09-28 | Raytheon Company | Background adaptive target detection and tracking with multiple observation and processing stages
CN101681520A (en) * | 2007-05-30 | 2010-03-24 | Koninklijke Philips Electronics N.V. | PET local tomography
CN103763552A (en) * | 2014-02-17 | 2014-04-30 | Fuzhou University | No-reference stereoscopic image quality evaluation method based on visual perception characteristics
US20160371539A1 (en) * | 2014-04-03 | 2016-12-22 | Tencent Technology (Shenzhen) Company Limited | Method and system for extracting characteristics of a three-dimensional face image
CN105989591A (en) * | 2015-02-11 | 2016-10-05 | Zhan Shu | Automatic teller machine imaging method capable of automatically acquiring stereo information of the remittee's face
CN105956582A (en) * | 2016-06-24 | 2016-09-21 | Shenzhen Weiteshi Technology Co., Ltd. | Face identification system based on three-dimensional data
CN106127250A (en) * | 2016-06-24 | 2016-11-16 | Shenzhen Weiteshi Technology Co., Ltd. | Face quality evaluation method based on three-dimensional point cloud data
CN106803952A (en) * | 2017-01-20 | 2017-06-06 | Ningbo University | Cross-validation depth map quality evaluation method combined with a JND model
CN107462587A (en) * | 2017-08-31 | 2017-12-12 | South China University of Technology | Precision vision detection system and method for bump mark defects on flexible IC substrates
CN108171748A (en) * | 2018-01-23 | 2018-06-15 | HIT Robot (Hefei) International Innovation Research Institute | Visual recognition and localization method for intelligent robotic grasping of objects
Non-Patent Citations (3)
Title |
---|
Zhang Wanzhen: "Research on 3D Measurement Methods Using Digital Projection Structured Light", China Excellent Doctoral and Master's Dissertations Full-text Database (Doctoral), Information Science and Technology Series *
Fang Cheng: "3D Face Recognition Technology", Computer Programming Skills & Maintenance *
Hao Wen et al.: "A Survey of 3D Object Recognition Methods for Point Clouds", Computer Science *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111063016A (en) * | 2019-12-31 | 2020-04-24 | Mantis Vision Technology Co., Ltd. | Multi-depth lens face modeling method and system, storage medium and terminal
CN111353982A (en) * | 2020-02-28 | 2020-06-30 | Beike Technology Co., Ltd. | Depth camera image sequence screening method and device
CN111353982B (en) * | 2020-02-28 | 2023-06-20 | Beike Technology Co., Ltd. | Depth camera image sequence screening method and device
CN113836980A (en) * | 2020-06-24 | 2021-12-24 | ZTE Corporation | Face recognition method, electronic device and storage medium
CN113126944A (en) * | 2021-05-17 | 2021-07-16 | Beijing Dilusense Technology Co., Ltd. | Depth map display method, display device, electronic device, and storage medium
CN114299016A (en) * | 2021-12-28 | 2022-04-08 | Beijing Dilusense Technology Co., Ltd. | Depth map detection device, method, system and storage medium
CN115049658A (en) * | 2022-08-15 | 2022-09-13 | Hefei Dilusense Technology Co., Ltd. | RGB-D camera quality detection method, electronic device and storage medium
CN116576806A (en) * | 2023-04-21 | 2023-08-11 | Shenzhen Panfeng Precision Technology Co., Ltd. | Precision control system for thickness detection equipment based on visual analysis
CN116576806B (en) * | 2023-04-21 | 2024-01-26 | Shenzhen Panfeng Precision Technology Co., Ltd. | Precision control system for thickness detection equipment based on visual analysis
CN117058111A (en) * | 2023-08-21 | 2023-11-14 | Dalian Yaming Automotive Parts Co., Ltd. | Quality inspection method and system for automobile aluminum alloy die-casting dies
CN117058111B (en) * | 2023-08-21 | 2024-02-09 | Dalian Yaming Automotive Parts Co., Ltd. | Quality inspection method and system for automobile aluminum alloy die-casting dies
Also Published As
Publication number | Publication date |
---|---|
CN110544233B (en) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110544233B (en) | Depth image quality evaluation method based on face recognition application | |
CN108303037B (en) | Method and device for detecting workpiece surface shape difference based on point cloud analysis | |
CN109272524B (en) | Small-scale point cloud noise denoising method based on threshold segmentation | |
CN107462204B (en) | Three-dimensional pavement nominal contour extraction method and system | |
CN110807781B (en) | Point cloud simplifying method for retaining details and boundary characteristics | |
CN110390696A (en) | Circular hole pose visual detection method based on image super-resolution reconstruction | |
CN108921864A (en) | Light stripe center extraction method and device | |
CN104634242A (en) | Probe point-adding system and method | |
CN115330958A (en) | Real-time three-dimensional reconstruction method and device based on laser radar | |
CN113393439A (en) | Forging defect detection method based on deep learning | |
CN116539619B (en) | Product defect detection method, system, device and storage medium | |
CN112329726B (en) | Face recognition method and device | |
CN109584206B (en) | Method for synthesizing training sample of neural network in part surface flaw detection | |
CN110349209A (en) | Vibrating rod localization method based on binocular vision | |
CN115797551B (en) | Automatic modeling method for laser point cloud data based on two-step unsupervised clustering algorithm | |
CN111968224A (en) | Ship 3D scanning point cloud data processing method | |
CN116152697A (en) | Three-dimensional model measuring method and related device for concrete structure cracks | |
Anusree et al. | Characterization of sand particle morphology: state-of-the-art | |
CN105740859B (en) | Three-dimensional interest point detection method based on geometric measures and sparse optimization | |
CN117706577A (en) | Ship size measurement method based on laser radar three-dimensional point cloud algorithm | |
CN113628170A (en) | Laser line extraction method and system based on deep learning | |
CN116612097A (en) | Method and system for predicting internal section morphology of wood based on surface defect image | |
CN113205553A (en) | Light stripe center extraction method based on three-channel feature fusion | |
CN116721410A (en) | Three-dimensional instance segmentation method and system for dense parts of aeroengine | |
CN112164044A (en) | Wear analysis method of rigid contact net based on binocular vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
TR01 | Transfer of patent right | | Effective date of registration: 2023-06-09. Patentee after: Hefei lushenshi Technology Co.,Ltd., Room 611-217, R & D Center Building, China (Hefei) International Intelligent Voice Industrial Park, 3333 Xiyou Road, High-tech Zone, Hefei, Anhui 230001. Patentee before: BEIJING DILUSENSE TECHNOLOGY CO.,LTD., Room 3032, Gate 6, Block B, 768 Creative Industry Park, 5 Xueyuan Road, Haidian District, Beijing 100083.