CN107563963B - Super-resolution reconstruction method based on single depth map - Google Patents


Info

Publication number
CN107563963B
CN107563963B (application CN201710686263.6A)
Authority
CN
China
Prior art keywords
resolution
depth map
edge
map
sample block
Prior art date
Legal status
Active
Application number
CN201710686263.6A
Other languages
Chinese (zh)
Other versions
CN107563963A
Inventor
Liang Xiaohui (梁晓辉)
Wang Xiaochuan (王晓川)
Li Chao (李超)
Current Assignee
Beijing University of Aeronautics and Astronautics
Original Assignee
Beijing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Beijing University of Aeronautics and Astronautics
Priority to CN201710686263.6A
Publication of CN107563963A
Application granted
Publication of CN107563963B
Legal status: Active


Abstract

The invention discloses a super-resolution reconstruction method based on a single depth map, and relates to the fields of computer vision and image processing. The method reconstructs a high-resolution depth map from a single acquired low-resolution depth map. First, a training data sample set is constructed from the input low-resolution depth map by exploiting local self-similarity. Next, a high-resolution depth edge map is reconstructed from the input low-resolution depth map and the constructed self-similar sample set using a Markov random field model. Then, under the guidance of the reconstructed high-resolution depth edge map, the high-resolution depth map is restored through a modified bilateral filter. Finally, the obtained high-resolution depth map serves as the low-resolution input for iterative processing until the target resolution is reached. By introducing local image self-similarity and edge-guided depth map restoration, the invention can quickly and effectively perform high-resolution reconstruction of a single low-resolution depth map.

Description

Super-resolution reconstruction method based on single depth map
Technical Field
The invention relates to the field of computer vision and image processing, in particular to a super-resolution reconstruction method based on a single depth map.
Background
Depth maps are widely used in computer vision, for example in 3DTV, three-dimensional modeling, robot navigation, target tracking, and interactive games. However, the resolution of depth maps currently acquired with depth cameras is low compared to high-resolution color maps, which greatly limits the further use of depth maps. For example, the SwissRanger SR4000 and PMD CamCube obtain a depth map resolution of only 200×200, and even the Kinect reaches only 512×424, far lower than its corresponding color map resolution of 1920×1080. Therefore, increasing the resolution of the depth map is a critical and urgent research problem.
Currently, depth map super-resolution methods fall into three major categories. The first category comprises fusion-based methods, which obtain a high-resolution depth map by fusing multiple low-resolution depth maps. Such methods rely heavily on one assumption: that multiple range images of a still scene can be acquired under slight camera motion. This may not hold in many practical applications, and the acquisition cost is very high. The second category comprises methods that exploit the structural relationship between a high-resolution color map and a low-resolution depth map, using the color map to guide depth map super-resolution. Such methods can suffer from problems such as color-map texture copying, and in practice registering and synchronizing the color map and the depth map is itself troublesome. The third category comprises methods based on a single depth map, which borrow from super-resolution methods for single natural images, especially sample-learning-based methods. Compared with natural image super-resolution, a depth map carries less information and places higher demands on edge preservation and noise removal, so the task is more challenging. However, such methods require neither additional depth image frames nor corresponding high-resolution color images, and are therefore easier to apply.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: overcoming the defects of the prior art, a super-resolution reconstruction method based on a single depth map is provided. By exploiting local self-similarity and the guidance of a high-resolution edge map, problems such as edge ringing in the reconstructed high-resolution depth map are effectively alleviated and a better reconstruction result is obtained. Experiments show that the proposed method can quickly and effectively reconstruct a high-resolution depth map from a single depth map.
The technical scheme adopted by the invention for solving the problems is as follows: a super-resolution reconstruction method based on a single depth map comprises the following steps:
Step (1), constructing a training data sample set: using local image self-similarity, the low-resolution depth map is processed with an image edge detection operator combined with shock filtering, and the processed images are divided into blocks to obtain the training data sample set.
Step (2), reconstructing a high-resolution edge map: input sample block pairs are obtained by partitioning the input low-resolution depth map, and the N training sample block pairs of step (1) nearest to each input sample block are found using the distance transform and the Euclidean distance. On this basis a Markov model is constructed and the optimal matching sample blocks are obtained; the resulting blocks are then fused to obtain the high-resolution edge map.
Step (3), depth map super-resolution: using the high-resolution edge map obtained in step (2) together with the input low-resolution depth map, the final high-resolution depth map is obtained through a modified bilateral filter.
Step (4), iterative processing until the target resolution is reached.
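Steps (1) to (4) above form a simple coarse-to-fine loop. A minimal driver sketch follows; the function name `upscale_once`, standing for one pass of steps (1)-(3) at a fixed scale factor, is a hypothetical placeholder and is not named in the patent:

```python
def num_iterations(input_size, target_size, factor=2):
    """Step (4): count how many passes of steps (1)-(3), each enlarging
    the map by `factor`, are needed to reach the target resolution."""
    n, size = 0, input_size
    while size < target_size:
        size *= factor
        n += 1
    return n

def super_resolve(depth_map, target_size, upscale_once, factor=2):
    """Driver loop: feed each reconstructed high-resolution map back in
    as the new low-resolution input until the target size is reached."""
    d = depth_map
    for _ in range(num_iterations(len(d), target_size, factor)):
        d = upscale_once(d, factor)  # one pass of steps (1)-(3)
    return d
```

For instance, going from a 200-pixel edge length to 1600 pixels with a per-pass factor of 2 takes three passes.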
Further, the specific content of the training data sample set construction in the step (1) is as follows:
Step (A1): by local self-similarity, small sample blocks in an image remain similar to their own structure under small-scale transformations. Therefore, to construct a self-similar data sample set from the depth map itself, the input low-resolution depth map D is further down-sampled by a set multiple to obtain D_l. An image edge detection operator is applied to D_l to extract its edges, yielding E_l; edges are likewise extracted from the input low-resolution depth map D to obtain E, and shock filtering is applied to E to obtain E_s.
Step (A2): the edge maps E_l and E_s are each divided into blocks, producing the sample block sets {y_i^l} and {y_i^s}, which form the training sample block pair set Y = {y_i^l, y_i^s}; this set plays the role of an external data set of sample block pairs.
Further, the specific content of the reconstruction of the high-resolution edge map in step (2) is as follows:
step (B1) is to construct an input sample data set corresponding to the training data sample set in step (1) by using local self-similarity of images. For the input low resolution depth map D and the depth map D obtained by further down-samplinglRespectively performing upsampling on set multiples to obtain DhAnd Dm. Using image edge detection operators, for DhExtracting the edge to obtain EhTo D, pairmExtracting the edge to obtain EmAnd to EhPerforming shock filtering to obtain Ehs
Step (B2): the edge maps E_m and E_hs are divided into blocks, producing the sample block sets {x_i^l} and {x_i^s}, which form the input sample block pair set X = {x_i^l, x_i^s}, corresponding to the training sample block pair set Y = {y_i^l, y_i^s} obtained in step (1);
Step (B3): using the input sample block pair set from the previous step and the training sample block pair set from step (1), a distance transform is first applied to all sample blocks; the Euclidean distance is then used to find, for each input sample block pair, the N nearest training sample block pairs.
Step (B4): each input sample block pair X_i serves as an observation node, and the N nearest training sample block pairs obtained in the previous step serve as hidden node labels; a Markov random field model is constructed and solved for the optimal matching block.
Step (B5): the optimal high-resolution edge sample blocks obtained through the Markov random field model in the previous step are fused to obtain the final high-resolution edge map.
Further, the specific content of the depth map super-resolution in the step (3) is as follows:
Step (C1): a support window is defined. Using the high-resolution edge map obtained in step (2), for each pixel in the image it is judged whether the other pixels in the support window centred on that pixel lie on the same side of the edge as the current pixel. If so, the pixel is treated as a valid-weight pixel; otherwise it is treated as an invalid-weight pixel.
Step (C2): the depth values corresponding to the same-side pixels are looked up in the input low-resolution depth map and weighted and summed to give the final depth value of the high-resolution depth map. For pixels of the support window that do not lie on the same side of the edge, the corresponding value from the bicubic interpolation of the input low-resolution depth map is used to fill the depth map.
Further, the specific content of the iterative processing in the step (4) is as follows:
Step (D): the high-resolution depth map obtained in step (3) is taken as input and the process returns to step (1) for iterative processing until the target depth map resolution is reached.
Compared with the prior art, the invention has the advantages that:
the invention uses a single low-resolution depth map as input, and combines the local self-similarity of the image with the high-resolution edge guiding method to realize the super-resolution of the single depth map. Compared with the prior depth map super-resolution method, the method of the invention does not need additional depth image frames or corresponding high-resolution color images, does not need an external data set, only utilizes self information, and has simple realization and good reconstruction effect.
Drawings
FIG. 1 is a schematic flow diagram of super-resolution of a single depth map according to the present invention;
FIGS. 2a to 2d are the test images used in the simulation experiments of the present invention: low-resolution depth maps obtained by 4× down-sampling of "Cones", "Teddy", "Tsukuba" and "Venus" from the Middlebury dataset, respectively;
FIGS. 3a to 3e compare the reconstruction results of the present invention and three existing methods on the image of FIG. 2a. FIG. 3a is the ground-truth high-resolution depth map, FIG. 3b is the high-resolution depth map obtained by nearest neighbor interpolation, FIG. 3c by the sample-based method (PB), FIG. 3d by the edge-guided method (EG), and FIG. 3e by the present invention;
FIGS. 4a to 4e are comparison graphs of the super-resolution results of the present invention and the three existing methods on the image of FIG. 2b; the methods compared are the same as in FIG. 3;
FIGS. 5a to 5e are comparison graphs of the super-resolution results of the present invention and the three existing methods on the image of FIG. 2c; the methods compared are the same as in FIG. 3;
FIGS. 6a to 6e are comparison graphs of the super-resolution results of the present invention and the three existing methods on the image of FIG. 2d; the methods compared are the same as in FIG. 3;
FIGS. 7a to 7d are comparison graphs of the super-resolution results of the present invention and the three existing methods on a real depth map acquired with a Kinect. FIG. 7a is the high-resolution depth map obtained by nearest neighbor interpolation, FIG. 7b by the sample-based method (PB), FIG. 7c by the edge-guided method (EG), and FIG. 7d by the present invention.
Detailed Description
The invention is described in further detail below with reference to the following figures and examples:
As shown in FIG. 1, the invention provides a super-resolution reconstruction method based on a single depth map, comprising a training data sample set construction step, a high-resolution depth edge map reconstruction step, and a depth map super-resolution step. The invention is concretely realized as follows:
the method comprises the following steps: and (3) constructing a training data sample set, namely constructing the training data sample set through the low-resolution depth map by utilizing local self-similarity:
in a natural image, after the image is subjected to sampling transformation by a small sampling factor, a local sample block can be similar to itself. Based on the above teaching, a depth map super-resolution strategy is proposed, which is to find the position of a sample block in a low-resolution map by matching the low-frequency component of a high-resolution depth map sample block sampled on a small scale with the low-frequency component of an input low-resolution depth map sample block, and then fill the high-resolution depth map with the high-frequency component of the position of the low-resolution map. In order to construct a training sample set using local self-similarity, an input low-resolution depth map D is first subjected to down-sampling processing with a further set multipleTo obtain a depth map D1. Then using Canny operator to pair D1Carrying out edge extraction to obtain a corresponding depth edge image E1. In addition, for the input low-resolution depth map D, edge extraction is also carried out by using a Canny operator to obtain a corresponding depth edge map E, and then shock filtering processing is carried out on the E to obtain a sharpened depth edge map Els
The depth edge maps E_l and E_s are then divided into blocks, giving the sample block sets {y_i^l} and {y_i^s}, which form the training sample block pair set Y = {y_i^l, y_i^s}. Here blocks are extracted pixel by pixel: scanning from the top-left corner of the image to the bottom-right, a sample block is extracted centred at each pixel. The y^s block size is the y^l block size scaled by the sampling factor; for example, if the y^l block size is 3×3 and the set sampling factor is 2, the corresponding y^s block size is 7×7.
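The pixel-by-pixel block extraction and the pairing of y_i^l with y_i^s can be sketched as follows; the edge-replicated padding and the (factor·i, factor·j) centre correspondence between the two maps are implementation assumptions, not spelled out in the patent:

```python
import numpy as np

def dense_patches(img, size):
    """Extract a size-by-size patch centred at every pixel, scanning
    top-left to bottom-right (edge-replicated borders)."""
    r = size // 2
    padded = np.pad(img, r, mode="edge")
    H, W = img.shape
    return [padded[i:i + size, j:j + size]
            for i in range(H) for j in range(W)]

def training_pairs(edge_lo, edge_sharp, low_size=3, factor=2):
    """Pair each low-res edge patch y_i^l (from E_l) with the co-centred
    sharpened-edge patch y_i^s (from E_s); sizes follow the text's
    example of 3x3 low patches and 7x7 high patches for a x2 factor."""
    high_size = low_size * factor + 1          # 3 -> 7 when factor = 2
    rl, rh = low_size // 2, high_size // 2
    pl = np.pad(edge_lo, rl, mode="edge")
    ps = np.pad(edge_sharp, rh, mode="edge")
    H, W = edge_lo.shape
    pairs = []
    for i in range(H):
        for j in range(W):
            y_l = pl[i:i + low_size, j:j + low_size]
            y_s = ps[factor * i:factor * i + high_size,
                     factor * j:factor * j + high_size]
            pairs.append((y_l, y_s))
    return pairs
```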
Step two: reconstructing a high-resolution depth edge map, namely constructing a Markov random field model from the input low-resolution depth map and the training sample block pair set obtained in step one:
first, an input sample block set corresponding to the training sample block set in (1) is constructed. For the input low resolution depth map D and the depth map D obtained by further down-samplinglRespectively performing upsampling on set multiples to obtain DhAnd DmThe upsampling method used here is a bicubic interpolation method. Then using Canny operator to pair DhExtracting the edge to obtain a corresponding depth edge map EhIn the same manner as for DmProcessing to obtain a depth edge map EmAnd to EhCarrying out shock filtering to obtain a sharpened depth edge image Ehs
Then, the depth edge maps E_m and E_hs are divided into blocks, giving the sample block sets {x_i^l} and {x_i^s}, which form the input sample block pair set X = {x_i^l, x_i^s}, corresponding to the training sample block pair set Y = {y_i^l, y_i^s} obtained in step one. Note that the block sizes here are consistent with the sample block sizes of step one. Unlike the training sample construction of step one, however, where blocks are taken pixel by pixel, the whole image is here divided into non-overlapping image sample blocks.
Then, each input sample block pair is taken as an observation node in the Markov random field; the N nearest training sample block pairs in the training set are found by Euclidean distance and used as hidden node labels, constructing the Markov random field model, which is solved for the optimal matching block. The Markov energy function is as follows:
E(X) = Σ_i E1(x_i^l, y_i^l) + β Σ_i E2(x_i^s, y_i^s) + γ Σ_(i,j) E3(y_i^s, y_j^s)

wherein E1 and E2 are data terms, E3 is the smoothing term, and β and γ are weight coefficients. The first data term E1 measures the similarity between the training sample block y_i^l and the input sample block x_i^l; the second data term E2 measures the similarity between the training sample block y_i^s and the input sample block x_i^s. Wherein:

E1(x_i^l, y_i^l) = || d(x_i^l) − d(y_i^l) ||^2

E2(x_i^s, y_i^s) = || d(x_i^s) − d(y_i^s) ||^2
Here d(·) denotes the distance transform of an edge sample block; computing the Euclidean distance after the distance transform gives a better similarity measure for binary patterns.
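This matching step can be sketched as follows. A dedicated routine such as scipy's Euclidean distance transform would normally be used; a brute-force numpy version is shown here to stay self-contained, and the function names are illustrative:

```python
import numpy as np

def distance_transform(edge_patch):
    """Brute-force Euclidean distance transform of a binary edge patch:
    each cell receives its distance to the nearest edge (non-zero) pixel.
    Patches with no edge pixels get a large constant distance."""
    edge_patch = np.asarray(edge_patch)
    pts = np.argwhere(edge_patch)
    if len(pts) == 0:
        return np.full(edge_patch.shape, float(sum(edge_patch.shape)))
    ii, jj = np.indices(edge_patch.shape)
    grid = np.stack([ii, jj], axis=-1).astype(float)       # (H, W, 2)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=-1)

def nearest_training_pairs(x_l, train_pairs, n=5):
    """Step (B3) sketch: rank training pairs by the Euclidean distance
    between distance-transformed low-res edge patches; keep the n closest."""
    dx = distance_transform(x_l)
    ranked = sorted(train_pairs,
                    key=lambda p: np.linalg.norm(distance_transform(p[0]) - dx))
    return ranked[:n]
```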
The smoothing term E3 enforces consistency in the overlap region of adjacent edge sample blocks, where O_ij is the operation extracting the region in which the adjacent edge sample blocks y_i^s and y_j^s overlap.
The energy function is minimized through the Markov model, and the resulting optimal high-resolution edge sample blocks are finally fused to obtain the final high-resolution depth edge map.
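The final fusion can be sketched as simple overlap averaging; the patent does not specify its fusion rule, so averaging is an assumption:

```python
import numpy as np

def fuse_blocks(blocks, positions, out_shape):
    """Step (B5) sketch: paste each chosen high-res edge block at its
    (row, col) position and average wherever neighbouring blocks overlap."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for block, (y, x) in zip(blocks, positions):
        h, w = block.shape
        acc[y:y + h, x:x + w] += block
        cnt[y:y + h, x:x + w] += 1
    cnt[cnt == 0] = 1          # leave uncovered pixels at zero
    return acc / cnt
```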
Step three: depth map super-resolution, namely obtaining the final high-resolution depth map from the high-resolution depth edge map, combined with the input low-resolution depth map and a modified bilateral filter:
The modified bilateral filter is

D_h(p) = (1 / k_p) Σ_(q ∈ N(p)) f_d(|| p − q ||) f_r(p, q) D_l(q↓)

wherein D_h is the target high-resolution depth map, D_l is the input low-resolution depth map, E_h is the high-resolution edge map, N(p) is the defined support window centred on pixel p, p↓ and q↓ are the corresponding pixels in the input low-resolution depth map, k_p is a normalization factor, f_d(·) is a Gaussian kernel function, and f_r(·) is a binary indicator function defined as follows:

f_r(p, q) = 1 if p and q lie on the same side of the edge in E_h, and f_r(p, q) = 0 otherwise.
by the guidance of the high-resolution depth edge map, only pixels on the same side of the edge will be weighted and retained in the final high-resolution depth map. And if the pixel points belonging to the pixels on the same side of the edge do not exist in the support window, filling the corresponding depth value in the depth map obtained by the input low-resolution depth map through bicubic interpolation.
As shown in Table 1, the root mean square error (RMSE) is used as the evaluation index for comparison with existing representative methods: nearest neighbor interpolation (NN), the sample-based method proposed by O. Mac Aodha et al. in 2012 (PB, 2012), and the edge-guided method proposed by Jun Xie et al. in 2014 (EG, 2014). The proposed super-resolution reconstruction method based on a single depth map obtains results with higher realism and accuracy. Visual comparisons are given in FIGS. 3, 4, 5, 6 and 7.
TABLE 1 Quantitative comparison (RMSE) of different super-resolution methods on the Middlebury dataset

Method          Cones    Venus    Teddy    Tsukuba
NN              1.498    0.367    1.348    0.832
PB              1.481    0.337    1.280    0.833
EG              1.157    0.314    1.024    0.765
The invention   1.052    0.247    0.888    0.723
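The RMSE index used in Table 1 can be computed as:

```python
import numpy as np

def rmse(pred, truth):
    """Root mean square error, the evaluation index used in Table 1."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))
```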
Those skilled in the art will appreciate that parts of the invention not described in detail belong to common knowledge in the art.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (2)

1. A super-resolution reconstruction method based on a single depth map is characterized by comprising the following steps:
Step (1), constructing a training data sample set, namely constructing the training data sample set by using local self-similarity and the input low-resolution depth map; the specific content of the training data sample set construction in step (1) is as follows:
Step (A1): the input low-resolution depth map D is down-sampled by a further set multiple to obtain D_l; an image edge detection operator is applied to D_l to extract edges, yielding the edge map E_l; in addition, edges are extracted from the input low-resolution depth map to obtain E, and shock filtering is applied to E to obtain E_s;
Step (A2): the edge maps E_l and E_s are each divided into blocks, giving the sample block sets {y_i^l} and {y_i^s}, which form the training sample block pair set Y = {y_i^l, y_i^s};
Step (2), reconstructing a high-resolution edge map, namely reconstructing the high-resolution edge map through a Markov random field model by using the input low-resolution depth map and the constructed self-similar sample set; the specific content of the high-resolution edge map reconstruction in step (2) is as follows:
Step (B1): the input low-resolution depth map D and the further down-sampled depth map D_l are each up-sampled by the set multiple to obtain D_h and D_m; an image edge detection operator is applied to D_h to extract edges, yielding E_h, and to D_m, yielding E_m; shock filtering is applied to E_h to obtain E_hs;
Step (B2): the edge maps E_m and E_hs are each divided into blocks, giving the sample block sets {x_i^l} and {x_i^s}, which form the input sample block pair set X = {x_i^l, x_i^s};
Step (B3): for each input sample block pair, Euclidean distances to all training sample block pairs obtained in step (1) are computed, and the N training sample block pairs nearest to each input sample block pair are obtained;
step (B4), inputting each sample block pair XiAs observation nodes, the N training sample blocks which are closest to the observation nodes and obtained in the previous step are used as hidden node labels to construct a Markov random field model;
step (B5), fusing the high-resolution edge sample blocks obtained by the Markov random field model to obtain a final high-resolution edge image;
the depth map super-resolution step (3) is that under the guidance of the reconstructed high-resolution edge map, the high-resolution depth map is restored through modified bilateral filtering; the specific content of the depth map super-resolution in the step (3) is as follows:
Step (C1): a support window is defined, and for each pixel in the image, using the high-resolution edge map obtained in step (2), it is determined whether the other pixels in the support window centred on that pixel lie on the same side of the edge as the current pixel;
Step (C2): the depth values of the pixels lying on the same side of the edge are found in the low-resolution depth map and weighted and summed to give the final depth value of the high-resolution depth map;
Step (4), performing iterative processing, namely taking the obtained high-resolution depth map as input for iterative processing until the target resolution is reached.
2. The method for super-resolution reconstruction based on single depth map according to claim 1, wherein: the specific content of the iterative processing in the step (4) is as follows:
Step (D): the high-resolution depth map obtained in step (3) is taken as input and the process returns to step (1) for iterative processing until the target depth map resolution is reached.
CN201710686263.6A (filed 2017-08-11) Super-resolution reconstruction method based on single depth map; granted as CN107563963B, status Active

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710686263.6A CN107563963B (en) 2017-08-11 2017-08-11 Super-resolution reconstruction method based on single depth map


Publications (2)

Publication Number Publication Date
CN107563963A (application publication) 2018-01-09
CN107563963B (grant publication) 2020-01-03

Family

ID=60975395


Country Status (1)

Country Link
CN (1) CN107563963B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680042B (en) * 2017-09-27 2020-03-31 杭州群核信息技术有限公司 Rendering method, device, engine and storage medium combining texture and convolution network
CN109731238B (en) * 2019-01-10 2020-11-10 吕衍荣 Mode switching platform based on field environment
CN111489383B (en) * 2020-04-10 2022-06-10 山东师范大学 Depth image up-sampling method and system based on depth marginal point and color image
CN112308781A (en) * 2020-11-23 2021-02-02 中国科学院深圳先进技术研究院 Single image three-dimensional super-resolution reconstruction method based on deep learning

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102142137A (en) * 2011-03-10 2011-08-03 西安电子科技大学 High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN103020897A (en) * 2012-09-28 2013-04-03 香港应用科技研究院有限公司 Device for reconstructing based on super-resolution of multi-block single-frame image, system and method thereof
CN104766273A (en) * 2015-04-20 2015-07-08 重庆大学 Infrared image super-resolution reestablishing method based on compressed sensing theory


Non-Patent Citations (2)

Title
Depth Super Resolution by Rigid Body Self-Similarity in 3D; Michael Hornacek et al.; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2013-06-28; pp. 1123-1130 *
Single Depth Map Super-resolution with Local Self-similarity; Xiaochuan Wang et al.; ICVIP 2018: Proceedings of the 2018 2nd International Conference on Video and Image Processing; 2018-12-31; pp. 198-220 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant