CN112348959A - Adaptive disturbance point cloud up-sampling method based on deep learning - Google Patents
Adaptive disturbance point cloud up-sampling method based on deep learning
- Publication number
- CN112348959A (application CN202011321220.6A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- adaptive
- characteristic value
- point
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
- Complex Calculations (AREA)
Abstract
The invention relates to an adaptive disturbance point cloud up-sampling method based on deep learning, belonging to the technical field of three-dimensional vision in computer graphics. A sparse, noisy point cloud to be up-sampled is input into the up-sampling model, and the up-sampling operation produces a clean, dense and uniform up-sampled point cloud. In the first stage, a training set of low-resolution point cloud files is input into the network and point cloud features are extracted. In the second stage, the extracted features pass through an adaptive disturbance layer to obtain 2D adaptive disturbance values, which are copied several times and concatenated to the point cloud features. In the third stage, the concatenated features are fed into the adaptive residual layer to obtain residual values, which are concatenated to the copied features, and several convolutions yield the up-sampled point cloud. The model has low complexity and a small size, achieves good uniformity and geometric fidelity in the up-sampled point cloud, and can be used for point cloud completion, rendering, reconstruction and other three-dimensional scene tasks.
Description
Technical Field
The invention relates to the technical field of three-dimensional vision in computer graphics, and in particular to an adaptive disturbance point cloud up-sampling method based on deep learning.
Background
The three-dimensional point cloud, as a raw representation of three-dimensional data, is widely used in immersive device experiences, three-dimensional city reconstruction, autonomous driving, virtual/augmented reality and the like. Although three-dimensional sensing technology has advanced greatly in recent years, the acquired point cloud data often suffers from defects such as sparseness, incompleteness and noise. It is therefore necessary to improve the acquired raw point cloud data before subsequent use.
Existing point cloud up-sampling methods comprise traditional methods and data-driven methods. Among the earlier traditional methods, Alexa et al. proposed inserting points at the vertices of a Voronoi diagram computed in the local tangent space and then up-sampling on that basis. Huang et al. proposed EAR, a progressive method for edge-aware up-sampling of point sets: it first resamples regions far from edges and then progressively approaches edges and corners. In general, these methods rely heavily on priors such as smooth-surface assumptions and normal estimates.
In recent years, deep-learning-based methods have been able to learn from large amounts of data and thereby accomplish various three-dimensional point cloud tasks such as classification and segmentation. After Qi et al. proposed the pioneering PointNet, a series of deep-learning-based point cloud processing methods followed. Yu et al. first proposed using a neural network for point cloud up-sampling with PU-Net, which learns features of the input point cloud from local to global, copies the features into several branches that are convolved separately, and finally regresses to Euclidean space to obtain the up-sampled point cloud coordinates. Wang et al. proposed MPU, a patch-based multi-step up-sampling method: when a high up-sampling ratio is needed, the network is broken into several small modules, each responsible for 2x up-sampling with independent layers, and the final reconstruction is obtained by fusing the up-sampling results of different patches; however, the model is large and up-sampling is slow. Yu et al. then proposed an up-sampling point cloud model based on a generative adversarial network, in which the generator produces the up-sampled point cloud and a discriminator judges it; its performance improvement, however, stems mainly from the introduction of the discriminator.
Disclosure of Invention
The invention aims to provide a deep-learning-based adaptive disturbance point cloud up-sampling method which up-samples a given point cloud, produces results with good uniformity, and preserves rich geometric detail.
To achieve this aim, the adaptive disturbance point cloud up-sampling method based on deep learning comprises inputting a sparse, noisy point cloud file to be up-sampled into an up-sampling model and performing the up-sampling operation to obtain a clean, dense and uniform up-sampled point cloud;
the up-sampling model is obtained by the following learning process:
1) input the original sparse point cloud N×3 into a neural network, where N is the number of points in the point cloud and 3 is the Euclidean space coordinate of each point;
2) perform simple densely connected feature extraction based on DenseBlock (a dense convolution network) on the N×3 point cloud to obtain the input point cloud feature value N×C_l;
3) input the point cloud feature value N×C_l obtained in step 2) into the adaptive disturbance layer to obtain an N×2 two-dimensional adaptive random disturbance value; repeat this step r times according to the required up-sampling ratio to obtain rN×2 disturbance values;
4) duplicate the point cloud feature value N×C_l r times to obtain rN×C_l; concatenate each copy with one N×2 disturbance value from step 3) to obtain rN×(C_l+2); enhance this feature value non-locally with a self-attention unit, and then obtain the rN×3 adaptive residual value through the adaptive residual layer;
5) concatenate rN×C_l with the rN×3 residual value, perform non-local feature enhancement with the self-attention unit once again, and apply 3 convolutions to obtain the final up-sampled point cloud;
6) compare the up-sampled point cloud obtained in step 5) with the corresponding ground-truth point cloud to compute the loss function value, and then optimize the network parameters by back-propagation;
7) repeat steps 1) to 6) until the model converges, yielding the up-sampling model.
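As a shape-level illustration only, steps 2) to 5) can be sketched in numpy with fixed random projections standing in for every learned convolution; the function name and the choice of adding the residuals directly to the duplicated input coordinates are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def upsample_sketch(points, r=4, c_l=192, seed=0):
    """Shape-level sketch of steps 2)-5): feature extraction, r adaptive
    2-D disturbances, duplication/concatenation, and residual regression.
    Random matrices stand in for the learned convolutions."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    feats = points @ rng.standard_normal((3, c_l))            # step 2): N x C_l
    disturbs = np.stack([feats @ rng.standard_normal((c_l, 2))
                         for _ in range(r)])                  # step 3): r x N x 2
    copies = np.tile(feats, (r, 1, 1))                        # step 4): r x N x C_l
    joined = np.concatenate([copies, disturbs], axis=2)       # r x N x (C_l + 2)
    residuals = joined @ rng.standard_normal((c_l + 2, 3))    # r x N x 3
    upsampled = np.tile(points, (r, 1, 1)) + residuals        # step 5), simplified
    return upsampled.reshape(r * n, 3)

out = upsample_sketch(np.zeros((100, 3)))
print(out.shape)  # (400, 3)
```

In the trained model the rN×3 output comes from three further convolutions after a second self-attention pass; the direct addition above only mirrors the tensor shapes.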
In step 2), the simple densely connected feature extraction based on the DenseBlock proceeds as follows:
2-1) convolve the N×3 point cloud once to obtain the feature value N×C in a latent space, and feed it into the DenseBlock for fine-grained feature extraction to obtain the feature value N×G;
2-2) concatenate the feature value N×G obtained in step 2-1) with the feature value N×C to obtain N×(G+C);
2-3) repeat steps 2-1) and 2-2) to obtain the point cloud feature value N×(G'+G+C), and convolve once more to obtain the point cloud feature value N×C_l.
In step 2-1), the fine-grained feature extraction within the DenseBlock proceeds as follows:
2-1-1) given the input features N×C, perform a k-nearest-neighbour search over the N points in the C-dimensional latent space, using the Euclidean distance of that space, to obtain the indices of each point's k nearest neighbours;
2-1-2) according to the indices, compute the relative Euclidean features C_relative between each point and its k nearest neighbours in the C-dimensional space, obtaining N×k×C_relative; duplicate each point's feature k times to obtain N×k×C, and concatenate to obtain the per-point cluster feature value N×k×(C+C_relative);
2-1-3) convolve the cluster feature value again and concatenate it with k copies of N×C to obtain the feature value N×k×(C'+C); convolve N×k×(C'+C) once to obtain N×k×C'', concatenate it back to N×k×(C'+C), and apply max pooling to obtain the output feature value N×G.
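A numpy sketch of this k-NN grouping, under the assumption that the relative feature C_relative is the coordinate difference in the latent space; the projection matrix replaces the learned convolutions, so only the tensor bookkeeping is meaningful:

```python
import numpy as np

def dense_feature_unit(feats, k=4, g=16, seed=0):
    """Sketch of steps 2-1-1) to 2-1-3): latent-space k-NN search,
    relative-feature concatenation, projection, and max pooling."""
    rng = np.random.default_rng(seed)
    n, c = feats.shape
    # 2-1-1) k nearest neighbours by Euclidean distance in the latent space
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]                       # N x k indices
    # 2-1-2) relative features plus k copies of each centre feature
    rel = feats[idx] - feats[:, None, :]                      # N x k x C_relative
    centre = np.repeat(feats[:, None, :], k, axis=1)          # N x k x C
    cluster = np.concatenate([centre, rel], axis=2)           # N x k x (C + C_relative)
    # 2-1-3) stand-in convolution, then max pooling over the neighbourhood
    proj = cluster @ rng.standard_normal((2 * c, g))          # N x k x G
    return proj.max(axis=1)                                   # N x G

feats = np.random.default_rng(1).standard_normal((32, 8))
out = dense_feature_unit(feats)
print(out.shape)  # (32, 16)
```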
In step 3), the adaptive disturbance layer obtains the two-dimensional adaptive random disturbance value as follows:
3-1) convolve the point cloud feature value N×C_l once to obtain the disturbance feature value N×C'_l;
3-2) convolve N×C'_l again with half as many channels as the first convolution to obtain the disturbance feature value N×(C'_l/2);
3-3) convolve the N×(C'_l/2) disturbance feature value once more with 2 channels to obtain the N×2 two-dimensional adaptive random disturbance; repeat steps 3-1) to 3-3) r times according to the required up-sampling ratio, using different convolution kernels each time, to obtain rN×2 disturbance values.
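The three-convolution stack of steps 3-1) to 3-3) reduces to per-point linear maps; a minimal sketch, assuming for illustration that C'_l equals the input channel count C_l (the patent does not fix it) and using ReLU nonlinearities as a stand-in:

```python
import numpy as np

def adaptive_disturbance_layer(feats, rng):
    """Sketch of steps 3-1) to 3-3): channels shrink C_l -> C'_l -> C'_l/2 -> 2."""
    n, c = feats.shape
    h1 = np.maximum(feats @ rng.standard_normal((c, c)), 0.0)    # N x C'_l
    h2 = np.maximum(h1 @ rng.standard_normal((c, c // 2)), 0.0)  # N x C'_l/2
    return h2 @ rng.standard_normal((c // 2, 2))                 # N x 2

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 192))
# repeating the layer r = 4 times with different kernels gives r x N x 2
disturbances = np.stack([adaptive_disturbance_layer(feats, rng) for _ in range(4)])
print(disturbances.shape)  # (4, 100, 2)
```

Because the generator `rng` advances between calls, each of the four repetitions uses different stand-in kernels, mirroring the "different convolution kernels each time" requirement.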
In steps 4) and 5), the self-attention unit performs non-local feature enhancement as follows:
Given the input rN×(C_l+2), apply three separate convolutions with 1×1 kernels to obtain three latent-space outputs f, g and h; transpose f, multiply it by g, and apply Softmax (the normalized exponential function) to obtain the attention map; multiply the attention map by h to obtain the adaptive attention feature map, and add it to the input rN×(C_l+2) to obtain the non-locally enhanced feature value.
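This is the familiar f/g/h self-attention pattern; a numpy sketch under the assumption that f and g share a small projection dimension (the patent fixes neither the dimension nor any nonlinearity besides Softmax):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable normalized exponential function
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_enhance(x, d=16, seed=0):
    """Self-attention unit sketch: 1x1-convolution projections f, g, h,
    an N x N attention map, and a residual connection to the input."""
    rng = np.random.default_rng(seed)
    n, c = x.shape
    f = x @ rng.standard_normal((c, d))   # N x d
    g = x @ rng.standard_normal((c, d))   # N x d
    h = x @ rng.standard_normal((c, c))   # N x C
    attn = softmax(f @ g.T, axis=-1)      # attention map over all points
    return attn @ h + x                   # adaptive attention map + input

x = np.random.default_rng(1).standard_normal((50, 194))
out = non_local_enhance(x)
print(out.shape)  # (50, 194)
```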
In step 4), the adaptive residual layer obtains the rN×3 adaptive residual value as follows:
4-1) convolve the rN×(C_l+2) point cloud feature value once to obtain the adaptive residual feature value rN×C''_l;
4-2) convolve rN×C''_l again with half as many channels as the first convolution to obtain the residual feature value rN×(C''_l/2);
4-3) convolve the rN×(C''_l/2) residual feature value once more with 3 channels to obtain the rN×3 adaptive residual value.
In step 6), the loss function is expressed as follows:

L = (1/N) ( Σ_{x_i ∈ χ_R} ‖x_i − ψ(x_i)‖² + Σ_{y_k ∈ γ_R} ‖y_k − ψ(y_k)‖² )

where N is the number of points after up-sampling; χ_R and γ_R denote the up-sampled point cloud and the corresponding ground-truth point cloud, respectively; ψ denotes the nearest-distance mapping between two spaces of the same dimension, i.e. it sends a point in one point set to its nearest neighbour in the other point set; x_i denotes the i-th point of the up-sampled point cloud and y_k the k-th point of the correctly labelled point cloud.
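So defined, the loss is the symmetric Chamfer distance; a direct numpy sketch (not the patent's implementation, which would use an efficient nearest-neighbour search rather than the brute-force distance matrix below):

```python
import numpy as np

def chamfer_loss(pred, gt):
    """Symmetric Chamfer distance between the up-sampled cloud (pred) and the
    ground-truth cloud (gt); the nearest-distance mapping psi is realised as a
    brute-force nearest-neighbour lookup in the other point set."""
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
print(chamfer_loss(pred, gt))  # 1.0
print(chamfer_loss(gt, gt))    # 0.0
```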
Compared with the prior art, the invention has the following advantages:
the method for point cloud up-sampling provided by the invention has the advantages that the neural network model is simple in design, the occupied space size of the network model is small, the speed of generating the up-sampled point cloud is high, and meanwhile, the generation of the up-sampled point cloud with higher quality can be ensured.
Drawings
FIG. 1 is a schematic structural diagram of a neural network of an adaptive disturbance point cloud upsampling method based on deep learning in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a DenseBlock in an embodiment of the present invention;
Fig. 3 shows visualization results of the up-sampled point cloud in the embodiment of the present invention, rendered using the Poisson surface reconstruction algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the following embodiments and accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments without any inventive step, are within the scope of protection of the invention.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by a person of ordinary skill in the art to which this invention belongs. The use of the word "comprise" or "comprises" in this application means that the stated elements or items encompass those listed after the word without excluding other elements or items.
Examples
Referring to fig. 1, in this embodiment a sparse, uneven, noisy point cloud is input to the up-sampling model, which performs 4x up-sampling to obtain a clean, dense and uniform output point cloud. The specific process is as follows:
step S100, first, 5000 × 3 sparse point clouds are input to a neural network, where 5000 is the number of points in the point clouds, and 3 is the euclidean space coordinate of each point.
Step S200: perform simple densely connected DenseBlock-based feature extraction on the 5000×3 point cloud; specifically, convolve once to obtain the feature value 5000×48 in the latent space, and feed it into the DenseBlock to obtain the feature value 5000×144.
step S300 is to connect the feature dimensions of 5000 × 144 obtained in step S2 and 5000 × 48 of the feature value before entry of DenseBlock to obtain 5000 × (144+ 48).
Step S400: repeat steps S200 and S300 and convolve again, finally obtaining the point cloud feature value 5000×192.
Step S500: feed the 5000×192 point cloud feature value obtained in step S400 into the adaptive disturbance layer to obtain a 5000×2 two-dimensional adaptive random disturbance value; repeating this step 4 times gives a 4×5000×2 two-dimensional adaptive random disturbance value.
Step S600: duplicate the 5000×192 point cloud feature value 4 times to obtain 4×5000×192; concatenate one 5000×2 disturbance value from step S500 to each copy to obtain the 4×5000×194 latent feature value of the up-sampled point cloud; enhance it non-locally with the self-attention unit, and then obtain the 4×5000×3 adaptive residual value through the adaptive residual layer.
Step S700: concatenate the extracted 4×5000×192 point cloud feature value with the 4×5000×3 residual value, perform non-local feature enhancement with the self-attention unit once again, and apply 3 convolutions to obtain the final 20000×3 up-sampled point cloud.
Finally, fig. 3 shows the visualization of the up-sampled point cloud for input data of the same size as in this example. The results show that, despite the small model size, the point-distribution uniformity of this embodiment matches or exceeds some existing methods, and the geometric structure of the underlying surface is presented well.
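The shape arithmetic of this 4x embodiment can be checked in a few lines (variable names are illustrative, not from the patent):

```python
n, r = 5000, 4                  # points in the input cloud, up-sampling ratio
c_first, c_dense = 48, 144      # latent channels before/inside the DenseBlock
c_l = 192                       # channels after the final feature convolution
latent_shape = (r, n, c_l + 2)  # 4 x 5000 x 194 after concatenating the 2-D disturbance
out_points = r * n              # 20000 points in the final up-sampled cloud
print(c_first + c_dense, latent_shape, out_points)  # 192 (4, 5000, 194) 20000
```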
Claims (7)
1. An adaptive disturbance point cloud up-sampling method based on deep learning, characterized by inputting a sparse, noisy point cloud file to be up-sampled into an up-sampling model and performing the up-sampling operation to obtain a clean, dense and uniform up-sampled point cloud;
the up-sampling model is obtained by the following learning process:
1) inputting original sparse point cloud Nx 3 into a neural network, wherein N is the number of points in the point cloud, and 3 is an Euclidean space coordinate of each point;
2) perform simple densely connected feature extraction based on a dense convolution network on the N×3 point cloud to obtain the input point cloud feature value N×C_l;
3) input the point cloud feature value N×C_l obtained in step 2) into the adaptive disturbance layer to obtain an N×2 two-dimensional adaptive random disturbance value; repeat this step r times according to the required up-sampling ratio to obtain rN×2 disturbance values;
4) duplicate the point cloud feature value N×C_l r times to obtain rN×C_l; concatenate each copy with one N×2 disturbance value from step 3) to obtain rN×(C_l+2); enhance this feature value non-locally with a self-attention unit, and then obtain the rN×3 adaptive residual value through the adaptive residual layer;
5) concatenate rN×C_l with the rN×3 residual value, perform non-local feature enhancement with the self-attention unit once again, and apply 3 convolutions to obtain the final up-sampled point cloud;
6) compare the up-sampled point cloud obtained in step 5) with the corresponding correctly labelled point cloud to compute the loss function value, and then optimize the network parameters by back-propagation;
7) and repeating the steps 1) to 6) until the model is converged, and obtaining an up-sampling model.
2. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 1, wherein in step 2) the simple densely connected feature extraction based on the dense convolution network comprises:
2-1) convolve the N×3 point cloud once to obtain the feature value N×C in a latent space, and feed it into the dense convolution network for fine-grained feature extraction to obtain the feature value N×G;
2-2) concatenate the feature value N×G obtained in step 2-1) with the feature value N×C to obtain N×(G+C);
2-3) repeat steps 2-1) and 2-2) to obtain the point cloud feature value N×(G'+G+C), and convolve once more to obtain the point cloud feature value N×C_l.
3. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 2, wherein in step 2-1) the dense convolution network performs fine-grained feature extraction as follows:
2-1-1) given the input features N×C, perform a k-nearest-neighbour search over the N points in the C-dimensional latent space, using the Euclidean distance of that space, to obtain the indices of each point's k nearest neighbours;
2-1-2) according to the indices, compute the relative Euclidean features C_relative between each point and its k nearest neighbours in the C-dimensional space, obtaining N×k×C_relative; duplicate each point's feature k times to obtain N×k×C, and concatenate to obtain the per-point cluster feature value N×k×(C+C_relative);
2-1-3) convolve the cluster feature value again and concatenate it with k copies of N×C to obtain the feature value N×k×(C'+C); convolve N×k×(C'+C) once to obtain N×k×C'', concatenate it back to N×k×(C'+C), and apply max pooling to obtain the output feature value N×G.
4. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 1, wherein in step 3) the adaptive disturbance layer obtains the two-dimensional adaptive random disturbance value as follows:
3-1) convolve the point cloud feature value N×C_l once to obtain the disturbance feature value N×C'_l;
3-2) convolve N×C'_l again with half as many channels as the first convolution to obtain the disturbance feature value N×(C'_l/2);
3-3) convolve the N×(C'_l/2) disturbance feature value once more with 2 channels to obtain the N×2 two-dimensional adaptive random disturbance; repeat steps 3-1) to 3-3) r times according to the required up-sampling ratio, using different convolution kernels each time, to obtain rN×2 disturbance values.
5. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 1, wherein in steps 4) and 5) the self-attention unit performs non-local feature enhancement as follows:
Given the input rN×(C_l+2), apply three separate convolutions with 1×1 kernels to obtain three latent-space outputs f, g and h; transpose f, multiply it by g, and apply the normalized exponential function to obtain the attention feature map; multiply the attention feature map by h to obtain the adaptive attention feature map, and add it to the input rN×(C_l+2) to obtain the non-locally enhanced feature value.
6. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 1, wherein in step 4) the adaptive residual layer obtains the rN×3 adaptive residual value as follows:
4-1) convolve the rN×(C_l+2) point cloud feature value once to obtain the adaptive residual feature value rN×C''_l;
4-2) convolve rN×C''_l again with half as many channels as the first convolution to obtain the residual feature value rN×(C''_l/2);
4-3) convolve the rN×(C''_l/2) residual feature value once more with 3 channels to obtain the rN×3 adaptive residual value.
7. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 1, wherein in step 6) the loss function is expressed as follows:

L = (1/N) ( Σ_{x_i ∈ χ_R} ‖x_i − ψ(x_i)‖² + Σ_{y_k ∈ γ_R} ‖y_k − ψ(y_k)‖² )

where N is the number of points after up-sampling; χ_R and γ_R denote the up-sampled point cloud and the corresponding correctly labelled point cloud, respectively; ψ denotes the nearest-distance mapping between two spaces of the same dimension, i.e. it sends a point in one point set to its nearest neighbour in the other point set; x_i denotes the i-th point of the up-sampled point cloud and y_k the k-th point of the correctly labelled point cloud.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011321220.6A CN112348959B (en) | 2020-11-23 | 2020-11-23 | Adaptive disturbance point cloud up-sampling method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112348959A true CN112348959A (en) | 2021-02-09 |
CN112348959B CN112348959B (en) | 2024-02-13 |
Family
ID=74365441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011321220.6A Active CN112348959B (en) | 2020-11-23 | 2020-11-23 | Adaptive disturbance point cloud up-sampling method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112348959B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170193692A1 (en) * | 2015-12-30 | 2017-07-06 | Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences | Three-dimensional point cloud model reconstruction method, computer readable storage medium and device |
CN110163799A (en) * | 2019-05-05 | 2019-08-23 | 杭州电子科技大学上虞科学与工程研究院有限公司 | A kind of super-resolution point cloud generation method based on deep learning |
CN111724478A (en) * | 2020-05-19 | 2020-09-29 | 华南理工大学 | Point cloud up-sampling method based on deep learning |
CN111862289A (en) * | 2020-08-04 | 2020-10-30 | 天津大学 | Point cloud up-sampling method based on GAN network |
Non-Patent Citations (2)
Title |
---|
张绣亚, 孙刘杰, 王文举, 秦杨, 商静静: "Research on a highly robust multiple blind watermarking algorithm for three-dimensional point cloud models", 包装工程 (Packaging Engineering), no. 19 *
杨军, 党吉圣: "Three-dimensional point cloud recognition and segmentation using deep cascaded convolutional neural networks", 光学精密工程 (Optics and Precision Engineering), no. 05 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022219383A1 (en) * | 2021-04-15 | 2022-10-20 | Sensetime International Pte. Ltd. | Method and apparatus for point cloud data processing, electronic device and computer storage medium |
AU2021204512A1 (en) * | 2021-04-15 | 2022-11-03 | Sensetime International Pte. Ltd. | Method and apparatus for point cloud data processing, electronic device and computer storage medium |
AU2021204512B2 (en) * | 2021-04-15 | 2023-02-02 | Sensetime International Pte. Ltd. | Method and apparatus for point cloud data processing, electronic device and computer storage medium |
CN113362437A (en) * | 2021-06-02 | 2021-09-07 | 山东大学 | Point cloud resampling method, system, storage medium and equipment |
CN113362437B (en) * | 2021-06-02 | 2022-06-28 | 山东大学 | Point cloud resampling method, system, storage medium and equipment |
WO2023010562A1 (en) * | 2021-08-06 | 2023-02-09 | Oppo广东移动通信有限公司 | Point cloud processing method and apparatus |
CN113436237A (en) * | 2021-08-26 | 2021-09-24 | 之江实验室 | High-efficient measurement system of complicated curved surface based on gaussian process migration learning |
CN115061115A (en) * | 2022-07-26 | 2022-09-16 | 深圳市速腾聚创科技有限公司 | Point cloud encryption method and device, storage medium and laser radar |
CN115061115B (en) * | 2022-07-26 | 2023-02-03 | 深圳市速腾聚创科技有限公司 | Point cloud encryption method and device, storage medium and laser radar |
CN115661340A (en) * | 2022-10-13 | 2023-01-31 | 南京航空航天大学 | Three-dimensional point cloud up-sampling method and system based on source information fusion |
CN116030200A (en) * | 2023-03-27 | 2023-04-28 | 武汉零点视觉数字科技有限公司 | Scene reconstruction method and device based on visual fusion |
Also Published As
Publication number | Publication date |
---|---|
CN112348959B (en) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112348959A (en) | Adaptive disturbance point cloud up-sampling method based on deep learning | |
CN108564549B (en) | Image defogging method based on multi-scale dense connection network | |
JP2019003615A (en) | Learning autoencoder | |
CN113989340A (en) | Point cloud registration method based on distribution | |
CN114782265A (en) | Image restoration method based on multi-scale and residual multi-channel space attention resistance | |
CN112509021A (en) | Parallax optimization method based on attention mechanism | |
CN116310095A (en) | Multi-view three-dimensional reconstruction method based on deep learning | |
CN116310098A (en) | Multi-view three-dimensional reconstruction method based on attention mechanism and variable convolution depth network | |
CN112967296B (en) | Point cloud dynamic region graph convolution method, classification method and segmentation method | |
CN113240584A (en) | Multitask gesture picture super-resolution method based on picture edge information | |
CN116797456A (en) | Image super-resolution reconstruction method, system, device and storage medium | |
CN116188882A (en) | Point cloud up-sampling method and system integrating self-attention and multipath path diagram convolution | |
CN108921785B (en) | Super-resolution reconstruction method based on wavelet packet | |
CN116843780A (en) | Fetal brain MR image reconstruction method of multiscale fused attention residual error dense network | |
CN113808006B (en) | Method and device for reconstructing three-dimensional grid model based on two-dimensional image | |
CN116188690A (en) | Hand-drawn sketch three-dimensional model reconstruction method based on space skeleton information | |
CN115131245A (en) | Point cloud completion method based on attention mechanism | |
CN115661340A (en) | Three-dimensional point cloud up-sampling method and system based on source information fusion | |
CN109146886B (en) | RGBD image semantic segmentation optimization method based on depth density | |
CN112634281A (en) | Grid segmentation method based on graph convolution network | |
CN116363329B (en) | Three-dimensional image generation method and system based on CGAN and LeNet-5 | |
Zou et al. | Low complexity single image super-resolution with channel splitting and fusion network | |
CN117391959B (en) | Super-resolution reconstruction method and system based on multi-granularity matching and multi-scale aggregation | |
CN113807233B (en) | Point cloud feature extraction method, classification method and segmentation method based on high-order term reference surface learning | |
CN116740300B (en) | Multi-mode-based prime body and texture fusion furniture model reconstruction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||