CN112348959B - Adaptive disturbance point cloud up-sampling method based on deep learning - Google Patents


Info

Publication number: CN112348959B (application number CN202011321220.6A)
Authority: CN (China)
Prior art keywords: point cloud; sampling; adaptive; point
Legal status: Active (granted)
Other versions: CN112348959A (Chinese)
Inventors: 潘志庚 (Pan Zhigeng), 邱驰 (Qiu Chi), 丁丹丹 (Ding Dandan)
Original and current assignee: Hangzhou Normal University
Application filed by Hangzhou Normal University
Priority to CN202011321220.6A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks


Abstract

The invention relates to an adaptive disturbance point cloud up-sampling method based on deep learning, belonging to the field of three-dimensional vision in computer graphics. A sparse, noisy point cloud that needs up-sampling is fed into an up-sampling model, and the up-sampling operation produces a clean, dense and uniform up-sampled point cloud. In the first stage, a training set of low-resolution point cloud files is input into the network to extract point cloud features; in the second stage, the extracted features pass through an adaptive disturbance layer to obtain 2D adaptive disturbance values, which are concatenated to multiple copies of the point cloud features; in the third stage, the concatenated features are sent through an adaptive residual layer to obtain residual values, which are concatenated to the copied features and convolved several times to obtain the up-sampled point cloud. The model has low complexity and a small size, achieves good uniformity and faithful geometry in the up-sampled point cloud, and can be used in three-dimensional tasks such as point cloud completion, rendering and reconstruction.

Description

Adaptive disturbance point cloud up-sampling method based on deep learning
Technical Field
The invention relates to the technical field of three-dimensional vision in computer graphics, and in particular to an adaptive disturbance point cloud up-sampling method based on deep learning.
Background
As a raw representation of three-dimensional data, point clouds are widely used in immersive device experiences, three-dimensional city reconstruction, autonomous driving, virtual/augmented reality, and so on. Although three-dimensional sensing technology has advanced greatly in recent years, the acquired point cloud data often suffer from defects such as sparsity, missing regions and noise. There is therefore a need to improve the acquired raw point cloud data to facilitate subsequent use.
Existing point cloud up-sampling methods comprise traditional methods and data-driven methods. An early traditional method was proposed by Alexa et al., which inserts points at the vertices of a Voronoi diagram in a local tangent space and up-samples on that basis. Huang et al. proposed EAR, a progressive method for edge-aware up-sampling of point sets: it first resamples the regions away from edges and then progressively approaches the edges and corners. Overall, these methods rely heavily on prior conditions such as smooth-surface assumptions and normal estimates.
In recent years, deep-learning-based methods have been able to learn neural networks from large amounts of data, completing various three-dimensional point cloud tasks such as classification and segmentation. After the pioneering work PointNet by Qi et al., a series of deep-learning-based point cloud processing methods appeared. Yu et al. first proposed up-sampling a point cloud with a neural network, PU-Net: it learns features of the input point cloud from local to global, copies the features and convolves each copy separately, and finally maps back to Euclidean space to obtain the up-sampled point cloud coordinates. Wang et al. proposed a multi-step up-sampling method, MPU: when a large up-sampling ratio is required, the network is split into several small modules, each responsible for 2× up-sampling and mutually independent, and the final reconstruction is obtained by fusing the up-sampling results of the different units; however, the model is too large and up-sampling is too slow. Yu et al. then proposed generating the up-sampled point cloud with an adversarial network, obtaining the up-sampled cloud from a generator and judging it with a discriminator, but the performance improvement is mainly due to the introduction of the discriminator.
Disclosure of Invention
The aim of the invention is to provide an adaptive disturbance point cloud up-sampling method based on deep learning that up-samples a given point cloud with good uniformity of the result and rich geometric detail.
To this end, the adaptive disturbance point cloud up-sampling method based on deep learning provided by the invention comprises inputting a sparse, noisy point cloud file to be up-sampled into an up-sampling model and performing the up-sampling operation to obtain a clean, dense and uniform up-sampled point cloud;
the upsampling model is obtained through the following learning process:
1) Inputting an original sparse point cloud of size N×3 into a neural network, where N is the number of points in the point cloud and 3 is the Euclidean coordinate dimension of each point;
2) Performing simple densely connected feature extraction based on a DenseBlock (dense convolution block) on the N×3 point cloud to obtain the input point cloud feature value N×C_l;
3) Feeding the point cloud feature value N×C_l obtained in step 2) into the adaptive disturbance layer to obtain an N×2 two-dimensional adaptive random disturbance value, and repeating this step r times according to the required up-sampling ratio to obtain an rN×2 disturbance value;
4) Duplicating the point cloud feature value N×C_l r times to obtain rN×C_l, and concatenating the N×2 disturbance values of step 3) to each copy to obtain rN×(C_l+2); enhancing this feature value non-locally with a self-attention unit, then passing it through the adaptive residual layer to obtain an rN×3 adaptive residual value;
5) Concatenating rN×C_l with the rN×3 residual value, applying one more round of non-local feature enhancement with a self-attention unit, and convolving 3 times to obtain the final up-sampled point cloud;
6) Comparing the up-sampled point cloud of step 5) with the corresponding ground-truth point cloud to obtain the loss value, then back-propagating to optimize the network parameters;
7) Repeating steps 1) to 6) until the model converges, yielding the up-sampling model.
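The shape bookkeeping of steps 1) to 7) can be sketched as follows. This is a minimal NumPy trace in which random arrays stand in for the learned feature extractor, disturbance layer and residual layer, and adding the residual to the replicated input coordinates stands in for the final convolutions; N, r and C_l are example sizes, not values fixed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

N, r, C_l = 5000, 4, 192               # example sizes: points, up-sampling ratio, feature width
points = rng.standard_normal((N, 3))   # step 1): sparse input cloud, N x 3

# step 2): stand-in for the densely connected feature extractor
features = rng.standard_normal((N, C_l))            # N x C_l

# step 3): r independent 2-D disturbance maps
perturb = rng.standard_normal((r * N, 2))           # rN x 2

# step 4): replicate features r times and concatenate the disturbances
replicated = np.tile(features, (r, 1))              # rN x C_l
enriched = np.concatenate([replicated, perturb], axis=1)   # rN x (C_l + 2)

# steps 4)-5): a per-point 3-D residual (random here) added to replicated coordinates
residual = rng.standard_normal((r * N, 3))          # rN x 3
upsampled = np.tile(points, (r, 1)) + residual      # rN x 3 output cloud

assert enriched.shape == (r * N, C_l + 2)
assert upsampled.shape == (r * N, 3)
```

With N = 5000 and r = 4, the output is the 20000×3 cloud of the embodiment below.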
In step 2), the simple densely connected feature extraction based on the DenseBlock proceeds as follows:
2-1) Convolve the N×3 point cloud once to obtain a feature value N×C in an implicit space, and send it into the DenseBlock for fine-grained feature extraction to obtain a feature value N×G;
2-2) Concatenate the feature value N×G obtained in step 2-1) with N×C along the feature dimension to obtain N×(G+C);
2-3) Repeat steps 2-1) and 2-2) to obtain the point cloud feature value N×(G'+G+C), then convolve once more to obtain the point cloud feature value N×C_l.
In step 2-1), the fine-grained feature extraction by the DenseBlock proceeds as follows:
2-1-1) Given the input features N×C, perform a k-nearest-neighbour search over the N points in the C-dimensional implicit space, using the Euclidean distance of that space as the search distance, to obtain the index of each point and of its k nearest neighbours;
2-1-2) From the indices, compute the C-dimensional Euclidean offsets C_relative between each point and its k nearest neighbours, obtaining N×k×C_relative; duplicate each point's feature value k times to obtain N×k×C, and concatenate to obtain the per-point cluster feature value N×k×(C+C_relative);
2-1-3) Convolve the cluster feature value once more and concatenate it with k copies of N×C to obtain the feature value N×k×(C'+C); convolve the obtained N×k×(C'+C) once to obtain N×k×C'', concatenate it with the feature value N×k×(C'+C), and apply a max-pooling operation to obtain the output feature value N×G.
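As a rough illustration of steps 2-1-1) to 2-1-3), the NumPy sketch below groups each point with its k nearest neighbours in the implicit feature space and concatenates the relative offsets. A plain max-pool over the neighbourhood stands in for the learned convolutions of step 2-1-3), so the output width 2C here is an artifact of this simplification, not the network's G.

```python
import numpy as np

def dense_block_grouping(feat, k=8):
    """Group each point with its k nearest neighbours in the implicit feature
    space (steps 2-1-1 / 2-1-2) and concatenate relative offsets; a max-pool
    over the neighbourhood stands in for the convolutions of step 2-1-3."""
    n, c = feat.shape
    d2 = ((feat[:, None, :] - feat[None, :, :]) ** 2).sum(-1)  # (n, n) squared distances
    idx = np.argsort(d2, axis=1)[:, :k]                        # (n, k) neighbour indices
    neighbours = feat[idx]                                     # (n, k, c)
    relative = neighbours - feat[:, None, :]                   # (n, k, c) offsets C_relative
    centre = np.repeat(feat[:, None, :], k, axis=1)            # (n, k, c) k copies of each point
    cluster = np.concatenate([centre, relative], axis=-1)      # (n, k, c + c) cluster features
    return cluster.max(axis=1)                                 # (n, 2c) pooled output

feat = np.random.default_rng(1).standard_normal((100, 16))
out = dense_block_grouping(feat)
assert out.shape == (100, 32)
```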
In step 3), the adaptive disturbance layer obtains the two-dimensional adaptive random disturbance values as follows:
3-1) The point cloud feature value N×C_l is first convolved once to obtain an N×C_l' point cloud disturbance feature value;
3-2) N×C_l' is convolved again with half the number of channels to obtain an N×(C_l'/2) point cloud disturbance feature value;
3-3) The N×(C_l'/2) disturbance feature value is convolved once more with 2 channels, giving the N×2 two-dimensional adaptive random disturbance; steps 3-1) to 3-3) are repeated r times according to the required point cloud densification ratio, with different convolution kernels each time, to obtain the rN×2 disturbance value.
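Steps 3-1) to 3-3) amount to a shared per-point MLP (1×1 convolutions) whose channel count shrinks to 2, run r times with different kernels. A sketch with random weights in place of learned kernels; the hidden width C_l' is assumed equal to C_l here, which is an illustrative choice rather than something the patent specifies.

```python
import numpy as np

rng = np.random.default_rng(2)

def point_mlp(x, dims):
    """Shared per-point linear layers (1x1 convolutions) with ReLU in between."""
    for i, d in enumerate(dims):
        w = rng.standard_normal((x.shape[1], d)) * 0.1
        x = x @ w
        if i < len(dims) - 1:
            x = np.maximum(x, 0.0)   # ReLU on the hidden layers only
    return x

N, C_l, r = 5000, 192, 4
feat = rng.standard_normal((N, C_l))

# r passes, each drawing fresh (i.e. different) kernels:
# N x C_l -> N x C_l' -> N x C_l'/2 -> N x 2, stacked to rN x 2
perturb = np.concatenate(
    [point_mlp(feat, [C_l, C_l // 2, 2]) for _ in range(r)], axis=0)

assert perturb.shape == (r * N, 2)
```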
In steps 4) and 5), the non-local feature enhancement by the self-attention unit proceeds as follows:
For the known input rN×(C_l+2), perform three separate convolutions with 1×1 kernels to obtain three implicit-space outputs f, g and h; transpose f and multiply it by g, then apply Softmax (the normalized exponential function) to obtain the attention map; multiply the attention map by h to obtain the adaptive self-attention feature map, and add the input rN×(C_l+2) to it to obtain the non-locally enhanced feature value.
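A minimal NumPy rendering of this self-attention unit, with random matrices standing in for the three learned 1×1 convolutions. The inner width d is an arbitrary choice of this sketch, and the residual add at the end matches the "adding the input" step.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # stabilise before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, d=32, seed=3):
    """f, g, h are per-point linear maps (1x1 convolutions); the attention map
    is Softmax(f^T g); the output is (attention @ h) plus the input."""
    rng = np.random.default_rng(seed)
    n, c = x.shape
    wf = rng.standard_normal((c, d))
    wg = rng.standard_normal((c, d))
    wh = rng.standard_normal((c, c))
    f, g, h = x @ wf, x @ wg, x @ wh      # (n, d), (n, d), (n, c)
    attn = softmax(f @ g.T, axis=-1)      # (n, n) attention map
    return attn @ h + x                   # adaptive self-attention map + input

x = np.random.default_rng(4).standard_normal((200, 194))  # a slice of rN x (C_l + 2)
y = self_attention(x)
assert y.shape == x.shape
```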
In step 4), the adaptive residual layer obtains the rN×3 adaptive residual value as follows:
4-1) The rN×(C_l+2) point cloud feature value is first convolved once to obtain an rN×C_l'' adaptive residual feature value;
4-2) rN×C_l'' is convolved again with half the number of channels to obtain an rN×(C_l''/2) residual feature value;
4-3) The rN×(C_l''/2) residual feature value is convolved once more with 3 channels to obtain the rN×3 adaptive residual value.
In step 6), the loss function is the symmetric Chamfer distance:

L(χ_R, γ_R) = (1/N) [ Σ_{x_i ∈ χ_R} ||x_i − φ(x_i)||_2 + Σ_{y_p ∈ γ_R} ||y_p − ψ(y_p)||_2 ]

where N is the number of up-sampled points; χ_R and γ_R respectively denote the up-sampled point cloud and the corresponding ground-truth point cloud; φ and ψ respectively denote the nearest-distance mappings between the two equal-dimensional spaces, i.e. the nearest neighbour, in the other point set, of a given point in one point set; x_i denotes the i-th point of the up-sampled point cloud, and y_p the p-th point of the ground-truth point cloud.
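The loss described here is the symmetric Chamfer distance between the two point sets. A small NumPy version, under the assumption of a per-set mean on each side (the exact normalization used in the patent may differ):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (n, 3) and q (m, 3);
    the min over each row/column realises the nearest-neighbour maps phi/psi."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)   # (n, m) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()  # p -> q term plus q -> p term

p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
assert chamfer_distance(p, p) == 0.0      # identical sets have zero loss
q = p + np.array([0.5, 0.0, 0.0])         # shift every point by 0.5 along x
assert abs(chamfer_distance(p, q) - 0.5) < 1e-12
```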
Compared with the prior art, the invention has the following advantages:
the method for up-sampling the point cloud has the advantages that the neural network model is simple in design, the occupied space of the network model is small in size, the up-sampling point cloud is generated quickly, and meanwhile, the up-sampling point cloud with higher quality can be generated.
Drawings
Fig. 1 is a schematic structural diagram of a neural network based on a deep learning adaptive disturbance point cloud up-sampling method in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a DenseBlock structure according to an embodiment of the present invention;
Fig. 3 shows the visualization of the up-sampled point cloud obtained in an embodiment of the present invention; the surface reconstruction algorithm used is Poisson surface reconstruction.
Detailed Description
The present invention will be further described with reference to the following examples and drawings for the purpose of making the objects, technical solutions and advantages of the present invention more apparent. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, based on the described embodiments, which a person of ordinary skill in the art would obtain without inventive faculty, are within the scope of the invention.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. As used in this specification, the word "comprising" or "comprises", and the like, means that the element or article preceding the word is meant to encompass the element or article listed thereafter and equivalents thereof without excluding other elements or articles.
Examples
Referring to Fig. 1, in this embodiment a noisy, uneven, sparse point cloud is input into the up-sampling model, which performs a 4× up-sampling operation to obtain a clean, dense and uniform output point cloud. The specific process is as follows:
Step S100: first, a sparse point cloud of size 5000×3 is input into the neural network, where 5000 is the number of points in the point cloud and 3 is the Euclidean coordinate dimension of each point.
Step S200: simple densely connected feature extraction based on the DenseBlock is performed on the 5000×3 point cloud; specifically, one convolution yields a feature value 5000×48 in an implicit space, which is sent into the DenseBlock for densely connected feature extraction to obtain a feature value 5000×144.
Step S300: the 5000×144 obtained in step S200 is concatenated with the feature value 5000×48 from before the DenseBlock to obtain 5000×(144+48).
Step S400: steps S200 and S300 are repeated, followed by one more convolution, finally yielding the point cloud feature value 5000×192.
Step S500: the point cloud feature value 5000×192 obtained in step S400 is sent into the adaptive disturbance layer to obtain a 5000×2 two-dimensional adaptive random disturbance value; this step is repeated 4 times to obtain a 4×5000×2 two-dimensional adaptive random disturbance value.
Step S600: the point cloud feature value 5000×192 is copied 4 times to obtain 4×5000×192; the 5000×2 disturbance values of step S500 are concatenated to each copy to obtain the up-sampling implicit-space feature value 4×5000×194; this feature value is enhanced non-locally with a self-attention unit and then passed through the adaptive residual layer to obtain the adaptive residual value 4×5000×3.
Step S700: the extracted point cloud feature value 4×5000×192 is concatenated with the 4×5000×3 residual value, enhanced non-locally once more with a self-attention unit, and convolved 3 times to obtain the final 20000×3 up-sampled point cloud.
Finally, Fig. 3 gives the visualization of an up-sampled point cloud with the same input data size as in this example. The results show that, with a small model size, the point-distribution uniformity of this embodiment reaches or exceeds that of some existing methods, and the underlying surface geometry is better represented.

Claims (3)

1. The adaptive disturbance point cloud up-sampling method based on deep learning is characterized by comprising the steps of inputting a sparse noisy point cloud file to be up-sampled into an up-sampling model, and performing up-sampling operation to obtain a clean, dense and uniform up-sampling point cloud;
the upsampling model is obtained through the following learning process:
1) Inputting an original sparse point cloud of size N×3 into a neural network, wherein N is the number of points in the point cloud and 3 is the Euclidean coordinate dimension of each point;
2) Performing simple densely connected feature extraction based on a dense convolution network on the N×3 point cloud to obtain the input point cloud feature value N×C_l, the feature extraction process being as follows:
2-1) convolving the N×3 point cloud once to obtain a feature value N×C in an implicit space, and sending it into the dense convolution network for fine-grained feature extraction to obtain a feature value N×G;
the fine-grained feature extraction by the dense convolution network being as follows:
2-1-1) given the input features N×C, performing a k-nearest-neighbour search over the N points in the C-dimensional implicit space, using the Euclidean distance of that space as the search distance, to obtain the index of each point and of its k nearest neighbours;
2-1-2) from the indices, computing the C-dimensional Euclidean offsets C_relative between each point and its k nearest neighbours to obtain N×k×C_relative; duplicating each point's feature value k times to obtain N×k×C, and concatenating to obtain the per-point cluster feature value N×k×(C+C_relative);
2-1-3) convolving the cluster feature value once more and concatenating it with k copies of N×C to obtain the feature value N×k×(C'+C); convolving the obtained N×k×(C'+C) once to obtain N×k×C'', concatenating it with the feature value N×k×(C'+C), and applying a max-pooling operation to obtain the output feature value N×G;
2-2) concatenating the feature value N×G obtained in step 2-1) with N×C along the feature dimension to obtain N×(G+C);
2-3) repeating steps 2-1) and 2-2) to obtain the point cloud feature value N×(G'+G+C), then convolving once more to obtain the point cloud feature value N×C_l;
3) Feeding the point cloud feature value N×C_l obtained in step 2) into an adaptive disturbance layer to obtain an N×2 two-dimensional adaptive random disturbance value, and repeating this step r times according to the required up-sampling ratio to obtain an rN×2 disturbance value, the data processing of the adaptive disturbance layer being as follows:
3-1) the point cloud feature value N×C_l is first convolved once to obtain an N×C_l' point cloud disturbance feature value;
3-2) N×C_l' is convolved again with half the number of channels to obtain an N×(C_l'/2) point cloud disturbance feature value;
3-3) the N×(C_l'/2) disturbance feature value is convolved once more with 2 channels to obtain the N×2 two-dimensional adaptive random disturbance; steps 3-1) to 3-3) are repeated r times according to the required point cloud densification ratio, with different convolution kernels each time, to obtain the rN×2 disturbance value;
4) Duplicating the point cloud feature value N×C_l r times to obtain rN×C_l, and concatenating the N×2 disturbance values of step 3) to each copy to obtain rN×(C_l+2); enhancing this feature value non-locally with a self-attention unit, then passing it through an adaptive residual layer to obtain an rN×3 adaptive residual value;
the non-local feature enhancement by the self-attention unit being as follows:
for the known input rN×(C_l+2), performing three separate convolutions with 1×1 kernels to obtain three implicit-space outputs f, g and h; transposing f and multiplying it by g, then applying a normalized exponential function to obtain the attention feature map; multiplying the attention feature map by h to obtain the adaptive self-attention feature map, and adding the input rN×(C_l+2) to it to obtain the non-locally enhanced feature value;
5) Concatenating rN×C_l with the rN×3 residual value, applying one more round of non-local feature enhancement with a self-attention unit, and convolving 3 times to obtain the final up-sampled point cloud;
6) Comparing the up-sampled point cloud obtained in step 5) with the corresponding correctly labeled point cloud to obtain the loss value, then back-propagating to optimize the network parameters;
7) Repeating steps 1) to 6) until the model converges, yielding the up-sampling model.
2. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 1, wherein in step 4) the adaptive residual layer obtains the rN×3 adaptive residual value as follows:
4-1) the rN×(C_l+2) point cloud feature value is first convolved once to obtain an rN×C_l'' adaptive residual feature value;
4-2) rN×C_l'' is convolved again with half the number of channels to obtain an rN×(C_l''/2) residual feature value;
4-3) the rN×(C_l''/2) residual feature value is convolved once more with 3 channels to obtain the rN×3 adaptive residual value.
3. The adaptive disturbance point cloud up-sampling method based on deep learning according to claim 1, wherein in step 6) the loss function is expressed as follows:

L(χ_R, γ_R) = (1/N) [ Σ_{x_i ∈ χ_R} ||x_i − φ(x_i)||_2 + Σ_{y_p ∈ γ_R} ||y_p − ψ(y_p)||_2 ]

wherein N is the number of up-sampled points; χ_R and γ_R respectively denote the up-sampled point cloud and the corresponding correctly labeled point cloud; φ and ψ respectively denote the nearest-distance mappings between the two equal-dimensional spaces, i.e. the nearest neighbour, in the other point set, of a given point in one point set; x_i denotes the i-th point of the up-sampled point cloud, and y_p the p-th point of the correctly labeled point cloud.
CN202011321220.6A 2020-11-23 2020-11-23 Adaptive disturbance point cloud up-sampling method based on deep learning Active CN112348959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011321220.6A CN112348959B (en) 2020-11-23 2020-11-23 Adaptive disturbance point cloud up-sampling method based on deep learning


Publications (2)

Publication Number | Publication Date
CN112348959A | 2021-02-09
CN112348959B | 2024-02-13



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163799A (en) * 2019-05-05 2019-08-23 杭州电子科技大学上虞科学与工程研究院有限公司 A kind of super-resolution point cloud generation method based on deep learning
CN111724478A (en) * 2020-05-19 2020-09-29 华南理工大学 Point cloud up-sampling method based on deep learning
CN111862289A (en) * 2020-08-04 2020-10-30 天津大学 Point cloud up-sampling method based on GAN network

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107223268B (en) * 2015-12-30 2020-08-07 中国科学院深圳先进技术研究院 Three-dimensional point cloud model reconstruction method and device


Non-Patent Citations (2)

Title
Research on a highly robust multiple blind watermarking algorithm for 3D point cloud models; Zhang Xiuya; Sun Liujie; Wang Wenju; Qin Yang; Shang Jingjing; Packaging Engineering (No. 19); full text *
3D point cloud recognition and segmentation using a deep cascaded convolutional neural network; Yang Jun; Dang Jisheng; Optics and Precision Engineering (No. 05); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant