CN114663600A - Point cloud reconstruction method and system based on self-encoder


Info

Publication number
CN114663600A
CN114663600A
Authority
CN
China
Prior art keywords
local
encoder
determining
data
field
Prior art date
Legal status
Pending
Application number
CN202210400962.0A
Other languages
Chinese (zh)
Inventor
于耀
曾庆吉
周余
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority to CN202210400962.0A
Publication of CN114663600A
Current legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation


Abstract

The invention relates to a point cloud reconstruction method and system based on a self-encoder. The method comprises: acquiring current data of a laser radar system; performing space division on the current data to determine a plurality of data blocks; determining a local distance function field from the data blocks using a trained self-encoder network; determining local scene surfaces from the local distance function fields using an isosurface extraction algorithm; and splicing the local scene surfaces to determine the scene surface. The invention can incrementally reconstruct a high-quality complete scene from multiple consecutive frames of point clouds acquired by a laser radar, has low scene storage overhead and high reconstruction speed, and can handle reconstruction of most outdoor scenes.

Description

Point cloud reconstruction method and system based on self-encoder
Technical Field
The invention relates to the field of computer vision and computer graphics, in particular to a point cloud reconstruction method and system based on an auto-encoder.
Background
Three-dimensional reconstruction refers to the process of generating a three-dimensional model from two-dimensional image data or three-dimensional point cloud data. It has very important applications in computer vision, computer graphics, and robotics, and is the basis of applications such as autonomous driving, terrain generation, and augmented reality.
Three-dimensional reconstruction is mainly divided into two types: three-dimensional reconstruction based on camera images and three-dimensional reconstruction based on laser radar. Image-based three-dimensional reconstruction mainly comprises the steps of camera calibration, feature point extraction, feature matching and pose calculation, dense-matching depth estimation, and surface reconstruction. However, image-based three-dimensional reconstruction depends on texture and illumination and cannot work when texture is lacking or illumination is insufficient at night; large errors occur particularly in the feature-matching pose calculation and depth estimation steps.
At present, three-dimensional reconstruction based on multi-line laser radar offers high distance-sensing precision, is not affected by illumination changes, and greatly improves stability and robustness compared with image-based three-dimensional reconstruction. Point cloud reconstruction algorithms mainly fall into two types. The first is the traditional pipeline, which solves the pose with an iterative closest point algorithm and reconstructs the surface with a voxel fusion algorithm; this approach supports incremental three-dimensional reconstruction, but for large-scale scenes the storage overhead grows sharply as the map is updated. The second is based on deep learning, which can store a distance function field as a shape code; however, existing networks must be trained for each specific object, lack generalization, and cannot process incremental point cloud data.
In order to solve the problems existing in the prior art in the field, a new point cloud reconstruction method needs to be provided.
Disclosure of Invention
The invention aims to provide a point cloud reconstruction method and a point cloud reconstruction system based on a self-encoder, which can incrementally reconstruct a high-quality complete scene from multiple consecutive frames of point clouds acquired by a laser radar, with low scene storage overhead and high reconstruction speed, and which can handle reconstruction of most outdoor scenes.
In order to achieve the purpose, the invention provides the following scheme:
a point cloud reconstruction method based on an auto-encoder comprises the following steps:
acquiring current data of a laser radar system; the current data includes: three-dimensional point cloud data of the current environment obtained by multi-line laser radar scanning, space coordinates obtained by a global positioning system, and the current acceleration obtained by an inertial navigation element;
performing space division on the current data to determine a plurality of data blocks; the data block is a plurality of continuous frame point clouds in a local space time domain;
determining a local distance function field by adopting a trained self-encoder network according to the data block; the trained self-encoder network takes a data block as input and a local distance function field as output; the trained self-encoder network comprises: a shape encoder, max pooling, and a shape decoder; the shape encoder encodes the data blocks of different time-sequence frames into shape codes in a geometric shape domain; the max pooling fuses the shape codes of the different time-sequence frames into a local-space shape code; the shape decoder decodes the local-space shape code into a local-space distance function field;
determining the surface of a local scene by using an isosurface extraction algorithm according to the local distance function field;
and splicing the local scene surfaces to determine the scene surfaces.
Optionally, the determining, according to the data block, the local distance function field by using the trained self-encoder network specifically includes:
acquiring historical data of a laser radar system;
carrying out space division on historical data;
determining a local distance function field for the divided historical data by adopting a traditional voxel fusion algorithm;
determining a training set according to the divided historical data and the corresponding local distance function field;
and training the self-encoder network by using the training set to determine the trained self-encoder network.
Optionally, the loss function of the trained self-encoder network includes:
the training phase comprises: a directed distance function field distance loss function, a directed distance field direction loss function, and a directed distance field occupancy loss function;
the testing stage comprises: a directed distance function field distance loss function, a directed distance field direction loss function, a directed distance field occupancy loss function, and a point cloud to patch loss function.
Optionally, the determining the scene surface by splicing the local scene surfaces specifically includes:
taking the space absolute coordinates obtained by the global positioning system as an initial value, and integrating the acceleration obtained by the inertial navigation element to obtain the current sensor external parameter data;
and splicing the local scene surfaces by utilizing the current sensor external parameters to determine the scene surface.
An auto-encoder based point cloud reconstruction system, comprising:
the current data acquisition module is used for acquiring current data of the laser radar system; the current data includes: three-dimensional point cloud data of the current environment obtained by multi-line laser radar scanning, space coordinates obtained by a global positioning system, and the current acceleration obtained by an inertial navigation element;
the current data dividing module is used for carrying out space division on the current data and determining a plurality of data blocks; the data block is a plurality of continuous frame point clouds in a local space time domain;
the local distance function field determining module is used for determining a local distance function field by adopting a trained self-encoder network according to the data block; the trained self-encoder network takes a data block as input and a local distance function field as output; the trained self-encoder network comprises: a shape encoder, max pooling, and a shape decoder; the shape encoder is used for encoding the data blocks of different time-sequence frames into shape codes in a geometric shape domain; the max pooling is used for fusing the shape codes of the different time-sequence frames into a local-space shape code; the shape decoder is used for decoding the local-space shape code into a local-space distance function field;
the local scene surface determining module is used for determining a local scene surface by utilizing an isosurface extraction algorithm according to the local distance function field;
and the scene surface determining module is used for splicing the local scene surfaces to determine the scene surfaces.
Optionally, the local distance function field determining module specifically includes:
the historical data acquisition unit is used for acquiring historical data of the laser radar system;
the historical data dividing unit is used for carrying out space division on the historical data;
the local distance function field determining unit is used for determining a local distance function field for the divided historical data by adopting a traditional voxel fusion algorithm;
the training set determining unit is used for determining a training set according to the divided historical data and the corresponding local distance function field;
and the trained self-encoder network determining unit is used for determining the trained self-encoder network by utilizing the training set to train the self-encoder network.
Optionally, the loss function of the trained self-encoder network includes:
the training phase comprises: a directed distance function field distance loss function, a directed distance field direction loss function, and a directed distance field occupancy loss function;
the testing stage comprises: a directed distance function field distance loss function, a directed distance field direction loss function, a directed distance field occupancy loss function, and a point cloud to patch loss function.
Optionally, the scene surface determination module specifically includes:
the current sensor external parameter data determining unit is used for integrating the acceleration obtained by the inertial navigation element by taking the space absolute coordinate obtained by the global positioning system as an initial value to obtain the current sensor external parameter data;
and the scene surface determining unit is used for splicing the local scene surfaces by utilizing the current sensor external parameters to determine the scene surface.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a point cloud reconstruction method and a point cloud reconstruction system based on a self-encoder, which are characterized in that continuous multi-frame point clouds collected by a laser radar are divided on a space domain, then local space point clouds are generated into a local directed distance function field through a self-encoder network, a shape encoder maps local point cloud blocks of different time sequence frames to corresponding geometric shape domains, a maximum pooling layer fuses the geometric shape domains of the different time sequence frames, a shape decoder maps the geometric shape domains to the corresponding local directed distance function field, then a local scene surface is generated through an isosurface extraction algorithm, and finally the local scene surface is spliced into a complete scene. The method can generate a high-quality scene surface, and compared with the traditional voxel fusion algorithm, the storage cost is greatly reduced; compared with other point cloud reconstruction algorithms based on neural networks, the method does not need to train a specific network for a specific object, so that the method has the advantages of higher generalization, higher reconstruction speed and richer local details.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a point cloud reconstruction method based on a self-encoder according to the present invention;
FIG. 2 is a schematic diagram illustrating a principle of a point cloud reconstruction method based on an auto-encoder according to the present invention;
fig. 3 is a schematic structural diagram of a point cloud reconstruction system based on an auto-encoder according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a point cloud reconstruction method and a point cloud reconstruction system based on a self-encoder, which can incrementally reconstruct a high-quality complete scene from multiple consecutive frames of point clouds acquired by a laser radar, with low scene storage overhead and high reconstruction speed, and which can handle reconstruction of most outdoor scenes.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a schematic flow chart of the point cloud reconstruction method based on a self-encoder provided by the present invention, and Fig. 2 is a schematic principle diagram of the method. As shown in Fig. 1 and Fig. 2, the point cloud reconstruction method based on a self-encoder provided by the present invention comprises:
s101, acquiring current data of a laser radar system; the current data includes: the method comprises the steps that multi-line laser radar scans to obtain three-dimensional point cloud data of a current environment, a global positioning system obtains space coordinates, and an inertial navigation element obtains current acceleration;
s102, performing space division on the current data, and determining a plurality of data blocks; the data block is a plurality of continuous frames of point clouds in a local space time domain; the data block is a cube with the side length of 10 meters, and a plurality of continuous frame point clouds in a built-in time domain; the different timing frames include: when the number of the frame point clouds in the local cubic space is too small at a single moment, the system can automatically filter the point clouds, and meanwhile, most of the local cubic space is kept receiving 3 continuous frame point clouds.
S103, determining a local distance function field by adopting the trained self-encoder network according to the data block; the trained self-encoder network takes a data block as input and a local distance function field as output; the trained self-encoder network comprises: a shape encoder, max pooling, and a shape decoder; the shape encoder encodes the data blocks of different time-sequence frames into shape codes in a geometric shape domain; the max pooling fuses the shape codes of the different time-sequence frames into a local-space shape code; the shape decoder decodes the local-space shape code into a local-space distance function field;
s103 specifically comprises the following steps:
acquiring historical data of a laser radar system;
carrying out space division on historical data;
determining a local distance function field for the divided historical data by adopting a traditional voxel fusion algorithm;
determining a training set according to the divided historical data and the corresponding local distance function field;
and training the self-encoder network by using the training set to determine the trained self-encoder network.
The loss function of the trained self-encoder network comprises:
the training phase comprises: a directed distance function field distance loss function, a directed distance field direction loss function, and a directed distance field occupancy loss function. The shape decoder D is composed mainly of eight fully-connected layers, i.e., a multilayer perceptron (MLP), each fully-connected layer followed by a batch normalization layer (BatchNorm); the last two fully-connected layers of the shape decoder D form two output branches, which output the directed distance function field through a tanh activation layer and the surface occupancy field through a sigmoid layer, respectively.
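For illustration, a minimal PyTorch sketch of this self-encoder network follows. The eight fully-connected decoder layers with batch normalization and the tanh and sigmoid output heads follow the description above; the PointNet-style per-point encoder, the layer widths, and the conditioning of the decoder on a query point are illustrative assumptions, not the patented architecture.

```python
# Sketch of the self-encoder: per-frame shape encoder, max pooling across
# time-sequence frames, and a decoder with six shared fully-connected
# layers plus two output heads (eight fully-connected layers in total).
import torch
import torch.nn as nn

class ShapeEncoder(nn.Module):
    """Maps one frame's point block (B, N, 3) to a shape code (B, C)."""
    def __init__(self, code_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )

    def forward(self, pts):
        feats = self.mlp(pts)            # (B, N, C) per-point features
        return feats.max(dim=1).values   # max over points -> (B, C)

class ShapeDecoder(nn.Module):
    """Decodes (shape code, query point) into a directed distance and occupancy."""
    def __init__(self, code_dim=256, hidden=256):
        super().__init__()
        dims = [code_dim + 3] + [hidden] * 6     # six shared layers ...
        layers = []
        for i in range(len(dims) - 1):
            layers += [nn.Linear(dims[i], dims[i + 1]),
                       nn.BatchNorm1d(dims[i + 1]), nn.ReLU()]
        self.trunk = nn.Sequential(*layers)
        self.sdf_head = nn.Linear(hidden, 1)     # ... plus two heads = 8 FC layers
        self.occ_head = nn.Linear(hidden, 1)

    def forward(self, code, x):
        h = self.trunk(torch.cat([code, x], dim=-1))
        # tanh head outputs the directed distance, sigmoid head the occupancy.
        return torch.tanh(self.sdf_head(h)), torch.sigmoid(self.occ_head(h))

class PointCloudAutoencoder(nn.Module):
    def __init__(self, code_dim=256):
        super().__init__()
        self.encoder = ShapeEncoder(code_dim)
        self.decoder = ShapeDecoder(code_dim)

    def forward(self, frames, x):
        # frames: list of (B, N_t, 3) blocks from different time-sequence frames.
        codes = torch.stack([self.encoder(f) for f in frames], dim=1)  # (B, T, C)
        code = codes.max(dim=1).values   # max pooling fuses the frame codes
        return self.decoder(code, x)     # local distance field sampled at x
```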
Wherein, the constraints of the self-encoder in the training stage comprise the directed distance function field distance loss function, the directed distance field direction loss function, and the directed distance field occupancy loss function, namely:

L = ω1*L_SDF + ω2*L_sign + ω3*L_occu

where [ω1, ω2, ω3] are the corresponding weighting coefficients.

The directed distance field direction loss function is:

L_sign = sigmoid(-ω20*f_θ(x)*s)

where ω20 is usually set to a large value, e.g., 10000, so that when the network prediction f_θ(x) and the true directed distance s have opposite signs the loss is close to its maximum, and when they have the same sign the loss is close to zero.
The directed distance field occupancy loss function is the binary cross-entropy:

L_occu = -(O_real*log(O_net) + (1 - O_real)*log(1 - O_net))

where O_net is the occupancy probability of vertex x output by the network, and O_real ∈ {0, 1} is the true occupancy of vertex x.
The testing stage comprises the directed distance function field distance loss function, the directed distance field direction loss function, the directed distance field occupancy loss function, and the point cloud to patch loss function, namely:

L = ω1*L_SDF + ω2*L_sign + ω3*L_occu + ω4*L_recon

where [ω1, ω2, ω3, ω4] are the corresponding weighting coefficients.
The reconstruction constraint, that is, the point cloud to patch loss function, is introduced below. Because the isosurface extraction is implemented outside the network, this loss is only used to measure the reconstruction effect after network training is complete:

L_recon = (1/N) * Σ_y δ(M, y)

where M is the surface obtained by reconstruction, y ranges over the N input real vertex coordinates, and δ(M, y) computes the distance from the point cloud to the scene surface.
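For illustration, the loss terms above can be sketched in PyTorch as follows. The direction and occupancy losses follow the formulas given above; the exact form of the distance loss L_SDF is not reproduced here, so an L1 regression on the directed distance is assumed, and the weighting coefficients are illustrative placeholders.

```python
# Sketch of the training-stage losses. sdf_pred is the tanh output in
# (-1, 1), occ_pred the sigmoid occupancy probability; sdf_true and
# occ_true are the supervision from the fused voxel distance field.
import torch
import torch.nn.functional as F

def sdf_distance_loss(sdf_pred, sdf_true):
    # Assumed form of L_SDF: L1 regression on the directed distance.
    return torch.abs(sdf_pred - sdf_true).mean()

def sdf_direction_loss(sdf_pred, sdf_true, omega20=10000.0):
    # L_sign = sigmoid(-omega20 * f_theta(x) * s): near its maximum when the
    # predicted and true distances disagree in sign, near zero when they agree.
    return torch.sigmoid(-omega20 * sdf_pred * sdf_true).mean()

def occupancy_loss(occ_pred, occ_true):
    # L_occu: binary cross-entropy between predicted and true occupancy.
    return F.binary_cross_entropy(occ_pred, occ_true)

def training_loss(sdf_pred, sdf_true, occ_pred, occ_true, w=(1.0, 1.0, 1.0)):
    # L = w1*L_SDF + w2*L_sign + w3*L_occu (the weights here are placeholders).
    return (w[0] * sdf_distance_loss(sdf_pred, sdf_true)
            + w[1] * sdf_direction_loss(sdf_pred, sdf_true)
            + w[2] * occupancy_loss(occ_pred, occ_true))
```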
The traditional voxel fusion algorithm specifically comprises the following steps:
and transforming different frame point clouds to the same scale space by sensor external reference data, obtaining a symbolic distance value of the voxel field from the implicit surface based on a reverse ray projection (Raycasting) algorithm, and performing symbolic distance value weighted fusion based on Gaussian weight to obtain a final distance function field.
S104, determining the surface of a local scene by using an isosurface extraction algorithm according to the local distance function field;
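For illustration, a minimal sketch of this isosurface extraction step follows, assuming the local distance function field has been sampled on a regular grid for each cube. It uses the marching cubes routine from scikit-image, one standard isosurface extraction algorithm, to pull out the zero level set; the grid resolution and block origin handling are illustrative.

```python
# Sketch of S104: extract the zero isosurface of one block's distance field.
import numpy as np
from skimage import measure

def extract_local_surface(sdf_grid, voxel_size, origin):
    """sdf_grid: (X, Y, Z) sampled distance field for one local cube."""
    verts, faces, normals, _ = measure.marching_cubes(
        sdf_grid, level=0.0, spacing=(voxel_size,) * 3)
    verts += np.asarray(origin)   # move from block-local to world coordinates
    return verts, faces, normals
```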
and S105, splicing the local scene surfaces to determine the scene surfaces.
S105 specifically comprises the following steps:
taking the space absolute coordinates obtained by the global positioning system as an initial value, and integrating the acceleration obtained by the inertial navigation element to obtain the current sensor external parameter data;
and splicing the local scene surfaces by utilizing the current sensor external parameters to determine the scene surface.
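For illustration, a minimal sketch of this splicing step follows. It assumes gravity-compensated accelerations already expressed in the world frame, takes the GPS coordinate as the initial position, integrates the acceleration twice to track the sensor position, and concatenates the local meshes; orientation handling is omitted and all names are illustrative.

```python
# Sketch of S105: integrate IMU acceleration from a GPS initial position
# to obtain sensor positions, then splice block meshes into one scene mesh.
import numpy as np

def integrate_position(p0, v0, accels, dt):
    """p0: GPS initial position (3,); accels: (T, 3) world-frame accelerations."""
    p = np.asarray(p0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    positions = [p.copy()]
    for a in accels:
        v = v + np.asarray(a, dtype=float) * dt   # first integration: velocity
        p = p + v * dt                            # second integration: position
        positions.append(p.copy())
    return np.stack(positions)

def splice(surfaces, translations):
    """surfaces: list of (verts, faces); translations: per-block offsets."""
    all_verts, all_faces, offset = [], [], 0
    for (verts, faces), t in zip(surfaces, translations):
        all_verts.append(verts + t)       # apply the block's extrinsic offset
        all_faces.append(faces + offset)  # reindex faces into the merged mesh
        offset += len(verts)
    return np.concatenate(all_verts), np.concatenate(all_faces)
```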
Fig. 3 is a schematic structural diagram of the point cloud reconstruction system based on a self-encoder provided by the present invention. As shown in Fig. 3, the point cloud reconstruction system based on a self-encoder provided by the present invention includes:
a current data acquiring module 301, configured to acquire current data of the laser radar system; the current data includes: three-dimensional point cloud data of the current environment obtained by multi-line laser radar scanning, space coordinates obtained by a global positioning system, and the current acceleration obtained by an inertial navigation element;
a current data partitioning module 302, configured to perform spatial partitioning on the current data, and determine a plurality of data blocks; the data block is a plurality of continuous frames of point clouds in a local space time domain;
a local distance function field determining module 303, configured to determine a local distance function field by adopting a trained self-encoder network according to the data block; the trained self-encoder network takes a data block as input and a local distance function field as output; the trained self-encoder network comprises: a shape encoder, max pooling, and a shape decoder; the shape encoder is used for encoding the data blocks of different time-sequence frames into shape codes in a geometric shape domain; the max pooling is used for fusing the shape codes of the different time-sequence frames into a local-space shape code; the shape decoder is used for decoding the local-space shape code into a local-space distance function field;
a local scene surface determining module 304, configured to determine a local scene surface by using an iso-surface extraction algorithm according to the local distance function field;
and a scene surface determining module 305, configured to splice the local scene surfaces to determine a scene surface.
The local distance function field determining module 303 specifically includes:
the historical data acquisition unit is used for acquiring historical data of the laser radar system;
the historical data dividing unit is used for carrying out space division on the historical data;
the local distance function field determining unit is used for determining a local distance function field for the divided historical data by adopting a traditional voxel fusion algorithm;
the training set determining unit is used for determining a training set according to the divided historical data and the corresponding local distance function field;
and the trained self-encoder network determining unit is used for determining the trained self-encoder network by utilizing the training set to train the self-encoder network.
The loss function of the trained self-encoder network comprises:
the training phase comprises: a directed distance function field distance loss function, a directed distance field direction loss function, and a directed distance field occupancy loss function;
the testing stage comprises: a directed distance function field distance loss function, a directed distance field direction loss function, a directed distance field occupancy loss function, and a point cloud to patch loss function.
The scene surface determination module 305 specifically includes:
the current sensor external parameter data determining unit is used for integrating the acceleration obtained by the inertial navigation element by taking the space absolute coordinate obtained by the global positioning system as an initial value to obtain the current sensor external parameter data;
and the scene surface determining unit is used for splicing the local scene surfaces according to the current external sensor parameters to determine the scene surfaces.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principle and embodiments of the present invention are explained herein using specific examples; the above description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. A point cloud reconstruction method based on an auto-encoder is characterized by comprising the following steps:
acquiring current data of a laser radar system; the current data includes: three-dimensional point cloud data of the current environment obtained by multi-line laser radar scanning, space coordinates obtained by a global positioning system, and the current acceleration obtained by an inertial navigation element;
performing space division on the current data to determine a plurality of data blocks; the data block is a plurality of continuous frames of point clouds in a local space time domain;
determining a local distance function field by adopting a trained self-encoder network according to the data block; the trained self-encoder network takes a data block as input and a local distance function field as output; the trained self-encoder network comprises: a shape encoder, max pooling, and a shape decoder; the shape encoder encodes the data blocks of different time-sequence frames into shape codes in a geometric shape domain; the max pooling fuses the shape codes of the different time-sequence frames into a local-space shape code; the shape decoder decodes the local-space shape code into a local-space distance function field;
determining the surface of a local scene by using an isosurface extraction algorithm according to the local distance function field;
and splicing the local scene surfaces to determine the scene surfaces.
2. The point cloud reconstruction method based on the self-encoder according to claim 1, wherein the determining the local distance function field by using the trained self-encoder network according to the data block specifically comprises:
acquiring historical data of a laser radar system;
carrying out space division on historical data;
determining a local distance function field for the divided historical data by adopting a traditional voxel fusion algorithm;
determining a training set according to the divided historical data and the corresponding local distance function field;
and training the self-encoder network by using the training set to determine the trained self-encoder network.
3. An auto-encoder based point cloud reconstruction method according to claim 2, wherein the loss function of the trained auto-encoder network comprises:
the training phase comprises: a directed distance function field distance loss function, a directed distance field direction loss function, and a directed distance field occupancy loss function;
the testing stage comprises: a directed distance function field distance loss function, a directed distance field direction loss function, a directed distance field occupancy loss function, and a point cloud to patch loss function.
4. The point cloud reconstruction method based on the self-encoder as claimed in claim 1, wherein the determining the scene surface by stitching the local scene surface specifically comprises:
taking the space absolute coordinates obtained by the global positioning system as an initial value, and integrating the acceleration obtained by the inertial navigation element to obtain the current sensor external parameter data;
and splicing the local scene surfaces by utilizing the current sensor external parameters to determine the scene surface.
5. An auto-encoder based point cloud reconstruction system, comprising:
the current data acquisition module is used for acquiring current data of the laser radar system; the current data includes: three-dimensional point cloud data of the current environment obtained by multi-line laser radar scanning, space coordinates obtained by a global positioning system, and the current acceleration obtained by an inertial navigation element;
the current data dividing module is used for carrying out space division on the current data and determining a plurality of data blocks; the data block is a plurality of continuous frame point clouds in a local space time domain;
the local distance function field determining module is used for determining a local distance function field by adopting a trained self-encoder network according to the data block; the trained self-encoder network takes a data block as input and a local distance function field as output; the trained self-encoder network comprises: a shape encoder, max pooling, and a shape decoder; the shape encoder is used for encoding the data blocks of different time-sequence frames into shape codes in a geometric shape domain; the max pooling is used for fusing the shape codes of the different time-sequence frames into a local-space shape code; the shape decoder is used for decoding the local-space shape code into a local-space distance function field;
the local scene surface determining module is used for determining a local scene surface by utilizing an isosurface extraction algorithm according to the local distance function field;
and the scene surface determining module is used for splicing the local scene surfaces to determine the scene surfaces.
6. The self-encoder based point cloud reconstruction system of claim 5, wherein the local distance function field determination module specifically comprises:
the historical data acquisition unit is used for acquiring historical data of the laser radar system;
the historical data dividing unit is used for carrying out space division on the historical data;
the local distance function field determining unit is used for determining a local distance function field for the divided historical data by adopting a traditional voxel fusion algorithm;
the training set determining unit is used for determining a training set according to the divided historical data and the corresponding local distance function field;
and the trained self-encoder network determining unit is used for determining the trained self-encoder network by utilizing the training set to train the self-encoder network.
7. An auto-encoder based point cloud reconstruction system according to claim 6, wherein the loss function of the trained auto-encoder network comprises:
the training phase comprises: a directed distance function field distance loss function, a directed distance field direction loss function, and a directed distance field occupancy loss function;
the testing stage comprises: a directed distance function field distance loss function, a directed distance field direction loss function, a directed distance field occupancy loss function, and a point cloud to patch loss function.
8. The point cloud reconstruction system based on an auto-encoder as claimed in claim 5, wherein the scene surface determination module comprises:
the current sensor external parameter data determining unit is used for integrating the acceleration obtained by the inertial navigation element by taking the space absolute coordinate obtained by the global positioning system as an initial value to obtain the current sensor external parameter data;
and the scene surface determining unit is used for splicing the local scene surfaces by utilizing the current sensor external parameters to determine the scene surface.
CN202210400962.0A 2022-04-18 2022-04-18 Point cloud reconstruction method and system based on self-encoder Pending CN114663600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210400962.0A CN114663600A (en) 2022-04-18 2022-04-18 Point cloud reconstruction method and system based on self-encoder


Publications (1)

Publication Number Publication Date
CN114663600A (en) 2022-06-24

Family

ID=82035486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210400962.0A Pending CN114663600A (en) 2022-04-18 2022-04-18 Point cloud reconstruction method and system based on self-encoder

Country Status (1)

Country Link
CN (1) CN114663600A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115474047A (en) * 2022-09-13 2022-12-13 福州大学 LiDAR point cloud encoding method and decoding method based on enhanced map correlation



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination