CN112258626A - Three-dimensional model generation method and system for generating dense point cloud based on image cascade - Google Patents


Info

Publication number
CN112258626A
CN112258626A
Authority
CN
China
Prior art keywords
point cloud
image
model
dense point
dense
Prior art date
Legal status
Pending
Application number
CN202010986409.0A
Other languages
Chinese (zh)
Inventor
刘丽
王萍
田甜
张静静
王天时
张化祥
Current Assignee
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN202010986409.0A
Publication of CN112258626A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a three-dimensional model generation method and system for generating a dense point cloud based on image cascading, comprising the following steps: performing pre-reconstruction processing on an acquired image to obtain a corresponding sparse point cloud model; sequentially performing two rounds of uniform up-sampling on the sparse point cloud model to obtain a dense point cloud model, where each round of uniform up-sampling comprises feature extraction, residual graph convolution and up-sampling processing; and reconstructing the dense point cloud model and outputting a three-dimensional graph model associated with the image. A high-resolution target point cloud is reconstructed by combining image pre-reconstruction with two rounds of uniform point cloud up-sampling, and joint optimization is realized through staged training and network fine-tuning, so that the generated dense point cloud is uniformly distributed and the visual realism of the dense point cloud model is enhanced; an image re-description mechanism makes the generated dense point cloud more strongly associated with the original single image.

Description

Three-dimensional model generation method and system for generating dense point cloud based on image cascade
Technical Field
The invention relates to the technical field of three-dimensional design, and in particular to a three-dimensional model generation method and system for generating dense point clouds based on image cascading.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Image-based three-dimensional model reconstruction refers to generating a three-dimensional model with a realistic effect from one or more images. Generating a high-resolution model from a given image is very important for practical applications such as robotics, computer vision and autonomous driving; in the field of computer vision in particular, high-resolution three-dimensional reconstruction of a target object is often required.
The human visual system can process a retinal image of a target object to extract its underlying three-dimensional structure; its three-dimensional perception is not limited to reconstructing the overall shape but also captures local details of the object's surface.
Similar to the human visual system, machines can also learn to perceive various target objects in the three-dimensional world, and the point cloud is one representation through which machines learn and perceive target objects. Compared with three-dimensional models represented by geometric primitives or simple meshes, a point cloud may be less efficient at representing the underlying geometric structure, but because it does not require defining multiple basic units or connection schemes, it has many advantages: a simple, uniform three-dimensional structure; ease of learning; ease of geometric transformation and deformation; and full capture of the model's surface information.
Point clouds are widely used due to their vectorized and compact representation of information, but their unordered and discrete geometric nature reduces the accuracy of reconstructing a point cloud from an image. The inventors found that reconstructing a three-dimensional object from a single image suffers from the following problems: early three-dimensional model reconstruction methods were based mainly on geometric features; one line of work reconstructs by finding the best-fit parameters between the model projection and the input image, while another recovers the geometry from three-dimensional cues in the image such as shading, texture and contours. These methods require a large amount of prior information and place high demands on illumination and environment, so it is often difficult to reconstruct a high-quality object model.
Disclosure of Invention
In order to solve the above problems, the present invention provides a three-dimensional model generation method and system for generating dense point clouds based on image cascading.
In order to achieve the purpose, the invention adopts the following technical scheme:
In a first aspect, the present invention provides a three-dimensional model generation method for generating a dense point cloud based on image cascading, comprising:
carrying out pre-reconstruction processing on the obtained image to obtain a corresponding sparse point cloud model;
sequentially performing two rounds of uniform up-sampling on the sparse point cloud model to obtain a dense point cloud model, wherein each round of uniform up-sampling comprises feature extraction, residual graph convolution and up-sampling processing;
and reconstructing the dense point cloud model and outputting a three-dimensional graph model associated with the image.
In a second aspect, the present invention provides a three-dimensional model generation system for generating dense point clouds based on image cascading, comprising:
the pre-reconstruction module is used for carrying out pre-reconstruction processing on the acquired image to obtain a corresponding sparse point cloud model;
the dense point cloud model generation module is used for sequentially performing two rounds of uniform up-sampling on the sparse point cloud model to obtain the dense point cloud model, wherein each round of uniform up-sampling comprises feature extraction, residual graph convolution and up-sampling processing;
and the three-dimensional model module is used for reconstructing the dense point cloud model and outputting a three-dimensional graph model associated with the image.
In a third aspect, the present invention provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the method of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
the method comprises the steps of reconstructing a high-resolution target point cloud through image pre-reconstruction and point cloud up-sampling, firstly combining pre-reconstruction and uniform up-sampling to generate a high-resolution point cloud, and realizing joint optimization through staged training; and then realizing bidirectional association through an image re-description mechanism and enhancing semantic consistency between the image and the point cloud.
Through the synergy of pre-reconstruction, uniform up-sampling and image re-description, the method improves the accuracy of the generated dense point cloud; the generated dense point cloud is uniformly distributed, the visual realism of the dense point cloud model is enhanced, and the generated dense point cloud is more strongly associated with the original single image.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a flowchart of a three-dimensional model generation method for generating a dense point cloud based on image cascade according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of a framework for cascade generation of dense point clouds according to embodiment 1 of the present invention.
Detailed Description
the invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and "comprising", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article or apparatus.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example 1
As shown in fig. 1, the present embodiment provides a three-dimensional model generation method for generating a dense point cloud based on image cascading, comprising:
S1: performing pre-reconstruction processing on the obtained image to obtain a corresponding sparse point cloud model;
S2: sequentially performing two rounds of uniform up-sampling on the sparse point cloud model to obtain a dense point cloud model, wherein each round of uniform up-sampling comprises feature extraction, residual graph convolution and up-sampling processing;
S3: reconstructing the dense point cloud model and outputting a three-dimensional graph model associated with the image.
By extracting the information of a single image during dense point cloud generation, visual technology and three-dimensional modeling technology are fused; pre-reconstruction, uniform up-sampling and image re-description of the three-dimensional model are realized, a three-dimensional model meeting the user's design intent is finally generated, and three-dimensional model reconstruction from a single image is achieved.
In this embodiment, the three-dimensional design object is exemplified by a chair, and it is understood that in other embodiments, the design object may be a vehicle, an airplane, a table or a house, as long as the design object can provide visual images.
As shown in fig. 2, the process of generating the dense point cloud model from the single image of the chair specifically includes:
In step S1, a pre-reconstruction process is proposed in order to generate, from a given single image, dense point clouds with visual realism and semantic consistency.
The pre-reconstruction includes:
S1-1: inputting a single RGB image into an encoder-decoder network consisting of several convolutional layers and fully connected layers;
S1-2: extracting the features of the single RGB image through this network and outputting a sparse point cloud;
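For illustration, a minimal PyTorch sketch of such an encoder-decoder (convolutional layers followed by fully connected layers, mapping one RGB image to a sparse point cloud) follows; all layer sizes and the point count n_points are assumptions, not values taken from the patent:

```python
import torch
import torch.nn as nn

class PreReconstructionNet(nn.Module):
    """Hypothetical encoder-decoder: conv layers encode the RGB image,
    fully connected layers decode an (n_points x 3) sparse point cloud."""

    def __init__(self, n_points: int = 1024):
        super().__init__()
        self.n_points = n_points
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, n_points * 3),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, 3, H, W) -> sparse point cloud of shape (B, n_points, 3)
        return self.decoder(self.encoder(img)).view(-1, self.n_points, 3)
```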
S1-3: to better train the network, the pre-reconstruction loss function is defined as

$\mathcal{L}_{pre} = \sum_i d\left(P_i^{pre}, P_i^{T}\right)$

where $P_i^{pre}$ and $P_i^{T}$ respectively represent the point cloud of the i-th reconstructed sample (pre) and the point cloud of the i-th real sample (T), and $d\left(P_i^{pre}, P_i^{T}\right)$ denotes the model distance between the point clouds $P_i^{pre}$ and $P_i^{T}$, which may be the Chamfer Distance (CD) or the Earth Mover's Distance (EMD).
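For concreteness, a minimal PyTorch sketch of this loss under the Chamfer Distance choice of model distance follows; the batched shapes and function names are illustrative assumptions:

```python
import torch

def chamfer_distance(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds p1 (B, N, 3) and
    p2 (B, M, 3); a dense O(N*M) reference implementation."""
    diff = p1.unsqueeze(2) - p2.unsqueeze(1)   # (B, N, M, 3)
    dist = (diff ** 2).sum(-1)                 # pairwise squared distances
    # Nearest neighbour in the other cloud, averaged in both directions
    return dist.min(dim=2).values.mean(dim=1) + dist.min(dim=1).values.mean(dim=1)

def pre_reconstruction_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    # L_pre: model distance d(P_pre, P_T) averaged over the batch of samples
    return chamfer_distance(pred, gt).mean()
```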
In step S2, the cascade generation process of the dense point cloud performs two successive rounds of uniform up-sampling on the sparse point cloud generated by the pre-reconstruction process, so as to generate the dense point cloud uniformly and with higher visual realism and semantic consistency; each round of uniform up-sampling consists of a feature extraction network, a residual graph convolution block and an up-sampling block.
The feature extraction network guides the generation of the dense point cloud by acquiring point cloud features as auxiliary information, so that the generated point cloud has higher visual realism and semantic consistency. In this embodiment, a feature extraction network block is designed that takes a point cloud as input and outputs the corresponding features f. Specifically:
for each point P of shape 1 × 3, its k nearest neighbors N of shape k × 3 are obtained and the relative coordinates N - P are computed; P is then converted into a feature $f_p$ of shape 1 × c by point-wise convolution. In this embodiment, k is set to 6, c is set to 64, and the number of convolutional layers is set to 3.
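A hypothetical PyTorch sketch of this feature extraction block (k nearest neighbours, relative coordinates N - P, then three point-wise convolutions with k = 6 and c = 64; the max-aggregation over neighbours is an assumption):

```python
import torch
import torch.nn as nn

class FeatureExtraction(nn.Module):
    """Per-point feature extraction: k nearest neighbours, relative
    coordinates N - P, three point-wise conv layers (k=6, c=64 as in the
    embodiment); max-aggregation over neighbours is an assumption."""

    def __init__(self, k: int = 6, c: int = 64):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Conv2d(3, c, 1), nn.ReLU(),
            nn.Conv2d(c, c, 1), nn.ReLU(),
            nn.Conv2d(c, c, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) -> per-point features of shape (B, N, c)
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices
        neighbours = torch.gather(
            xyz.unsqueeze(1).expand(-1, xyz.size(1), -1, -1),
            2, idx.unsqueeze(-1).expand(-1, -1, -1, 3))    # (B, N, k, 3)
        rel = neighbours - xyz.unsqueeze(2)                # N - P
        f = self.mlp(rel.permute(0, 3, 1, 2))              # (B, c, N, k)
        return f.max(dim=-1).values.permute(0, 2, 1)       # aggregate over k
```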
The residual graph convolution block takes the point cloud and its corresponding features as input and further extracts residual features.
the core of the residual map volume block is G-conv, which is defined on the map G ═ v (e), and is calculated as follows:
Figure BDA0002689411280000061
wherein the content of the first and second substances,
Figure BDA0002689411280000065
features representing a jth layer vertex p; w is aiIs a hyper-parameter of one; v (p) is all vertices connected to p, defined by the adjacency matrix ε, q is a point belonging to V (p); since a predefined adjacency matrix epsilon for the point cloud is not available, V (p) is defined as the k nearest neighbors of p in Euclidean space whose coordinates are given by the input point cloud xinAnd (4) defining.
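Since the exact formula image is not recoverable, the following sketch instantiates one plausible G-conv layer consistent with the description (a self term plus a neighbour term over the k nearest neighbours); the weighting scheme and the act flag are assumptions:

```python
import torch
import torch.nn as nn

class GConv(nn.Module):
    """One plausible G-conv layer: w0 acts on the vertex's own feature and
    w1 on the summed features of its k nearest neighbours V(p)."""

    def __init__(self, c_in: int, c_out: int, k: int = 6, act: bool = True):
        super().__init__()
        self.k, self.act = k, act
        self.w0 = nn.Linear(c_in, c_out)   # self term
        self.w1 = nn.Linear(c_in, c_out)   # neighbour term

    def forward(self, xyz: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) defines V(p) via kNN; f: (B, N, c_in)
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices
        neigh = torch.gather(
            f.unsqueeze(1).expand(-1, f.size(1), -1, -1),
            2, idx.unsqueeze(-1).expand(-1, -1, -1, f.size(-1)))
        out = self.w0(f) + self.w1(neigh.sum(dim=2))
        return torch.relu(out) if self.act else out
```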
To better train the generator G, a residual graph convolution loss function is proposed:

$\mathcal{L}_{G} = \mathcal{L}_{dis}\left(y_i, G(x_i)\right) + \lambda\, \mathcal{L}_{gen}\left(G(x_i)\right)$

where λ is a hyper-parameter used to control the balance between the distance loss term $\mathcal{L}_{dis}$ and the generation loss term $\mathcal{L}_{gen}$; $x_i$ is the input sparse point cloud; $y_i$ is the real dense point cloud corresponding to $x_i$; and $G(x_i)$ is the dense point cloud generated by the point cloud up-sampling network. The meanings of the two loss terms are explained below.

$\mathcal{L}_{dis}$ measures the distance between y and the generated dense point cloud $\hat{y} = G(x)$, e.g. using the Chamfer Distance:

$\mathcal{L}_{dis}(y, \hat{y}) = \sum_{p \in y} \min_{q \in \hat{y}} \lVert p - q \rVert_2^2 + \sum_{q \in \hat{y}} \min_{p \in y} \lVert p - q \rVert_2^2$

In addition, $\mathcal{L}_{gen}$ is the adversarial generation term, which rewards the generated point cloud for being judged real by the discriminator D, e.g. in least-squares form:

$\mathcal{L}_{gen}\left(G(x_i)\right) = \left(D\left(G(x_i)\right) - 1\right)^2$
In order for the network to converge faster and to better results, this embodiment introduces a residual network in addition to G-conv, and verifies that the residual connections help to discover the similarity between the low-resolution point cloud and the corresponding high-resolution point cloud.
The up-sampling block takes the point cloud $x_{in}$ and its corresponding features $f_{in}$ as input and predicts the residual between $x_{in}$ and $x_{out}$ rather than directly regressing $x_{out}$:

a G-conv layer converts $f_{in}$, of shape n × c, into a tensor of shape n × 6;

this tensor is reshaped into shape 2n × 3 and denoted $\delta_x$;

the up-sampled point cloud $x_{out}$ is obtained by adding $x_{in}$ and $\delta_x$ point by point, so that each point in the point cloud is converted into two points after up-sampling.

The output feature $f_{out}$ corresponding to the output point cloud is aggregated per point p from the input features of its neighbors, e.g. by point-wise maximization:

$f_{out}(p) = \max_{q \in V[x_{in}](p)} f_{in}(q)$

where $V[x_{in}](p)$ denotes the k nearest neighbors of the point p in the point cloud $x_{in}$.
In this embodiment, the sparse point cloud is first uniformly up-sampled once to obtain a denser point cloud, and this point cloud is distinguished from a real point cloud of the same scale to judge the authenticity of the generated points; the denser point cloud of the current stage is then input into the next uniform up-sampling network to generate the final dense point cloud, as sketched below.
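Chaining two such stages gives the two-round cascade, composed here from the FeatureExtraction and UpsampleBlock sketches above (the per-stage discriminators are omitted; all names are assumptions):

```python
import torch.nn as nn

class CascadeUpsampler(nn.Module):
    """Two successive uniform up-sampling stages: sparse n -> 2n -> 4n,
    composed from the FeatureExtraction and UpsampleBlock sketches above."""

    def __init__(self, c: int = 64, k: int = 6):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.ModuleDict({"feat": FeatureExtraction(k, c),
                           "up": UpsampleBlock(c, k)})
            for _ in range(2)
        ])

    def forward(self, x):
        for stage in self.stages:
            f = stage["feat"](x)   # point features as auxiliary information
            x = stage["up"](x, f)  # one 2x uniform up-sampling round
        return x                   # final dense point cloud
```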
In this embodiment, the cascade generation network for reconstructing the dense point cloud from a single image adopts an end-to-end generation mode, and the generated dense point cloud is uniformly distributed.
In step S3, after the dense point cloud model is generated from the single image, an image re-description mechanism is proposed to improve the visual realism of the point cloud, realize bidirectional association and enhance the semantic consistency between the image and the point cloud.
The image re-description module maps the originally input single image and the finally generated dense point cloud into a common semantic space via two networks, an image encoder and a model encoder; the two encoders measure the similarity between the image and the point cloud, from which the image regeneration loss for target point cloud reconstruction is calculated.
The image encoder employs an Inception-v3 network pre-trained on ImageNet to map the input image into the image semantic space. The input image is first rescaled to 299 × 299 pixels and then fed into the image encoder; the image feature vector $f_I$ is extracted from the last pooling layer of Inception-v3.
The model encoder is a one-dimensional CNN used to map the generated point cloud into a model semantic space. This embodiment extracts the model feature vector $f_M$ using one-dimensional convolution and feature transformation, and converts the model features into a semantic space shared with the image features by adding a feature projection network with three fully connected layers; the calculation is

$f'_M = F_P(f_M)$

where $F_P$ denotes the feature projection network and $f'_M$ is the model feature vector in the image semantic space.
In this embodiment, the dense point cloud model is generated from a single image through the cooperation of several network modules, and an image regeneration loss is introduced: the originally input single image and the finally generated dense point cloud model are mapped into a common space, where the loss between them is calculated. This loss characterizes the visual realism of the generated dense point cloud and strengthens the association between the single image and the finally generated dense point cloud model.
The image regeneration loss $\mathcal{L}_{re}$ is used to enhance the semantic correlation between the reconstructed point cloud and the given image, and is defined as

$\mathcal{L}_{re} = d_{Euc}\left(f_I, f'_M\right)$

where $d_{Euc}(x, y)$ represents the Euclidean distance between vectors x and y.
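A minimal sketch of the feature projection network F_P and the image regeneration loss, assuming a 2048-dimensional image feature (the size of Inception-v3's last pooling layer) and an illustrative model-feature size:

```python
import torch
import torch.nn as nn

class FeatureProjection(nn.Module):
    """F_P: three fully connected layers projecting the model feature f_M
    into the image semantic space; hidden sizes and d_model are assumptions."""

    def __init__(self, d_model: int = 512, d_image: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, 1024), nn.ReLU(),
            nn.Linear(1024, 2048), nn.ReLU(),
            nn.Linear(2048, d_image),
        )

    def forward(self, f_m: torch.Tensor) -> torch.Tensor:
        return self.net(f_m)  # f'_M = F_P(f_M)

def image_regeneration_loss(f_i: torch.Tensor, f_m_proj: torch.Tensor) -> torch.Tensor:
    # L_re = d_Euc(f_I, f'_M): Euclidean distance in the shared space
    return torch.norm(f_i - f_m_proj, p=2, dim=-1).mean()
```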
The final generation loss function of the generator in the cascade generation network for reconstructing a dense point cloud from a single image is

$\mathcal{L}_{total} = \mathcal{L}_{pre} + \lambda_1\, \mathcal{L}_{G} + \lambda_2\, \mathcal{L}_{re}$

where $\lambda_1, \lambda_2$ are hyper-parameters balancing the individual loss terms, $\mathcal{L}_{pre}$ is the pre-reconstruction loss function of the cascade generation network, $\mathcal{L}_{G}$ is the total loss function of the residual graph convolution block, and $\mathcal{L}_{re}$ is the image regeneration loss function of the image re-description module.
In this embodiment, in order to generate dense point clouds with higher visual realism, a point cloud discrimination process is provided; the discriminator D consists of a feature extraction network, a residual graph convolution block and a pooling block.

Using farthest point sampling (FPS), an input point cloud of shape 4n × 3 is converted into an output point cloud $x_{out}$ of shape n × 3, and the feature of the corresponding point p is obtained as

$f(p) = \max_{q \in V[x_{in}](p)} f_{in}(q)$

where max represents the maximization operation performed point by point.

In this embodiment, a local discriminator is constructed for the graph-based generative adversarial network in place of the global discriminator used in previous methods; specifically, the local discriminator down-samples the input several times so that the output contains more than one point.

The loss function of the discriminator D aims to distinguish between real and generated dense point clouds by minimizing, e.g. in least-squares form,

$\mathcal{L}_{D} = \left(D(y_i) - 1\right)^2 + D\left(G(x_i)\right)^2$
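The following sketch shows a reference farthest point sampling routine and a discriminator/generator loss pair in the least-squares form assumed above; d_real and d_fake are assumed to be the scores output by the local discriminator:

```python
import torch

def farthest_point_sample(xyz: torch.Tensor, m: int) -> torch.Tensor:
    """Greedy FPS reference: pick m well-spread points from xyz (B, N, 3);
    returns the selected indices of shape (B, m)."""
    B, N, _ = xyz.shape
    idx = torch.zeros(B, m, dtype=torch.long, device=xyz.device)
    dist = torch.full((B, N), float("inf"), device=xyz.device)
    farthest = torch.zeros(B, dtype=torch.long, device=xyz.device)
    for i in range(m):
        idx[:, i] = farthest
        centroid = xyz[torch.arange(B), farthest].unsqueeze(1)   # (B, 1, 3)
        dist = torch.minimum(dist, ((xyz - centroid) ** 2).sum(-1))
        farthest = dist.argmax(dim=1)  # farthest from the chosen set so far
    return idx

def gan_losses(d_real: torch.Tensor, d_fake: torch.Tensor):
    # Least-squares form (an assumption): D pushes real scores toward 1 and
    # generated scores toward 0, while L_gen pushes generated scores toward 1.
    loss_d = ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()
    loss_gen = ((d_fake - 1) ** 2).mean()
    return loss_d, loss_gen
```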
in the embodiment, a single image in the visual aspect of a design object is used as input, a pre-reconstruction module is firstly adopted to perform feature extraction on the single image, a sparse point cloud is pre-reconstructed, then uniform up-sampling is adopted to generate dense point cloud on the basis of the sparse point cloud, an image encoder and a model encoder are utilized to perform feature extraction on the single image and the finally generated dense point cloud respectively according to an image re-description module, and loss is calculated after conversion to a public space, so that the finally generated dense point cloud is ensured to have visual reality and semantic consistency.
Example 2
The embodiment provides a three-dimensional model generation system for generating dense point cloud based on image cascade, which comprises:
the pre-reconstruction module is used for carrying out pre-reconstruction processing on the acquired image to obtain a corresponding sparse point cloud model;
the dense point cloud model generation module is used for sequentially performing two rounds of uniform up-sampling on the sparse point cloud model to obtain the dense point cloud model, wherein each round of uniform up-sampling comprises feature extraction, residual graph convolution and up-sampling processing;
and the three-dimensional model module is used for reconstructing the dense point cloud model and outputting a three-dimensional graph model associated with the image.
It should be noted that the above modules correspond to steps S1 to S3 in embodiment 1 and share the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of embodiment 1. It should also be noted that the modules described above, as part of a system, may be implemented in a computer system such as a set of computer-executable instructions.
In further embodiments, there is also provided:
an electronic device comprising a memory and a processor and computer instructions stored on the memory and executed on the processor, the computer instructions when executed by the processor performing the method of embodiment 1. For brevity, no further description is provided herein.
It should be understood that in this embodiment the processor may be a central processing unit (CPU) or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method described in embodiment 1.
The method in embodiment 1 may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The software modules may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, the details are not described here.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they are not intended to limit the scope of the present invention, and it should be understood that various modifications and variations can be made by those skilled in the art on the basis of the technical solution of the present invention without inventive effort.

Claims (10)

1. A three-dimensional model generation method for generating dense point cloud based on image cascade is characterized by comprising the following steps:
carrying out pre-reconstruction processing on the obtained image to obtain a corresponding sparse point cloud model;
sequentially performing two rounds of uniform up-sampling on the sparse point cloud model to obtain a dense point cloud model, wherein each round of uniform up-sampling comprises feature extraction, residual graph convolution and up-sampling processing;
and reconstructing the dense point cloud model and outputting a three-dimensional graph model associated with the image.
2. The method of generating a three-dimensional model of a dense point cloud based on image cascading of claim 1, wherein the pre-reconstruction process comprises: inputting the acquired image into an encoder-decoder network consisting of a plurality of convolutional layers and fully connected layers, extracting image features by defining a pre-reconstruction loss function, and outputting a sparse point cloud model.
3. The method of generating a three-dimensional model of a dense point cloud based on image cascading of claim 2, wherein the pre-reconstruction loss function is:

$\mathcal{L}_{pre} = \sum_i d\left(P_i^{pre}, P_i^{T}\right)$

where $P_i^{pre}$ and $P_i^{T}$ respectively represent the point cloud of the i-th reconstructed sample pre and the point cloud of the i-th real sample T, and $d\left(P_i^{pre}, P_i^{T}\right)$ represents the model distance between $P_i^{pre}$ and $P_i^{T}$.
4. The method of claim 1, wherein the feature extraction extracts the feature corresponding to each point in the sparse point cloud model;
or, the residual graph convolution takes each point in the sparse point cloud model and its corresponding feature as input, and extracts residual features by defining a residual graph convolution loss function;
or, the up-sampling converts the feature corresponding to each point into a tensor and, after reshaping the tensor, obtains the up-sampled point cloud model by adding each point in the sparse point cloud model to the reshaped tensor point by point.
5. The method of claim 4, wherein the residual graph convolution loss function is:

$\mathcal{L}_{G} = \mathcal{L}_{dis}\left(y_i, G(x_i)\right) + \lambda\, \mathcal{L}_{gen}\left(G(x_i)\right)$

where λ is a hyper-parameter controlling the balance between the distance loss term $\mathcal{L}_{dis}$ and the generation loss term $\mathcal{L}_{gen}$; $x_i$ is the input sparse point cloud; $y_i$ is the real dense point cloud corresponding to $x_i$; and $G(x_i)$ is the dense point cloud generated by the point cloud up-sampling network.
6. The method for generating a three-dimensional model of a dense point cloud based on image cascading as claimed in claim 1, wherein the dense point cloud is subjected to image re-description processing: an image regeneration loss function is calculated from the acquired image and the dense point cloud model, and the visual realism of the dense point cloud model is judged.
7. The method of generating a three-dimensional model of a dense point cloud based on image cascading of claim 6, wherein the image regeneration loss function is:

$\mathcal{L}_{re} = d_{Euc}\left(f_I, f'_M\right)$

where $d_{Euc}(x, y)$ represents the Euclidean distance between vectors x and y, $f'_M$ is the model feature vector, and $f_I$ is the image feature vector.
8. A three-dimensional model generation system for generating dense point clouds based on image cascading, comprising:
the pre-reconstruction module is used for carrying out pre-reconstruction processing on the acquired image to obtain a corresponding sparse point cloud model;
the dense point cloud model generation module is used for sequentially performing two rounds of uniform up-sampling on the sparse point cloud model to obtain the dense point cloud model, wherein each round of uniform up-sampling comprises feature extraction, residual graph convolution and up-sampling processing;
and the three-dimensional model module is used for reconstructing the dense point cloud model and outputting a three-dimensional graph model associated with the image.
9. An electronic device comprising a memory and a processor and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1 to 7.
CN202010986409.0A 2020-09-18 2020-09-18 Three-dimensional model generation method and system for generating dense point cloud based on image cascade Pending CN112258626A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010986409.0A CN112258626A (en) 2020-09-18 2020-09-18 Three-dimensional model generation method and system for generating dense point cloud based on image cascade

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010986409.0A CN112258626A (en) 2020-09-18 2020-09-18 Three-dimensional model generation method and system for generating dense point cloud based on image cascade

Publications (1)

Publication Number Publication Date
CN112258626A (zh) 2021-01-22

Family

ID=74231273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010986409.0A Pending CN112258626A (en) 2020-09-18 2020-09-18 Three-dimensional model generation method and system for generating dense point cloud based on image cascade

Country Status (1)

Country Link
CN (1) CN112258626A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205104A (en) * 2021-04-23 2021-08-03 广西大学 Point cloud completion method based on deep learning
CN113628338A (en) * 2021-07-19 2021-11-09 香港中文大学(深圳) Sampling reconstruction method and device, computer equipment and storage medium
CN114092653A (en) * 2022-01-11 2022-02-25 深圳先进技术研究院 Method, device and equipment for reconstructing 3D image based on 2D image and storage medium
CN114821251A (en) * 2022-04-28 2022-07-29 北京大学深圳研究生院 Method and device for determining point cloud up-sampling network
CN115578265A (en) * 2022-12-06 2023-01-06 中汽智联技术有限公司 Point cloud enhancement method, system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685842A (en) * 2018-12-14 2019-04-26 电子科技大学 A kind of thick densification method of sparse depth based on multiple dimensioned network
CN111192313A (en) * 2019-12-31 2020-05-22 深圳优地科技有限公司 Method for robot to construct map, robot and storage medium
CN111563923A (en) * 2020-07-15 2020-08-21 浙江大华技术股份有限公司 Method for obtaining dense depth map and related device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685842A (en) * 2018-12-14 2019-04-26 电子科技大学 A kind of thick densification method of sparse depth based on multiple dimensioned network
CN111192313A (en) * 2019-12-31 2020-05-22 深圳优地科技有限公司 Method for robot to construct map, robot and storage medium
CN111563923A (en) * 2020-07-15 2020-08-21 浙江大华技术股份有限公司 Method for obtaining dense depth map and related device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Huikai Wu et al., "Point Cloud Super Resolution with Adversarial Residual Graph Networks", arXiv:1908.02111v1 [cs.GR] *
Tianshi Wang et al., "High-Resolution Point Cloud Reconstruction from a Single Image by Redescription", IEEE *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205104A (en) * 2021-04-23 2021-08-03 广西大学 Point cloud completion method based on deep learning
CN113628338A (en) * 2021-07-19 2021-11-09 香港中文大学(深圳) Sampling reconstruction method and device, computer equipment and storage medium
CN114092653A (en) * 2022-01-11 2022-02-25 深圳先进技术研究院 Method, device and equipment for reconstructing 3D image based on 2D image and storage medium
CN114821251A (en) * 2022-04-28 2022-07-29 北京大学深圳研究生院 Method and device for determining point cloud up-sampling network
CN114821251B (en) * 2022-04-28 2024-04-12 北京大学深圳研究生院 Method and device for determining point cloud up-sampling network
CN115578265A (en) * 2022-12-06 2023-01-06 中汽智联技术有限公司 Point cloud enhancement method, system and storage medium
CN115578265B (en) * 2022-12-06 2023-04-07 中汽智联技术有限公司 Point cloud enhancement method, system and storage medium

Similar Documents

Publication Publication Date Title
CN112258626A (en) Three-dimensional model generation method and system for generating dense point cloud based on image cascade
CN109509152B (en) Image super-resolution reconstruction method for generating countermeasure network based on feature fusion
CN112308200B (en) Searching method and device for neural network
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN110570522B (en) Multi-view three-dimensional reconstruction method
CN115482241A (en) Cross-modal double-branch complementary fusion image segmentation method and device
CN112396645B (en) Monocular image depth estimation method and system based on convolution residual learning
CN114463511A (en) 3D human body model reconstruction method based on Transformer decoder
CN113792641B (en) High-resolution lightweight human body posture estimation method combined with multispectral attention mechanism
CN111127538A (en) Multi-view image three-dimensional reconstruction method based on convolution cyclic coding-decoding structure
CN113962858A (en) Multi-view depth acquisition method
CN113436237B (en) High-efficient measurement system of complicated curved surface based on gaussian process migration learning
CN109447897B (en) Real scene image synthesis method and system
CN112634438A (en) Single-frame depth image three-dimensional model reconstruction method and device based on countermeasure network
CN115484410A (en) Event camera video reconstruction method based on deep learning
CN113077545B (en) Method for reconstructing clothing human body model from image based on graph convolution
CN111311732A (en) 3D human body grid obtaining method and device
CN116091762A (en) Three-dimensional target detection method based on RGBD data and view cone
KR102461111B1 (en) Texture mesh reconstruction system based on single image and method thereof
CN112785684B (en) Three-dimensional model reconstruction method based on local information weighting mechanism
CN112785498B (en) Pathological image superscore modeling method based on deep learning
CN113096239B (en) Three-dimensional point cloud reconstruction method based on deep learning
Wen et al. Mrft: Multiscale recurrent fusion transformer based prior knowledge for bit-depth enhancement
CN116524111B (en) On-orbit lightweight scene reconstruction method and system for supporting on-demand lightweight scene of astronaut
CN113112585B (en) Method for reconstructing three-dimensional shape of high-quality target from single image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210122