CN111311662B - Method and device for reconstructing three-dimensional scene in real time - Google Patents


Info

Publication number
CN111311662B
CN111311662B
Authority
CN
China
Prior art keywords
texture
dimensional scene
reconstructing
real time
reconstruction process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010088693.XA
Other languages
Chinese (zh)
Other versions
CN111311662A (en)
Inventor
方璐
韩磊
顾思远
王好谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202010088693.XA priority Critical patent/CN111311662B/en
Publication of CN111311662A publication Critical patent/CN111311662A/en
Application granted granted Critical
Publication of CN111311662B publication Critical patent/CN111311662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for reconstructing a three-dimensional scene in real time, the method comprising a geometric reconstruction process and a texture reconstruction process. The geometric reconstruction process comprises: inputting RGB images and depth images of the scene to be reconstructed in real time; dividing them into key frames and non-key frames according to frame similarity; for key frames, storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; for non-key frames, registering by the pose relative to the associated key frame; and, after TSDF field fusion, extracting a mesh with the marching cubes algorithm to generate the geometry of the three-dimensional scene model. The texture reconstruction process comprises: reconstructing the color information of the three-dimensional scene model by texture mapping, wherein images at several viewing angles are selected from the input RGB image sequence and the texture tiles extracted from them are stitched into a complete texture and mapped onto the model. The invention enables real-time reconstruction of three-dimensional models with high-quality appearance.

Description

Method and device for reconstructing three-dimensional scene in real time
Technical Field
The invention relates to computer vision, in particular to a method and a device for reconstructing a three-dimensional scene in real time.
Background
Scene reconstruction is widely applicable in fields such as machine vision and autonomous driving. Reconstructing a realistic three-dimensional scene model is particularly important for developing interactive virtual and augmented reality (VR/AR) applications. A three-dimensional scene reconstruction algorithm involves both geometric and texture reconstruction. For geometric reconstruction, many mature schemes already achieve large scale, high precision, and high efficiency; texture reconstruction, however, remains confined to offline pipelines, and an efficient, high-quality solution is lacking.
One class of real-time reconstruction techniques takes RGBD images as input and uses the TSDF function as the spatial representation, such as document [1] (A. Dai, M. Nießner, M. Zollhöfer, S. Izadi and C. Theobalt, BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface re-integration, ACM Transactions on Graphics, vol. 36, no. 3, p. 24, 2017) and document [2] (L. Han and L. Fang, FlashFusion: Real-time globally consistent dense 3D reconstruction using CPU computing, in Robotics: Science and Systems, 2018). These techniques use the RGB input sequence for frame matching and pose registration; the corresponding depth information is expressed as a TSDF function and fused by weighted averaging, and the final model surface is the zero-level set of the TSDF function. The state of the art achieves efficient geometric reconstruction at large scale (>5 m) on mobile devices. However, when restoring the RGB color information of the three-dimensional model, these schemes typically fuse the observations at each vertex by weighted averaging, which is simple and efficient, but the reconstructed model's appearance details are blurred and ghosting easily appears.
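As a concrete illustration of the weighted fusion just described, the following is a minimal sketch (not the implementation of documents [1] or [2]; the truncation constant and function names are assumptions):

```python
import numpy as np

MU = 0.05  # truncation band in metres (assumed value)

def fuse_observation(tsdf, weight, sdf_obs, w_obs=1.0):
    """Blend one signed-distance observation into a voxel's running average.

    sdf_obs: observed depth minus the voxel's depth along the camera ray.
    """
    d = np.clip(sdf_obs / MU, -1.0, 1.0)  # truncated signed distance
    new_tsdf = (tsdf * weight + d * w_obs) / (weight + w_obs)
    return new_tsdf, weight + w_obs

# The reconstructed surface is the zero-level set of the fused TSDF field.
# Per-vertex colours fused with the same weighted-average rule are what
# produce the blur and ghosting noted above.
```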
One class of offline texture reconstruction techniques is described in document [3] (V. Lempitsky and D. Ivanov, Seamless mosaicing of image-based texture maps, in 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007). Such offline techniques likewise extract texture tiles from the view images and stitch them into a single texture for texture mapping, often producing sharp, high-quality textures. The approach mainly involves view selection (usually cast as an MRF optimization problem) and color adjustment of the stitched texture. Both parts typically have relatively high complexity and are therefore limited to offline reconstruction.
Disclosure of Invention
The main object of the present invention is to overcome at least one of the above technical drawbacks, and to provide a method and apparatus for high quality real-time reconstruction of three-dimensional scenes.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a method for reconstructing a three-dimensional scene in real time comprises a geometric reconstruction process and a texture reconstruction process;
the geometric reconstruction process comprises: inputting RGB images and depth images of the scene to be reconstructed in real time; dividing them into key frames and non-key frames according to frame similarity; for key frames, storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; for non-key frames, registering by the pose relative to the associated key frame; and, after TSDF field fusion, extracting a mesh with the marching cubes algorithm to generate the geometry of the three-dimensional scene model;
the texture reconstruction process comprises: reconstructing the color information of the three-dimensional scene model by texture mapping, wherein images at several viewing angles are selected from the input RGB image sequence and the texture tiles extracted from them are stitched into a complete texture and mapped onto the model.
Further:
the geometric reconstruction process adopts a two-stage structure, wherein the two-stage structure organizes 8 multiplied by 8 voxels into a square, and each effective square containing the model surface is managed by adopting a hash table; when the TSDF and the color value of the new key frame are fused, the depth observation value is compared with the projection distance of the square block to the camera plane, so that the effectiveness of the square block is judged.
In the texture reconstruction process, texture optimization, i.e. texture selection and stitching, is cast as a Markov random field (MRF) optimization problem.
For the Markov random field (MRF) optimization problem, the overall objective function is min(E_data + E_smooth). The data term

E_data = −Σ_i ∫_{T_i} |∇I(p)| dp

is the integral of the absolute value of the gradient over each texture tile T_i, where I denotes color intensity and p a pixel point; overall, this term characterizes the detail richness of the texture. The smoothness term

E_smooth = Σ_{(i,j)∈ε} [L_i ≠ L_j]

is a Potts model that penalizes adjacent texture tiles with inconsistent sources, where L_i, L_j are the labels of the source images of tiles i and j and ε is the set of adjacent tile pairs; overall, this term characterizes the consistency of the texture.
In addition to storing the TSDF and color values in blocks of 8×8×8 voxels during the geometric reconstruction process, the mesh model also uses these blocks as the basic units of the supernodes of the MRF graph during the texture reconstruction process, and each block is searched and modified through a spatial hash function.
In the texture reconstruction process, a local color migration algorithm is applied, in which the conventional vertex-fused color serves as the target image of the color migration and tone migration is applied to the source-image texture tiles.
The color migration solves for an optimal linear transformation on each texture tile such that the difference in mean and variance between the transformed tile and the fused color is minimized.
An apparatus for reconstructing a three-dimensional scene in real time includes a computer-readable storage medium and a processor, the computer-readable storage medium storing an executable program which, when executed by the processor, implements the method for reconstructing a three-dimensional scene in real time.
A computer readable storage medium storing an executable program which, when executed by a processor, implements the method of reconstructing a three-dimensional scene in real time.
The beneficial effects of the invention include:
the invention provides a real-time texture reconstruction method and device, which can achieve the purpose of reconstructing a 3D model with high quality appearance in real time. The invention proposes to use texture mapping to represent the appearance of the model surface in three-dimensional reconstruction, the integral texture being obtained by stitching rather than fusing image blocks from various perspectives. Compared with a real-time reconstruction method as in document [1], the texture mapping mode can enable the appearance of the model to be clearer. In a preferred scheme, in order to realize a high-efficiency reconstruction scheme suitable for mobile equipment, the invention provides a method for dividing an integral grid structure by using a voxel hash method in the texture reconstruction process, and realizes the flow of texture generation and mapping in real-time three-dimensional reconstruction.
Experiments verify that the invention reconstructs three-dimensional models of high texture quality on datasets and in real scenes, with the full reconstruction pipeline reaching a frame rate of 25 Hz. The invention thus supports real-time three-dimensional reconstruction on portable devices.
Drawings
FIG. 1 is a basic flow diagram of a real-time texture reconstruction method according to one embodiment of the present invention.
FIG. 2 is a schematic diagram of texture selection and stitching in an embodiment of the present invention.
FIG. 3 illustrates modeling the model surface as an undirected graph for the MRF problem via the supernode approach in an embodiment of the present invention.
FIG. 4 is a schematic view of the reconstruction effect according to an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail. It should be emphasized that the following description is merely exemplary in nature and is in no way intended to limit the scope of the invention or its applications.
FIG. 1 is a basic flow diagram of a real-time texture reconstruction method according to one embodiment of the present invention. Referring to fig. 1, the real-time texture reconstruction method provided by the embodiment of the present invention includes the following geometric reconstruction process and texture reconstruction process.
The geometric reconstruction process comprises: inputting RGB images and depth images of the scene to be reconstructed in real time; dividing them into key frames and non-key frames according to frame similarity; for key frames, storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; for non-key frames, registering by the pose relative to the associated key frame; and, after TSDF field fusion, extracting a mesh with the marching cubes algorithm to generate the geometry of the three-dimensional scene model.
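For illustration, a toy sketch of the key-frame decision just described, assuming the two frames' binary feature descriptors are already matched one-to-one (the threshold value and all names are assumptions, not the patent's parameters):

```python
import numpy as np

SIM_THRESHOLD = 0.85  # assumed; tuned per system

def frame_similarity(desc_a, desc_b):
    """Fraction of matching bits between two frames' matched binary
    descriptors; desc_*: (N, D) uint8 arrays of equal shape."""
    hamming = int(np.unpackbits(desc_a ^ desc_b, axis=1).sum())
    total_bits = desc_a.size * 8
    return 1.0 - hamming / total_bits

def is_key_frame(desc_new, desc_last_key):
    """A frame very similar to the last key frame adds little information,
    so only sufficiently dissimilar frames are promoted to key frames."""
    return frame_similarity(desc_new, desc_last_key) < SIM_THRESHOLD
```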
The texture reconstruction process comprises: reconstructing the color information of the three-dimensional scene model by texture mapping, wherein images at several viewing angles are selected from the input RGB image sequence and the texture tiles extracted from them are stitched into a complete texture and mapped onto the model.
Specific preferred embodiments of the present invention are further described below with reference to the accompanying drawings.
The real-time texture reconstruction method of the preferred embodiment includes a geometric reconstruction section and a texture reconstruction section as follows.
The input to the geometric reconstruction is an RGB image and a depth image. When processing images, the embodiment of the invention divides them into key frames and non-key frames according to frame similarity. As described in document [2], the similarity between frames can be computed from the binary features of the feature points extracted for each frame; when the similarity between temporally close frames is high, the camera pose change can be considered small. For key frames, the processing flow is to store the feature points, perform loop detection to search for similar frames, and perform global pose optimization and registration. Non-key frames are registered by their pose relative to the associated key frame. The geometry of the 3D model is generated by TSDF field fusion followed by mesh extraction with the marching cubes algorithm, where the TSDF (truncated signed distance function) measures the distance from a spatial point to the object surface. Note that, to reduce storage and improve lookup efficiency, this scheme adopts the two-level structure proposed in document [2]: 8×8×8 voxels are organized into a block, and a hash table manages each valid block containing the model surface. When fusing the TSDF and color values of a new key frame, the validity of a block can be judged by comparing the depth observation with the block's projected distance to the camera plane. This data structure also plays an important role in the subsequent texture optimization step.
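The following minimal sketch illustrates, under assumed constants and names, the two-level block structure and the block-validity test described above:

```python
import numpy as np

BLOCK = 8     # voxels per block edge (8x8x8, per document [2])
VOXEL = 0.01  # 1 cm voxel resolution (assumed)
MU = 0.05     # TSDF truncation band in metres (assumed)

blocks = {}   # spatial hash: integer block coordinate -> voxel payload

def block_key(p_world):
    """Integer block coordinate of a world-space point."""
    return tuple(np.floor(p_world / (BLOCK * VOXEL)).astype(int))

def allocate(key):
    """Lazily allocate a block; only blocks near the surface ever exist."""
    if key not in blocks:
        blocks[key] = {"tsdf": np.ones((BLOCK,) * 3, np.float32),
                       "weight": np.zeros((BLOCK,) * 3, np.float32),
                       "color": np.zeros((BLOCK,) * 3 + (3,), np.uint8)}
    return blocks[key]

def block_is_valid(key, depth_obs, T_world_from_cam):
    """Compare the depth observation with the block centre's distance to
    the camera plane; the block matters only inside the truncation band."""
    centre = (np.asarray(key) + 0.5) * BLOCK * VOXEL
    R, t = T_world_from_cam[:3, :3], T_world_from_cam[:3, 3]
    z_cam = (R.T @ (centre - t))[2]          # depth of the block centre
    half_diag = 0.5 * BLOCK * VOXEL * np.sqrt(3)
    return abs(depth_obs - z_cam) < MU + half_diag
```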
The texture optimization part is the main contribution of the present invention. The color information of the reconstructed 3D model is restored by texture mapping: texture tiles extracted from images at several viewing angles, selected from the input RGB sequence, are stitched into a complete texture and mapped onto the model. The process of texture optimization, i.e. of texture selection and stitching, can be cast as a Markov random field (MRF) optimization problem with overall objective function min(E_data + E_smooth). The data term

E_data = −Σ_i ∫_{T_i} |∇I(p)| dp

is the integral of the absolute value of the gradient over each texture tile T_i, where I denotes color intensity and p a pixel point; overall, this term characterizes the detail richness of the texture. The smoothness term

E_smooth = Σ_{(i,j)∈ε} [L_i ≠ L_j]

is a Potts model that penalizes adjacent texture tiles with inconsistent sources, where L_i, L_j are the labels of the source images of tiles i and j and ε is the set of adjacent tile pairs; overall, this term characterizes texture consistency. FIG. 2 is a schematic diagram of texture selection and stitching in an embodiment of the present invention, where the values of k1 and k2 are determined by the solution of the optimization equation.
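As a concrete reading of the two energy terms, the following sketch evaluates the objective for one candidate labelling; grad_mag and tile_pixels are hypothetical stand-ins for the per-view gradient-magnitude images and the face-to-pixel projections:

```python
def data_term(faces, labels, grad_mag, tile_pixels):
    """Negative integral of |grad I| over each texture tile: richer detail
    lowers the energy, so minimisation favours sharp, well-resolved views."""
    return -sum(float(grad_mag[labels[f]][tile_pixels(f, labels[f])].sum())
                for f in faces)

def smooth_term(adjacency, labels):
    """Potts model: unit penalty per adjacent tile pair with differing
    source labels, so minimisation favours large same-source regions."""
    return sum(1.0 for i, j in adjacency if labels[i] != labels[j])

def energy(faces, adjacency, labels, grad_mag, tile_pixels):
    # Overall objective min(E_data + E_smooth) from the text above.
    return (data_term(faces, labels, grad_mag, tile_pixels)
            + smooth_term(adjacency, labels))
```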
The MRF solver from document [5] (M. Waechter, N. Moehrle, and M. Goesele, Let there be color! Large-scale texturing of 3D reconstructions, in ECCV 2014) is used to solve the MRF problem. Typically, for an indoor-scale scene, a mesh model at 1 cm voxel resolution contains on the order of 10^6 to 10^7 faces. The original solver of document [5], based on tree dynamic programming, takes tens of seconds to process a graph with this many variables, which significantly impacts the actual frame rate even with a key-frame strategy. One contribution of this scheme is to extend the voxel-partitioned storage of the three-dimensional model: in addition to storing the TSDF and color values in blocks of 8×8×8 voxels for geometric reconstruction, the mesh model uses these blocks as the basic units of the supernodes of the MRF graph during texture reconstruction; see FIG. 3, which models the model surface as an undirected graph for the MRF problem via the supernode approach. Each block is searched and modified through a spatial hash function, which greatly improves the efficiency of solving the MRF optimization problem; in live demonstrations the running frame rate reaches 25 Hz (the test platform is a Surface Pro with 16 GB memory, and the sensor is an Xtion Pro Live). Meanwhile, owing to the local continuity of texture at high voxel resolution, the texture quality produced by this embodiment is not noticeably lower than the offline reconstruction results of document [5], while being clearly superior to the traditional color fusion method.
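As an illustration of the supernode construction of FIG. 3 (a minimal sketch; the names and the block edge length are assumptions), mesh faces can be grouped into 8×8×8-voxel blocks so that the MRF is solved over blocks rather than individual faces:

```python
import numpy as np
from collections import defaultdict

def build_supernode_graph(face_centroids, face_adjacency, block_edge=0.08):
    """Group mesh faces by their containing voxel block (block_edge =
    8 voxels x 1 cm, assumed); each block becomes one MRF supernode."""
    node_of = {}                 # face index -> supernode (block) key
    members = defaultdict(list)  # supernode key -> member faces
    for f, c in enumerate(face_centroids):
        key = tuple(np.floor(np.asarray(c) / block_edge).astype(int))
        node_of[f] = key
        members[key].append(f)
    # Supernodes are adjacent when any of their member faces are adjacent,
    # shrinking the graph by roughly the number of faces per block.
    edges = {tuple(sorted((node_of[a], node_of[b])))
             for a, b in face_adjacency if node_of[a] != node_of[b]}
    return members, edges
```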
In addition, to prevent color inconsistency between texture tiles caused by illumination variation, the preferred embodiment applies a local color migration algorithm during texture optimization and stitching. The idea is to take the conventional vertex-fused color as the target image of the color migration and to apply tone migration to the source image (texture tile). The color migration step (reference [4], F. Pitié and A. Kokaram, The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer, in 4th European Conference on Visual Media Production, 2007) solves for an optimal linear transformation on each texture tile such that the difference in mean and variance between the transformed tile and the fused color is minimized. Tests show that this scheme effectively eliminates illumination differences. Prior offline texture reconstruction techniques generally use a global leveling function to force the colors on texture-tile boundaries to agree, and the complexity of solving the global leveling equation grows with the scene scale. The local color adjustment strategy of this embodiment makes the complexity of the texture color adjustment step correlate only with the currently updated scene scale (number of blocks), making it compatible with the real-time three-dimensional reconstruction process.
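One way to realize the per-tile linear transfer described above is the closed-form linear Monge-Kantorovitch mapping of reference [4]; the following sketch (function name and regularization are assumptions) matches each tile's color mean and covariance to those of the fused-color target:

```python
import numpy as np
from scipy.linalg import sqrtm

def mk_colour_transfer(tile_rgb, target_rgb):
    """Map the tile's colour statistics onto the fused-colour target via
    the closed-form map T = Cs^(-1/2) (Cs^(1/2) Ct Cs^(1/2))^(1/2) Cs^(-1/2)."""
    src = tile_rgb.reshape(-1, 3).astype(np.float64)
    tgt = target_rgb.reshape(-1, 3).astype(np.float64)
    mu_s, mu_t = src.mean(0), tgt.mean(0)
    Cs = np.cov(src, rowvar=False) + 1e-6 * np.eye(3)  # regularised
    Ct = np.cov(tgt, rowvar=False) + 1e-6 * np.eye(3)
    Cs_half = np.real(sqrtm(Cs))
    Cs_half_inv = np.linalg.inv(Cs_half)
    T = Cs_half_inv @ np.real(sqrtm(Cs_half @ Ct @ Cs_half)) @ Cs_half_inv
    # Affine map: after this, the tile has the target's mean and covariance.
    return ((src - mu_s) @ T.T + mu_t).reshape(tile_rgb.shape)
```

Being a single affine map per tile, this step is local and cheap, which is what keeps the color adjustment compatible with real-time operation.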
FIG. 4 is a schematic view of the reconstruction effect according to an embodiment of the present invention: a textured room reconstruction model (left); a comparison between the texture of the present embodiment and that of document [2] (top right); and the untextured room reconstruction model (bottom right). Experiments verify that the invention reconstructs three-dimensional models of high texture quality on datasets and in real scenes, with the full reconstruction pipeline reaching a frame rate of 25 Hz.
In different embodiments, different TSDF-based real-time geometric reconstruction frameworks may be employed in conjunction with texture reconstruction.
In different embodiments, different color compensation algorithms may be used.
In different embodiments, different sensors, such as structured light, may be used.
The background section of the present invention may contain background information about the problems or environments of the present invention and is not necessarily descriptive of the prior art. Accordingly, inclusion in the background section is not an admission of prior art by the applicant.
The foregoing is a further detailed description of the invention in connection with specific/preferred embodiments, and it is not intended that the invention be limited to this description. It will be apparent to those skilled in the art that several alternatives or modifications can be made to the described embodiments without departing from the spirit of the invention, and these alternatives or modifications should be considered to fall within the scope of the invention. In the description of this specification, reference to the terms "one embodiment," "some embodiments," "preferred embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Those skilled in the art may combine the features of the different embodiments or examples described in this specification, provided they do not contradict one another. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention as defined by the appended claims.

Claims (8)

1. A method for reconstructing a three-dimensional scene in real time, which is characterized by comprising a geometric reconstruction process and a texture reconstruction process;
the geometric reconstruction process comprises: inputting RGB images and depth images of the scene to be reconstructed in real time; dividing them into key frames and non-key frames according to frame similarity; for key frames, storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; for non-key frames, registering by the pose relative to the associated key frame; and, after TSDF field fusion, extracting a mesh with the marching cubes algorithm to generate the geometry of the three-dimensional scene model; the geometric reconstruction process adopts a two-level structure that organizes 8×8×8 voxels into a block, with each valid block containing the model surface managed through a hash table; when fusing the TSDF and color values of a new key frame, the depth observation is compared with the block's projected distance to the camera plane to judge the validity of the block;
the texture reconstruction process comprises: reconstructing the color information of the three-dimensional scene model by texture mapping, wherein images at several viewing angles are selected from the input RGB image sequence and the texture tiles extracted from them are stitched into a complete texture and mapped onto the model.
2. The method for reconstructing a three-dimensional scene in real time according to claim 1, wherein in the texture reconstruction process, texture optimization, i.e. texture selection and stitching, is cast as a Markov random field (MRF) optimization problem.
3. The method of reconstructing a three-dimensional scene in real time according to claim 2, characterized in that for the Markov random field (MRF) optimization problem the overall objective function is min(E_data + E_smooth), where the data term

E_data = −Σ_i ∫_{T_i} |∇I(p)| dp

is the integral of the absolute value of the gradient over each texture tile T_i, I denotes color intensity, and p denotes a pixel point, this term characterizing the detail richness of the texture; and the smoothness term

E_smooth = Σ_{(i,j)∈ε} [L_i ≠ L_j]

is a Potts model penalizing adjacent texture tiles with inconsistent sources, where L_i, L_j are the labels of the source images of tiles i and j and ε is the set of adjacent tile pairs, this term characterizing the consistency of the texture.
4. The method of reconstructing a three-dimensional scene in real time according to claim 2, wherein, in addition to storing TSDF values and color values in blocks of 8×8×8 voxels in the geometric reconstruction process, the mesh model also uses the blocks as the basic units of supernodes in the MRF graph in the texture reconstruction process, and each block is searched and modified by a spatial hash function.
5. The method of reconstructing a three-dimensional scene in real time according to any one of claims 1 to 4, wherein a local color migration algorithm is applied during the texture reconstruction process, in which the conventional vertex-fused color is used as the target image of the color migration and tone migration is applied to the source-image texture tiles.
6. The method of reconstructing a three-dimensional scene in real time according to claim 5, wherein the color migration solves for an optimal linear transformation on each texture tile such that the difference in mean and variance between the transformed tile and the fused color is minimized.
7. An apparatus for reconstructing a three-dimensional scene in real time, comprising a computer readable storage medium and a processor, said computer readable storage medium storing an executable program, wherein the method for reconstructing a three-dimensional scene in real time according to any one of claims 1 to 6 is implemented when said executable program is executed by said processor.
8. A computer readable storage medium storing an executable program, wherein the executable program when executed by a processor implements the method of reconstructing a three-dimensional scene in real time as claimed in any one of claims 1 to 6.
CN202010088693.XA 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time Active CN111311662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010088693.XA CN111311662B (en) 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010088693.XA CN111311662B (en) 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time

Publications (2)

Publication Number Publication Date
CN111311662A CN111311662A (en) 2020-06-19
CN111311662B (en) 2023-05-09

Family

ID=71148922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010088693.XA Active CN111311662B (en) 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time

Country Status (1)

Country Link
CN (1) CN111311662B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640180B (en) * 2020-08-03 2020-11-24 深圳市优必选科技股份有限公司 Three-dimensional reconstruction method and device and terminal equipment
CN113129352A (en) * 2021-04-30 2021-07-16 清华大学 Sparse light field reconstruction method and device
CN113763559B (en) * 2021-07-01 2024-04-09 清华大学 Geometric motion detail reconstruction method for fitting depth image
CN114429495B (en) * 2022-03-14 2022-08-30 荣耀终端有限公司 Three-dimensional scene reconstruction method and electronic equipment
CN114742884B (en) * 2022-06-09 2022-11-22 杭州迦智科技有限公司 Texture-based mapping, mileage calculation and positioning method and system
CN117197365A (en) * 2023-11-07 2023-12-08 江西求是高等研究院 Texture reconstruction method and system based on RGB-D image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364344A (en) * 2018-02-08 2018-08-03 重庆邮电大学 A kind of monocular real-time three-dimensional method for reconstructing based on loopback test
CN108898630B (en) * 2018-06-27 2020-12-15 清华-伯克利深圳学院筹备办公室 Three-dimensional reconstruction method, device, equipment and storage medium
CN109658449B (en) * 2018-12-03 2020-07-10 华中科技大学 Indoor scene three-dimensional reconstruction method based on RGB-D image

Also Published As

Publication number Publication date
CN111311662A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN111311662B (en) Method and device for reconstructing three-dimensional scene in real time
CN112258390B (en) High-precision microscopic virtual learning resource generation method
Johnson et al. Registration and integration of textured 3D data
CN104616345B (en) Octree forest compression based three-dimensional voxel access method
CN110223370B (en) Method for generating complete human texture map from single-view picture
CN111882668B (en) Multi-view three-dimensional object reconstruction method and system
CN110633628B (en) RGB image scene three-dimensional model reconstruction method based on artificial neural network
CN112862736B (en) Real-time three-dimensional reconstruction and optimization method based on points
CN113781621A (en) Three-dimensional reconstruction processing method, device, equipment and storage medium
Chen et al. Research on 3D reconstruction based on multiple views
Kholil et al. 3D reconstruction using structure from motion (SFM) algorithm and multi view stereo (MVS) based on computer vision
Schreiberhuber et al. Scalablefusion: High-resolution mesh-based real-time 3D reconstruction
Zollhöfer et al. Low-cost real-time 3D reconstruction of large-scale excavation sites
CN112200906B (en) Entity extraction method and system for inclined three-dimensional model
JP2832463B2 (en) 3D model reconstruction method and display method
Li et al. 3d shape reconstruction of furniture object from a single real indoor image
Rong et al. A survey of multi view stereo
CN114049423A (en) Automatic realistic three-dimensional model texture mapping method
Zollhöfer et al. Low-Cost Real-Time 3D Reconstruction of Large-Scale Excavation Sites using an RGB-D Camera.
Pan et al. Rapid 3D modelling from live video
Chu et al. Hole-filling framework by combining structural and textural information for the 3D Terracotta Warriors
Jiang et al. Visual odometry based 3D-reconstruction
Georgiou et al. Projective Urban Texturing
Pantazis et al. Are the morphing techniques useful for cartographic generalization?
Ding et al. Fragmented cultural relics restoration based on point cloud data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant