CN111311662A - Method and device for reconstructing three-dimensional scene in real time - Google Patents

Method and device for reconstructing three-dimensional scene in real time

Info

Publication number
CN111311662A
CN111311662A (application CN202010088693.XA)
Authority
CN
China
Prior art keywords
texture
dimensional scene
real time
reconstruction process
reconstructing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010088693.XA
Other languages
Chinese (zh)
Other versions
CN111311662B (en
Inventor
方璐
韩磊
顾思远
王好谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202010088693.XA priority Critical patent/CN111311662B/en
Publication of CN111311662A publication Critical patent/CN111311662A/en
Application granted granted Critical
Publication of CN111311662B publication Critical patent/CN111311662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for reconstructing a three-dimensional scene in real time, the method comprising a geometric reconstruction process and a texture reconstruction process. The geometric reconstruction process includes: inputting an RGB image and a depth image of the scene to be reconstructed in real time, dividing the images into key frames and non-key frames according to frame similarity, and processing the key frames, the processing of key frames comprising storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; non-key frames are registered through their pose relative to the corresponding key frame; after TSDF field fusion, a mesh is extracted by the marching cubes algorithm to generate the geometry of the three-dimensional scene model. The texture reconstruction process includes: restoring the color information of the three-dimensional scene model by texture mapping with texture optimization, wherein images at multiple views are selected from the input RGB image sequence, and texture patches are extracted and stitched into a complete texture that is mapped onto the model. The invention can reconstruct a three-dimensional model with a high-quality appearance in real time.

Description

Method and device for reconstructing three-dimensional scene in real time
Technical Field
The invention relates to computer vision, in particular to a method and a device for reconstructing a three-dimensional scene in real time.
Background
Scene reconstruction is widely applicable to fields such as machine vision and autonomous driving. Reconstructing realistic three-dimensional scene models is particularly important for developing interactive virtual/augmented reality (VR/AR) applications. A three-dimensional scene reconstruction algorithm involves two aspects: geometric reconstruction and texture reconstruction. At present there are several mature technical schemes for geometric reconstruction that achieve large scale, high precision and high efficiency, while texture reconstruction remains limited to offline reconstruction and lacks a solution that is both efficient and high quality.
One class of real-time reconstruction techniques takes RGBD images as input and uses the TSDF function as the spatial representation, for example document [1] (A. Dai, M. Nießner, M. Zollhöfer, S. Izadi and C. Theobalt, BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics, vol. 36, no. 3, p. 24, 2017) and document [2] (L. Han and L. Fang, FlashFusion: Real-time globally consistent dense 3D reconstruction using CPU computing. In Robotics: Science and Systems, 2018). These methods perform frame matching and registration on the input sequence and represent the appearance of the reconstructed surface by fusing RGB observations into per-vertex colors through weighted averaging. Such simple color fusion is efficient enough for real-time operation on large scenes, but small errors in pose estimation accumulate across views, so the fused vertex colors are often blurred and the visual appearance of the model lacks sharpness and detail.
One class of offline texture reconstruction techniques is disclosed in document [3] (V. Lempitsky and D. Ivanov, Seamless mosaicing of image-based texture maps. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007). Such offline techniques stitch texture patches taken from the images at each view into a complete texture for texture mapping, and often produce sharp, high-quality textures. The technique mainly involves view selection (which can usually be cast as an MRF optimization problem) and color adjustment of the stitched texture. These two steps typically have relatively high complexity and are therefore limited to offline reconstruction.
Disclosure of Invention
The main object of the present invention is to overcome at least one of the above technical drawbacks and to provide a method and apparatus for high quality real-time reconstruction of three-dimensional scenes.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for reconstructing a three-dimensional scene in real time comprises a geometric reconstruction process and a texture reconstruction process;
the geometric reconstruction process includes: inputting an RGB image and a depth image of the scene to be reconstructed in real time, dividing the images into key frames and non-key frames according to frame similarity, and processing the key frames, the processing of key frames comprising storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; non-key frames are registered through their pose relative to the corresponding key frame; after TSDF field fusion, a mesh is extracted by the marching cubes algorithm to generate the geometry of the three-dimensional scene model;
the texture reconstruction process includes: restoring the color information of the three-dimensional scene model by texture mapping with texture optimization, wherein images at multiple views are selected from the input RGB image sequence, and texture patches are extracted and stitched into a complete texture that is mapped onto the model.
Further:
the geometric reconstruction process adopts a two-stage structure, the two-stage structure organizes 8 multiplied by 8 voxels into a square, and each effective square containing a model surface is managed by a hash table; when the TSDF and the color value of the new key frame are fused, the depth observation value and the projection distance from the square block to the camera plane are compared to judge the effectiveness of the square block.
In the texture reconstruction process, the texture optimization process, i.e. the process of selecting and stitching textures, is equivalent to solving a Markov random field (MRF) optimization problem.
In the MRF optimization problem, the overall objective function is min(E_data + E_smooth). The data term E_data = -Σ_i ∫ |∇I(p)| dp is the integral of the gradient magnitude over each texture patch (negated, so that minimization favors detail-rich patches), where I denotes color intensity and p denotes a pixel point; this term characterizes the detail richness of the texture. The smoothness term E_smooth = Σ_{(i,j)∈ε} [L_i ≠ L_j], where L_i is the label of the source image of texture patch i and ε is the set of adjacent texture patch pairs, uses a Potts model to penalize adjacent texture patches with inconsistent sources; this term characterizes texture consistency.
In addition to storing the TSDF values and color values in the blocks formed by 8×8×8 voxels during the geometric reconstruction process, the mesh model also takes these blocks as the basic units of the supernodes and the MRF graph during the texture reconstruction process, and each block is searched and modified through a spatial hash function.
In the texture reconstruction process, a local color transfer algorithm is applied, in which the conventional vertex-fused color is used as the target image of the color transfer and hue transfer is performed on the source-image texture patch.
The color transfer solves for the optimal linear transformation on the texture patch, such that the difference in mean and variance between the transformed texture patch and the fused color is minimal.
An apparatus for reconstructing a three-dimensional scene in real time includes a computer-readable storage medium and a processor, where the computer-readable storage medium stores an executable program, and the executable program, when executed by the processor, implements the method for reconstructing a three-dimensional scene in real time.
A computer readable storage medium having stored thereon an executable program which, when executed by a processor, implements the method of reconstructing a three-dimensional scene in real time.
The beneficial effects of the invention include:
the invention provides a real-time texture reconstruction method and a real-time texture reconstruction device, which can achieve the purpose of reconstructing a high-quality appearance 3D model in real time. The invention provides a method for representing the appearance of a model surface in three-dimensional reconstruction by using a texture mapping mode, wherein the whole texture is formed by splicing image blocks from all visual angles instead of being fused. Compared with a real-time reconstruction method such as the method in the document [1], the texture mapping method can enable the appearance of the model to be clearer. In the preferred scheme, in order to realize a high-efficiency reconstruction scheme suitable for mobile equipment, the invention provides a method for dividing an integral grid structure by using a voxel hash method in the texture reconstruction process, and realizes the flow of texture generation and mapping in real-time three-dimensional reconstruction.
Experiments verify that three-dimensional models with high texture quality can be reconstructed on datasets and in real scenes, while the overall reconstruction pipeline reaches a frame rate of 25 Hz. The invention can support real-time three-dimensional reconstruction on portable devices.
Drawings
FIG. 1 is a basic flowchart of a real-time texture reconstruction method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of texture selection and stitching according to an embodiment of the present invention.
Fig. 3 is a model surface modeled as an undirected graph in an MRF problem in a supernode manner in the embodiment of the present invention.
Fig. 4 is a schematic diagram of a reconstruction effect according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.
FIG. 1 is a basic flowchart of a real-time texture reconstruction method according to an embodiment of the present invention. Referring to fig. 1, a real-time texture reconstruction method according to an embodiment of the present invention includes a geometric reconstruction process and a texture reconstruction process as follows.
The geometric reconstruction process includes: inputting an RGB image and a depth image of the scene to be reconstructed in real time, dividing the images into key frames and non-key frames according to frame similarity, and processing the key frames, the processing of key frames comprising storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; non-key frames are registered through their pose relative to the corresponding key frame; after TSDF field fusion, a mesh is extracted by the marching cubes algorithm to generate the geometry of the three-dimensional scene model.
The texture reconstruction process includes: restoring the color information of the three-dimensional scene model by texture mapping with texture optimization, wherein images at multiple views are selected from the input RGB image sequence, and texture patches are extracted and stitched into a complete texture that is mapped onto the model.
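As an illustrative sketch of the keyframe/non-keyframe split described above (the descriptor format, similarity measure, and threshold here are assumptions for demonstration, not the exact criterion of the embodiment), a frame can be compared against the last keyframe by matching binary feature descriptors:

```python
# Hypothetical keyframe selection by frame similarity: each frame carries
# binary feature descriptors (stored here as ints), similarity is the fraction
# of descriptors with a near match (small Hamming distance) in the previous
# keyframe, and a new keyframe is opened when similarity drops below a threshold.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def frame_similarity(desc_a, desc_b, max_dist=10):
    """Fraction of descriptors in desc_a that find a near match in desc_b."""
    if not desc_a:
        return 0.0
    matched = sum(1 for d in desc_a
                  if any(hamming(d, e) <= max_dist for e in desc_b))
    return matched / len(desc_a)

def split_keyframes(frames, threshold=0.5):
    """Mark each frame as keyframe (True) or non-keyframe (False)."""
    flags, last_key = [], None
    for desc in frames:
        if last_key is None or frame_similarity(desc, last_key) < threshold:
            flags.append(True)    # low similarity: camera moved, open keyframe
            last_key = desc
        else:
            flags.append(False)   # similar: register relative to last keyframe
    return flags
```

A frame very different from the last keyframe (large Hamming distances) becomes a new keyframe; similar frames are only registered relatively, which keeps the global optimization small.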
Specific preferred embodiments of the present invention are further described below with reference to the accompanying drawings.
The real-time texture reconstruction method of the preferred embodiment includes a geometric reconstruction section and a texture reconstruction section as follows.
The input information for the geometric reconstruction is an RGB image and a depth image. In the embodiment of the invention, when the images are processed, they are divided into key frames and non-key frames according to frame similarity. The frame similarity, as described in document [2], can be calculated between frames from the binary features of the feature points extracted for each frame. When the similarity between frames close in time is high, the pose change of the camera can be considered small. For a key frame, the processing flow is to store the feature points, perform loop detection to search for similar frames, and perform global pose optimization and registration. A non-key frame is registered through its pose relative to the corresponding key frame. The geometry of the 3D model is generated by extracting a mesh with the marching cubes algorithm after TSDF field fusion, where the TSDF (Truncated Signed Distance Function) measures the distance of a spatial point to the object surface. It should be noted that, in order to reduce storage space and improve search efficiency, the scheme adopts the two-stage structure proposed in document [2]. This structure organizes 8×8×8 voxels into a block, and each valid block containing part of the model surface is managed by a hash table. When fusing the TSDF and color values of a new key frame, comparing the depth observation with the projected distance from the block to the camera plane determines the validity of the block. This data structure also plays an important role in the texture optimization step below.
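The two-stage voxel structure can be sketched as follows (a minimal illustration; the constants, hash layout, and fusion weights are assumptions, and the block-validity test against the depth observation is omitted):

```python
# Illustrative two-stage voxel storage: 8x8x8 voxel blocks live in a hash
# table keyed by integer block coordinates, and each TSDF/color observation
# is fused into a voxel with a running weighted average.

BLOCK = 8          # voxels per block edge (as in the embodiment)
VOXEL = 0.01       # 1 cm voxel resolution (as in the embodiment)
TRUNC = 0.05       # TSDF truncation distance (assumed value)

blocks = {}        # spatial hash: block key -> {voxel index: (tsdf, weight, color)}

def block_key(x, y, z):
    """Integer block coordinates for a point in world space."""
    side = BLOCK * VOXEL
    return (int(x // side), int(y // side), int(z // side))

def fuse_voxel(key, voxel_idx, sdf, color, weight=1.0):
    """Fuse one truncated signed-distance / color observation into a voxel."""
    sdf = max(-TRUNC, min(TRUNC, sdf))           # truncate the signed distance
    block = blocks.setdefault(key, {})           # create block lazily
    t, w, c = block.get(voxel_idx, (0.0, 0.0, (0.0, 0.0, 0.0)))
    w_new = w + weight
    t_new = (t * w + sdf * weight) / w_new       # running weighted TSDF average
    c_new = tuple((ci * w + oi * weight) / w_new for ci, oi in zip(c, color))
    block[voxel_idx] = (t_new, w_new, c_new)
```

Lookup and update of any block is a single dictionary access on its integer coordinates, which is what makes both the fusion step and the later per-block texture operations efficient.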
The texture optimization part is the main contribution of the invention. As mentioned above, the color information of the reconstructed 3D model is restored by texture mapping: images at multiple views are selected from the input RGB sequence, and texture patches are extracted and stitched into a complete texture which is then mapped onto the model. The texture optimization process, i.e. the selection and stitching of textures, can be cast as a Markov random field (MRF) optimization problem with overall objective function min(E_data + E_smooth). The data term E_data = -Σ_i ∫ |∇I(p)| dp is the integral of the gradient magnitude over each texture patch (negated, so that minimization favors detail-rich patches), where I denotes color intensity and p denotes a pixel point; this term characterizes the detail richness of the texture. The smoothness term E_smooth = Σ_{(i,j)∈ε} [L_i ≠ L_j], where L_i is the label of the source image of texture patch i and ε is the set of adjacent texture patch pairs, uses a Potts model to penalize adjacent texture patches with inconsistent sources; this term characterizes texture consistency. FIG. 2 is a schematic diagram of texture selection and stitching in the embodiment of the present invention, in which the values of k1 and k2 are determined by the solution of the optimization equation.
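The two energy terms can be illustrated with a direct evaluation of the objective for a given labeling (a sketch under assumed discrete patches; this is not the solver used by the embodiment, and the patch/view data structures are hypothetical):

```python
# Illustrative MRF objective: each patch i takes a view label L_i, the data
# term rewards high gradient magnitude (detail) in the chosen view, and the
# Potts smoothness term penalizes adjacent patches with different labels.

def grad_magnitude(img):
    """Sum of absolute finite-difference gradients over a 2D intensity patch."""
    g = 0.0
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:
                g += abs(img[r + 1][c] - img[r][c])   # vertical gradient
            if c + 1 < cols:
                g += abs(img[r][c + 1] - img[r][c])   # horizontal gradient
    return g

def mrf_energy(patches, labels, edges, lam=1.0):
    """E_data + lam * E_smooth for a labeling.

    patches[i][v] is the intensity patch of element i seen from view v,
    labels[i] the chosen view label L_i, edges the adjacent-pair set epsilon.
    """
    e_data = -sum(grad_magnitude(patches[i][labels[i]])
                  for i in range(len(labels)))
    e_smooth = sum(1 for (i, j) in edges if labels[i] != labels[j])  # Potts
    return e_data + lam * e_smooth
```

Minimizing this energy therefore trades off per-patch detail (choosing the sharpest view) against label agreement between neighboring patches.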
For the solution of the MRF problem, the MRF solver in document [5] (M. Waechter, N. Moehrle, and M. Goesele, Let there be color! Large-scale texturing of 3D reconstructions. In ECCV 2014) is used. Typically, for a room-scale scene, the mesh model at 1 cm voxel resolution has on the order of 10^6 to 10^7 elements. The original solver in document [5], based on tree-structured dynamic programming, needs tens of seconds to process a graph (variable set) of this size, so even with the keyframe strategy the actual frame rate would be greatly affected. One contribution of this scheme is to extend the voxel-division storage of three-dimensional model information: in addition to storing TSDF values and color values in blocks of 8×8×8 voxels during geometric reconstruction, the mesh model also takes such blocks as the basic units (supernodes) of the MRF graph in the texture reconstruction process, see FIG. 3. FIG. 3 shows the model surface modeled as an undirected graph in the MRF problem in a supernode manner in the embodiment of the present invention. Each block is searched and modified through a spatial hash function, which greatly improves efficiency in solving the MRF optimization problem; in a live demonstration (the computing platform used in testing was a Surface Pro with 16 GB of memory and an Xtion Pro Live sensor), the running frame rate reaches 25 Hz. Owing to the local continuity of texture at high voxel resolution, the texture quality generated by this embodiment shows no obvious degradation compared with the offline reconstruction results of document [5], and has an obvious advantage in precision over the traditional color fusion method.
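The supernode construction can be sketched as follows (illustrative only; the block size and adjacency rule are assumptions): mesh faces are grouped by the voxel block containing their centroid, each occupied block becomes one MRF node, and two supernodes are connected when their blocks are neighbors, shrinking the MRF from millions of faces to the number of occupied blocks.

```python
# Illustrative supernode graph: group faces by 8x8x8 voxel block (8 voxels at
# 1 cm = 0.08 m per block edge) and connect blocks in a 26-neighborhood.

from itertools import product

def supernode_key(centroid, block_size=0.08):
    """Block coordinates of the voxel block containing a face centroid."""
    return tuple(int(c // block_size) for c in centroid)

def build_supernode_graph(face_centroids):
    """Return (nodes, edges): block key -> face ids, and adjacent block pairs."""
    nodes = {}
    for fid, centroid in enumerate(face_centroids):
        nodes.setdefault(supernode_key(centroid), []).append(fid)
    edges = set()
    for key in nodes:
        for d in product((-1, 0, 1), repeat=3):     # 26 axis/diagonal neighbors
            if d == (0, 0, 0):
                continue
            nb = tuple(k + dk for k, dk in zip(key, d))
            if nb in nodes:
                edges.add(tuple(sorted((key, nb)))) # undirected, deduplicated
    return nodes, edges
```

One MRF label per supernode then covers all faces in that block, which is why the solver's variable count drops to the number of occupied blocks.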
In addition, during texture optimization and stitching, in order to prevent color inconsistency between texture patches caused by illumination changes, the preferred embodiment also applies a local color transfer algorithm. The idea is to use the traditional vertex-fused color as the target image of the color transfer and to perform hue transfer on the source image (texture patch). The color transfer procedure (document [4], F. Pitié and A. Kokaram, The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer. In 4th European Conference on Visual Media Production, 2007) solves for the optimal linear transformation on the texture patch, such that the difference in mean and variance between the transformed texture patch and the fused color is minimal. Testing shows this scheme can effectively eliminate illumination differences. Conventional offline texture reconstruction techniques usually use a global leveling function to force the colors on texture patch boundaries to be consistent, and the complexity of solving the global leveling equation grows with the scale of the scene. The local color adjustment strategy of this embodiment makes the complexity of the texture color adjustment step proportional to the currently updated scene size (number of blocks), and is therefore suitable for real-time three-dimensional reconstruction.
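The color adjustment can be illustrated with a per-channel mean/variance matching transfer (a simplified diagonal sketch; the full method of document [4] solves a joint linear Monge-Kantorovitch mapping over the color covariance, which this single-channel version only approximates):

```python
# Illustrative linear color transfer for one channel: shift and scale the
# source patch so its mean and standard deviation match those of the target
# (the vertex-fused colors), which is the optimal per-channel gain/offset.

from statistics import mean, pstdev

def color_transfer(source, target):
    """Map source values so their mean/std match the target's (one channel)."""
    mu_s, mu_t = mean(source), mean(target)
    sd_s, sd_t = pstdev(source), pstdev(target)
    gain = sd_t / sd_s if sd_s > 0 else 1.0      # guard flat source patches
    return [(v - mu_s) * gain + mu_t for v in source]
```

Because the transform is solved per patch from local statistics only, its cost scales with the number of updated blocks rather than the whole scene, matching the local strategy described above.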
Fig. 4 is a schematic diagram of the reconstruction effect according to an embodiment of the present invention: the textured room reconstruction model (left); the texture effect of the embodiment of the invention (first row at top right in Fig. 4) compared with that of document [2] (second row at top right in Fig. 4); and the untextured room reconstruction model (bottom right). Experiments verify that three-dimensional models with high texture quality can be reconstructed on datasets and in real scenes, while the overall reconstruction pipeline reaches a frame rate of 25 Hz.
In different embodiments, different TSDF-based real-time geometric reconstruction frameworks may be employed in conjunction with texture reconstruction.
In different embodiments, different color compensation algorithms may be used.
In different embodiments, different sensors may be used, such as structured light.
The background of the present invention may contain background information related to the problem or environment of the present invention and does not necessarily describe the prior art. Accordingly, the inclusion in the background section is not an admission of prior art by the applicant.
The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention. In the description herein, references to the description of the term "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the claims.

Claims (9)

1. A method for reconstructing a three-dimensional scene in real time is characterized by comprising a geometric reconstruction process and a texture reconstruction process;
the geometric reconstruction process includes: inputting an RGB image and a depth image of the scene to be reconstructed in real time, dividing the images into key frames and non-key frames according to frame similarity, and processing the key frames, the processing of key frames comprising storing feature points, performing loop detection to search for similar frames, and performing global pose optimization and registration; non-key frames are registered through their pose relative to the corresponding key frame; after TSDF field fusion, a mesh is extracted by the marching cubes algorithm to generate the geometry of the three-dimensional scene model;
the texture reconstruction process includes: restoring the color information of the three-dimensional scene model by texture mapping with texture optimization, wherein images at multiple views are selected from the input RGB image sequence, and texture patches are extracted and stitched into a complete texture that is mapped onto the model.
2. The method of reconstructing a three-dimensional scene in real time as claimed in claim 1, wherein the geometric reconstruction process employs a two-stage structure, the two-stage structure organizing 8×8×8 voxels into a block, each valid block containing part of the model surface being managed by a hash table; when the TSDF and color values of a new key frame are fused, the depth observation and the projected distance from the block to the camera plane are compared to determine the validity of the block.
3. The method for reconstructing a three-dimensional scene in real time as claimed in claim 1 or 2, wherein the texture optimization in the texture reconstruction process, i.e. the process of selecting and stitching the textures, is equivalent to solving a Markov random field (MRF) optimization problem.
4. The method of reconstructing a three-dimensional scene in real time as claimed in claim 3, wherein in the MRF optimization problem the overall objective function is min(E_data + E_smooth), in which the data term E_data = -Σ_i ∫ |∇I(p)| dp is the integral of the gradient magnitude over each texture patch, I denoting color intensity and p denoting a pixel point, this term characterizing the detail richness of the texture; and the smoothness term E_smooth = Σ_{(i,j)∈ε} [L_i ≠ L_j], where L_i is the label of the source image of texture patch i and ε is the set of adjacent texture patch pairs, uses a Potts model to penalize adjacent texture patches with inconsistent sources, this term characterizing texture consistency.
5. The method for reconstructing a three-dimensional scene in real time as claimed in claim 3, wherein in addition to storing TSDF values and color values in the blocks composed of 8×8×8 voxels during the geometric reconstruction process, the mesh model also takes such blocks as the basic units of the supernodes and the MRF graph during the texture reconstruction process, and each block is searched and modified through a spatial hash function.
6. The method for reconstructing a three-dimensional scene in real time as claimed in any one of claims 1 to 5, wherein during the texture reconstruction process a local color transfer algorithm is applied, in which the conventional vertex-fused color is used as the target image of the color transfer and hue transfer is performed on the source-image texture patch.
7. The method of reconstructing a three-dimensional scene in real time as claimed in claim 6, wherein the color transfer solves for an optimal linear transformation on the texture patch such that the difference in mean and variance between the transformed texture patch and the fused color is minimized.
8. An apparatus for reconstructing a three-dimensional scene in real time, comprising a computer-readable storage medium and a processor, the computer-readable storage medium storing an executable program, wherein the executable program, when executed by the processor, implements the method for reconstructing a three-dimensional scene in real time as claimed in any one of claims 1 to 7.
9. A computer-readable storage medium storing an executable program, wherein the executable program, when executed by a processor, implements a method of reconstructing a three-dimensional scene in real time as claimed in any one of claims 1 to 7.
CN202010088693.XA 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time Active CN111311662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010088693.XA CN111311662B (en) 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010088693.XA CN111311662B (en) 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time

Publications (2)

Publication Number Publication Date
CN111311662A true CN111311662A (en) 2020-06-19
CN111311662B CN111311662B (en) 2023-05-09

Family

ID=71148922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010088693.XA Active CN111311662B (en) 2020-02-12 2020-02-12 Method and device for reconstructing three-dimensional scene in real time

Country Status (1)

Country Link
CN (1) CN111311662B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364344A (en) * 2018-02-08 2018-08-03 重庆邮电大学 Monocular real-time three-dimensional reconstruction method based on loop closure detection
CN108898630A (en) * 2018-06-27 2018-11-27 清华-伯克利深圳学院筹备办公室 Three-dimensional reconstruction method, apparatus, device and storage medium
CN109658449A (en) * 2018-12-03 2019-04-19 华中科技大学 Indoor scene three-dimensional reconstruction method based on RGB-D images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEI H *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640180A (en) * 2020-08-03 2020-09-08 深圳市优必选科技股份有限公司 Three-dimensional reconstruction method and device and terminal equipment
CN113129352A (en) * 2021-04-30 2021-07-16 清华大学 Sparse light field reconstruction method and device
CN113763559A (en) * 2021-07-01 2021-12-07 清华大学 Geometric motion detail reconstruction method and device for fitting depth image
CN113763559B (en) * 2021-07-01 2024-04-09 清华大学 Geometric motion detail reconstruction method for fitting depth image
CN114429495A (en) * 2022-03-14 2022-05-03 荣耀终端有限公司 Three-dimensional scene reconstruction method and electronic equipment
CN114742884A (en) * 2022-06-09 2022-07-12 杭州迦智科技有限公司 Texture-based mapping, mileage calculation and positioning method and system
CN117197365A (en) * 2023-11-07 2023-12-08 江西求是高等研究院 Texture reconstruction method and system based on RGB-D image

Also Published As

Publication number Publication date
CN111311662B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN111311662B (en) Method and device for reconstructing three-dimensional scene in real time
CN112002014B (en) Fine structure-oriented three-dimensional face reconstruction method, system and device
CN112258390B (en) High-precision microscopic virtual learning resource generation method
US8860712B2 (en) System and method for processing video images
CN104616345B (en) Octree forest compression based three-dimensional voxel access method
US20120032948A1 (en) System and method for processing video images for camera recreation
CN110223370B (en) Method for generating complete human texture map from single-view picture
US20080259073A1 (en) System and method for processing video images
CN109712223B (en) Three-dimensional model automatic coloring method based on texture synthesis
CN111882668B (en) Multi-view three-dimensional object reconstruction method and system
Sarkar et al. Learning quadrangulated patches for 3d shape parameterization and completion
CN112530005B (en) Three-dimensional model linear structure recognition and automatic restoration method
CN113781621A (en) Three-dimensional reconstruction processing method, device, equipment and storage medium
Li et al. Fast texture mapping adjustment via local/global optimization
CN109685879A (en) Determination method, apparatus, equipment and the storage medium of multi-view images grain distribution
Arikan et al. Large-scale point-cloud visualization through localized textured surface reconstruction
Li et al. Animated 3D human avatars from a single image with GAN-based texture inference
CN113593001A (en) Target object three-dimensional reconstruction method and device, computer equipment and storage medium
Zollhöfer et al. Low-cost real-time 3D reconstruction of large-scale excavation sites
Verykokou et al. A Comparative analysis of different software packages for 3D Modelling of complex geometries
JP2832463B2 (en) 3D model reconstruction method and display method
Tlusty et al. Modifying non-local variations across multiple views
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
Zollhöfer et al. Low-Cost Real-Time 3D Reconstruction of Large-Scale Excavation Sites using an RGB-D Camera.
Guo et al. Photo-realistic face images synthesis for learning-based fine-scale 3D face reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant