CN110751684B - Object three-dimensional reconstruction method based on depth camera module - Google Patents

Object three-dimensional reconstruction method based on depth camera module

Info

Publication number
CN110751684B
CN110751684B CN201910994807.4A
Authority
CN
China
Prior art keywords
voxel
new
hash
hash table
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910994807.4A
Other languages
Chinese (zh)
Other versions
CN110751684A (en)
Inventor
耿立帅
朝红阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN201910994807.4A priority Critical patent/CN110751684B/en
Publication of CN110751684A publication Critical patent/CN110751684A/en
Application granted granted Critical
Publication of CN110751684B publication Critical patent/CN110751684B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of three-dimensional reconstruction in computer vision, and in particular to an object three-dimensional reconstruction method based on a depth camera module. The invention adopts a new hash method, MD5, in the voxel hashing pipeline, which greatly speeds up data insertion, lookup and indexing and reduces collisions. In addition, the invention provides a new memory allocation scheme that overcomes the prior methods' one-time allocation of fixed-size memory, so that memory can be allocated automatically during reconstruction and the reconstruction region can be expanded dynamically. The method also adopts a new way of aligning consecutive frames: ORB feature points are computed on the color images of the previous and current frames, good corresponding point pairs are selected, and the corresponding points are used to obtain world coordinates from the previous frame and camera coordinates from the current frame via the depth maps, from which the current frame's camera extrinsics are solved. This aligns the two frames, finds corresponding points more accurately, and reduces the drift problem in three-dimensional reconstruction.

Description

Object three-dimensional reconstruction method based on depth camera module
Technical Field
The invention relates to the technical field of three-dimensional reconstruction in computer vision, and in particular to an object three-dimensional reconstruction method based on a depth camera module.
Background
In recent years, computer vision technology has developed vigorously, and three-dimensional reconstruction using computer vision has become more efficient and convenient; it has many applications in cultural relic protection and restoration, industrial inspection and 3D printing, and is also an important component of VR and AR technologies. Early three-dimensional reconstruction techniques generally took two-dimensional images as input to reconstruct a three-dimensional model of the scene; because of the data limitations of this approach, the reconstructed model is prone to holes and distortion. In recent years, the invention of various depth cameras, such as the Kinect, TOF cameras and the RealSense, has greatly promoted the development of three-dimensional reconstruction, and more and more techniques favor real-time three-dimensional reconstruction with depth cameras.
Real-time three-dimensional reconstruction takes depth data captured from different angles and transforms it into a common coordinate system to reconstruct and render the surface; achieving high reconstruction quality, a large reconstruction scale and a fast reconstruction speed at the same time is very difficult and challenging. Three-dimensional reconstruction based on the KinectFusion algorithm basically meets the real-time requirement, but it can only handle small scenes and its reconstruction process consumes a great deal of GPU memory. Three-dimensional reconstruction based on the voxel hashing algorithm is a superior method that meets both the speed and scene-size requirements, but it still has the following disadvantages: 1. the hash function on which the technique depends, H(x, y, z) = (x·p1 ⊕ y·p2 ⊕ z·p3) mod n, has a high probability of hash collisions, and frequent collisions severely slow the algorithm's execution and increase the risk of memory overflow; 2. the technique must allocate a fixed-size block of memory in one pass when executing voxel hashing, which limits the scalability of the three-dimensional reconstruction.
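For reference, this conventional voxel-hashing function can be sketched as follows. The primes p1, p2, p3 shown are the values commonly cited in the voxel-hashing literature and are an assumption here, since the patent does not list them.

```cpp
#include <cstdint>

// Sketch of the conventional voxel-hashing function criticized above.
// p1, p2, p3 are assumed (commonly cited) primes; n is the hash-table size.
inline uint64_t legacyVoxelHash(int64_t x, int64_t y, int64_t z, uint64_t n) {
    const uint64_t p1 = 73856093, p2 = 19349663, p3 = 83492791;
    return ((uint64_t)x * p1 ^ (uint64_t)y * p2 ^ (uint64_t)z * p3) % n;
}
```

Collisions produced by this scheme are what the MD5-based hash described below is intended to reduce.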
Disclosure of Invention
The invention provides an object three-dimensional reconstruction method based on a depth camera module, which overcomes the defects of the prior art that the hash function used in voxel hashing has a high probability of collisions, that frequent collisions severely slow the algorithm's execution, and that the risk of memory overflow is increased.
In order to solve the above technical problems, the invention adopts the following technical scheme: an object three-dimensional reconstruction method based on a depth camera module, which uses a voxel-hashing-based algorithm to realize three-dimensional reconstruction, and is characterized in that a new hash function method is used to calculate hash values during voxel hashing, the new hash function method comprising the following steps:
first, the three-dimensional coordinates are converted into a one-dimensional index by the following formula:
where Δ is the resolution of the current device, i.e. the size of one voxel;
then, the computed one-dimensional index data is converted into a hash value by the MD5 method.
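A minimal sketch of this two-step hash follows, assuming OpenSSL's MD5() for the digest; the linearization formula and the grid extent W are illustrative assumptions, since the patent's index formula appears only as a figure (the MD5 steps themselves are detailed in S21-S24 below).

```cpp
#include <openssl/md5.h>  // assumed dependency; any MD5 implementation would do
#include <cmath>
#include <cstdint>
#include <cstring>

// Sketch: quantize world coordinates by the voxel size delta, linearize to a
// one-dimensional index (illustrative formula), then hash the index with MD5
// and keep the first 64 bits of the 128-bit digest as the table key.
inline uint64_t md5VoxelKey(float x, float y, float z, float delta) {
    const int64_t W = 1 << 20;  // assumed grid extent per axis
    int64_t ix = (int64_t)std::floor(x / delta);
    int64_t iy = (int64_t)std::floor(y / delta);
    int64_t iz = (int64_t)std::floor(z / delta);
    int64_t idx = ix + iy * W + iz * W * W;   // one-dimensional index

    unsigned char digest[MD5_DIGEST_LENGTH];  // 16 bytes = 128 bits
    MD5(reinterpret_cast<const unsigned char*>(&idx), sizeof(idx), digest);

    uint64_t key;
    std::memcpy(&key, digest, sizeof(key));
    return key;                               // bucket = key % table size
}
```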
Further, the MD5 conversion process specifically includes the following steps:
S21, a 1 bit followed by a number of 0 bits is appended to the one-dimensional index data so that the bit length becomes 448 modulo 512, and the last 64 bits then record the length of the data before padding, so that the padded length is an integer multiple of 512. "The length before padding is represented by the last 64 bits" means that the binary number in the last 64 bits, converted to decimal, is the data length before padding;
S22, the padded data is processed in 512-bit blocks, each block being divided into sixteen 32-bit sub-blocks that are processed cyclically by four 32-bit registers. MD5 is initialized with four chaining variables as parameters: A = 0x67452301, B = 0xEFCDAB89, C = 0x98BADCFE, D = 0x10325476;
S23, four rounds of compression are performed, each round consisting of 16 steps; each step uses one message word, where a message is a 512-bit block of data. The variables a, b, c, d are updated by the step function Q_{i+1} = Q_i + ((Q_{i-3} + f_i(Q_i, Q_{i-1}, Q_{i-2}) + w_i + t_i) <<< s_i);
S24, the four 32-bit sub-blocks are concatenated to obtain a 128-bit value, the final hash value. Each of the parameters A, B, C, D is 32 bits, and the compression result obtained after all messages have been processed is the concatenation of A, B, C and D: 32 × 4 = 128 bits.
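For clarity, the step function above can be written out as code; this is the textbook MD5 step in the same Q-notation, with the four round functions f_i shown for reference.

```cpp
#include <cstdint>

// Rotate-left, the "<<<" in the step function.
static inline uint32_t rotl32(uint32_t x, int s) { return (x << s) | (x >> (32 - s)); }

// The four MD5 round functions that f_i cycles through, one per 16-step round.
static inline uint32_t F(uint32_t b, uint32_t c, uint32_t d) { return (b & c) | (~b & d); }
static inline uint32_t G(uint32_t b, uint32_t c, uint32_t d) { return (b & d) | (c & ~d); }
static inline uint32_t H(uint32_t b, uint32_t c, uint32_t d) { return b ^ c ^ d; }
static inline uint32_t I(uint32_t b, uint32_t c, uint32_t d) { return c ^ (b | ~d); }

// One MD5 step: Q_{i+1} = Q_i + ((Q_{i-3} + f + w + t) <<< s), where f is the
// round function applied to (Q_i, Q_{i-1}, Q_{i-2}), w is the 32-bit message
// word and t is the additive constant for this step.
inline uint32_t md5Step(uint32_t q_i, uint32_t q_i3, uint32_t f, uint32_t w,
                        uint32_t t, int s) {
    return q_i + rotl32(q_i3 + f + w + t, s);
}
```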
Furthermore, the invention provides a dynamic memory management method. The existing voxel-hash table pre-allocates its memory, which improves indexing efficiency but limits the range of the three-dimensional reconstruction scene. The invention uses the hash-map facility of the C++11 standard library to realize dynamic expansion of memory: when the pre-allocated memory is full, the hash table's memory is expanded dynamically, enabling three-dimensional reconstruction over a larger range. The dynamic memory management method is as follows: an integer variable is added to the hash table representing the number of available slots in the current table; when its value approaches 0, insertion into the table is stopped, a new hash table of the same size is opened up, the head of the new table is pointed at the tail of the old table, and the hash table is rebuilt. Assuming the original hash table has size n, the length of the new table becomes 2n; a new bucket number is obtained by taking each previously inserted element's hash value modulo 2n, and the elements of the old table are moved into the expanded new table. After rebuilding, insertion into the new hash table can resume. In the computation, once a hash value is obtained it is taken modulo n to get the bucket in which it is placed; after expansion the number of buckets changes from n to 2n, so the hash values of the previous elements are taken modulo 2n.
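A minimal host-side sketch of this expansion policy, with illustrative names (the patent's table lives in GPU memory, which is not modeled here):

```cpp
#include <cstdint>
#include <list>
#include <utility>
#include <vector>

// Sketch: a chained hash table whose free-slot counter triggers doubling.
// Key = 64-bit voxel hash, value = voxel-block id; both names are illustrative.
struct GrowableHashTable {
    std::vector<std::list<std::pair<uint64_t, int>>> buckets;
    int freeSlots;  // the integer variable described above

    explicit GrowableHashTable(std::size_t n) : buckets(n), freeSlots((int)n) {}

    void insert(uint64_t hash, int blockId) {
        if (freeSlots <= 0) grow();  // stop inserting and rebuild first
        buckets[hash % buckets.size()].push_back({hash, blockId});
        --freeSlots;
    }

    void grow() {
        std::size_t n2 = buckets.size() * 2;  // new table of length 2n
        std::vector<std::list<std::pair<uint64_t, int>>> next(n2);
        for (auto& bucket : buckets)
            for (auto& kv : bucket)
                next[kv.first % n2].push_back(kv);  // re-bucket: hash % 2n
        buckets.swap(next);
        freeSlots = (int)buckets.size() / 2;  // assumption: half the slots free again
    }
};
```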
Further, the object three-dimensional reconstruction method based on the depth camera module specifically comprises the following steps:
S1, set up the world view under the current architecture: three-dimensional reconstruction essentially builds a spatial voxel set large enough to wrap the object to be reconstructed, and this parent spatial voxel set is divided into three levels of substructures: the chunk is the first-level substructure, and the spatial voxel set comprises n × n × n chunk structures; the block is the second-level substructure, and each chunk contains m × m × m block structures; the voxel is the third-level substructure, and each block structure comprises t × t × t voxel substructures; the variables m, n and t are positive integers and can be set according to the specific situation;
S2, establish a 'video stream' based on the TOF module, acquire the depth map, RGB map and point cloud at the current time, and read the TOF module's camera intrinsics K and the camera extrinsics T at the current pose, storing them in the relevant parameters;
S3, based on the color images of the previous and current frames, compute their ORB feature points, select good corresponding point pairs, and use the corresponding points to obtain world coordinates from the previous frame and camera coordinates from the current frame via the depth maps, thereby solving the current frame's camera extrinsics and bringing the point cloud of the current frame into the world coordinate system of the reference frame (the first frame); an illustrative sketch of this step is given after step S9;
S4, establish an extensible dynamic hash table on the GPU device side by the voxel hashing method based on MD5 encoding; based on the view frustum principle, move the blocks within the frustum from the host side into the device-side hash table;
S5, based on the depth map of the current frame and the information in the hash table, select the valid blocks within the depth truncation range, establish a new dynamically allocatable hash table on the GPU device side through the dynamic memory management method, move the valid blocks into the new hash table, and record in the new hash table the position information of all valid blocks and the positions of the voxel arrays they point to;
S6, traverse all block positions in the new hash table and use CUDA to launch multiple threads that traverse all voxels within each block; compute each voxel's world-coordinate position from the initial position of the parent spatial voxel set's origin in the world coordinate system and the real length of each voxel; project each voxel's world position to a pixel position using the back-projection formula; judge whether the voxel's world position is actually visible within the depth frame; if not, do nothing, otherwise update the TSDF value and the current weight using the TSDF (truncated signed distance field) formula;
S7, the TSDF values are equivalent to a set of isosurfaces, and the locations where the TSDF value is 0 correspond to the surface of the object; generate and render a triangular surface mesh using the Marching Cubes technique;
S8, copy all voxels in the new hash table from the GPU device side to the host side, record the voxel contents at the host side, and release the GPU-side hash table to prevent GPU memory leaks;
S9, read the new RGB image, depth image and point cloud at the current moment from the TOF module and jump to step S3.
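As referenced in step S3, a sketch of the alignment using OpenCV follows; OpenCV is an assumed dependency (the patent names no library), and liftToWorld() is a hypothetical helper that back-projects a previous-frame keypoint through its depth value and the previous frame's known pose.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Hypothetical helper: back-project a previous-frame pixel to a world point
// using that frame's depth map, intrinsics K and known extrinsics (not shown).
cv::Point3f liftToWorld(const cv::Point2f& px);

// Sketch of step S3: match ORB features between consecutive color frames, then
// run PnP on (previous-frame world point, current-frame pixel) pairs to recover
// the current frame's camera extrinsics rvec/tvec.
void alignFrames(const cv::Mat& prevRgb, const cv::Mat& currRgb, const cv::Mat& K,
                 cv::Mat& rvec, cv::Mat& tvec) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpPrev, kpCurr;
    cv::Mat descPrev, descCurr;
    orb->detectAndCompute(prevRgb, cv::noArray(), kpPrev, descPrev);
    orb->detectAndCompute(currRgb, cv::noArray(), kpCurr, descCurr);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);  // keeps good pairs
    std::vector<cv::DMatch> matches;
    matcher.match(descPrev, descCurr, matches);

    std::vector<cv::Point3f> world;   // previous frame, world coordinates
    std::vector<cv::Point2f> pixels;  // current frame, pixel coordinates
    for (const auto& m : matches) {
        world.push_back(liftToWorld(kpPrev[m.queryIdx].pt));
        pixels.push_back(kpCurr[m.trainIdx].pt);
    }
    // RANSAC discards remaining outlier pairs while solving the extrinsics.
    cv::solvePnPRansac(world, pixels, K, cv::noArray(), rvec, tvec);
}
```

The RANSAC step is consistent with the patent's emphasis on selecting good corresponding point pairs before solving the pose.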
Compared with the prior art, the beneficial effects are that:
1. The object three-dimensional reconstruction method based on the depth camera module encodes hash values with the MD5 method, which effectively reduces the number of collisions and allows hash-table insertion, lookup and deletion to be performed faster;
2. The new memory allocation scheme provided by the invention can allocate new memory during real-time scanning, so it is applicable to scene reconstruction over a larger range and improves the practicality of the algorithm;
3. The method adopts a new way of aligning consecutive frames: ORB feature points of the two frames' color images are computed, corresponding point pairs are selected, and the corresponding points are used to obtain world coordinates from the previous frame and camera coordinates from the current frame via the depth maps, from which the current frame's camera extrinsics are solved and the two frames are aligned. Compared with methods using the ICP algorithm, corresponding points can be found more accurately, reducing the drift problem in three-dimensional reconstruction.
Drawings
FIG. 1 is a flow chart of the overall three-dimensional reconstruction method of the present invention.
Fig. 2 is a schematic diagram of the MD5 encoding scheme of the present invention.
Fig. 3 is a schematic diagram of the TSDF calculation of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the invention; for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions; it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships described in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
Example 1:
As shown in fig. 1, the object three-dimensional reconstruction method based on the depth camera module specifically comprises the following steps:
S1, set up the world view under the current architecture: three-dimensional reconstruction essentially builds a spatial voxel set large enough to wrap the object to be reconstructed, and this parent spatial voxel set is divided into three levels of substructures: the chunk is the first-level substructure, and the spatial voxel set comprises n × n × n chunk structures; the block is the second-level substructure, and each chunk contains m × m × m block structures; the voxel is the third-level substructure, and each block structure comprises t × t × t voxel substructures; the variables m, n and t are positive integers and can be set according to the specific situation. In this embodiment, m is 128, n is 8 and t is 5; in general, the larger these variables are, the better the surface fidelity, but the running speed is correspondingly reduced.
S2, establish a 'video stream' based on the TOF module, acquire the depth map, RGB map and point cloud at the current time, and read the TOF module's camera intrinsics K and the camera extrinsics T at the current pose, storing them in the relevant parameters.
S3, based on the color images of the previous and current frames, compute their ORB feature points, select good corresponding point pairs, and use the corresponding points to obtain world coordinates from the previous frame and camera coordinates from the current frame via the depth maps, thereby solving the current frame's camera extrinsics and bringing the point cloud of the current frame into the world coordinate system of the reference frame (the first frame).
S4, establish an extensible dynamic hash table on the GPU device side by the voxel hashing method based on MD5 encoding; based on the view frustum principle, move the blocks within the frustum from the host side into the device-side hash table.
S5, based on the depth map of the current frame and the information in the hash table, select the valid blocks within the depth truncation range, establish a new dynamically allocatable hash table on the GPU device side through the dynamic memory management method, move the valid blocks into the new hash table, and record in the new hash table the position information of all valid blocks and the positions of the voxel arrays they point to.
S6, traverse all block positions in the new hash table and use CUDA to launch multiple threads that traverse all voxels within each block; compute each voxel's world-coordinate position from the initial position of the parent spatial voxel set's origin in the world coordinate system and the real length of each voxel; project each voxel's world position to a pixel position using the back-projection formula; judge whether the voxel's world position is actually visible within the depth frame; if not, do nothing, otherwise update the TSDF value and the current weight using the TSDF (truncated signed distance field) formula, as shown in fig. 3; a sketch of this per-voxel update is given after step S9.
S7, the TSDF values are equivalent to a set of isosurfaces, and the locations where the TSDF value is 0 correspond to the surface of the object; the triangular surface mesh is generated and rendered using the Marching Cubes technique.
S8, copy all voxels in the new hash table from the GPU device side to the host side, record the voxel contents at the host side, and release the GPU-side hash table to prevent GPU memory leaks.
S9, read the new RGB image, depth image and point cloud at the current moment from the TOF module and jump to step S3.
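As referenced in step S6, the per-voxel TSDF update can be sketched as follows; plain C++ stands in for the CUDA kernel, and the names Voxel, mu and the constant per-observation weight are assumptions.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative voxel payload: a truncated signed distance and a fusion weight.
struct Voxel {
    float tsdf = 1.0f;
    float weight = 0.0f;
};

// Standard weighted TSDF fusion for one visible voxel: truncate the signed
// distance between the measured depth and the voxel's camera-space depth,
// then fold it into the running weighted average.
void updateVoxel(Voxel& v, float voxelCamZ, float measuredDepth, float mu) {
    float sdf = measuredDepth - voxelCamZ;   // signed distance along the ray
    if (sdf < -mu) return;                   // too far behind the surface: skip
    float tsdf = std::min(1.0f, sdf / mu);   // truncate to [-1, 1]
    float w = 1.0f;                          // assumed constant observation weight
    v.tsdf   = (v.tsdf * v.weight + tsdf * w) / (v.weight + w);
    v.weight += w;
}
```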
In this embodiment, the key of the hash table is obtained in the hash calculation by applying a hash function to the coordinates of the voxel center point, and a new hash function method is used to calculate the hash value, the new hash function method comprising the following steps:
first, the three-dimensional coordinates are converted into a one-dimensional index by the following formula:
where Δ is the resolution of the current device, i.e. the size of one voxel;
then, the computed one-dimensional index data is converted into a hash value by the MD5 method.
As shown in fig. 2, the MD5 conversion process specifically includes the following steps:
S21, a 1 bit followed by a number of 0 bits is appended to the one-dimensional index data so that the bit length becomes 448 modulo 512, and the last 64 bits record the length of the data before padding, so that the padded length is an integer multiple of 512;
S22, the padded data is processed in 512-bit blocks, each block being divided into sixteen 32-bit sub-blocks that are processed cyclically by four 32-bit registers; MD5 is initialized with four chaining variables as parameters: A = 0x67452301, B = 0xEFCDAB89, C = 0x98BADCFE, D = 0x10325476;
S23, four rounds of compression are performed, each round consisting of 16 steps; each step uses one message word, where a message is a 512-bit block of data. The variables a, b, c, d are updated by the step function Q_{i+1} = Q_i + ((Q_{i-3} + f_i(Q_i, Q_{i-1}, Q_{i-2}) + w_i + t_i) <<< s_i);
S24, the four 32-bit sub-blocks are concatenated to obtain a 128-bit value, the final hash value.
In this embodiment, a dynamic memory management method is provided. The existing voxel-hash table pre-allocates its memory, which improves indexing efficiency but limits the range of the three-dimensional reconstruction scene. The dynamic memory management method is as follows: an integer variable is added to the hash table representing the number of available slots in the current table; when its value approaches 0, insertion into the table is stopped, a new hash table of the same size is opened up, the head of the new table is pointed at the tail of the old table, and the hash table is rebuilt. Assuming the original hash table has size n, the length of the new table becomes 2n; a new bucket number is obtained by taking each previously inserted element's hash value modulo 2n, and the elements of the old table are moved into the expanded new table. After rebuilding, insertion into the new hash table can resume.
It is to be understood that the above examples of the present invention are provided by way of illustration only and are not intended to limit the embodiments of the present invention. Other variations or modifications will be apparent to those of ordinary skill in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention is intended to be covered by the following claims.

Claims (2)

1. An object three-dimensional reconstruction method based on a depth camera module, which uses a voxel-hashing-based algorithm to realize three-dimensional reconstruction, characterized in that a new hash function method is used to calculate hash values during voxel hashing, the new hash function method comprising the following steps:
firstly, converting the three-dimensional coordinates into a one-dimensional index by the following formula:
where Δ is the resolution of the current device, i.e. the size of one voxel;
then, converting the computed one-dimensional index data into a hash value by the MD5 method; the method specifically comprising the following steps:
S1, setting up the world view under the current architecture: three-dimensional reconstruction essentially builds a spatial voxel set large enough to wrap the object to be reconstructed, and the parent spatial voxel set is divided into three levels of substructures: the chunk is the first-level substructure, and the spatial voxel set comprises n × n × n chunk structures; the block is the second-level substructure, and each chunk contains m × m × m block structures; the voxel is the third-level substructure, and each block structure comprises t × t × t voxel substructures; the variables m, n and t are positive integers set according to the specific situation;
S2, establishing a 'video stream' based on the TOF module, acquiring the depth map, RGB map and point cloud at the current time, and reading the TOF module's camera intrinsics K and the camera extrinsics T at the current pose and storing them in the relevant parameters;
S3, based on the color images of the previous and current frames, computing their ORB feature points, selecting good corresponding point pairs, and using the corresponding points to obtain world coordinates from the previous frame and camera coordinates from the current frame via the depth maps, thereby solving the current frame's camera extrinsics and bringing the point cloud of the current frame into the world coordinate system of the reference frame;
S4, establishing an extensible dynamic hash table on the GPU device side by the voxel hashing method based on MD5 encoding; based on the view frustum principle, moving the blocks within the frustum from the host side into the device-side hash table;
S5, based on the depth map of the current frame and the information in the hash table, selecting the valid blocks within the depth truncation range, establishing a new dynamically allocatable hash table on the GPU device side through the dynamic memory management method, moving the valid blocks into the new hash table, and recording in the new hash table the position information of all valid blocks and the positions of the voxel arrays they point to;
S6, traversing all block positions in the new hash table and using CUDA to launch multiple threads that traverse all voxels within each block, computing each voxel's world-coordinate position from the initial position of the parent spatial voxel set's origin in the world coordinate system and the real length of each voxel, projecting each voxel's world position to a pixel position using the back-projection formula, and judging whether the voxel's world position is actually visible within the depth frame; if not, doing nothing, otherwise updating the TSDF value and the current weight using the TSDF (truncated signed distance field) formula;
S7, the TSDF values being equivalent to a set of isosurfaces and the locations where the TSDF value is 0 corresponding to the surface of the object, generating and rendering a triangular surface mesh using the Marching Cubes technique;
S8, copying all voxels in the new hash table from the GPU device side to the host side, recording the voxel contents at the host side, and releasing the GPU-side hash table to prevent GPU memory leaks;
S9, reading the new RGB image, depth image and point cloud at the current moment from the TOF module and jumping to step S3.
2. The object three-dimensional reconstruction method based on the depth camera module according to claim 1, wherein when the pre-allocated memory of the voxel-hash table is full, a dynamic memory management method is used to expand the hash table's memory dynamically, the dynamic memory management method comprising: adding an integer variable to the hash table representing the number of available slots in the current table; when its value approaches 0, stopping insertion into the table, opening up a new hash table of the same size, pointing the head of the new table at the tail of the old table, and rebuilding the hash table; assuming the original hash table has size n, the length of the new table becomes 2n, a new bucket number is obtained by taking each previously inserted element's hash value modulo 2n, and the elements of the old table are moved into the expanded new table; after rebuilding, insertion into the new hash table can resume.
CN201910994807.4A 2019-10-18 2019-10-18 Object three-dimensional reconstruction method based on depth camera module Active CN110751684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910994807.4A CN110751684B (en) 2019-10-18 2019-10-18 Object three-dimensional reconstruction method based on depth camera module

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910994807.4A CN110751684B (en) 2019-10-18 2019-10-18 Object three-dimensional reconstruction method based on depth camera module

Publications (2)

Publication Number Publication Date
CN110751684A CN110751684A (en) 2020-02-04
CN110751684B true CN110751684B (en) 2023-09-29

Family

ID=69278876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910994807.4A Active CN110751684B (en) 2019-10-18 2019-10-18 Object three-dimensional reconstruction method based on depth camera module

Country Status (1)

Country Link
CN (1) CN110751684B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529671A (en) * 2022-02-15 2022-05-24 国网山东省电力公司建设公司 Substation construction site three-dimensional reconstruction method based on GPU parallel computation and Hash table
CN117952614A (en) * 2024-03-14 2024-04-30 深圳市金政软件技术有限公司 Payment duplicate prevention method, device, terminal equipment and readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108961390A (en) * 2018-06-08 2018-12-07 华中科技大学 Real-time three-dimensional method for reconstructing based on depth map
CN110310362A (en) * 2019-06-24 2019-10-08 中国科学院自动化研究所 High dynamic scene three-dimensional reconstruction method, system based on depth map and IMU

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9053571B2 (en) * 2011-06-06 2015-06-09 Microsoft Corporation Generating computer models of 3D objects
US10891779B2 (en) * 2016-07-13 2021-01-12 Naked Labs Austria Gmbh Efficient volumetric reconstruction with depth sensors

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN108961390A (en) * 2018-06-08 2018-12-07 华中科技大学 Real-time three-dimensional method for reconstructing based on depth map
CN110310362A (en) * 2019-06-24 2019-10-08 中国科学院自动化研究所 High dynamic scene three-dimensional reconstruction method, system based on depth map and IMU

Non-Patent Citations (2)

Title
Zheng Shunyi et al., "A Hash map method for real-time management of 3D point cloud data," Acta Geodaetica et Cartographica Sinica, 2018, Vol. 47, No. 6, pp. 825-832. *
Chen Shangwen, "Three-dimensional scene perception with multi-sensor fusion," China Doctoral Dissertations Full-text Database, Information Science and Technology, 2018. *

Also Published As

Publication number Publication date
CN110751684A (en) 2020-02-04

Similar Documents

Publication Publication Date Title
CN108961390B (en) Real-time three-dimensional reconstruction method based on depth map
CN108810571B (en) Method and apparatus for encoding and decoding two-dimensional point clouds
JP2022542419A (en) Mesh compression via point cloud representation
JP2023514853A (en) projection-based mesh compression
CN110751684B (en) Object three-dimensional reconstruction method based on depth camera module
CN112017228B (en) Method and related equipment for three-dimensional reconstruction of object
RU2767771C1 (en) Method and equipment for encoding/decoding point cloud representing three-dimensional object
CN102447925A (en) Method and device for synthesizing virtual viewpoint image
CN114930395A (en) Context determination of a pattern of planes in octree-based point cloud codec
US11908169B2 (en) Dense mesh compression
CN103024421A (en) Method for synthesizing virtual viewpoints in free viewpoint television
CN106162140A (en) The compression method of a kind of panoramic video and device
CN105828081A (en) Encoding method and encoding device
CN107203962B (en) Method for making pseudo-3D image by using 2D picture and electronic equipment
CN116363290A (en) Texture map generation method for large-scale scene three-dimensional reconstruction
GB2551426A (en) Hardware optimisation for generating 360° images
JP2023515602A (en) Method and apparatus for point cloud coding
Lin et al. Fast multi-view image rendering method based on reverse search for matching
TW202036483A (en) Patch extension method, encoder and decoder
CN110111380A (en) 3D rendering transmission and method for reconstructing based on depth camera
KR102681496B1 (en) Method and apparatus for converting image
US20210217229A1 (en) Method and apparatus for plenoptic point clouds generation
JP2004234476A (en) Image data encoding method, image data decoding method, and image data decoding device
KR102417480B1 (en) Method of compressing volumetric contents and apparatus performing the same
CN113409436B (en) Volume rendering method for diamond pixel arrangement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant