CN107862733B - Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm

Info

Publication number
CN107862733B
Authority
CN
China
Prior art keywords
voxel
sight line
sdf
dimensional
distance value
Prior art date
Legal status
Active
Application number
CN201711087652.3A
Other languages
Chinese (zh)
Other versions
CN107862733A (en)
Inventor
周余
章坚
于耀
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201711087652.3A priority Critical patent/CN107862733B/en
Publication of CN107862733A publication Critical patent/CN107862733A/en
Application granted granted Critical
Publication of CN107862733B publication Critical patent/CN107862733B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/08: Volume rendering
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging

Abstract

The invention discloses a method and a system for real-time three-dimensional reconstruction of large-scale scenes based on a sight line updating algorithm, belonging to the fields of computer vision and robotics. The invention addresses the problem that common algorithms cannot reconstruct the large-scale point cloud data of a lidar in real time: sight lines are generated from the current three-dimensional point cloud and the reconstruction data is updated along them, achieving real-time reconstruction. The method mainly comprises acquiring the three-dimensional point cloud and calculating sensor extrinsic data, calculating the signed distance values of the implicit surface based on the sight line algorithm, weighted fusion of the signed distance values, and volume rendering and storage of the volume data with real-time display of the reconstruction effect. By introducing the sight line updating algorithm to update the implicit surface in real time, the method realizes real-time reconstruction of large-scale scenes from depth sensors such as lidar, with a great advantage in speed and good reconstruction quality.

Description

Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm
Technical Field
The invention belongs to the fields of computer vision and robotics, and relates to a large-scale scene real-time three-dimensional reconstruction method and system based on a sight line updating algorithm.
Background
Real-time three-dimensional reconstruction refers to the process of processing three-dimensional point cloud or image data in real time and generating a three-dimensional model. Three-dimensional reconstruction plays an important role in the fields of computer vision and robotics and is the basis of applications such as virtual reality, intelligent monitoring and robot path planning. Because a lidar can collect hundreds of thousands of points per second, the data volume is huge and the collected data contains considerable noise, which makes reconstructing a three-dimensional model of a large-scale scene in real time very challenging.
Three-dimensional reconstruction falls mainly into two categories: reconstruction based on camera images and reconstruction based on lidar. Image-based three-dimensional reconstruction mainly comprises the steps of image acquisition, camera calibration, feature point extraction, feature point matching and surface reconstruction. Its accuracy and robustness in feature point matching are poor, and it cannot achieve real-time three-dimensional reconstruction.
Current lidar-based reconstruction algorithms mainly simplify the point cloud and then triangulate it; they can hardly update the surface model incrementally and cannot process every frame of three-dimensional point cloud data in real time. In addition, because the pose information of the sensor at each frame is not fully exploited, the reconstruction quality is low. A new method is therefore needed to solve these problems in the prior art.
Disclosure of Invention
The invention aims to solve the above problems in the prior art by providing a method and a system for real-time three-dimensional reconstruction of large-scale scenes based on a sight line updating algorithm.
The invention provides a large-scale scene real-time three-dimensional reconstruction method based on a sight line updating algorithm, comprising the following steps:
Step 1: acquiring a three-dimensional point cloud and calculating sensor extrinsic data;
Step 2: calculating signed distance values of the implicit surface based on a sight line algorithm;
Step 3: weighted fusion of the signed distance values;
Step 4: volume rendering and storage of the volume data, displaying the reconstruction effect in real time.
Further, step 1 specifically comprises: the lidar acquires three-dimensional point cloud data of the current environment, feature point matching is performed on the radar point cloud, and, where a global positioning system and an inertial navigation unit are available, their measurements are further combined to calculate the current sensor extrinsic data.
Further, step 2 specifically comprises: voxelizing the space within a certain range centered on the current spatial position of the lidar. The volume data structure is:

(p_{k,n}, d_{k,n}, w_{k,n})

where p_{k,n} denotes the three-dimensional spatial coordinates of the n-th voxel in the k-th frame volume data structure, and d_{k,n} and w_{k,n} are the signed distance value and the weight of that voxel; initially d_{k,n} is set to NaN, indicating an unused voxel, and w_{k,n} is set to 0.
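By way of illustration only, the volume data structure can be held in a sparse hash map keyed by integer voxel coordinates; the storage layout below is an assumption of this sketch, not prescribed by the invention:

```python
import numpy as np

# Minimal sketch of the (p, d, w) volume data structure, assuming a
# sparse hash map keyed by integer voxel coordinates (the key itself
# plays the role of p_{k,n}). Each entry holds (d, w); d starts as NaN
# ("unused") with weight 0, matching the initialization above.
volume = {}  # (i, j, k) -> (d, w)

def get_voxel(volume, idx):
    """Return (d, w) for voxel index idx, defaulting to (NaN, 0)."""
    return volume.get(idx, (np.nan, 0.0))
```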
The signed distance values of the relevant voxels are computed in parallel with the sight line algorithm. For the k-th frame point cloud, each three-dimensional point in the cloud is taken as an end point and the current sensor spatial position o as the starting point, and the sight line direction vector is computed:

v_{k,n} = (x_{k,n} - o) / ||x_{k,n} - o||_2

where x_{k,n} ∈ X_k is the three-dimensional coordinate of the n-th point in the k-th frame point cloud X_k, and o is the current three-dimensional coordinate of the lidar. Along the sight line, m points are taken on each side of x_{k,n}, where m ∈ Z+ (Z+ is the set of positive integers; m is a system parameter, typically set between 1 and 10), giving the three-dimensional coordinates r of the 2m points correlated with each x_{k,n}. Let the unit side length of a voxel be l (a system parameter), l ∈ R+ (R+ is the set of positive reals): a larger l yields fewer details after three-dimensional reconstruction, while a smaller l preserves more details. Each of these 2m+1 points (x_{k,n} together with the points r) is rounded in the x, y and z directions in units of the voxel side length l to obtain the three-dimensional coordinate p_{k,n} of a relevant voxel along the sight line, and the signed distance value sdf_{k,n} is computed, giving (p_{k,n}, sdf_{k,n}) for that voxel.
sdf_{k,n} is computed as:

sdf_{k,n} = ||x_{k,n} - p_{k,n}||_2
r is computed as follows. Let v_x, v_y and v_z denote the components of the vector v_{k,n} in the x, y and z directions; the component with the largest magnitude is selected for normalization, giving

v'_{k,n} = v_{k,n} / max(|v_x|, |v_y|, |v_z|)

r_t = x_{k,n} + t · l · v'_{k,n}

where t is an equation variable taking the values -m, -m+1, ..., 0, ..., m; adding or subtracting these multiples of the step to x_{k,n} (t = 0 recovering x_{k,n} itself) yields the correlated points r.
The sight line algorithm above can run either serially or in parallel. In the parallel case, the computed sdf_{k,n} is not written directly into the current volume data structure; instead (p_{k,n}, sdf_{k,n}) is saved, and step 3 is performed once the parallel computation completes. When the sight line algorithm runs serially, step 3 can be performed each time a (p_{k,n}, sdf_{k,n}) is obtained.
Further, step 3 specifically comprises: for each (p_{k,n}, sdf_{k,n}) saved in step 2, p_{k,n} is used to index the corresponding d_{k,n}, w_{k,n} in the current volume data structure. The new signed distance value (d_{k,n})_new is obtained by weighted averaging:

(w_{k,n})_new = w_{k,n} + 1

d_max = m · l

sdf_{k,n} ← min(sdf_{k,n}, d_max)

(d_{k,n})_new = (w_{k,n} · d_{k,n} + sdf_{k,n}) / (w_{k,n} + 1)

where m is the system parameter set in step 2 and l is the voxel side length set in step 2. The new triple (p_{k,n}, (d_{k,n})_new, (w_{k,n})_new) of the corresponding voxel then directly replaces the original (p_{k,n}, d_{k,n}, w_{k,n}), realizing the update. Once weighted fusion has been completed for every sight line of a frame of point cloud, the update of the volume data for that frame is complete.
Further, step 4 specifically comprises: displaying the reconstructed surface in real time from the current volume data with a volume rendering method, such as a ray casting algorithm.
According to another aspect of the invention, a large-scale scene real-time three-dimensional reconstruction system based on a sight line updating algorithm is provided, comprising the following modules:
a data acquisition module, which mainly acquires the current point cloud data through a lidar;
a computer processing module, comprising: a. a sensor extrinsic parameter calculation submodule; b. a sight line algorithm signed distance value calculation submodule; c. a signed distance value weighted fusion submodule; d. a volume data rendering submodule; and e. a volume data storage submodule.
a. The sensor extrinsic parameter calculation submodule calculates the extrinsic data of the lidar at the current moment from a global positioning system, an inertial navigation unit (IMU) or radar point cloud feature point matching.
b. The sight line algorithm signed distance value calculation submodule generates, in parallel, one sight line per point of the current frame point cloud, calculates the corresponding signed distance values, stores them in the volume data, and records the corresponding voxel coordinates.
c. The signed distance value weighted fusion submodule indexes the relevant voxels by the voxel coordinates calculated in the previous submodule and fuses the newly calculated signed distance values with those stored in the volume data by weighted averaging.
d. The volume data rendering submodule displays the current reconstruction result in real time with a volume rendering algorithm such as ray casting, after the signed distance fusion of the current frame is completed.
e. The volume data storage submodule saves the current volume data structure to a file system when the sensor extrinsic parameters change significantly.
The method can process point cloud data containing hundreds of thousands of points per frame in real time and can be applied to three-dimensional reconstruction of large scenes; the weighted fusion offers a great advantage in suppressing data noise. Because the extrinsic data of the current sensor is exploited during fusion, the quality of surface reconstruction is improved. With real-time reconstruction, regions of poor quality can be scanned and fused repeatedly according to the observed reconstruction effect. The invention thus achieves real-time, high-quality reconstruction of complex large scenes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it. In the drawings:
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a hardware composition diagram of the system of the present invention.
Detailed Description
As shown in FIG. 1, the method for real-time three-dimensional reconstruction of a large-scale scene based on a sight line updating algorithm of the present invention comprises the following steps:
Step 1: calculating the sensor extrinsic parameters.
Taking the absolute spatial coordinates obtained from the current global positioning system as the initial value, the acceleration obtained from an inertial navigation unit (IMU) is integrated to calculate the current sensor extrinsic data; alternatively, feature points of each frame of radar point cloud are extracted and matched to calculate the extrinsic parameters of the lidar at that moment.
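A minimal sketch of this step, assuming the IMU accelerations are already gravity-compensated and expressed in the world frame (details the description leaves open), could integrate twice from the GPS fix:

```python
import numpy as np

def integrate_position(p0, v0, accels, dt):
    """Dead-reckon the sensor position between GPS fixes.

    p0     -- initial position from the global positioning system, shape (3,)
    v0     -- initial velocity, shape (3,)
    accels -- per-sample IMU accelerations, shape (N, 3)
    dt     -- IMU sampling interval in seconds
    """
    p, v = p0.astype(float).copy(), v0.astype(float).copy()
    for a in accels:
        v += a * dt   # first integration: acceleration -> velocity
        p += v * dt   # second integration: velocity -> position
    return p
```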
Step 2: the sight line algorithm calculates signed distance values.
The space within a certain range centered on the current spatial position of the lidar is voxelized. The volume data structure is:

(p_{k,n}, d_{k,n}, w_{k,n})

where p_{k,n} denotes the three-dimensional spatial coordinates of the n-th voxel in the k-th frame volume data structure, and d_{k,n} and w_{k,n} are the signed distance value and the weight of that voxel; initially d_{k,n} is set to NaN, indicating an unused voxel, and w_{k,n} is set to 0.
The signed distance values of the relevant voxels are computed in parallel with the sight line algorithm. For the k-th frame point cloud, each three-dimensional point in the cloud is taken as an end point and the current sensor spatial position o as the starting point, and the sight line direction vector is computed:

v_{k,n} = (x_{k,n} - o) / ||x_{k,n} - o||_2

where x_{k,n} ∈ X_k is the three-dimensional coordinate of the n-th point in the k-th frame point cloud X_k, and o is the current three-dimensional coordinate of the lidar. Along the sight line, m points are taken on each side of x_{k,n}, where m ∈ Z+ (Z+ is the set of positive integers; m is a system parameter, typically set between 1 and 10), giving the three-dimensional coordinates r of the 2m points correlated with each x_{k,n}. Let the unit side length of a voxel be l (a system parameter), l ∈ R+ (R+ is the set of positive reals): a larger l yields fewer details after three-dimensional reconstruction, while a smaller l preserves more details. Each of these 2m+1 points (x_{k,n} together with the points r) is rounded in the x, y and z directions in units of the voxel side length l to obtain the three-dimensional coordinate p_{k,n} of a relevant voxel along the sight line, and the signed distance value sdf_{k,n} is computed, giving (p_{k,n}, sdf_{k,n}) for that voxel.
sdf_{k,n} is computed as:

sdf_{k,n} = ||x_{k,n} - p_{k,n}||_2
r is computed as follows. Let v_x, v_y and v_z denote the components of the vector v_{k,n} in the x, y and z directions; the component with the largest magnitude is selected for normalization, giving

v'_{k,n} = v_{k,n} / max(|v_x|, |v_y|, |v_z|)

r_t = x_{k,n} + t · l · v'_{k,n}

where t is an equation variable taking the values -m, -m+1, ..., 0, ..., m; adding or subtracting these multiples of the step to x_{k,n} (t = 0 recovering x_{k,n} itself) yields the correlated points r.
The sight line algorithm above can run either serially or in parallel. In the parallel case, the computed sdf_{k,n} is not written directly into the current volume data structure; instead (p_{k,n}, sdf_{k,n}) is saved, and step 3 is performed once the parallel computation completes. When the sight line algorithm runs serially, the fusion of the signed distance values can be performed each time a (p_{k,n}, sdf_{k,n}) is obtained.
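The following sketch illustrates our reading of the sight line algorithm in Python; the voxel-centre convention and the sign of the distance (positive in free space in front of the surface) are assumptions of this sketch, since the description specifies only the magnitude ||x_{k,n} - p_{k,n}||_2:

```python
import numpy as np

def line_of_sight_sdf(points, o, l, m):
    """For each point x of the frame, sample 2m+1 positions along the
    sight line from sensor origin o, stepped so each unit of t advances
    one voxel along the dominant axis, snap them to the grid of side
    length l, and emit (voxel_index, sdf) pairs.
    """
    out = []
    for x in points:
        v = x - o
        v = v / np.linalg.norm(v)              # sight line direction
        step = v / np.max(np.abs(v))           # normalise by largest component
        for t in range(-m, m + 1):
            r = x + t * l * step               # sample point along the sight line
            idx = np.floor(r / l).astype(int)  # round to the voxel grid
            p = (idx + 0.5) * l                # assumed: voxel centre as p
            diff = x - p
            # magnitude per the description; the sign is our assumption
            sdf = np.sign(np.dot(diff, v)) * np.linalg.norm(diff)
            out.append((tuple(idx), sdf))
    return out
```

In the parallel variant, the (voxel_index, sdf) pairs would be collected per frame and only merged into the volume in step 3, exactly as described above.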
Step 3: weighted fusion of the signed distance values.
For each (p_{k,n}, sdf_{k,n}) saved in step 2, p_{k,n} is used to index the corresponding d_{k,n}, w_{k,n} in the current volume data structure. The new signed distance value (d_{k,n})_new is obtained by weighted averaging:

(w_{k,n})_new = w_{k,n} + 1

d_max = m · l

sdf_{k,n} ← min(sdf_{k,n}, d_max)

(d_{k,n})_new = (w_{k,n} · d_{k,n} + sdf_{k,n}) / (w_{k,n} + 1)

where m is the system parameter set in step 2 and l is the voxel side length set in step 2. The new triple (p_{k,n}, (d_{k,n})_new, (w_{k,n})_new) of the corresponding voxel then directly replaces the original (p_{k,n}, d_{k,n}, w_{k,n}), realizing the update. Once weighted fusion has been completed for every sight line of a frame of point cloud, the update of the volume data for that frame is complete.
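A sketch of the weighted fusion over the sparse volume of the earlier sketch; the symmetric truncation at d_max is our reconstruction of the update rule:

```python
import numpy as np

def fuse_frame(volume, samples, m, l):
    """Running weighted average of signed distance values per voxel.

    volume  -- dict mapping voxel index -> (d, w), d = NaN when unused
    samples -- iterable of (voxel_index, sdf) pairs from the sight lines
    """
    d_max = m * l                                 # truncation distance
    for idx, sdf in samples:
        sdf = float(np.clip(sdf, -d_max, d_max))  # assumed truncation at d_max
        d, w = volume.get(idx, (np.nan, 0.0))
        if w == 0 or np.isnan(d):
            volume[idx] = (sdf, 1.0)              # first observation of the voxel
        else:
            volume[idx] = ((w * d + sdf) / (w + 1.0), w + 1.0)
    return volume
```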
Step 4: volume data rendering.
After the signed distance fusion of the current frame is completed, the current reconstruction result is displayed in real time with a volume rendering algorithm such as ray casting. Guided by the currently displayed reconstruction, the system can rescan specified regions to improve the reconstruction effect.
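As an illustrative sketch of the display step, a ray casting pass over a dense grid could report, per viewing ray, the depth of the first positive-to-negative zero crossing (the reconstructed surface); the grid layout and parameter names are assumptions of this sketch:

```python
import numpy as np

def raycast_depth(tsdf, l, origin, directions, t_max, dt):
    """March each viewing ray through a dense TSDF grid `tsdf`
    (shape (X, Y, Z), NaN = unused voxel) and return per-ray depth
    of the first positive-to-negative sign change."""
    depths = np.full(len(directions), np.nan)
    for i, d in enumerate(directions):
        prev, t = None, 0.0
        while t < t_max:
            idx = np.floor((origin + t * d) / l).astype(int)
            if np.all(idx >= 0) and np.all(idx < tsdf.shape):
                cur = tsdf[tuple(idx)]
                if prev is not None and prev > 0 and cur < 0:
                    depths[i] = t          # surface crossed between samples
                    break
                if not np.isnan(cur):
                    prev = cur             # remember last observed value
            t += dt
    return depths
```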
Step 5: volume data storage.
When the sensor extrinsic parameters change significantly, the volume data structure at that moment is saved to a file system.
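A sketch of the storage criterion, assuming the sparse volume of the earlier sketches; the translation-only threshold and the .npz format are choices of this sketch, since the description says only that the volume is saved when the extrinsic parameters change greatly:

```python
import numpy as np

def save_if_moved(volume, position, last_saved, path, thresh=5.0):
    """Persist the volume once the sensor has moved more than `thresh`
    (in the units of `position`) from the last saved position."""
    if np.linalg.norm(position - last_saved) > thresh:
        keys = list(volume.keys())
        np.savez(path,
                 indices=np.array(keys),                      # (N, 3) voxel indices
                 values=np.array([volume[k] for k in keys]))  # (N, 2) pairs (d, w)
        return position            # new reference position
    return last_saved
```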
As shown in FIG. 2, the large-scale scene real-time three-dimensional reconstruction system based on a sight line updating algorithm of the present invention comprises the following modules:
a data acquisition module 201, which mainly obtains the three-dimensional point cloud through a lidar, obtains absolute spatial coordinates through a global positioning system, and obtains accelerations through an inertial navigation unit (IMU);
a computer processing module 202, comprising: a. a sensor extrinsic parameter calculation submodule; b. a sight line algorithm signed distance value calculation submodule; c. a signed distance value weighted fusion submodule; d. a volume data rendering submodule; and e. a volume data storage submodule.
a. The sensor extrinsic parameter calculation submodule calculates the extrinsic data of the lidar at the current moment from a global positioning system, an inertial navigation unit (IMU) or radar point cloud feature point matching.
b. The sight line algorithm signed distance value calculation submodule generates, in parallel, one sight line per point of the current frame point cloud, calculates the corresponding signed distance values, stores them in the volume data, and records the corresponding voxel coordinates.
c. The signed distance value weighted fusion submodule indexes the relevant voxels by the voxel coordinates calculated in the previous submodule and fuses the newly calculated signed distance values with those stored in the volume data by weighted averaging.
d. The volume data rendering submodule displays the current reconstruction result in real time with a volume rendering algorithm such as ray casting, after the signed distance fusion of the current frame is completed.
e. The volume data storage submodule saves the current volume data structure to a file system when the sensor extrinsic parameters change significantly.
Those skilled in the art will appreciate that the system architecture and the steps of the present invention described above may be implemented with general-purpose computing devices: they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented with program code executable by a computing device, so that the code can be stored in a storage device and executed by the computing device; alternatively, they may be fabricated as individual integrated circuit modules, or multiple modules or steps may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Although embodiments of the present invention have been shown and described, the above description is intended only to facilitate understanding of the invention, not to limit it. Those skilled in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (3)

1. A large-scale scene real-time three-dimensional reconstruction method based on a sight updating algorithm, characterized by comprising the following steps:
Step 1: acquiring a three-dimensional point cloud and calculating sensor extrinsic data;
Step 2: calculating signed distance values of the implicit surface based on a sight line algorithm; wherein the space within a certain range centered on the current spatial position of the lidar is voxelized, the volume data structure being:

(p_{k,n}, d_{k,n}, w_{k,n})

where p_{k,n} denotes the three-dimensional spatial coordinates of the n-th voxel in the k-th frame volume data structure, d_{k,n} and w_{k,n} are the signed distance value and the weight of that voxel, d_{k,n} is initialized to NaN, indicating an unused voxel, and w_{k,n} is initialized to 0;
the signed distance values of the relevant voxels are computed in parallel with the sight line algorithm: for the k-th frame point cloud, each three-dimensional point in the cloud serves as an end point and the current sensor spatial position o as the starting point, and the sight line direction vector is computed as

v_{k,n} = (x_{k,n} - o) / ||x_{k,n} - o||_2

where x_{k,n} ∈ X_k is the three-dimensional coordinate of the n-th point in the k-th frame point cloud X_k and o is the current three-dimensional coordinate of the lidar; along the sight line, m points are taken on each side of x_{k,n}, m ∈ Z+, Z+ being the set of positive integers, m being a system parameter with setting range 1 to 10, giving the three-dimensional coordinates r of the 2m points correlated with each x_{k,n}; the unit side length of a voxel is l, a system parameter, l ∈ R+, R+ being the set of positive reals, where a larger l yields fewer details after three-dimensional reconstruction and a smaller l preserves more details; the 2m+1 points are rounded in the x, y and z directions in units of the voxel side length l to obtain the three-dimensional coordinate p_{k,n} of a relevant voxel along the sight line, and the signed distance value sdf_{k,n} is computed to obtain (p_{k,n}, sdf_{k,n}) for that voxel;
wherein sdf_{k,n} is computed as:

sdf_{k,n} = ||x_{k,n} - p_{k,n}||_2
and r is computed as follows: let v_x, v_y and v_z denote the components of the vector v_{k,n} in the x, y and z directions; the component with the largest magnitude is selected for normalization, giving

v'_{k,n} = v_{k,n} / max(|v_x|, |v_y|, |v_z|)

r_t = x_{k,n} + t · l · v'_{k,n}

where t is an equation variable taking the values -m, -m+1, ..., 0, ..., m; adding or subtracting these multiples of the step to x_{k,n} yields the correlated points r;
the sight line algorithm can run serially or in parallel; in the parallel case, the computed sdf_{k,n} is not written directly into the current volume data structure, and (p_{k,n}, sdf_{k,n}) is saved instead; after the parallel computation completes, step 3 is performed; when the sight line algorithm runs serially, the fusion of the signed distance values can be performed each time a (p_{k,n}, sdf_{k,n}) is obtained;
Step 3: weighted fusion of the signed distance values;
Step 4: volume rendering and storage of the volume data, displaying the reconstruction effect in real time.
2. The implicit surface updating algorithm based on three-dimensional point cloud according to claim 1, wherein step 3 further specifically comprises: for each (p_{k,n}, sdf_{k,n}) saved in step 2, p_{k,n} is used to index the corresponding d_{k,n}, w_{k,n} in the current volume data structure; the new signed distance value (d_{k,n})_new is obtained by weighted averaging:

(w_{k,n})_new = w_{k,n} + 1

d_max = m · l

sdf_{k,n} ← min(sdf_{k,n}, d_max)

(d_{k,n})_new = (w_{k,n} · d_{k,n} + sdf_{k,n}) / (w_{k,n} + 1)

where m is the system parameter set in step 2 and l is the voxel side length set in step 2; the new triple (p_{k,n}, (d_{k,n})_new, (w_{k,n})_new) of the corresponding voxel directly replaces the original (p_{k,n}, d_{k,n}, w_{k,n}), realizing the update; once weighted fusion has been completed for every sight line of a frame of point cloud, the update of the volume data for that frame is complete.
3. A real-time surface reconstruction system based on lidar, characterized by comprising the following modules:
a data acquisition module, comprising a lidar for obtaining three-dimensional point cloud data of the surrounding environment, a global positioning system for obtaining spatial coordinates, and an inertial navigation unit for obtaining the current acceleration;
a computer processing module, comprising a sensor extrinsic parameter calculation submodule, a sight line algorithm signed distance value calculation submodule, a signed distance value weighted fusion submodule, a volume data rendering submodule and a volume data storage submodule;
the sight line algorithm signed distance value calculation submodule being configured to perform the following steps:
voxelizing the space within a certain range centered on the current spatial position of the lidar, the volume data structure being:

(p_{k,n}, d_{k,n}, w_{k,n})

where p_{k,n} denotes the three-dimensional spatial coordinates of the n-th voxel in the k-th frame volume data structure, d_{k,n} and w_{k,n} are the signed distance value and the weight of that voxel, d_{k,n} is initialized to NaN, indicating an unused voxel, and w_{k,n} is initialized to 0;
computing the signed distance values of the relevant voxels in parallel with the sight line algorithm: for the k-th frame point cloud, each three-dimensional point in the cloud serves as an end point and the current sensor spatial position o as the starting point, and the sight line direction vector is computed as

v_{k,n} = (x_{k,n} - o) / ||x_{k,n} - o||_2

where x_{k,n} ∈ X_k is the three-dimensional coordinate of the n-th point in the k-th frame point cloud X_k and o is the current three-dimensional coordinate of the lidar; along the sight line, m points are taken on each side of x_{k,n}, m ∈ Z+, Z+ being the set of positive integers, m being a system parameter with setting range 1 to 10, giving the three-dimensional coordinates r of the 2m points correlated with each x_{k,n}; the unit side length of a voxel is l, a system parameter, l ∈ R+, R+ being the set of positive reals, where a larger l yields fewer details after three-dimensional reconstruction and a smaller l preserves more details; the 2m+1 points are rounded in the x, y and z directions in units of the voxel side length l to obtain the three-dimensional coordinate p_{k,n} of a relevant voxel along the sight line, and the signed distance value sdf_{k,n} is computed to obtain (p_{k,n}, sdf_{k,n}) for that voxel;
wherein sdf_{k,n} is computed as:

sdf_{k,n} = ||x_{k,n} - p_{k,n}||_2

and r is computed as follows: let v_x, v_y and v_z denote the components of the vector v_{k,n} in the x, y and z directions; the component with the largest magnitude is selected for normalization, giving

v'_{k,n} = v_{k,n} / max(|v_x|, |v_y|, |v_z|)

r_t = x_{k,n} + t · l · v'_{k,n}

where t is an equation variable taking the values -m, -m+1, ..., 0, ..., m; adding or subtracting these multiples of the step to x_{k,n} yields the correlated points r;
the sight line algorithm can run serially or in parallel; in the parallel case, the computed sdf_{k,n} is not written directly into the current volume data structure, and (p_{k,n}, sdf_{k,n}) is saved instead; after the parallel computation completes, the weighted fusion is performed; when the sight line algorithm runs serially, the fusion of the signed distance values can be performed each time a (p_{k,n}, sdf_{k,n}) is obtained.
CN201711087652.3A 2017-11-02 2017-11-02 Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm Active CN107862733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711087652.3A CN107862733B (en) 2017-11-02 2017-11-02 Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm

Publications (2)

Publication Number Publication Date
CN107862733A CN107862733A (en) 2018-03-30
CN107862733B true CN107862733B (en) 2021-10-26

Family

ID=61699928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711087652.3A Active CN107862733B (en) 2017-11-02 2017-11-02 Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm

Country Status (1)

Country Link
CN (1) CN107862733B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921027A (en) * 2018-06-01 2018-11-30 杭州荣跃科技有限公司 A kind of running disorder object recognition methods based on laser speckle three-dimensional reconstruction
CN109544638B (en) * 2018-10-29 2021-08-03 浙江工业大学 Asynchronous online calibration method for multi-sensor fusion
CN111655542A (en) * 2019-04-23 2020-09-11 深圳市大疆创新科技有限公司 Data processing method, device and equipment and movable platform
CN110097582B (en) * 2019-05-16 2023-03-31 广西师范大学 Point cloud optimal registration and real-time display system and working method
CN114119839B (en) * 2022-01-24 2022-07-01 阿里巴巴(中国)有限公司 Three-dimensional model reconstruction and image generation method, equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937579A (en) * 2010-09-20 2011-01-05 南京大学 Method for creating three-dimensional surface model by using perspective sketch
WO2015102637A1 (en) * 2014-01-03 2015-07-09 Intel Corporation Real-time 3d reconstruction with a depth camera
CN104574263A (en) * 2015-01-28 2015-04-29 湖北科技学院 Quick three-dimensional ultrasonic reconstruction and display method on basis of GPU (graphics processing unit)
CN105654492A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Erik Bylow et al., "Real-Time Camera Tracking and 3D Reconstruction Using Signed Distance Functions", RSS (2013), 2013-06-30, pp. 1-8 *

Also Published As

Publication number Publication date
CN107862733A (en) 2018-03-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant