CN115512054A - Method for constructing three-dimensional space-time continuous point cloud map - Google Patents

Method for constructing three-dimensional space-time continuous point cloud map Download PDF

Info

Publication number
CN115512054A
CN115512054A (application number CN202211081628.XA)
Authority
CN
China
Prior art keywords
point cloud
point
laser
constructing
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211081628.XA
Other languages
Chinese (zh)
Inventor
陈驰
丛阳滋
杨必胜
张飞
周剑
吴唯同
马瑞琪
徐宇航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202211081628.XA
Publication of CN115512054A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 — Geographic models

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a method for constructing a three-dimensional space-time continuous point cloud map, which comprises the following steps: extracting laser feature points of structured surfaces in the scene from the lidar data; constructing an error ellipse model from the distribution characteristics of the feature points using point-to-line and point-to-plane metrics, and screening out reliable feature associations between feature points according to the error ellipse model; calculating optimal pose information from the feature points extracted in step 1 and the reliable feature associations obtained in step 2, and deriving an initial transformation between the point cloud map coordinate system and the LiDAR reference coordinate system from the optimal pose information; taking the initial transformation obtained in step 3 as an initial value and performing continuous-time motion modeling with a spline model to achieve optimal registration between the LiDAR point cloud and the point cloud map; and performing a neighborhood search on the registered laser points. The method reconstructs the three-dimensional point cloud map of the environment more accurately and compensates for the distortion of laser point cloud data caused by motion.

Description

Method for constructing three-dimensional space-time continuous point cloud map
Technical Field
The invention belongs to the technical field of positioning and mapping of radar, and particularly relates to a method for constructing a three-dimensional space-time continuous point cloud map.
Background
As LiDAR equipment becomes lighter, smaller, and cheaper, and as point cloud acquisition platforms such as UAVs and robots diversify, acquiring high-precision three-dimensional laser point cloud data is increasingly convenient and accessible. Three-dimensional point cloud maps have gradually become an important form of expressing the real world, and their derivatives, such as high-precision maps, play an important role in emerging fields such as autonomous driving and next-generation basic surveying and mapping. The core of laser point cloud data processing lies in high-precision real-time pose estimation, and SLAM (simultaneous localization and mapping) research using LiDAR as the main data source has attracted the attention of academia and industry in recent years (Bisheng et al., 2017; Schwarz, 2010). Most current SLAM research is oriented toward robot navigation and perception applications, and the point cloud maps it produces (such as octree point cloud maps and surfel maps) can hardly meet the requirements of basic surveying and mapping and extended applications in terms of spatio-temporal resolution and continuity.
LOAM (Zhang and Singh, 2014) established a feature-association-based laser SLAM framework and spawned variants such as A-LOAM, LeGO-LOAM, and F-LOAM. It uses the curvature of laser scan lines to extract edge and planar feature points in the scene and solves for pose and map in separate threads using these features. LeGO-LOAM (Shan and Englot, 2018) pre-segments raw laser frames into ground and object point clouds, improving feature reliability. Deschaud (2018) screens robust feature points by designing a metric model, improving the robustness of pose estimation. To eliminate the non-rigid deformation caused by motion distortion in a point cloud, the distortion must be estimated over multiple frames (Neuhaus et al., 2018). Dellenbach et al. (2021) estimate the start and end poses of a laser frame simultaneously, adding a velocity-change assumption to exploit motion continuity. To reduce the latency introduced by full laser frames, Qu et al. (2021) match segments within a frame, realizing a fast laser odometry. Inspired by visual SLAM work (Whelan et al., 2015), Park et al. (2018, 2021) propose a continuous-time three-dimensional mapping method that fuses the map with surfels, realizing a map-centric dense mapping system by optimizing map details. On the other hand, Lovegrove et al. (2013) first proposed a spline-based incremental Lie-group pose parameterization and a continuous-time SLAM theoretical framework (Droeschel and Behnke, 2018). Sheehan et al. (2013) fit the six degrees of freedom of the pose with Catmull-Rom splines, realizing a continuous-time positioning system.
Another work (2020) decouples the translation part of the pose, models the rotation with quaternions, and combines IMU data to achieve seamless IMU-LiDAR calibration. Furgale et al. (2012) construct an aggregate optimization equation with temporal spline basis functions to obtain continuous-time trajectory estimates, but without optimizing the map. It is therefore necessary to develop a method that can both optimize map construction and generate a high-precision point cloud map.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide a method for constructing a three-dimensional space-time continuous point cloud map that accurately reconstructs the three-dimensional point cloud map of the environment and compensates for the distortion of laser point cloud data caused by motion.
In order to solve the technical problem, the invention adopts the following technical scheme:
a method for constructing a three-dimensional space-time continuous point cloud map comprises the following steps:
step 1, extracting laser characteristic points of a structured surface in a scene according to obtained laser radar data;
step 2, constructing an error ellipse model by using a point-to-line and point-to-plane measurement mode according to the distribution characteristics of the feature points, and screening out reliable feature associations among the feature points according to the error ellipse model;
step 3, calculating optimal pose information of the feature points according to the feature points extracted in step 1 and the reliable feature associations obtained in step 2, and calculating the initial transformation between the point cloud map coordinate system and the LiDAR reference coordinate system from the optimal pose information;
step 4, taking the initial transformation obtained in step 3 as an initial value, performing continuous-time motion modeling of the feature points with a spline model to achieve optimal registration between the LiDAR point cloud and the point cloud map, so as to obtain accurate registration parameters for any laser point;
and step 5, performing a neighborhood search on the registered laser points to reduce the spatio-temporal redundancy of the incremental point cloud map construction.
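The five steps above can be sketched as a single processing loop. The minimal Python skeleton below is illustrative only: all function names and stub bodies are our placeholders, not the patent's implementation.

```python
import numpy as np

# Illustrative no-op stubs standing in for steps 1-5 (names are ours, not the patent's).
def extract_structured_features(points):          # step 1: edge/planar feature extraction
    return points

def screen_associations(features, cloud_map):     # step 2: error-ellipsoid screening
    return []

def estimate_pose(features, assoc, pose):         # step 3: iterative least squares
    return pose

def spline_register(features, pose, stamp):       # step 4: continuous-time registration
    return features

def fuse_into_map(cloud_map, registered):         # step 5: neighborhood-searched fusion
    cloud_map.extend(registered.tolist())

def build_spatiotemporal_map(frames):
    """`frames` is a list of (timestamp, Nx3 point array) lidar frames."""
    cloud_map, pose = [], np.eye(4)
    for stamp, points in frames:
        features = extract_structured_features(points)
        assoc = screen_associations(features, cloud_map)
        pose = estimate_pose(features, assoc, pose)
        registered = spline_register(features, pose, stamp)
        fuse_into_map(cloud_map, registered)
    return cloud_map
```

Each stub corresponds to one numbered step; replacing them one by one with the concrete procedures described below yields the full pipeline.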
Further, step 1 specifically includes:
dividing single-frame laser radar data into a plurality of scanning lines according to the vertical angle of a laser point of the single-frame laser radar data;
for each scan line, obtaining the local distribution density of the point cloud near each laser point from the distance value γ of the laser point and the horizontal angular resolution θ of the lidar (adjacent points on a scan line are separated by an arc length of approximately γθ, so the local density is on the order of 1/(γθ)), and voxel-downsampling the point cloud accordingly;
after obtaining scan-line point clouds of approximately uniform spatial resolution, partitioning each single scan-line point cloud with an adaptive distance threshold ε to obtain a point cloud clustering result;
and judging the direction of the continuous vectors of each clustering result, filtering out scattered points with fluctuating direction changes, and re-clustering at points where the direction change exceeds 45°, thereby obtaining an edge feature point set and a planar point set and completing the extraction of the structured feature point cloud.
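The density-aware downsampling sub-step can be illustrated as keeping roughly one point per fixed arc length, using the fact that adjacent points on a scan line are separated by approximately γ·θ. The function name and the `base_spacing` constant below are our own hypothetical choices, not from the patent.

```python
import numpy as np

def adaptive_voxel_downsample(scan_line, h_res_rad, base_spacing=0.05):
    """Downsample one scan line so that point spacing is roughly uniform.
    Adjacent points on a scan line are separated by ~ gamma * theta, so
    nearby (dense) regions are thinned more aggressively than far ones.
    `scan_line`: (N, 3) points ordered by azimuth."""
    gamma = np.linalg.norm(scan_line, axis=1)   # range of each point
    spacing = gamma * h_res_rad                 # ~ local point spacing (arc length)
    keep, acc = [], 0.0
    for i in range(len(scan_line)):
        acc += spacing[i]
        if acc >= base_spacing:                 # emit a point every ~base_spacing of arc
            keep.append(i)
            acc = 0.0
    return scan_line[keep]
```

With a range of 1 m and an angular step of 0.01 rad, roughly every fifth point survives; doubling the range halves the thinning, which is the adaptive behavior the step describes.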
Further, step 2 specifically includes:
according to the distribution characteristics of the extracted structured feature points, using point cloud neighborhood distribution information to find regions of strong linearity or planarity, and constructing error vectors v_m, v_n and their corresponding weights w_m, w_n;
performing spatial clustering and SVD (singular value decomposition) on the error vectors v_m, v_n and their weights w_m, w_n to obtain three-dimensional eigenvalues and corresponding eigenvectors;
constructing an error ellipsoid taking the obtained three-dimensional eigenvalues as radii (r_x, r_y, r_z) and the eigenvectors as radius directions V;
and iteratively updating the parameters of the error ellipsoid so that it has an optimal contribution value τ in each direction, and searching for reliable feature associations according to the contribution value τ.
Further, the specific method in step 3 is as follows:
for each frame of laser point cloud, extracting the corresponding structured feature points with the method of step 1 and, using the pose obtained from the previous frame of laser point cloud, transforming the feature points into the map coordinate system; screening out reliable feature associations with the method of step 2, thereby constructing an iterative least-squares optimization equation whose error is continuously minimized to obtain the optimal pose information of the feature points; and finally, according to the optimal pose information, adding the feature points into the map again to obtain the initial transformation between the point cloud map coordinate system and the LiDAR reference coordinate system, which provides control information for subsequent lidar data.
Further, the specific method in step 4 is as follows:
extracting the structured feature point clouds within a certain time window, obtaining their initially estimated pose information with the method of step 3, fitting the rotation and translation parts of the poses separately with spline models, obtaining the corresponding pose on the spline according to the timestamp of each laser point, constructing an error term for the obtained poses with the method of step 2, and optimizing this error term, thereby obtaining the optimized pose of each feature point cloud within the time window and registering it to the point cloud map.
Further, step 5 specifically comprises:
while performing map fusion and update, using a spatial neighborhood search to analyze the reliability of the point cloud within the neighborhood: if a laser point already exists in the neighborhood, only the point with the minimum uncertainty is kept; if none exists, the neighborhood information is updated directly.
Compared with the prior art, the invention has the following beneficial effects: the method takes unmanned-vehicle lidar data as its research object, formulates a structured feature extraction strategy according to the characteristics of that data, screens reliable feature associations with an error ellipse, and adopts a continuous-time non-rigid point cloud fusion algorithm, thereby completing the spatio-temporally continuous construction of three-dimensional point cloud maps of indoor and outdoor scenes for the unmanned vehicle. The method can estimate the motion attitude and position of a small unmanned vehicle in real time and compensates well for the distortion of the laser point cloud data caused by motion, so the three-dimensional point cloud map of the environment can be reconstructed more accurately while guaranteeing the spatio-temporal continuity of map points.
Drawings
FIG. 1 is a flow chart of a three-dimensional space-time continuous point cloud map construction method based on laser radar data according to an embodiment of the invention;
FIG. 2 is a diagram illustrating structured feature point extraction according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an error ellipse-based structured feature point association screening according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of continuous-time non-rigid point cloud registration based on spline lines according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the construction of a spatio-temporal continuous three-dimensional point cloud map according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the following embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention is further illustrated by the following examples, which are not to be construed as limiting the invention.
As shown in FIG. 1, the invention discloses a method for constructing a three-dimensional space-time continuous point cloud map, which comprises the following steps:
step 1, extracting laser characteristic points of a structured surface in a scene according to obtained laser radar data;
In this embodiment, unmanned-vehicle lidar data is used as the research object. For the obtained lidar data, this step mainly adopts four sub-steps to extract the laser feature points: scan line extraction, scan line downsampling, scan line segmentation, and incremental vector clustering, as shown in Fig. 2. Specifically, a single frame of lidar data is first divided into several scan lines according to the vertical angle of each laser point; for example, a Velodyne 16-line lidar divides a raw laser frame into 16 scan lines. Each scan line is then processed in the same way. Because the distribution density of the structured feature points changes markedly with distance from the lidar, a scan-line point cloud of more consistent spatial distribution is obtained from the distance value γ of each laser point and the horizontal angular resolution θ of the lidar: the local distribution density of the point cloud near the laser point (Fig. 2) is estimated, and the point cloud is voxel-downsampled adaptively, i.e., different regions use different voxel grid resolutions. After obtaining scan-line point clouds of approximately uniform spatial resolution, each single scan-line point cloud is partitioned with an adaptive distance threshold ε (Fig. 2) to obtain the point cloud clustering result. For example, let P_{n-1}, P_n be two adjacent points on the same scan line, γ_{n-1}, γ_n their distances to the origin, Δθ the angle between them (Fig. 2), and λ the extreme angle value (typically 10°); ε is then calculated as

ε = γ_{n-1} · sin(Δθ) / sin(λ)

If the Euclidean distance between the two points exceeds this value, i.e., ‖P_{n-1} − P_n‖ > ε, the two points are split into different segments; otherwise they are merged into the same cluster. Finally, for each clustering result, the direction of the continuous vectors is judged, scattered points with fluctuating direction changes are filtered out, and re-clustering is performed where the direction change is large (>45°), yielding an edge feature point set and a planar point set (Fig. 2) and completing the extraction of the structured feature point cloud.
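A compact sketch of the scan-line segmentation step, assuming the adaptive threshold takes the form ε = γ_{n-1}·sin(Δθ)/sin(λ) stated above; the function name and test geometry are illustrative.

```python
import numpy as np

def segment_scan_line(ranges, delta_theta, lam=np.radians(10.0)):
    """Split one scan line into clusters with an adaptive distance threshold.
    `ranges`: consecutive range values gamma_n along the scan line;
    `delta_theta`: angular step between adjacent points;
    `lam`: extreme angle value (typically 10 degrees)."""
    labels = [0]
    for n in range(1, len(ranges)):
        g_prev, g_cur = ranges[n - 1], ranges[n]
        # Euclidean distance between consecutive points (law of cosines)
        dist = np.sqrt(g_prev**2 + g_cur**2 - 2 * g_prev * g_cur * np.cos(delta_theta))
        eps = g_prev * np.sin(delta_theta) / np.sin(lam)   # adaptive threshold
        labels.append(labels[-1] + 1 if dist > eps else labels[-1])
    return labels
```

A range jump from 1 m to 5 m at a 0.005 rad step produces a gap of about 4 m, far above the ~0.03 m threshold, so the line splits into two clusters there while each flat run stays together.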
Step 2, constructing an error ellipse model by using a point-to-line and point-to-plane measurement mode according to the distribution characteristics of the feature points, and screening out reliable feature association among the feature points according to the error ellipse model;
In this step, according to the distribution characteristics of the extracted structured feature points, point cloud neighborhood distribution information is used to find regions of strong linearity or planarity, and the error vectors v_m, v_n and their corresponding weights w_m, w_n are constructed as follows:

v_m = P_⊥ − P_m,  w_m = Φ(‖P_⊥ − P_m‖)

v_n = (nᵀP_n + d)·n,  w_n = Φ(|nᵀP_n + d|)

where P_m, P_n are feature points in the current laser frame, P_⊥ is the foot of the perpendicular from P_m onto the line fitted to its neighborhood points (P_i denoting a point on that line), n is the normal vector of the plane fitted to the neighborhood points, nᵀP + d = 0 is the plane equation, and Φ(·) denotes a robust kernel function.
Spatial clustering and SVD (singular value decomposition) are performed on the error vectors and their weights to obtain the three-dimensional eigenvalues and their corresponding eigenvectors:

Ω = (Σ_i w_i v_i v_iᵀ) / (Σ_i w_i)

Ω = U Σ Vᵀ

ṽ_i = Vᵀ v_i

where Ω denotes the covariance matrix of the error vectors, w_i the weights described above, v_i the corresponding error vectors, U and V the result matrices of the SVD, and ṽ_i the error vector in the ellipsoid coordinate system.
Taking the three-dimensional eigenvalues obtained above as radii (r_x, r_y, r_z) and the eigenvectors as radius directions V, an error ellipsoid is constructed, so that each error vector can be mapped into the ellipsoid coordinate system as ṽ_i. From the distance d_i between the ellipsoid and a sphere (whose radius is the lowest contribution threshold) along the direction of the feature vector, the contribution τ of the error vector to the pose optimization problem is estimated:

x = r_x sin θ cos φ,  y = r_y sin θ sin φ,  z = r_z cos θ

d_i = √(x² + y² + z²) − R

τ_i = Φ(d_i)

where θ, φ denote the angle values in the vector's polar coordinate system, x, y, z are the coordinates of the vector on the ellipsoid, and R is the desired sphere radius. Reliable feature associations are found by ranking the contribution values τ of the error vectors (selecting the top 10), and the parameters of the error ellipsoid (its radii and radius directions) are iteratively updated so that the error ellipsoid has high contribution values τ in all directions, as shown in Fig. 3.
Step 3, calculating the optimal pose information of the feature points according to the feature points extracted in the step 1 and the reliable feature association obtained in the step 2, and obtaining an initial conversion relation between a point cloud map coordinate system and a LiDAR reference coordinate system according to the optimal pose information;
In this step, for each frame of laser point cloud, the corresponding structured feature points are extracted with the method of step 1 and, using the pose obtained from the previous frame of laser point cloud, transformed into the map coordinate system. Reliable feature associations are then screened out with the method of step 2, building an iterative least-squares optimization equation whose error is continuously minimized to obtain the optimal pose information of the feature points. Finally, according to the optimal pose information, the feature points are added into the map coordinate system to obtain the initial transformation between the point cloud map coordinate system and the LiDAR reference coordinate system, which provides control information for the lidar data in the following step 4.
In this step, the optimal pose estimation model is realized by iterative least squares:

T* = argmin_T (1/2) ‖f(T)‖²

where f(T) is the nonlinear residual equation with respect to the parameter T. To solve this equation, the objective function is Taylor-expanded near T, retaining only the first-order term:

f(T + ΔT) ≈ f(T) + J(T) ΔT

where J(T) = ∂f(T)/∂T is the first-order partial derivative of the error term with respect to the parameter T. The gradient direction is derived by differentiating the expanded objective and setting the derivative to zero:

J(T)ᵀ f(T) + J(T)ᵀ J(T) ΔT = 0

J(T)ᵀ J(T) ΔT = −J(T)ᵀ f(T)

The variable finally solved for is ΔT, so this is a linear system of equations, called the incremental equations; the process is therefore solved iteratively with the Gauss-Newton or Levenberg-Marquardt method until it converges to the optimal parameter T*.
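The incremental equations can be exercised on any differentiable residual. The self-contained sketch below is a generic Gauss-Newton solver (not the patent's pose residual); the example residual fits a line to three points and converges in a single step.

```python
import numpy as np

def gauss_newton(f, J, T0, iters=20, tol=1e-10):
    """Minimize 0.5 * ||f(T)||^2 by repeatedly solving J^T J dT = -J^T f."""
    T = np.asarray(T0, float)
    for _ in range(iters):
        r, Jm = f(T), J(T)
        dT = np.linalg.solve(Jm.T @ Jm, -Jm.T @ r)  # incremental equations
        T = T + dT
        if np.linalg.norm(dT) < tol:
            break
    return T

# Example residual: fit T = (a, b) so that a*x + b matches y
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])                    # exactly y = 2x + 1
f = lambda T: T[0] * x + T[1] - y                # residual vector
J = lambda T: np.stack([x, np.ones_like(x)], axis=1)  # Jacobian of the residual
T_opt = gauss_newton(f, J, [0.0, 0.0])
```

Because this example residual is linear in T, one Gauss-Newton step lands exactly on (a, b) = (2, 1); nonlinear pose residuals require the full iteration.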
Step 4, using the initial conversion relation obtained in the step 3 as an initial value, and performing continuous time motion modeling by using a spline model to realize optimal registration between the LiDAR point cloud and the point cloud map so as to obtain an accurate registration parameter of any laser point;
In this step, the structured feature point clouds within a certain time window and their initially estimated pose information (from step 3) are extracted. Because the rotation and translation of a pose belong to different algebraic spaces, the rotation and translation parts of the poses are fitted separately with spline models, and the pose corresponding to each laser point is obtained on the spline according to its timestamp. For the poses obtained on the spline, the error term generated during registration is constructed with the method of step 2:

T_{p_η} = y(Θ(η))

E = Σ_η S(y(Θ(η)))

where T_n denotes the n-th pose within the time window, S(·) is the error equation, y(x) is the spline fitting equation, Θ(η) is the timestamp of the point p_η, and T_{p_η} is its corresponding pose.
By minimizing this error term, the optimized pose of each laser point within the time window is obtained and registered to the point cloud map, so that motion distortion in the data is eliminated while the point cloud is registered, correcting the non-rigid deformation in the point cloud map and yielding a continuous-time three-dimensional environment map, as shown in Fig. 4.
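To make the per-point timestamp lookup concrete, the sketch below replaces the spline with simple linear interpolation between two control poses (yaw-only rotation); all names and the linear model are simplifications of the patent's spline fit.

```python
import numpy as np

def pose_at(t, t0, t1, rot0, rot1, trans0, trans1):
    """Interpolate rotation (yaw angle) and translation separately at time t,
    a linear stand-in for querying the fitted splines."""
    a = (t - t0) / (t1 - t0)
    ang = (1 - a) * rot0 + a * rot1                          # rotation part
    trans = (1 - a) * np.asarray(trans0) + a * np.asarray(trans1)  # translation part
    c, s = np.cos(ang), np.sin(ang)
    Rm = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return Rm, trans

def deskew(points, stamps, t0, t1, rot0, rot1, trans0, trans1):
    """Transform each point with the pose at its own timestamp
    (per-point motion compensation)."""
    out = []
    for p, t in zip(points, stamps):
        Rm, tr = pose_at(t, t0, t1, rot0, rot1, trans0, trans1)
        out.append(Rm @ p + tr)
    return np.array(out)
```

A point captured at the start of the window is transformed with the start pose and a point captured at the end with the end pose, which is exactly what removes the non-rigid motion distortion within one frame.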
Step 5, performing a neighborhood search on the registered laser points to reduce the spatio-temporal redundancy of the incremental point cloud map construction;
Because errors exist in the fine registration of the laser point cloud, and considering the incremental update of the point cloud map, the errors of continuously fused point clouds would gradually accumulate in the map. The registered laser points therefore need a neighborhood search to reduce the spatio-temporal redundancy of the incremental point cloud map construction, as shown in Fig. 5. The specific method is as follows: while performing map fusion and update, a spatial neighborhood search is used to analyze the reliability of the point cloud within the neighborhood, as shown in Fig. 5-(a); if a laser point already exists in the neighborhood, only the point with the minimum uncertainty is kept, and if none exists, the neighborhood information is updated directly, as shown in Fig. 5-(b). This way of updating the point cloud map improves the spatial continuity of the three-dimensional mapping and reduces the spatial redundancy of the point cloud map.
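The keep-minimum-uncertainty rule of this step can be sketched with a brute-force neighborhood search; in practice a KD-tree or voxel hash would replace the inner loop, and the point format (x, y, z, uncertainty) and radius are our illustrative choices.

```python
import numpy as np

def update_map(map_points, new_points, radius=0.1):
    """Incremental map update: for each registered point, search its
    neighborhood in the map; if a neighbor exists, keep only the candidate
    with the smaller uncertainty, otherwise insert the point directly.
    Points are (x, y, z, uncertainty) rows."""
    pts = list(map_points)
    for p in new_points:
        p = np.asarray(p, float)
        handled = False
        for i, q in enumerate(pts):
            if np.linalg.norm(p[:3] - q[:3]) < radius:   # neighbor found
                if p[3] < q[3]:                          # keep lower uncertainty
                    pts[i] = p
                handled = True
                break
        if not handled:
            pts.append(p)                                # no neighbor: insert
    return pts
```

A new observation that lands next to an existing map point replaces it only when its uncertainty is smaller, which caps redundancy while steadily improving map quality.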
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (6)

1. A method for constructing a three-dimensional space-time continuous point cloud map is characterized by comprising the following steps:
step 1, extracting laser characteristic points of a structured surface in a scene according to obtained laser radar data;
step 2, constructing an error ellipse model by using a point-to-line and point-to-plane measurement mode according to the distribution characteristics of the feature points, and screening out reliable feature associations among the feature points according to the error ellipse model;
step 3, calculating optimal pose information of the feature points according to the feature points extracted in the step 1 and the reliable feature association obtained in the step 2, and obtaining an initial conversion relation between a point cloud map coordinate system and a LiDAR reference coordinate system according to the optimal pose information;
step 4, taking the initial transformation obtained in step 3 as an initial value, performing continuous-time motion modeling of the feature points with a spline model to achieve optimal registration between the LiDAR point cloud and the point cloud map, so as to obtain accurate registration parameters for any laser point;
and step 5, performing a neighborhood search on the registered laser points to reduce the spatio-temporal redundancy of the incremental point cloud map construction.
2. The method for constructing the three-dimensional space-time continuous point cloud map according to claim 1, wherein the step 1 specifically comprises:
dividing single-frame laser radar data into a plurality of scanning lines according to the vertical angle of a laser point of the single-frame laser radar data;
for each scan line, obtaining the local distribution density of the point cloud near each laser point from the distance value γ of the laser point and the horizontal angular resolution θ of the lidar, and voxel-downsampling the point cloud accordingly;
after obtaining scan-line point clouds of approximately uniform resolution, partitioning each single scan-line point cloud with an adaptive distance threshold ε to obtain a point cloud clustering result;
and judging the direction of the continuous vectors of each clustering result, filtering out scattered points with fluctuating direction changes, and re-clustering at points where the direction change exceeds 45°, thereby obtaining an edge feature point set and a planar point set and completing the extraction of the structured feature point cloud.
3. The method for constructing the three-dimensional space-time continuous point cloud map according to claim 1, wherein the step 2 specifically comprises:
according to the distribution characteristics of the extracted structured feature points, using point cloud neighborhood distribution information to find regions of strong linearity or planarity, and constructing error vectors v_m, v_n and their corresponding weights w_m, w_n;
performing spatial clustering and SVD decomposition on the error vectors v_m, v_n and their weights w_m, w_n to obtain three-dimensional eigenvalues and corresponding eigenvectors;
constructing an error ellipsoid taking the obtained three-dimensional eigenvalues as radii (r_x, r_y, r_z) and the eigenvectors as radius directions V;
and iteratively updating the parameters of the error ellipsoid so that it has an optimal contribution value τ in each direction, and searching for reliable feature associations according to the contribution value τ.
4. The method for constructing the three-dimensional space-time continuous point cloud map according to claim 1, wherein the specific method in the step 3 is as follows:
for each frame of laser point cloud, extracting the corresponding structural feature points by the method in the step 1, and converting the feature points into the map coordinate system according to the pose obtained from the previous frame of laser point cloud; screening out reliable feature associations by the method in the step 2, thereby constructing an iterative least squares optimization equation, and obtaining the optimal pose information of the current frame by continuously optimizing the equation error; finally, adding the feature points to the point cloud map again according to the optimal pose information, thereby obtaining the initial transformation relationship between the point cloud map coordinate system and the LiDAR reference coordinate system and providing control information for subsequent laser radar data.
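The iterative least squares optimization can be illustrated by a Gauss-Newton point-to-point registration with known correspondences. This is a generic sketch of the technique under a small-perturbation pose parameterization, not the patent's actual error model:

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def exp_so3(w):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def register(src, dst, iters=20):
    """Gauss-Newton registration with known correspondences src[i] <-> dst[i].

    Minimizes sum_i | exp(w) (R src_i + t) + dt - dst_i |^2 over (w, dt)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        p = src @ R.T + t
        r = (p - dst).ravel()                  # stacked residuals
        J = np.zeros((len(src) * 3, 6))
        for i, pi in enumerate(p):
            J[3*i:3*i+3, :3] = -skew(pi)       # d r / d w (left perturbation)
            J[3*i:3*i+3, 3:] = np.eye(3)       # d r / d t
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        dR = exp_so3(delta[:3])
        R, t = dR @ R, dR @ t + delta[3:]
        if np.linalg.norm(delta) < 1e-10:
            break
    return R, t
```

With exact correspondences the iteration recovers the true rigid transform; in the scan-to-map setting the correspondences would come from the reliable feature associations of step 2.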
5. The method for constructing the three-dimensional space-time continuous point cloud map according to claim 1, wherein the specific method in the step 4 is as follows:
extracting the structured feature point clouds within a certain time window, obtaining the initially estimated pose information of the feature point clouds by the method in the step 3, fitting the rotational and translational parts of the poses with a spline model, obtaining the corresponding pose information on the spline according to the timestamp of each laser point, constructing an error term for the obtained pose information by the method in the step 2, and optimizing the error term so as to obtain the optimized pose information of each feature point cloud within the time window and register it to the point cloud map.
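As an illustration of continuous-time pose fitting, the sketch below fits one cubic polynomial per translation axis and linearly interpolates axis-angle rotation vectors; the patent's spline model is not specified, so this is an assumed stand-in, adequate only for the small inter-keyframe rotations of a scanning platform:

```python
import numpy as np

def fit_translation(ts, trans, degree=3):
    """Fit one polynomial per axis to the translation samples
    (a stand-in for the spline model of the claim)."""
    return [np.polyfit(ts, trans[:, k], degree) for k in range(3)]

def eval_translation(coeffs, t):
    """Evaluate the fitted translation at an arbitrary laser-point timestamp."""
    return np.array([np.polyval(c, t) for c in coeffs])

def interp_rotation(ts, angles, t):
    """Linearly interpolate axis-angle rotation vectors at timestamp t."""
    return np.array([np.interp(t, ts, angles[:, k]) for k in range(3)])
```

Given keyframe poses sampled over the window, each laser point's pose is then read off the fitted curves at its own timestamp before the points are registered to the map.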
6. The method for constructing the three-dimensional space-time continuous point cloud map according to claim 1, wherein the step 5 specifically comprises:
and while performing map fusion and updating, performing reliability analysis on the point cloud within the neighborhood by a spatial neighborhood search; if a laser point already exists in the neighborhood, keeping only the point with the minimum uncertainty, and if no point exists, directly updating the neighborhood information.
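A brute-force sketch of this update rule: within a search radius of an existing map point, only the candidate with minimum uncertainty survives; otherwise the new point is inserted directly. The radius and the scalar uncertainty measure are assumptions, and a real system would use a spatial index (e.g. a KD-tree) instead of the linear scan shown here:

```python
import numpy as np

def update_map(map_pts, map_unc, new_pts, new_unc, radius):
    """Merge new points into the map, keeping the minimum-uncertainty
    point within `radius`; insert points with no neighbor directly."""
    pts = [np.asarray(p, dtype=float) for p in map_pts]
    unc = list(map_unc)
    for p, u in zip(new_pts, new_unc):
        if not pts:
            pts.append(np.asarray(p, dtype=float))
            unc.append(u)
            continue
        d = np.linalg.norm(np.asarray(pts) - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius:
            if u < unc[j]:           # keep the more certain point
                pts[j], unc[j] = np.asarray(p, dtype=float), u
        else:
            pts.append(np.asarray(p, dtype=float))
            unc.append(u)
    return np.asarray(pts), np.asarray(unc)
```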
CN202211081628.XA 2022-09-06 2022-09-06 Method for constructing three-dimensional space-time continuous point cloud map Pending CN115512054A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211081628.XA CN115512054A (en) 2022-09-06 2022-09-06 Method for constructing three-dimensional space-time continuous point cloud map

Publications (1)

Publication Number Publication Date
CN115512054A true CN115512054A (en) 2022-12-23

Family

ID=84501091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211081628.XA Pending CN115512054A (en) 2022-09-06 2022-09-06 Method for constructing three-dimensional space-time continuous point cloud map

Country Status (1)

Country Link
CN (1) CN115512054A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726775A (en) * 2024-02-07 2024-03-19 法奥意威(苏州)机器人系统有限公司 Point cloud preprocessing method and device based on grid downsampling

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit
CN114004869A (en) * 2021-09-18 2022-02-01 浙江工业大学 Positioning method based on 3D point cloud registration
CN114862932A (en) * 2022-06-20 2022-08-05 安徽建筑大学 BIM global positioning-based pose correction method and motion distortion correction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANGZI CONG et al.: "3D-CSTM: A 3D continuous spatio-temporal mapping method", ISPRS Journal of Photogrammetry and Remote Sensing, vol. 186, pages 232-245, XP086995970, DOI: 10.1016/j.isprsjprs.2022.02.005 *
GU Hao: "3D Map Reconstruction and Relocalization Based on LiDAR", Outstanding Master's Theses, Information Science and Technology Series, pages 10-35 *

Similar Documents

Publication Publication Date Title
WO2021232470A1 (en) Multi-sensor fusion-based slam method and system
CN111076733B (en) Robot indoor map building method and system based on vision and laser slam
CN112902953B (en) Autonomous pose measurement method based on SLAM technology
CN109522832B (en) Loop detection method based on point cloud segment matching constraint and track drift optimization
CN106097304B (2016) Real-time online map generation method for unmanned aerial vehicles
CN113139453B (en) Orthoimage high-rise building base vector extraction method based on deep learning
CN109974743B (en) Visual odometer based on GMS feature matching and sliding window pose graph optimization
CN107917710B (en) Indoor real-time positioning and three-dimensional map construction method based on single line laser
CN112435262A (en) Dynamic environment information detection method based on semantic segmentation network and multi-view geometry
CN112183171A (en) Method and device for establishing beacon map based on visual beacon
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
Qian et al. Robust visual-lidar simultaneous localization and mapping system for UAV
CN113985429A (en) Unmanned aerial vehicle environment scanning and reconstructing method based on three-dimensional laser radar
CN111860651B (en) Monocular vision-based semi-dense map construction method for mobile robot
CN111932614B (en) Laser radar instant positioning and mapping method based on clustering center characteristics
CN106997614A (2017) Large-scale scene 3D modeling method and device based on a depth camera
CN116182837A (en) Positioning and mapping method based on visual laser radar inertial tight coupling
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN111581313A (en) Semantic SLAM robustness improvement method based on instance segmentation
CN116449384A (en) Radar inertial tight coupling positioning mapping method based on solid-state laser radar
CN115512054A (en) Method for constructing three-dimensional space-time continuous point cloud map
CN115451964A (en) Ship scene simultaneous mapping and positioning method based on multi-mode mixed features
CN114792338A (en) Vision fusion positioning method based on prior three-dimensional laser radar point cloud map
CN117029870A (en) Laser odometer based on road surface point cloud
CN115355904A (en) Slam method for Lidar-IMU fusion of ground mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination