CN118067102A - Laser radar static point cloud map construction method and system based on viewpoint visibility - Google Patents
Laser radar static point cloud map construction method and system based on viewpoint visibility
Info
- Publication number
- CN118067102A (application CN202410030021.1A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- map
- local
- laser radar
- ground
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000003068 static effect Effects 0.000 title claims abstract description 29
- 238000010276 construction Methods 0.000 title claims abstract description 13
- 238000000034 method Methods 0.000 claims abstract description 23
- 238000007781 pre-processing Methods 0.000 claims abstract description 14
- 238000012216 screening Methods 0.000 claims abstract description 8
- 238000000605 extraction Methods 0.000 claims description 8
- 230000033001 locomotion Effects 0.000 claims description 6
- 230000010354 integration Effects 0.000 claims description 4
- 230000011218 segmentation Effects 0.000 claims description 4
- 239000011159 matrix material Substances 0.000 claims description 3
- 230000035945 sensitivity Effects 0.000 claims description 3
- 230000007547 defect Effects 0.000 abstract description 3
- 238000013507 mapping Methods 0.000 description 13
- 238000010586 diagram Methods 0.000 description 4
- 238000007405 data analysis Methods 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000005192 partition Methods 0.000 description 2
- 238000012847 principal component analysis method Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000000717 retained effect Effects 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/15—Correlation function computation including computation of convolution operations
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Mathematical Physics (AREA)
- Electromagnetism (AREA)
- Computational Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Pure & Applied Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Algebra (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
Description
Technical Field

The present invention relates to the technical field of LiDAR-based simultaneous localization and mapping (SLAM), and in particular to a method for constructing a static LiDAR point cloud map based on viewpoint visibility.

Background Art

Simultaneous localization and mapping is one of the key technologies in autonomous driving and intelligent transportation, yet the accuracy, real-time performance, and robustness of mapping algorithms in dynamic traffic scenes still require further optimization. High-precision map construction relies on sensors such as LiDAR, industrial cameras, GNSS, and IMU; among these, LiDAR supplies massive point cloud data for point cloud map construction. Existing LiDAR-based SLAM algorithms, however, are built on the assumption of a static environment, whereas the traffic environment in which autonomous vehicles drive contains large numbers of moving cars, pedestrians, and cyclists. During map construction, these dynamic objects degrade the accuracy of the mapping algorithm and the quality of the constructed map.

First, dynamic point clouds affect the registration performance of the LiDAR odometry; when a scene contains too many dynamic points, the point cloud registration may even diverge. Second, dynamic objects leave motion trails in the final point cloud map, altering its spatial structure; these trails are treated as obstacles in the map, which lowers map accuracy and localization accuracy and reduces the efficiency of downstream autonomous-driving tasks. Meanwhile, with the continuous development of autonomous-driving technology in recent years, highly dynamic scenes have become the main application scenario of autonomous vehicles. Building a map of the static structure of a scene accurately and in real time is therefore of great significance for real-time localization and downstream path planning in high-level autonomous driving.
Summary of the Invention

In view of this, the present invention provides a method for constructing a static LiDAR point cloud map based on viewpoint visibility, so as to solve the above problems.

According to a first aspect of the present invention, a method for constructing a static LiDAR point cloud map based on viewpoint visibility is provided, comprising: collecting LiDAR point cloud data and IMU information and preprocessing them; performing point cloud registration on multiple point cloud scans near the currently scanned LiDAR point cloud to generate a local point cloud submap; removing the ground points from the current LiDAR scan and the local point cloud submap to obtain non-ground point clouds; performing spherical projection on the non-ground point clouds to generate corresponding range images; differencing the range images to screen out the dynamic points in each local point cloud submap; and stitching the local point cloud submaps of the scene based on the dynamic points in each local submap to generate a static map of the scene.
In another implementation of the present invention, the preprocessing includes data parsing, IMU preintegration, LiDAR motion compensation, point cloud segmentation, and feature extraction; the preprocessed data comprise valid LiDAR point clouds containing usable features together with the initial pose output by the IMU.
In another implementation of the present invention, performing point cloud registration on multiple point cloud scans near the currently scanned LiDAR point cloud to generate a local point cloud submap includes: selecting multiple point cloud scans near the current LiDAR scan; performing point cloud registration by minimizing the distances between the line features and planar features of different scans; and generating a local point cloud submap from the registered scans.
In another implementation of the present invention, spherical projection is performed on the non-ground point clouds to generate corresponding range images, and the pixel value of each point is calculated by the following formula:

where the pixel value denotes the distance from point p to the local coordinate frame of the k-th keyframe.
In another implementation of the present invention, differencing the range images to screen the dynamic points in each local point cloud submap includes: subtracting, element by element, the pixel values of the local point cloud submap and of the point cloud of the current scan:

The resulting pixel difference is compared with a threshold τ; if it is greater than τ, the corresponding point is a dynamic point. The dynamic points are defined by:

τ = γ·dist(p)

where γ is the sensitivity relative to the point distance, and dist(·) returns the distance value of the corresponding point.
In another implementation of the present invention, stitching the local point cloud submaps of the scene based on the dynamic points in each local submap to generate a static map of the scene includes: fusing the ground points with the non-ground points from which the dynamic points have been filtered out to produce the current scan and the point cloud submaps, and outputting the static map of the scene after the submaps have been registered.
According to a second aspect of the present invention, a system for constructing a static LiDAR point cloud map based on viewpoint visibility is provided, comprising: a data preprocessing module for collecting LiDAR point cloud data and IMU information and preprocessing them; a feature extraction module for performing point cloud registration on multiple point cloud scans near the current LiDAR scan to generate a local point cloud submap; a ground fitting module for removing the ground points from the current LiDAR scan and the local point cloud submap to obtain non-ground point clouds, performing spherical projection on the non-ground point clouds to generate corresponding range images, and differencing the range images to screen out the dynamic points in each local submap; and a mapping module for stitching the local point cloud submaps of the scene based on the dynamic points in each submap to generate a static map of the scene.
In the method of the present invention for constructing a static LiDAR point cloud map based on viewpoint visibility, only non-ground points are fed into dynamic point removal, which solves the problem of visibility-based methods misjudging ground points as dynamic points. By screening out the non-ground points and restoring the ground points after the dynamic point removal module has finished processing them, the final point cloud map retains complete ground features and the accuracy of dynamic point screening is improved. The screened-out ground points do not undergo dynamic point removal, which reduces the amount of data the dynamic point removal module has to process and improves its real-time performance. The mapping algorithm proposed by the present invention remedies the defects of visibility-based methods and is better suited to highly dynamic traffic scenes.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. By reading the detailed description of the following embodiments, the advantages and benefits of the solutions will become clear to those skilled in the art. The drawings are provided only for the purpose of illustrating preferred embodiments and are not to be regarded as limiting the present invention. In the drawings:

FIG. 1 is a flowchart of the steps of a method for constructing a static LiDAR point cloud map based on viewpoint visibility according to an embodiment of the present invention.

FIG. 2 is a schematic diagram of the workflow of ground point cloud screening according to an embodiment of the present invention.

FIG. 3 illustrates the spherical projection of a point cloud onto a range image according to an embodiment of the present invention.

FIG. 4 shows a point cloud map constructed by the mapping module of an embodiment of the present invention without the dynamic point cloud removal module.

FIG. 5 shows a point cloud map constructed by the mapping module of an embodiment of the present invention after the dynamic point cloud removal module has been added.

Detailed Description of Embodiments

In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments are described clearly and in detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention shall fall within the scope of protection of the present invention.
FIG. 1 is a flowchart of the steps of a method for constructing a static LiDAR point cloud map based on viewpoint visibility according to an embodiment of the present invention. As shown in FIG. 1, this embodiment mainly includes the following steps:

S101: Collect LiDAR point cloud data and IMU information, and perform preprocessing.

S102: Perform point cloud registration on multiple point cloud scans near the currently scanned LiDAR point cloud to generate a local point cloud submap.

S103: Remove the ground points from the current LiDAR scan and the local point cloud submap to obtain non-ground point clouds.

S104: Perform spherical projection on the non-ground point clouds to generate corresponding range images.

S105: Difference the range images to screen out the dynamic points in each local point cloud submap.

S106: Stitch the local point cloud submaps of the scene based on the dynamic points in each local submap to generate a static map of the scene.
In the method of the present invention for constructing a static LiDAR point cloud map based on viewpoint visibility, only non-ground points are fed into dynamic point removal, which solves the problem of visibility-based methods misjudging ground points as dynamic points. By screening out the non-ground points and restoring the ground points after the dynamic point removal module has finished processing them, the final point cloud map retains complete ground features and the accuracy of dynamic point screening is improved. The screened-out ground points do not undergo dynamic point removal, which reduces the amount of data the dynamic point removal module has to process and improves its real-time performance. The mapping algorithm proposed by the present invention remedies the defects of visibility-based methods and is better suited to highly dynamic traffic scenes.
In another implementation of the present invention, the preprocessing includes data parsing, IMU preintegration, LiDAR motion compensation, point cloud segmentation, and feature extraction; the preprocessed data comprise valid LiDAR point clouds containing usable features together with the initial pose output by the IMU.

Exemplarily, the data preprocessing module performs data parsing, IMU preintegration, LiDAR motion compensation, point cloud segmentation, and feature extraction on the collected LiDAR point clouds and inertial measurement (IMU) data. The preprocessed data comprise valid point clouds containing usable features together with the initial pose output by the IMU.
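The individual preprocessing steps are only named above. Purely as an illustration of the motion-compensation step, the following sketch de-skews one scan by interpolating the sensor pose over the sweep; the function name, the per-point normalized timestamps, and the use of start/end poses obtained from IMU preintegration are assumptions for this sketch, not details taken from the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, timestamps, T_start, T_end):
    """De-skew one LiDAR sweep by interpolating the sensor motion.

    points      : (N, 3) raw points in the sensor frame
    timestamps  : (N,) per-point times normalized to [0, 1] within the sweep
    T_start/end : 4x4 homogeneous sensor poses at sweep start and end
    Returns the points expressed in the sweep-end frame.
    """
    T_rel = np.linalg.inv(T_end) @ T_start                 # motion over the sweep
    key_rots = Rotation.from_matrix(np.stack([np.eye(3), T_rel[:3, :3]]))
    slerp = Slerp([0.0, 1.0], key_rots)                    # rotation interpolation

    deskewed = np.empty_like(points, dtype=float)
    for i, (p, s) in enumerate(zip(points, timestamps)):
        a = 1.0 - s                                        # later points need less correction
        Ri = slerp([a]).as_matrix()[0]
        ti = a * T_rel[:3, 3]                              # linear translation interpolation (approximation)
        deskewed[i] = Ri @ p + ti
    return deskewed
```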
In another implementation of the present invention, performing point cloud registration on multiple point cloud scans near the currently scanned LiDAR point cloud to generate a local point cloud submap includes: selecting multiple point cloud scans near the current LiDAR scan; performing point cloud registration by minimizing the distances between the line features and planar features of different scans; and generating a local point cloud submap from the registered scans.

Exemplarily, the preprocessed LiDAR point cloud is input to the feature extraction module, which extracts features and registers the scans to generate a local point cloud submap.

Preferably, 5 scans in the vicinity of the current LiDAR scan are selected, and the line features and planar features of the selected scans are denoted by:

{ε_{k-2}, ε_{k-1}, ε_k, ε_{k+1}, ε_{k+2}}

{H_{k-2}, H_{k-1}, H_k, H_{k+1}, H_{k+2}}

where ε_k denotes the set of line features extracted from the k-th LiDAR scan and H_k denotes the set of planar features extracted from the k-th LiDAR scan.

Preferably, point cloud registration is performed by minimizing the distances between the line features and planar features of different LiDAR scans; the line-feature and planar-feature distances between scans are expressed by the following formulas:

A local point cloud submap is generated from the registered LiDAR scans.
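The feature-distance formulas themselves are given in the source only as images. As an assumed stand-in, the sketch below uses the point-to-line and point-to-plane distances that are standard in LOAM-style LiDAR odometry; the pairing of each feature point with two line points or three plane points from the neighboring scans is likewise an assumption of this sketch.

```python
import numpy as np

def point_to_line_distance(p, a, b):
    """Distance from a transformed line-feature point p to the line through a, b."""
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(a - b)

def point_to_plane_distance(p, a, b, c):
    """Distance from a transformed planar-feature point p to the plane through a, b, c."""
    n = np.cross(b - a, c - a)
    n /= np.linalg.norm(n)
    return abs(np.dot(p - a, n))

def registration_cost(T, edge_pairs, plane_tuples):
    """Sum of squared feature distances for a candidate pose T (4x4 homogeneous);
    the registration pose is the minimizer of this cost (e.g. via Gauss-Newton)."""
    R, t = T[:3, :3], T[:3, 3]
    cost = 0.0
    for p, (a, b) in edge_pairs:          # line features matched to a map line
        cost += point_to_line_distance(R @ p + t, a, b) ** 2
    for p, (a, b, c) in plane_tuples:     # planar features matched to a map plane
        cost += point_to_plane_distance(R @ p + t, a, b, c) ** 2
    return cost
```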
In another implementation of the present invention, as shown in FIG. 2, the processed LiDAR point cloud is input to the ground fitting module for ground point removal, so that the non-ground points are screened out.

S1031: The input point cloud is partitioned in polar coordinates, according to the following formula:

where C denotes the entire region, Z_m denotes the m-th zone, and N_z denotes the number of zones; based on experience, N_z is set to 4. Z_m is expressed as:

Z_m = {p_k ∈ P | L_{min,m} ≤ ρ_k < L_{max,m}}

where L_{min,m} and L_{max,m} denote the minimum and maximum radial boundaries of Z_m, respectively; each zone Z_m is then further divided into N_{r,m} × N_{θ,m} bins, with each zone having a different bin size.

S1032: The set of data points in each bin is denoted S_n, and the total number of bins is N_c, given by the following formula:

where N_{r,m} denotes the number of radial divisions of zone m and N_{θ,m} denotes the number of azimuthal divisions of zone m.
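A minimal sketch of the polar partitioning of S1031–S1032 follows, assuming four concentric zones; the zone boundaries L_{min,m}, L_{max,m} and the per-zone bin counts N_{r,m}, N_{θ,m} are placeholder values, since the patent states only the general scheme and N_z = 4.

```python
import numpy as np

# Placeholder zone boundaries (L_min, L_max) and per-zone bin counts.
L_BOUNDS = [(2.7, 12.5), (12.5, 30.0), (30.0, 50.0), (50.0, 80.0)]
N_RING   = [4, 4, 4, 4]        # radial bins N_{r,m} per zone
N_SECTOR = [16, 32, 32, 16]    # azimuthal bins N_{theta,m} per zone

def assign_bins(points):
    """Assign every point to a (zone, ring, sector) bin S_n using polar coordinates."""
    rho = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0]) + np.pi   # azimuth in [0, 2*pi)
    bins = {}
    for m, (lmin, lmax) in enumerate(L_BOUNDS):
        mask = (rho >= lmin) & (rho < lmax)
        ring = ((rho[mask] - lmin) / (lmax - lmin) * N_RING[m]).astype(int)
        sector = (theta[mask] / (2 * np.pi) * N_SECTOR[m]).astype(int)
        for idx, r, s in zip(np.flatnonzero(mask), ring, sector):
            key = (m, min(r, N_RING[m] - 1), min(s, N_SECTOR[m] - 1))
            bins.setdefault(key, []).append(idx)   # point indices belonging to bin S_n
    return bins
```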
S1033: Principal component analysis is applied and the third eigencomponent, which represents the height direction, is taken; the points with the lowest heights are selected as seed points. With the mean of the N_seed selected seed points, the initial ground point cloud is estimated by the following formula:

where z(·) returns the height value of a point and z_seed denotes the height threshold.
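A minimal sketch of the seed selection and initial ground estimate of S1033 for a single bin; num_seed and z_seed are illustrative values, since the patent specifies only that the lowest points are used as seeds and that the height threshold z_seed is applied to their mean.

```python
import numpy as np

def initial_ground_estimate(bin_points, num_seed=20, z_seed=0.5):
    """Take the lowest points of a bin as seeds, average their height, and accept
    every point whose height is below that mean plus the threshold z_seed."""
    z = bin_points[:, 2]
    seeds = bin_points[np.argsort(z)[:num_seed]]    # lowest-height points as seeds
    z_mean = seeds[:, 2].mean()                     # mean height of the seed points
    return bin_points[z < z_mean + z_seed]          # initial ground estimate
```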
S1034: The set of ground points obtained after the l-th iteration and the normal vector of the ground plane at the l-th iteration are recorded.

S1035: The ground coefficient is computed from the ground-point normal vector of the previous iteration, as follows:

where the mean of all points classified as ground in the previous iteration is used.

S1036: The iteration formula for the ground points is as follows:

where the k-th data point is tested against the fitted plane and M_d denotes the distance threshold below which a point is assigned to the ground.
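The iteration formulas of S1034–S1036 appear only as images in the source. The sketch below shows the PCA-based refinement they describe: fit a plane to the current ground estimate, derive the plane coefficient from the mean of those points, and keep every point within the distance threshold M_d; the default values are placeholders.

```python
import numpy as np

def refine_ground_plane(points, ground, num_iter=3, dist_thresh=0.125):
    """Iteratively refine the ground set of one bin (S1034-S1036)."""
    normal, d = None, None
    for _ in range(num_iter):
        mean = ground.mean(axis=0)
        cov = np.cov((ground - mean).T)            # 3x3 covariance of the ground set
        _, eigvec = np.linalg.eigh(cov)            # eigenvectors, ascending eigenvalues
        normal = eigvec[:, 0]                      # third PCA component = plane normal
        d = -normal @ mean                         # plane coefficient from previous ground set
        dist = np.abs(points @ normal + d)         # point-to-plane distances
        ground = points[dist < dist_thresh]        # ground set for the next iteration (M_d)
    return ground, normal, d
```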
S1037: Ground likelihood estimation is performed by combining the probabilities in three respects—verticality, height, and flatness; a bin whose product of probabilities is greater than 0.5 is regarded as ground. The ground likelihood is estimated by the following formula:

where f(χ_n | θ_n) is calculated by the following formula:

where the three parameters denote, respectively, the average z value, the distance between the origin and the centroid of S_n (r_n), and the surface variable (σ_n).

Furthermore, verticality, height, and flatness are computed in the following steps:

S10371: The verticality of a plane is judged using the third eigenvector v_3 extracted for that plane by principal component analysis, according to the following formula:

where z denotes [0, 0, 1] and θ_τ denotes the verticality threshold.

S10372: The following formula is used to estimate the case in which data points above an occluded space are regarded as ground:

where κ(·) denotes an adaptive midpoint function that grows exponentially with r_n.

S10373: Based on the result of the preceding height judgment, the flatness is computed by the following formula to reduce misjudgment on uphill planes:
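The individual probability terms of S1037 are not reproduced above. The sketch below is therefore only a stand-in built around the stated rule that the product of the three terms must exceed 0.5; the logistic/threshold scoring functions and their parameters (theta_tau, z_max, sigma_max and the scale constants) are assumptions of this sketch.

```python
import numpy as np

def ground_likelihood(normal, mean_z, r_n, sigma_n,
                      theta_tau=0.707, z_max=-1.2, sigma_max=0.1):
    """Combine verticality, height, and flatness scores in [0, 1] and accept the
    bin as ground when their product exceeds 0.5 (S1037)."""
    # Verticality: the third PCA eigenvector should align with the z axis.
    p_vert = 1.0 if abs(normal @ np.array([0.0, 0.0, 1.0])) > theta_tau else 0.0
    # Height: penalize bins whose mean height is implausibly high; the allowance
    # grows with the range r_n, mimicking an adaptive midpoint kappa(r_n).
    p_height = 1.0 / (1.0 + np.exp((mean_z - (z_max + 0.01 * r_n)) / 0.1))
    # Flatness: a small surface variable sigma_n indicates a flat patch.
    p_flat = 1.0 / (1.0 + np.exp((sigma_n - sigma_max) / 0.02))
    return p_vert * p_height * p_flat > 0.5
```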
In another implementation of the present invention, spherical projection is performed on the non-ground point clouds to generate corresponding range images, and the pixel value of each point is calculated by the following formula:

where the pixel value denotes the distance from point p to the local coordinate frame of the k-th keyframe.

Exemplarily, as shown in FIG. 3, the point cloud is converted into a range image by spherical projection, and the projection coordinates of each point are calculated by the following formula:

Spherical projection is applied to the current scan and the local LiDAR map to generate the corresponding range images; the pixel value of each point is calculated by the following formula:

where the pixel value denotes the distance from point p to the local coordinate frame of the k-th keyframe.
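A sketch of the spherical projection onto a range image illustrated in FIG. 3 follows. The image resolution and the vertical field of view are assumptions (values typical of a 64-beam sensor, not taken from the patent); each pixel stores the range of the nearest point projected into it, i.e. the distance of point p in the keyframe's local frame.

```python
import numpy as np

def to_range_image(points, height=64, width=900, fov_up=2.0, fov_down=-24.8):
    """Project a point cloud (in the keyframe's local frame) into a range image."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    r = np.linalg.norm(points, axis=1)                       # range of every point
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-6))
    u = (0.5 * (1.0 - yaw / np.pi) * width).astype(int) % width
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * height, 0, height - 1).astype(int)
    image = np.full((height, width), np.inf)
    np.minimum.at(image, (v, u), r)                          # keep the nearest range per pixel
    return image, (v, u)
```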
In another implementation of the present invention, differencing the range images to screen the dynamic points in each local point cloud submap includes: subtracting, element by element, the pixel values of the local point cloud submap and of the point cloud of the current scan:

The resulting pixel difference is compared with a threshold τ; if it is greater than τ, the corresponding point is a dynamic point. The dynamic points are defined by:

τ = γ·dist(p)

where γ is the sensitivity relative to the point distance, and dist(·) returns the distance value of the corresponding point.
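A minimal sketch of the range-image differencing test follows, assuming the subtraction is taken as current-scan range minus submap range, so that a non-ground submap point is flagged as dynamic when the current scan sees noticeably farther through the pixel that point projects into; both this sign convention and the value of γ are assumptions, not values from the patent.

```python
import numpy as np

def detect_dynamic_submap_points(scan_img, submap_ranges, submap_pix, gamma=0.1):
    """Flag a submap point as dynamic when scan range - submap range > tau,
    with tau = gamma * dist(p)."""
    v, u = submap_pix                            # range-image pixel of each submap point
    scan_range = scan_img[v, u]                  # what the current scan sees there
    observed = np.isfinite(scan_range)           # ignore pixels the scan did not hit
    diff = scan_range - submap_ranges            # element-wise range difference
    tau = gamma * submap_ranges                  # distance-dependent threshold
    return observed & (diff > tau)               # True -> dynamic submap point
```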
In another implementation of the present invention, stitching the local point cloud submaps of the scene based on the dynamic points in each local submap to generate a static map of the scene includes: fusing the ground points with the non-ground points from which the dynamic points have been filtered out to produce the current scan and the point cloud submaps, and outputting the static map of the scene after the submaps have been registered.

Exemplarily, the generated point cloud maps are shown in FIG. 4 and FIG. 5, in which FIG. 4 is a schematic view of the point cloud map constructed by the mapping module of an embodiment of the present invention without the dynamic point cloud removal module, and FIG. 5 is a schematic view of the point cloud map constructed by the mapping module of an embodiment of the present invention after the dynamic point cloud removal module has been added.
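A sketch of the final assembly step follows: dynamic non-ground points are dropped, the previously separated ground points are restored, and each submap is transformed into the map frame and concatenated. The submap data structure and field names are assumptions introduced for illustration only.

```python
import numpy as np

def build_static_map(submaps):
    """Assemble the scene's static map from registered local submaps (S106)."""
    static_clouds = []
    for sm in submaps:
        kept = sm['non_ground'][~sm['dynamic_mask']]        # filtered non-ground points
        local_static = np.vstack([sm['ground'], kept])      # restore the ground points
        R, t = sm['pose'][:3, :3], sm['pose'][:3, 3]        # submap pose in the map frame
        static_clouds.append(local_static @ R.T + t)        # transform into the map frame
    return np.vstack(static_clouds)
```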
According to a second aspect of the present invention, a system for constructing a static LiDAR point cloud map based on viewpoint visibility is provided, comprising:

a data preprocessing module, configured to collect LiDAR point cloud data and IMU information and perform preprocessing;

a feature extraction module, configured to perform point cloud registration on multiple point cloud scans near the current LiDAR scan to generate a local point cloud submap;

a ground fitting module, configured to remove the ground points from the current LiDAR scan and the local point cloud submap to obtain non-ground point clouds, perform spherical projection on the non-ground point clouds to generate corresponding range images, and difference the range images to screen out the dynamic points in each local point cloud submap; and

a mapping module, configured to stitch the local point cloud submaps of the scene based on the dynamic points in each submap to generate a static map of the scene.

Preferably, a dynamic point cloud removal module may be provided within the mapping module.
Specific embodiments of the present invention have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing may be advantageous.

It should be noted that all directional indications (such as up, down, left, right, rear, etc.) in the embodiments of the present invention are used only to explain the relative positional relationships, motion, and the like among the components in a specific posture (as shown in the drawings); if that specific posture changes, the directional indication changes accordingly.

In the description of the present invention, the terms "first" and "second" are used only for convenience in describing different components or names and are not to be understood as indicating or implying an order, relative importance, or an implicit indication of the number of the technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention.

It should be noted that, although the specific embodiments of the present invention have been described in detail with reference to the drawings, this should not be construed as limiting the scope of protection of the present invention. Within the scope described in the claims, various modifications and variations that can be made by those skilled in the art without creative effort still fall within the scope of protection of the present invention.

The examples of the embodiments of the present invention are intended to illustrate the technical features of the embodiments concisely so that those skilled in the art can understand them intuitively; they do not constitute an undue limitation of the embodiments of the present invention.

Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410030021.1A CN118067102A (en) | 2024-01-09 | 2024-01-09 | Laser radar static point cloud map construction method and system based on viewpoint visibility |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410030021.1A CN118067102A (en) | 2024-01-09 | 2024-01-09 | Laser radar static point cloud map construction method and system based on viewpoint visibility |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118067102A true CN118067102A (en) | 2024-05-24 |
Family
ID=91106779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410030021.1A Pending CN118067102A (en) | 2024-01-09 | 2024-01-09 | Laser radar static point cloud map construction method and system based on viewpoint visibility |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118067102A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |