WO2022188094A1 - Point cloud matching method and apparatus, navigation method and device, positioning method, and lidar - Google Patents

Point cloud matching method and apparatus, navigation method and device, positioning method, and lidar

Info

Publication number
WO2022188094A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
frame
point
clusters
cluster
Prior art date
Application number
PCT/CN2021/080212
Other languages
English (en)
French (fr)
Inventor
赵小文 (Zhao Xiaowen)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2021/080212 (WO2022188094A1)
Priority to CN202180000478.2A (CN113168717B)
Publication of WO2022188094A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the field of positioning technology, and in particular, to a point cloud matching method and device, a navigation method and device, a positioning method, and a laser radar.
  • SLAM: Simultaneous Localization and Mapping.
  • Laser SLAM uses lidar to obtain point clouds for matching, estimates the pose of lidar and builds a map.
  • Visual SLAM uses visual sensors to acquire images, and extracts and matches feature points in the images to estimate the pose of the vision sensor and build a map. Because visual SLAM is easily affected by changes in ambient light, especially in outdoor environments where light changes directly degrade the accuracy of feature point extraction and therefore the mapping result, the overall performance of visual SLAM outdoors is not as good as that of laser SLAM. Laser SLAM, in turn, easily loses tracked targets in scenes with sparse features, which causes point cloud matching errors; these errors gradually accumulate and eventually make the constructed map overlap with itself. Moreover, in a dynamic environment, dynamic objects also tend to disturb point cloud matching in laser SLAM, so the correct pose cannot be estimated.
  • the present application provides a point cloud matching method and device, a navigation method and equipment, a positioning method, and a laser radar, which achieve a faster point cloud matching speed and a higher point cloud matching accuracy, and thereby improve the speed and accuracy of simultaneous localization and mapping (SLAM).
  • a first aspect of the present application provides a point cloud matching method, including:
  • point cloud matching is performed between the point cloud of the second frame and the point cloud of the first frame,
  • the pose transformation parameters between the point cloud of the second frame and the point cloud of the first frame are obtained.
  • the two frames of point clouds collected consecutively by the lidar are clustered and segmented to obtain the point cloud clusters of the objects in the two frames; an association calculation over these point cloud clusters yields the similarity between the clusters of the two frames, and the point cloud clusters of the objects with the highest similarity in the two frames are associated with each other. Based on the result of the association, the two frames of point clouds are matched both at the level of the mutually associated object point cloud clusters and at the level of the individual points in the two frames, so as to obtain the pose transformation parameters between the two frames of point clouds.
  • the point cloud of the second frame is subjected to pose transformation
  • the point cloud of the second frame after the pose transformation is fused with the point cloud of the first frame to obtain a point cloud map composed of the point cloud of the second frame and the point cloud of the first frame.
  • the point cloud of the second frame can be pose-transformed and the pose-transformed second frame point cloud can be fused with the first frame point cloud; in combination with a map coordinate system, the fused second frame point cloud and first frame point cloud can also be built into the map coordinate system to obtain a point cloud map.
  • associating the point cloud cluster of each object in the second frame point cloud with the point cloud cluster of each object in the first frame point cloud based on the similarity of the point cloud clusters includes:
  • the similarity is calculated based on the feature vector of the point cloud cluster of each object in the second frame of point cloud and the feature vector of the point cloud cluster of each object in the first frame of point cloud.
  • the point cloud cluster of each object in the second frame point cloud is associated with the point cloud cluster of each object in the first frame point cloud based on the similarity.
  • the feature vector of each point cloud cluster is obtained from the object clusters produced by clustering the two frames of point clouds; by computing the association distance between the feature vectors of the object point cloud clusters in the second frame point cloud and those in the first frame point cloud, the similarity between the clusters of the two frames is obtained, and based on this similarity the point cloud cluster of each object in the second frame point cloud can be associated with the point cloud cluster of each object in the first frame point cloud.
  • the feature vector of the point cloud cluster includes at least one of the following:
  • a position parameter, an orientation parameter, or a size parameter of the point cloud cluster.
  • by obtaining parameters such as the center coordinates, the horizontal heading angle, and the length, width, and height of a point cloud cluster, its position, orientation, and size parameters can be determined to form the feature vector of each cluster. This makes it convenient to quickly complete the pairwise association calculation between the object point cloud clusters of the second frame point cloud and those of the first frame point cloud, and to associate the clusters based on the result of the pairwise calculation.
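  • For illustration only, the following C++ sketch shows one possible form of such a feature vector and of a pairwise association distance computed from it; the struct and function names, and the choice of simply summing the position, orientation, and size distances, are assumptions introduced here and are not prescribed by the application.

    #include <cmath>

    // Hypothetical feature vector of one point cloud cluster: bounding-box
    // center, horizontal heading angle and box size, as listed in the text.
    struct ClusterFeature {
        double x, y, z;                   // position parameters (bounding-box center)
        double heading;                   // orientation parameter (horizontal heading angle, rad)
        double length, width, height;     // size parameters
    };

    // Association distance between a cluster i of the second frame and a
    // cluster j of the first frame: smaller distance means higher similarity.
    double associationDistance(const ClusterFeature& i, const ClusterFeature& j) {
        double dPos = std::sqrt((i.x - j.x) * (i.x - j.x) +
                                (i.y - j.y) * (i.y - j.y) +
                                (i.z - j.z) * (i.z - j.z));
        double dYaw = std::fabs(i.heading - j.heading);
        double dSize = std::fabs(i.length - j.length) +
                       std::fabs(i.width  - j.width)  +
                       std::fabs(i.height - j.height);
        return dPos + dYaw + dSize;       // unweighted sum is an assumption
    }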
  • matching the second frame point cloud with the first frame point cloud based on the result of the association and on the distances between the points of the second frame point cloud and the points of the first frame point cloud includes:
  • based on the distance from each point of the second frame point cloud to each object point cloud cluster of the second frame point cloud, the probability that each point belongs to each object cluster of the second frame can be obtained; based on the result of the association, the probability that each point of the second frame point cloud, after the pose transformation, belongs to each object point cloud cluster of the first frame point cloud can then be calculated. Summing over all points of the second frame point cloud gives an overall probability that the pose-transformed points of the second frame belong to the object clusters of the first frame, and this overall quantity can be used as the object category error between the object point cloud clusters of the pose-transformed second frame point cloud and those of the first frame point cloud.
  • at the same time, based on the Iterative Closest Point (ICP) algorithm, the distance error between each point of the pose-transformed second frame point cloud and the points of the first frame point cloud can be calculated. A weight coefficient is set for the object category error and for the distance error respectively, a weighted sum of the two errors is formed, and, taking the minimization of this weighted sum as the constraint condition, the pose transformation parameters between the second frame point cloud and the first frame point cloud can be calculated.
  • This method performs point cloud matching between two frames of point clouds based on both the object category error and the distance error. Compared with existing point cloud matching based only on the distance error, it achieves a faster matching speed and a higher matching accuracy, and therefore yields more precise pose transformation parameters.
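  • Written out, the constraint described above can plausibly take the following form, where E_label and E_dist denote the object category error and the distance error, w_1 and w_2 are the weight coefficients, p_i are the points of the second frame point cloud, and q_i^nn is the nearest point in the first frame point cloud; the notation is introduced here for illustration and is not taken verbatim from the application.

    (R^{*}, T^{*}) = \arg\min_{R,T} \big[\, w_{1}\, E_{\mathrm{label}}(R,T) + w_{2}\, E_{\mathrm{dist}}(R,T) \,\big]
    E_{\mathrm{dist}}(R,T) = \sum_{i} \lVert (R\, p_{i} + T) - q_{i}^{\mathrm{nn}} \rVert^{2}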
  • in a possible implementation, before performing the point cloud matching, the method further includes:
  • the pose transformation parameters between the point cloud of the second frame and the point cloud of the first frame are optimized.
  • the frames closest to the first frame point cloud and to the second frame point cloud are respectively found in a frame database to form a closed loop; the local feature points of the closest frames that constitute the closed loop are then projected onto the first frame point cloud and the second frame point cloud respectively, and the reprojection error is minimized to optimize the poses of the two frames, so that the pose transformation parameters between the second frame point cloud and the first frame point cloud are optimized.
  • in a possible implementation, before performing the clustering, the method further includes: down-sampling the first frame point cloud and the second frame point cloud.
  • a point cloud matching device including:
  • an acquisition module configured to acquire two frames of point cloud data collected by the lidar, where the two frames of point cloud data include the first frame point cloud and the second frame point cloud before and after;
  • the clustering module is used for clustering the point cloud of the first frame to obtain the point cloud clusters of each object in the first frame point cloud, and for clustering the point cloud of the second frame to obtain the point cloud clusters of each object in the second frame point cloud;
  • an association module configured to associate the point cloud clusters of each object in the point cloud of the second frame with the point cloud clusters of the objects in the point cloud of the first frame based on the similarity of the point cloud clusters;
  • a point cloud matching module configured to match the point cloud of the second frame with the point cloud of the first frame based on the result of the association and the distance between the point cloud of the second frame and each point in the point cloud of the first frame The point cloud is matched with the point cloud to obtain the pose transformation parameters between the point cloud of the second frame and the point cloud of the first frame.
  • a map construction module configured to perform pose transformation on the second frame point cloud based on the pose transformation parameters between the second frame point cloud and the first frame point cloud, and to fuse the pose-transformed second frame point cloud with the first frame point cloud to obtain a point cloud map composed of the second frame point cloud and the first frame point cloud.
  • association module is specifically configured to:
  • the similarity is calculated based on the feature vector of the point cloud cluster of each object in the second frame of point cloud and the feature vector of the point cloud cluster of each object in the first frame of point cloud.
  • the point cloud cluster of each object in the second frame point cloud is associated with the point cloud cluster of each object in the first frame point cloud based on the similarity.
  • the feature vector of the point cloud cluster includes at least one of the following:
  • a position parameter, an orientation parameter, or a size parameter of the point cloud cluster.
  • the point cloud matching module is specifically used for:
  • a dynamic filtering module is further included:
  • It is used to filter out point cloud clusters corresponding to dynamic objects in the point cloud of the second frame.
  • a closed-loop optimization module is also included:
  • the pose transformation parameters between the point cloud of the second frame and the point cloud of the first frame are optimized.
  • the obtaining module is further configured to:
  • a navigation method comprising:
  • the pose transformation parameters between each frame point cloud and the previous frame point cloud are obtained based on the point cloud matching method;
  • Path planning and dynamic obstacle avoidance are performed based on the point cloud map.
  • a navigation device comprising:
  • the point cloud matching device is used to obtain the pose transformation parameters between each frame point cloud and the previous frame point cloud; each frame point cloud is transformed accordingly and fused with the previous frame point cloud to obtain a point cloud map including the point cloud of each frame;
  • the navigation module is used for path planning and dynamic obstacle avoidance based on the point cloud map.
  • a fifth aspect of the present application provides a positioning method, comprising:
  • the pose transformation parameters of the current moment and the previous moment of the lidar are obtained.
  • the pose of the lidar at the current moment is determined.
  • a lidar including:
  • the point cloud matching device is used to obtain the pose transformation parameters of the current moment and the previous moment of the lidar
  • the positioning module is configured to determine the pose of the laser radar at the current moment based on the pose transformation parameter and the pose of the laser radar at the previous moment.
  • a seventh aspect of the present application provides a computing device, comprising:
  • at least one processor, and at least one memory connected to the processor through a bus, the memory storing program instructions that, when executed by the at least one processor, cause the at least one processor to perform the point cloud matching method, the navigation method, or the positioning method.
  • an eighth aspect of the present application provides a computer-readable storage medium on which program instructions are stored, the program instructions, when executed by a computer, causing the computer to execute the point cloud matching method, the navigation method, or the positioning method.
  • a ninth aspect of the present application provides a computer program product comprising program instructions that, when executed by a computer, cause the computer to execute the point cloud matching method, or the navigation method, or the positioning method.
  • FIG. 1 is a flowchart of an existing visual SLAM method
  • FIG. 2 is a flowchart of an existing laser SLAM method
  • FIG. 3 is a schematic diagram of a distance constraint between two frames of laser point clouds
  • FIG. 4 is a flowchart of a SLAM method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of establishing an association between multiple point cloud clusters in two frames of point clouds according to an embodiment of the present application
  • FIG. 6 is a flowchart of a SLAM method in a dynamic environment provided by an embodiment of the present application
  • FIG. 7 is a schematic diagram of establishing an association between multiple point cloud clusters in two frames of point clouds according to an embodiment of the present application
  • FIG. 8 is a flowchart of a SLAM method in a static environment provided by an embodiment of the present application.
  • FIG. 9 is a block diagram of a SLAM device provided by an embodiment of the present application.
  • FIG. 10 is an architectural diagram of a computing device provided by an embodiment of the present application.
  • Simultaneous Localization and Mapping (SLAM): also referred to as real-time localization and map construction, or concurrent mapping and positioning. The principle of SLAM is to use lidar or visual sensor observations for position estimation, while stitching point clouds or images together to form a complete map.
  • Iterative Closest Point (ICP): an iterative method that, under certain constraints, finds the nearest point pairs (p_i, q_i) between a target point cloud P and a source point cloud Q to be matched, and then calculates the optimal matching parameters (a rotation parameter R and a translation parameter T) that minimize the error function. The optimal matching parameters are the pose transformation parameters between the two point clouds.
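  • As a minimal illustration of the ICP step defined above, the following sketch uses PCL's IterativeClosestPoint; the iteration limit and correspondence distance are illustrative values, not parameters taken from the application.

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/registration/icp.h>
    #include <Eigen/Core>

    // Align a source point cloud Q to a target point cloud P with ICP and
    // return the estimated 4x4 pose transformation (rotation + translation).
    Eigen::Matrix4f alignWithIcp(const pcl::PointCloud<pcl::PointXYZ>::Ptr& source,
                                 const pcl::PointCloud<pcl::PointXYZ>::Ptr& target) {
        pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
        icp.setInputSource(source);
        icp.setInputTarget(target);
        icp.setMaximumIterations(50);           // illustrative iteration limit
        icp.setMaxCorrespondenceDistance(1.0);  // illustrative gating distance (m)
        pcl::PointCloud<pcl::PointXYZ> aligned;
        icp.align(aligned);                     // iterates nearest-point matching
        return icp.getFinalTransformation();    // optimal (R, T) as a 4x4 matrix
    }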
  • Lidar Odometry and Mapping (LOAM): an algorithm that uses lidar to realize SLAM. The position of the robot relative to the initial moment is computed through point cloud matching of the lidar, which is called the lidar odometer; the point clouds are combined according to each position to form a map, which is called mapping.
  • Point Cloud Library (PCL): a large cross-platform open-source C++ programming library built on previous point cloud research. It implements a large number of general point-cloud algorithms and efficient data structures, covering point cloud acquisition, filtering, segmentation, registration, retrieval, feature extraction, recognition, tracking, surface reconstruction, visualization, and more. It supports multiple operating system platforms and can run on Windows, Linux, Android, Mac OS X, and some embedded real-time systems.
  • Pose (Position and Orientation): the position and attitude. The position is the coordinate of the robot relative to a reference coordinate system, and the attitude is the orientation of the robot relative to the reference coordinate system, including the pitch angle, yaw angle, and roll angle.
  • Figure 1 shows a flow chart of an existing visual SLAM method.
  • This method collects visual images through a visual sensor (camera), extracts image feature points, uses the pre-integration of an inertial sensor as an initial value to match the feature points, estimates the pose of the camera, and obtains the visual odometry.
  • the errors of all frames are optimized by the nonlinear optimization technology, and the optimized pose is obtained, and the global map is constructed by using the collected visual images and the optimized pose.
  • Figure 2 shows the flow chart of an existing laser SLAM method.
  • This method obtains point clouds through the lidar sensor and matches consecutive frames of point clouds with the iterative closest point (ICP) algorithm: each point of the source point cloud is transformed toward the target point cloud and its closest point in the target point cloud is found, the errors between each point and its closest point are summed, and the pose transformation parameters between the two frames are estimated by minimizing this error sum. In addition, closed-loop detection is performed by looking up frame point clouds in a frame database; after a closed loop is detected, nonlinear optimization is used to optimize the poses, and the point clouds of all frames are stitched together according to the corresponding pose transformation parameters to construct a point cloud map.
  • As shown in Figure 3, this laser SLAM method has a defect: because only the closest distance between points of the source point cloud and points of the target point cloud is considered during point cloud matching, and the distance error alone is used as the constraint to solve the pose transformation parameters, a point on the edge of a point cloud cluster in the source point cloud may have its closest point in the target point cloud belong to a different point cloud cluster. This leads to large errors in the matching of consecutive frames, makes point cloud matching time-consuming, and lowers the accuracy of the pose transformation parameters, which in turn causes the constructed map to overlap with itself.
  • the embodiment of the present application provides a SLAM method.
  • In this SLAM method, on the basis of the distance constraint between the two frames of point clouds, an association constraint on the object categories between the two frames is added, and point cloud matching is performed according to both the distance constraint and the association constraint. This improves the speed and accuracy of point cloud matching and yields high-precision pose transformation parameters, thereby improving the accuracy of the finally constructed point cloud map.
  • the present application can be applied to the field of robot and vehicle automatic driving. The application is described in detail below.
  • Laser point clouds are collected by a lidar, and processes such as clustering and segmentation, association calculation, dynamic filtering, multi-frame point cloud matching, and closed-loop optimization are performed to build point cloud maps. The lidar can be installed on movable devices such as vehicles and robots; when the movement of the lidar is described later, it can be understood that the movement of the device causes the spatial position of the lidar to change.
  • the method includes:
  • S401 Acquire two frames of point cloud data collected by lidar
  • The processing of two frames of point cloud data is taken as an example for detailed introduction.
  • the two frames of point cloud data are the T1 frame point cloud and the T2 frame point cloud collected before and after the lidar.
  • the T1 frame point cloud and the T2 frame point cloud may be adjacent frames collected one after the other, or they may be non-adjacent frames.
  • the voxel grid filter in the Point Cloud Library (PCL) is used to downsample each frame of the collected point cloud. The voxel grid filter downsamples without destroying the geometric structure of the point cloud itself, reducing the number of points in each frame while maintaining the shape characteristics of the point cloud.
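  • A minimal PCL sketch of this downsampling step is given below; the 0.2 m leaf size follows the resolution mentioned in the later embodiment, and the point type and function name are illustrative.

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/filters/voxel_grid.h>

    // Downsample one frame of point cloud with PCL's voxel grid filter, keeping
    // the overall shape of the cloud while reducing the number of points.
    pcl::PointCloud<pcl::PointXYZ>::Ptr downsample(
            const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
            float leaf_size = 0.2f) {                  // grid resolution in metres
        pcl::VoxelGrid<pcl::PointXYZ> voxel;
        voxel.setInputCloud(cloud);
        voxel.setLeafSize(leaf_size, leaf_size, leaf_size);
        pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
        voxel.filter(*filtered);
        return filtered;
    }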
  • S402 Clustering the point cloud of the T1 frame and the point cloud of the T2 frame respectively, to obtain the point cloud cluster of each object in the point cloud of the T1 frame and the point cloud cluster of each object in the point cloud of the T2 frame;
  • the clustering specifically includes the following sub-steps:
  • a clustering algorithm based on Euclidean distance is used to cluster the collected T1 frame point cloud and T2 frame point cloud respectively; in some other embodiments, the K-means algorithm, a Mahalanobis-distance clustering algorithm, or the like can also be used. The point cloud clusters of the objects in the T2 frame point cloud obtained by the clustering are segmented from the T2 frame point cloud, the bounding box of each object's point cloud cluster is constructed, and the parameters of each bounding box are obtained. The parameters of the bounding box of a point cloud cluster may specifically include the position coordinates of the bounding box, the horizontal heading angle of the bounding box, and the length, width, and height of the bounding box.
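  • The following sketch illustrates, with PCL, how such Euclidean clustering and bounding-box extraction could be done; the cluster tolerance and size limits are illustrative values, and the axis-aligned box (without a heading angle) is a simplification of the oriented bounding box described above.

    #include <vector>
    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/search/kdtree.h>
    #include <pcl/segmentation/extract_clusters.h>
    #include <pcl/common/common.h>   // pcl::getMinMax3D

    // Cluster a frame of point cloud by Euclidean distance and return the
    // point index sets of the clusters.
    std::vector<pcl::PointIndices> clusterFrame(
            const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
        pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
        tree->setInputCloud(cloud);
        std::vector<pcl::PointIndices> clusters;
        pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
        ec.setClusterTolerance(0.5);   // points closer than 0.5 m join one cluster
        ec.setMinClusterSize(30);
        ec.setMaxClusterSize(25000);
        ec.setSearchMethod(tree);
        ec.setInputCloud(cloud);
        ec.extract(clusters);
        return clusters;
    }

    // Axis-aligned bounding-box parameters (center and size) of one cluster;
    // the heading angle would come from an oriented box fit, omitted here.
    void boundingBox(const pcl::PointCloud<pcl::PointXYZ>& cluster,
                     float& cx, float& cy, float& cz,
                     float& length, float& width, float& height) {
        pcl::PointXYZ min_pt, max_pt;
        pcl::getMinMax3D(cluster, min_pt, max_pt);
        cx = 0.5f * (min_pt.x + max_pt.x);
        cy = 0.5f * (min_pt.y + max_pt.y);
        cz = 0.5f * (min_pt.z + max_pt.z);
        length = max_pt.x - min_pt.x;
        width  = max_pt.y - min_pt.y;
        height = max_pt.z - min_pt.z;
    }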
  • S403 Based on the similarity of the point cloud clusters, associate the point cloud clusters of each object in the T2 frame point cloud with the point cloud clusters of each object in the T1 frame point cloud;
  • the association process specifically includes:
  • the association distance between the point cloud cluster of each object in the T2 frame point cloud and the point cloud cluster of each object in the T1 frame point cloud is calculated; this association distance indicates the similarity between the clusters of the two frames, and the smaller the association distance between two point cloud clusters, the higher their similarity.
  • the point cloud clusters of each object are associated in pairs.
  • the pairwise association relationship between the point cloud clusters of objects in the point cloud of the T2 frame and the point cloud clusters of the objects in the point cloud of the T1 frame can be obtained.
  • the point cloud clusters that establish a pairwise association relationship specifically refer to the point cloud clusters corresponding to the same objects contained in both the T2 frame point cloud and the T1 frame point cloud. Because the T1 frame point cloud and the T2 frame point cloud are two frames collected one after the other by a moving lidar, some objects whose point cloud clusters appear in the T1 frame point cloud are not captured in the T2 frame point cloud, and some objects whose point cloud clusters appear in the T2 frame point cloud are not captured in the T1 frame point cloud. Therefore, when performing the association calculation, only the point cloud clusters corresponding to the same objects contained in both the T1 frame point cloud and the T2 frame point cloud need to be associated.
  • S404 Filter out point cloud clusters corresponding to dynamic objects in the point cloud of the T1 frame and the point cloud of the T2 frame;
  • the Kalman filtering algorithm can be used to analyze each point cloud cluster of each frame over consecutive multi-frame point clouds. By observing and estimating the speed of each point cloud cluster, combined with a uniform-motion model, the movement speed of each cluster is judged; based on the movement speed, the point cloud clusters of each frame are divided into point cloud clusters of dynamic objects and point cloud clusters of static objects, and the point cloud clusters of dynamic objects are filtered out of each frame to eliminate the influence of dynamic objects on point cloud matching.
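  • As a rough stand-in for the Kalman-filter-based speed estimation described above, the following simplified sketch only thresholds the centroid velocity of an associated cluster between two consecutive frames; the threshold value and the omission of multi-frame filtering are simplifying assumptions made here.

    #include <cmath>

    // Simplified dynamic-object test: treat an associated cluster as dynamic if
    // its centroid moved faster than a speed threshold between two consecutive
    // frames. The application uses a Kalman filter with a constant-velocity
    // model over several frames; this is only a rough illustration of the idea.
    bool isDynamicCluster(double cx_prev, double cy_prev, double cz_prev,
                          double cx_curr, double cy_curr, double cz_curr,
                          double dt, double speed_threshold = 0.5 /* m/s, assumed */) {
        double dx = cx_curr - cx_prev;
        double dy = cy_curr - cy_prev;
        double dz = cz_curr - cz_prev;
        double speed = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
        return speed > speed_threshold;
    }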
  • S405 Based on the result of the association and the distances between the points of the T2 frame point cloud and the points of the T1 frame point cloud, perform point cloud matching between the T2 frame point cloud and the T1 frame point cloud to obtain the pose transformation parameters between the T2 frame point cloud and the T1 frame point cloud;
  • the establishment process of the association constraint includes the following. Considering that several point cloud clusters may overlap in each frame, take the clustering result of a certain point of the T2 frame point cloud within the T2 frame point cloud as an example: by calculating the distance from that point to the point cloud cluster of each object in the T2 frame point cloud and comparing it with a set threshold, the one or more point cloud clusters whose distance is smaller than the threshold are selected as the clusters that the point may belong to, and the error of the point with respect to each of these clusters is calculated. Assuming that the point has a projected point in the T1 frame point cloud after the pose transformation, the error between the projected point and the one or more associated point cloud clusters in the T1 frame point cloud can be calculated in the same way based on the association result of step S403. In this way, the object category error between the object point cloud clusters of the T2 frame point cloud and those of the T1 frame point cloud is obtained, and minimizing this object category error is used as the association constraint between the object point cloud clusters of the two frames in the point cloud matching process;
  • the establishment process of the distance constraint includes the following. Based on the iterative closest point (ICP) algorithm, the geometric distance from each pose-transformed point of the T2 frame point cloud to its nearest point in the T1 frame point cloud is calculated, and these geometric distances are summed to obtain the distance error between the T2 frame point cloud and the T1 frame point cloud. Minimizing this distance error is used as the distance constraint between the points of the T2 frame point cloud and the points of the T1 frame point cloud;
  • the pose transformation parameters between the T2 frame point cloud and the T1 frame point cloud that satisfy both constraints are the optimal pose transformation parameters of the point cloud matching process between the T2 frame point cloud and the T1 frame point cloud.
  • S406 Perform closed-loop detection optimization on the point cloud of the T1 frame and the point cloud of the T2 frame;
  • the frame closest to each frame point cloud is searched in the frame database; after a closed loop is formed, the poses of the T1 frame point cloud and the T2 frame point cloud can be optimized separately by nonlinear optimization, so as to optimize the pose transformation parameters between the T2 frame point cloud and the T1 frame point cloud.
  • the point cloud of the T1 frame can be used as a reference frame, and based on the pose of the point cloud of the T1 frame and the pose transformation parameters between the point cloud of the T2 frame and the point cloud of the T1 frame, the point cloud of the T2 frame is Cloud performs pose transformation;
  • the pose-transformed T2 frame point cloud and the T1 frame point cloud are fused to obtain a point cloud map composed of the T2 frame point cloud and the T1 frame point cloud. Because the accuracy of the pose transformation parameters obtained by this method is improved, the accuracy of the constructed point cloud map is improved accordingly.
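  • A minimal PCL sketch of this fusion step is shown below; re-filtering of the merged map and the map coordinate system are omitted, and the function name is illustrative.

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/common/transforms.h>   // pcl::transformPointCloud
    #include <Eigen/Core>

    // Fuse the T2 frame into the map: transform it into the frame of the T1
    // point cloud with the estimated pose transformation and concatenate the
    // two clouds.
    void fuseFrames(const pcl::PointCloud<pcl::PointXYZ>::Ptr& t1_cloud,
                    const pcl::PointCloud<pcl::PointXYZ>::Ptr& t2_cloud,
                    const Eigen::Matrix4f& t2_to_t1,
                    pcl::PointCloud<pcl::PointXYZ>& map_cloud) {
        pcl::PointCloud<pcl::PointXYZ> t2_in_t1;
        pcl::transformPointCloud(*t2_cloud, t2_in_t1, t2_to_t1);
        map_cloud = *t1_cloud;     // T1 frame is used as the reference frame
        map_cloud += t2_in_t1;     // PCL supports concatenation with operator+=
    }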
  • an embodiment of the present application provides a SLAM method in a dynamic environment.
  • taking the processing of two adjacent frames of point clouds collected by a lidar in a dynamic environment as an example, the method is described in detail; the two adjacent frames are denoted the T1 frame point cloud and the T2 frame point cloud, where the T1 frame point cloud is the frame preceding the T2 frame point cloud.
  • the method includes:
  • S601 point cloud preprocessing and segmentation, preprocessing and segmentation of the input T1 frame point cloud and T2 frame point cloud;
  • the voxel grid filter in the point cloud library is used to filter the T1 frame point cloud and the T2 frame point cloud.
  • the resolution of the grid can be adjusted.
  • the resolution can be set to 0.2 m, and the point at the center of each grid is used to replace all points in the grid, which achieves downsampling, reduces the number of points in the point cloud, and maintains the shape characteristics of the point cloud at the same time;
  • the down-sampled point cloud is clustered and segmented, and the clustering algorithm based on Euclidean distance can be used to cluster the down-sampled T1 frame point cloud and T2 frame point cloud to form different point cloud clusters.
  • the subsequent association processing can be performed based on the parameters of the bounding box, without needing to identify the specific category of the object; the parameters of the bounding box may specifically include the position coordinates (x, y, z) of the center of the bounding box, the horizontal heading angle θ of the bounding box, the length L, the width W, the height H, and other parameters.
  • S602 Object association, perform correlation calculation on the point cloud of the T2 frame and the point cloud of the T1 frame, and associate the point cloud of the T2 frame with the point cloud cluster corresponding to the same object in the point cloud of the T1 frame;
  • the association distance between each point cloud cluster of the T2 frame point cloud and each point cloud cluster of the T1 frame point cloud is calculated; the association distance indicates the similarity between two point cloud clusters, and the smaller the association distance, the higher the similarity and the more likely the two clusters belong to the same object.
  • Let the point cloud cluster of the T2 frame point cloud participating in the association calculation be object i, with bounding-box center coordinates (x2, y2, z2), horizontal heading angle θ2, and size (L2, W2, H2); let the point cloud cluster of the T1 frame point cloud participating in the association calculation be object j, with bounding-box center coordinates (x1, y1, z1), horizontal heading angle θ1, and size (L1, W1, H1). The association distance between object i and object j is denoted d_i,j, and it is composed of the position distance d_1, the orientation distance d_2, and the size distance d_3 between the bounding boxes of the two point cloud clusters. In this way, the association distance d_i,j between each object i of the T2 frame point cloud and each object j of the T1 frame point cloud can be calculated.
  • Within the threshold range, a one-to-one correspondence can then be established between the multiple point cloud clusters of the two frames, and an association relationship is established between each pair of point cloud clusters corresponding to the same object. For example, if the ID of a certain point cloud cluster in the T2 frame point cloud is C, the ID of the point cloud cluster associated with it in the T1 frame point cloud can also be set to C.
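  • One simple way to realize such a one-to-one correspondence within the threshold is a greedy assignment over the matrix of association distances, as sketched below; the greedy strategy itself is an illustrative choice, since the application only requires that clusters be matched one-to-one within the threshold.

    #include <vector>
    #include <cstddef>

    // Greedy one-to-one association of clusters between the T2 frame (rows)
    // and the T1 frame (columns) from a matrix of association distances
    // d[i][j]: at each step the globally smallest remaining distance below the
    // threshold is accepted and both clusters are removed from further matching.
    std::vector<int> associateClusters(const std::vector<std::vector<double>>& d,
                                       double threshold) {
        const std::size_t n2 = d.size();               // clusters in the T2 frame
        const std::size_t n1 = n2 ? d[0].size() : 0;   // clusters in the T1 frame
        std::vector<int> match(n2, -1);                // T2 index -> T1 index
        std::vector<bool> used1(n1, false);
        for (std::size_t step = 0; step < n2; ++step) {
            double best = threshold;
            int bi = -1, bj = -1;
            for (std::size_t i = 0; i < n2; ++i) {
                if (match[i] != -1) continue;
                for (std::size_t j = 0; j < n1; ++j) {
                    if (used1[j] || d[i][j] >= best) continue;
                    best = d[i][j]; bi = static_cast<int>(i); bj = static_cast<int>(j);
                }
            }
            if (bi < 0) break;                         // nothing left below threshold
            match[bi] = bj;
            used1[bj] = true;
        }
        return match;   // match[i] == -1 means cluster i of T2 has no counterpart in T1
    }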
  • S603 Filter out dynamic objects, and filter out dynamic objects in the point cloud of the T1 frame and the point cloud of the T2 frame based on the correlation result;
  • the Kalman filtering algorithm can be used to track the centroid position of the point cloud cluster of the same object over consecutive multi-frame point clouds; based on a uniform-motion model, the movement speed of the object over the consecutive frames is estimated. According to the movement speed, the point cloud clusters can be divided into point cloud clusters of dynamic objects and point cloud clusters of static objects, and the dynamic point cloud clusters are filtered out to reduce the impact of dynamic objects on the subsequent point cloud matching.
  • S604 Point cloud matching: based on the result of the association calculation and the distances between the points of the T2 frame point cloud and the points of the T1 frame point cloud, construct a multi-constraint point cloud matching and obtain the pose transformation parameters between the T2 frame point cloud and the T1 frame point cloud;
  • Assume that a point p_i of the T2 frame point cloud is projected into the T1 frame point cloud as q_i after the pose transformation, and that the point cloud cluster with ID C in the T2 frame point cloud has the same ID C in the T1 frame point cloud. The distance from the projected point q_i to the nearest point of the point cloud cluster C in the T1 frame point cloud is dis(q_i, C), and this distance is used to measure the error of the projected point q_i belonging to the point cloud cluster C in the T1 frame point cloud.
  • Because a point may belong to more than one point cloud cluster after clustering, take the point p_i in the T2 frame point cloud as an example: by calculating the distance from p_i to each point cloud cluster of the T2 frame point cloud and comparing it with a set threshold, the one or more clusters whose distance is smaller than the threshold (for example, point cloud clusters C_1, C_2, C_3) are selected, and the error of p_i belonging to each of the clusters C_1, C_2, C_3 of the T2 frame point cloud is calculated. Similarly, the error of the projected point q_i corresponding to p_i belonging to the clusters C_1, C_2, C_3 of the T1 frame point cloud can be calculated; these errors of q_i with respect to the clusters C_1, C_2, C_3 are summed to obtain the total error between the projected point q_i and the object point cloud clusters it may belong to in the T1 frame point cloud.
  • The total errors of all projected points of the T2 frame point cloud in the T1 frame point cloud are summed to obtain the object category error between the object point cloud clusters of the T2 frame point cloud and those of the T1 frame point cloud; minimizing this object category error is taken as the association constraint of the point cloud matching between the object clusters of the two frames. E(R, T)_Label denotes the object category error as a function of the pose transformation parameters, and a weight coefficient is assigned to the object category error and to the distance error respectively. Solving for the pose transformation parameters (R, T) that minimize the result of the weighted summation gives the pose transformation parameters between the T2 frame point cloud and the T1 frame point cloud, where R is the rotation of the T2 frame point cloud relative to the T1 frame point cloud and T is the displacement vector of the T2 frame point cloud relative to the T1 frame point cloud.
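  • The following sketch illustrates how the weighted sum of the object category error and the distance error could be evaluated for one candidate (R, T); here the object category error is approximated by the distance dis(q_i, C) from each projected point to its associated cluster, the nearest-neighbor search is brute force, and the weights are illustrative, so this is a sketch of the cost only, not of the solver that minimizes it.

    #include <vector>
    #include <limits>
    #include <Eigen/Core>

    struct LabeledPoint {
        Eigen::Vector3d p;   // point coordinates
        int cluster_id;      // ID of the cluster the point belongs to (-1 if none)
    };

    // Evaluate the weighted matching error of a candidate pose (R, T) that maps
    // T2-frame points into the T1 frame.
    double matchingCost(const std::vector<LabeledPoint>& t2,
                        const std::vector<LabeledPoint>& t1,
                        const Eigen::Matrix3d& R, const Eigen::Vector3d& T,
                        double w_label = 1.0, double w_dist = 1.0) {
        double e_label = 0.0, e_dist = 0.0;
        for (const LabeledPoint& pt : t2) {
            Eigen::Vector3d q = R * pt.p + T;                  // projected point q_i
            double nearest_any = std::numeric_limits<double>::max();
            double nearest_same_id = std::numeric_limits<double>::max();
            for (const LabeledPoint& ref : t1) {
                double dd = (q - ref.p).squaredNorm();
                if (dd < nearest_any) nearest_any = dd;
                if (ref.cluster_id == pt.cluster_id && dd < nearest_same_id)
                    nearest_same_id = dd;
            }
            if (nearest_any != std::numeric_limits<double>::max()) e_dist += nearest_any;
            if (nearest_same_id != std::numeric_limits<double>::max()) e_label += nearest_same_id;
        }
        return w_label * e_label + w_dist * e_dist;            // weighted summation
    }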
  • S605 Closed-loop detection: the frames closest to the T2 frame point cloud and to the T1 frame point cloud are respectively found in the frame database to form closed loops, and their poses are optimized separately, so that the pose transformation parameters between the T2 frame point cloud and the T1 frame point cloud are optimized. Specifically, after the closed loop is formed, nonlinear optimization is performed to optimize the poses of the T2 frame point cloud and the T1 frame point cloud respectively, and thereby to optimize the pose transformation parameters between the two frames.
  • S606 construct a map, and construct a point cloud map including the point cloud of the T2 frame and the point cloud of the T1 frame based on the optimized pose transformation parameters.
  • the point cloud of the T1 frame can be used as a reference frame, and based on the pose of the point cloud of the T1 frame and the pose transformation parameters between the point cloud of the T2 frame and the point cloud of the T1 frame, the point cloud of the T2 frame is processed pose transformation;
  • the point cloud of T2 frame and the point cloud of T1 frame after pose transformation are fused, and the point cloud map composed of the point cloud of T2 frame and the point cloud of T1 frame is obtained.
  • when there are multiple frames of point clouds, this method likewise transforms each frame of point cloud according to the pose transformation parameters between that frame and the previous frame, and then fuses it with the previous frame, so as to construct a point cloud map containing the multiple frames of point clouds.
  • by using the association constraint and the distance constraint of the two frames of point clouds as the constraints for point cloud matching, the method obtains pose transformation parameters with higher accuracy, so that the constructed point cloud map has a smaller error and a higher precision.
  • the method also considers the influence of dynamic objects in the dynamic environment. By filtering out the dynamic objects, the influence of the dynamic objects on the point cloud matching is eliminated, and the robustness of the method in the dynamic environment is improved.
  • The SLAM of this embodiment is compared with the existing laser SLAM of the second technique mentioned above: the accuracy of the obtained trajectories is compared against the GPS ground truth. The comparison of the trajectory results and errors shows that the trajectory obtained by the SLAM of this embodiment is closer to the GPS ground truth, and in dynamic scenarios the SLAM of this embodiment shows a more pronounced accuracy improvement.
  • an embodiment of the present application provides a SLAM method in a static environment. Compared with the second embodiment shown in FIG. 6, this embodiment omits the process of eliminating dynamic objects; the other processing steps are the same as in the second embodiment shown in FIG. 6 and are not repeated here.
  • the association constraint and the distance constraint of the two frames of point clouds are likewise used as the constraints for point cloud matching, which gives a faster matching speed and higher-precision pose transformation parameters, so that the constructed point cloud map has a smaller error and a higher accuracy.
  • The SLAM of this embodiment is also compared with the laser SLAM of the second technique: the accuracy of the obtained trajectories is compared against the GPS ground truth, and the trajectory obtained by the SLAM of this embodiment is closer to the GPS ground truth. Because this embodiment does not filter out dynamic objects, its error is larger than that of the second embodiment, but its accuracy is still higher than that of the existing laser SLAM.
  • the SLAM method provided by the embodiments of the present application has the following beneficial effects:
  • This method associates the point cloud clusters of the objects in the two frames of point clouds based on the parameters of the object bounding boxes, so the object correspondence between the two frames can be found without identifying the type of each object. Compared with semantic SLAM, which associates objects by identifying their semantics, no trained model is required, and the computational complexity and engineering difficulty are lower;
  • the method also has a wide range of applications, such as positioning technology in automatic driving, automatic navigation technology, and automatic obstacle avoidance technology during automatic driving according to the predicted motion trajectory of dynamic objects.
  • the pose transformation parameters between each frame of point cloud and the previous frame of point cloud can be obtained based on the SLAM method in the above embodiments;
  • each frame of point cloud is transformed and fused with the previous frame of point cloud to obtain a point cloud map including the point cloud of each frame; path planning and dynamic obstacle avoidance are then performed based on the point cloud map.
  • A positioning method provided in this embodiment of the present application can be used to determine the current pose of the lidar. Specifically, based on the SLAM method of the above embodiment, the pose transformation parameters between the current moment and the previous moment of the lidar are obtained; based on these pose transformation parameters and the pose of the lidar at the previous moment, the pose of the lidar at the current moment is determined.
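  • The pose update itself then amounts to composing the previous pose with the relative transform from point cloud matching, for example as below; the frame convention (right multiplication) is an assumption that depends on how the matcher defines the relative transform.

    #include <Eigen/Geometry>

    // Update the lidar pose: compose the previous pose (lidar -> map) with the
    // relative transform between the previous and the current frame obtained
    // from point cloud matching.
    Eigen::Isometry3d updatePose(const Eigen::Isometry3d& pose_prev,
                                 const Eigen::Isometry3d& delta_curr_to_prev) {
        return pose_prev * delta_curr_to_prev;   // pose of the lidar at the current moment
    }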
  • an embodiment of the present application provides a SLAM device, and the device includes:
  • an acquisition module 901 configured to acquire two frames of point cloud data collected by the lidar, where the two frames of point cloud data include a first frame point cloud and a second frame point cloud before and after;
  • Clustering module 902 configured to cluster the point cloud of the first frame to obtain the point cloud clusters of each object in the point cloud of the first frame; to cluster the point cloud of the second frame to obtain the point cloud of the second frame The point cloud cluster of each object in the second frame point cloud;
  • an association module 903, configured to associate the point cloud clusters of each object in the point cloud of the second frame with the point cloud clusters of the objects in the point cloud of the first frame;
  • a dynamic filtering module 904 configured to filter out point cloud clusters corresponding to dynamic objects in the point cloud of the first frame and the point cloud of the second frame;
  • a point cloud matching module 905 configured to perform point cloud matching on the second frame point cloud and the first frame point cloud based on the result of the association, to obtain the second frame point cloud and the first frame Pose transformation parameters between point clouds;
  • a closed-loop optimization module 906 configured to perform closed-loop detection on the point cloud of the first frame and the point cloud of the second frame; based on the result of the closed-loop detection, the point cloud of the second frame and the point cloud of the first frame The pose transformation parameters between clouds are optimized;
  • the map construction module 907 is configured to perform pose transformation on the second frame point cloud based on the pose transformation parameters between the second frame point cloud and the first frame point cloud, and to fuse the pose-transformed second frame point cloud with the first frame point cloud to obtain a point cloud map composed of the second frame point cloud and the first frame point cloud.
  • The device can implement the SLAM method of the above embodiments to perform point cloud matching on the point clouds collected by the lidar, construct a point cloud map, and so on; the technical principle of its implementation is consistent with that of the SLAM method in the above embodiments and is not repeated here.
  • FIG. 10 is a schematic structural diagram of a computing device 1000 provided by an embodiment of the present application.
  • the computing device 1000 includes: a processor 1010 , a memory 1020 , a communication interface 1030 , and a bus 1040 .
  • the communication interface 1030 in the computing device 1000 shown in FIG. 10 can be used to communicate with other devices.
  • the processor 1010 can be connected with the memory 1020 .
  • the memory 1020 may be used to store program code and data. The memory 1020 may be a storage module inside the processor 1010, an external storage module independent of the processor 1010, or may include both a storage module inside the processor 1010 and an external storage module independent of the processor 1010.
  • the computing device 1000 may further include a bus 1040 .
  • the memory 1020 and the communication interface 1030 may be connected to the processor 1010 through the bus 1040 .
  • the bus 1040 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus or the like.
  • the bus 1040 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one line is shown in FIG. 10, but it does not mean that there is only one bus or one type of bus.
  • the processor 1010 may adopt a central processing unit (central processing unit, CPU).
  • the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the processor 1010 uses one or more integrated circuits to execute related programs, so as to implement the technical solutions provided by the embodiments of the present application.
  • the memory 1020 may include read only memory and random access memory, and provides instructions and data to the processor 1010 .
  • A portion of the memory 1020 may also include a non-volatile random access memory. For example, the memory 1020 may also store device type information.
  • the processor 1010 executes the computer-executed instructions in the memory 1020 to execute the operation steps of the above method.
  • The computing device 1000 may correspond to the corresponding execution subject of the methods according to the embodiments of the present application, and the above and other operations and/or functions of the modules in the computing device 1000 are respectively intended to implement the corresponding processes of the methods in the embodiments, which are not repeated here.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program codes .
  • Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it is used to perform the methods described in the foregoing embodiments, including at least one of the solutions described therein.
  • the computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages, including object-oriented programming languages—such as Java, Smalltalk, C++, but also conventional Procedural programming language - such as the "C" language or similar programming language.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (eg, through the Internet using an Internet service provider) connect).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

本申请提供了一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达,该点云匹配方法包括:获取激光雷达采集到的两帧点云数据,所述两帧点云数据包括前后的第一帧点云和第二帧点云;对所述第一帧点云进行聚类,得到所述第一帧点云中各物体的点云簇;对所述第二帧点云进行聚类,得到所述第二帧点云中各物体的点云簇;将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联;将所述第二帧点云与所述第一帧点云进行点云匹配,得到所述第二帧点云与所述第一帧点云之间的位姿变换参数。本申请具有更快的点云匹配速度和更高的点云匹配精度,提高同步定位与建图SLAM的速度和精度。

Description

一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达 技术领域
本申请涉及定位技术领域,特别涉及一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达。
背景技术
随着移动机器人、智能驾驶技术的发展,在生活服务,工业运输,抗险救灾等领域,同步定位与建图(Simultaneous Localization and Mapping,SLAM)技术逐渐成为最基础、最核心的技术,其原理是利用激光雷达或视觉传感器观测信息进行位置估计,同时将点云或者图像拼接到一起构成完整的地图。当机器人探索未知环境时,利用SLAM技术实时定位并且构建环境地图,是完成后续路径规划与导航任务的重要前提。可以说,SLAM技术对于移动机器人以及智能驾驶技术的发展是至关重要的。虽然在结构化的室内场景中SLAM技术已经得到成熟的应用,但是在室外场景中,由于场景特征相比室内较少,SLAM技术仍然存在较大的困难与挑战,在执行定位与建图的过程中,位姿估计的精度和地图的质量都有待进一步提高。
传统的SLAM方法主要分为激光SLAM与视觉SLAM,激光SLAM使用激光雷达获取点云进行匹配,估计激光雷达的位姿并构建地图,视觉SLAM使用视觉传感器获取图像,并在图像中进行特征点提取与匹配,从而估计视觉传感器的位姿并构建地图。由于视觉SLAM容易受到环境光线变化的影响,尤其在室外环境中,光线变化会直接影响特征点的提取精度,导致建图效果不佳,因此,视觉SLAM在室外环境中的整体表现不如激光SLAM。而激光SLAM在特征稀疏的场景中,容易丢失跟踪目标,导致点云匹配出现失误,从而产生误差,这些误差会逐渐累积,导致最终地图出现重叠的问题,并且,激光SLAM在动态环境中,动态物体也容易对点云匹配产生影响,无法估计正确的位姿。
发明内容
有鉴于此,本申请提供了一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达,具有更快的点云匹配速度和更高的点云匹配精度,提高同步定位与建图SLAM的速度和精度。
为达到上述目的,本申请的第一方面提供一种点云匹配方法,包括:
获取激光雷达采集到的两帧点云数据,所述两帧点云数据包括前后的第一帧点云和第二帧点云;
对所述第一帧点云进行聚类,得到所述第一帧点云中各物体的点云簇;
对所述第二帧点云进行聚类,得到所述第二帧点云中各物体的点云簇;
基于点云簇的相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联;
基于所述关联的结果及所述第二帧点云与所述第一帧点云中的各个点的距离,将 所述第二帧点云与所述第一帧点云进行点云匹配,得到所述第二帧点云与所述第一帧点云之间的位姿变换参数。
由上,通过本方法,对激光雷达采集的前后两帧点云进行聚类、分割,以得到该两帧点云中各物体的点云簇,通过该两帧点云中各物体的点云簇进行关联计算,得到该两帧点云中各物体的点云簇的相似度,将该两帧点云中相似度最高的物体的点云簇进行关联,基于该关联的结果,对该两帧点云进行相互关联的各物体的点云簇的匹配和该两帧点云中各个点的匹配,即可得到该两帧点云之间的位姿变换参数。利用本方法,可实现速度更快、精度更高的点云匹配,得到精确度更高的位姿变换参数,从而提高同步定位与建图的精度。
在第一方面的一种可能的实现方式中,还包括:
基于所述第二帧点云与所述第一帧点云的位姿变换参数,将所述第二帧点云进行位姿变换;
将位姿变换后的第二帧点云与所述第一帧点云进行融合,得到所述第二帧点云与第一帧点云组成的点云地图。
由上,基于得到的第二帧点云与第一帧点云的位姿变换参数,可将第二帧点云进行位姿变换,并将位姿变换后的第二帧点云与第一帧点云进行融合,同时还可以结合地图坐标系,将融合后的第二帧点云与第一帧点云构建到地图坐标系中,得到点云地图。
在第一方面的一种可能的实现方式中,所述基于点云簇的相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联,包括:
获取所述第一帧点云中各物体的点云簇的特征向量;
获取所述第二帧点云中各物体的点云簇的特征向量;
基于所述第二帧点云中各物体的点云簇的特征向量与所述第一帧点云中各物体的点云簇的特征向量计算相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行基于所述相似度的关联。
由上,基于聚类得到的两帧点云中各物体的点云簇,获取每个点云簇的特征向量,通过计算第二帧点云中各物体的点云簇的特征向量与第一帧点云中各物体的点云簇的特征向量的关联距离,可得到第二帧点云中各物体的点云簇与第一帧点云中各物体的点云簇的相似度,基于相似度,可将第二帧点云中各物体的点云簇与第一帧点云中各物体的点云簇进行关联。
在第一方面的一种可能的实现方式中,所述点云簇的特征向量包括至少以下之一:
所述点云簇的位置参数、朝向参数或尺寸参数。
由上,通过获取点云簇的中心坐标、水平航向角、长宽高等参数,可确定该点云簇的位置参数、朝向参数及尺寸参数,以形成每个点云簇的特征点,便于快速完成第二帧点云中各物体的点云簇与第一帧点云中各物体的点云簇的两两关联计算,并基于两两关联计算的结果进行点云簇的关联。
在第一方面的一种可能的实现方式中,所述基于所述关联的结果及所述第二帧点云与所述第一帧点云中的各个点的距离,将所述第二帧点云与所述第一帧点云进行点云匹配,包括:
基于所述关联的结果,计算经过位姿变换后的所述第二帧点云中的各物体的点云簇与所述第一帧点云中各物体的点云簇的物体类别误差;
计算经过所述位姿变换后的所述第二帧点云中各个点与所述第一帧点云中距离最近的点的距离误差;
对所述物体类别误差和所述距离误差进行加权求和,将所述加权求和的结果最小作为约束条件,计算所述第二帧点云与所述第一帧点云之间的位姿变换参数。
由上,基于第二帧点云中的每个点到该第二帧点云中的每个物体的点云簇的距离,可得到该第二帧点云中的每个点聚类属于该第二帧点云中的每个物体的点云簇的概率,基于关联的结果,可进一步计算得到第二帧点云中的每个点经过位姿变换后聚类属于第一帧点云中的每个物体的点云簇的概率,从而可以得到第二帧点云中的所有点经过位姿变换后聚类属于第一帧点云中的每个物体的点云簇的整体概率,该整体概率可以作为经过位姿变换后的第二帧点云中的各物体的点云簇与第一帧点云中各物体的点云簇的物体类别误差。同时,基于迭代最近点(Iterative Closest Point,ICP)算法,可计算经过位姿变换后的第二帧点云中的各个点与第一帧点云中各个点的距离误差,通过对该物体类别误差和距离误差分别设置一权重系数,并进行求和计算,将加权求和的结果最小作为约束条件,即可计算第二帧点云与第一帧点云之间的位姿变换参数。本方法基于物体类别误差和距离误差进行两帧点云之间的点云匹配,与现有的只基于距离误差进行点云匹配相比,具有更快的点云匹配速度和更高的点云匹配精度,可得到更高精度的位姿变换参数。
在第一方面的一种可能的实现方式中,所述进行点云匹配之前,还包括:
滤除所述第一帧点云中对应动态物体的点云簇;和/或
滤除所述第二帧点云中对应动态物体的点云簇。
由上,在复杂的动态环境中,动态的物体会造成两帧点云之间存在较大的差异,导致点云匹配失败,无法获取精确的位姿变换参数,因此在进行点云匹配之前,可对同一物体的点云簇在连续的多帧点云中的速度进行观测和估计,从而判断该物体是否是动态物体,并将对应动态物体的点云簇从每帧点云中滤除,以消除动态物体对点云匹配造成的影响。
在第一方面的一种可能的实现方式中,还包括:
对所述第一帧点云进行闭环检测;和/或
对所述第二帧点云进行闭环检测;
基于所述闭环检测的结果,对所述第二帧点云与所述第一帧点云之间的位姿变换参数进行优化。
由上,基于闭环检测算法,在帧数据库中分别查找与第一帧点云、第二帧点云最接近的帧,分别构成闭环,然后将构成闭环的最接近的帧的局部特征点分别投影到第一帧点云、第二帧点云并最小化重投影误差,以对第一帧点云、第二帧点云的位姿进行优化,从而对第二帧点云与第一帧点云之间的位姿变换参数进行优化。
在第一方面的一种可能的实现方式中,所述进行聚类之前,还包括:
滤除所述第一帧点云中的噪声点;和/或
滤除所述第二帧点云中的噪声点。
由上,通过对采集的第一帧点云、第二帧点云进行滤波,减少第一帧点云、第二帧点云中的噪声点,同时保留点云的形状特征,减少后续处理的计算量。
为达到上述目的,本申请的第二方面提供一种点云匹配装置,包括:
获取模块,用于获取激光雷达采集到的两帧点云数据,所述两帧点云数据包括前后的第一帧点云和第二帧点云;
聚类模块,用于对所述第一帧点云进行聚类,得到所述第一帧点云中各物体的点云簇;对所述第二帧点云进行聚类,得到所述第二帧点云中各物体的点云簇;
关联模块,用于基于点云簇的相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联;
点云匹配模块,用于基于所述关联的结果及所述第二帧点云与所述第一帧点云中的各个点的距离,将所述第二帧点云与所述第一帧点云进行点云匹配,得到所述第二帧点云与所述第一帧点云之间的位姿变换参数。
在第二方面的一种可能的实现方式中,还包括:
地图构建模块,用于基于所述第二帧点云与所述第一帧点云的位姿变换参数,将所述第二帧点云进行位姿变换;将位姿变换后的第二帧点云与所述第一帧点云进行融合,得到所述第二帧点云与第一帧点云组成的点云地图。
在第二方面的一种可能的实现方式中,所述关联模块具体用于:
获取所述第一帧点云中各物体的点云簇的特征向量;
获取所述第二帧点云中各物体的点云簇的特征向量;
基于所述第二帧点云中各物体的点云簇的特征向量与所述第一帧点云中各物体的点云簇的特征向量计算相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行基于所述相似度的关联。
在第二方面的一种可能的实现方式中,所述点云簇的特征向量包括至少以下之一:
所述点云簇的位置参数、朝向参数或尺寸参数。
在第二方面的一种可能的实现方式中,所述点云匹配模块具体用于:
基于所述关联的结果,计算经过位姿变换后的所述第二帧点云中的各物体的点云簇与所述第一帧点云中各物体的点云簇的物体类别误差;
计算经过所述位姿变换后的所述第二帧点云中各个点与所述第一帧点云中距离最近的点的距离误差;
对所述物体类别误差和所述距离误差进行加权求和,将所述加权求和的结果最小作为约束条件,计算所述第二帧点云与所述第一帧点云之间的位姿变换参数。
在第二方面的一种可能的实现方式中,所述进行点云匹配之前,还包括动态滤除模块:
用于滤除所述第一帧点云中对应动态物体的点云簇;和/或
用于滤除所述第二帧点云中对应动态物体的点云簇。
在第二方面的一种可能的实现方式中,还包括闭环优化模块:
用于对所述第一帧点云进行闭环检测;和/或
用于对所述第二帧点云进行闭环检测;
基于所述闭环检测的结果,对所述第二帧点云与所述第一帧点云之间的位姿变换参数进行优化。
在第二方面的一种可能的实现方式中,所述获取模块还用于:
滤除所述第一帧点云中的噪声点;和/或
滤除所述第二帧点云中的噪声点。
为达到上述目的,本申请的第三方面提供一种导航方法,包括:
基于所述点云匹配方法得到获取的每帧点云与前一帧点云的位姿变换参数;
基于所述位姿变换参数,对所述每帧点云进行变换,并与前一帧点云进行融合,得到包含所述每帧点云的点云地图;
基于所述点云地图进行路径规划和动态障碍物避障。
为达到上述目的,本申请的第四方面提供一种导航设备,包括:
所述点云匹配装置,用于得到获取的每帧点云与前一帧点云的位姿变换参数;基于所述位姿变换参数,对所述每帧点云进行变换,并与前一帧点云进行融合,得到包含所述每帧点云的点云地图;
导航模块,用于基于所述点云地图进行路径规划和动态障碍物避障。
为达到上述目的,本申请的第五方面提供一种定位方法,包括:
基于所述点云匹配方法,得到激光雷达的当前时刻与前一时刻的位姿变换参数;
基于所述位姿变换参数、所述激光雷达的前一时刻的位姿,确定所述激光雷达的当前时刻的位姿。
为达到上述目的,本申请的第六方面提供一种激光雷达,包括:
所述点云匹配装置,用于得到激光雷达的当前时刻与前一时刻的位姿变换参数;
定位模块,用于基于所述位姿变换参数、所述激光雷达的前一时刻的位姿,确定所述激光雷达的当前时刻的位姿。
为达到上述目的,本申请的第七方面提供一种计算设备,包括:
总线;
通信接口,其与所述总线连接;
至少一个处理器,其与所述总线连接;以及
至少一个存储器,其与所述总线连接并存储有程序指令,所述程序指令当被所述至少一个处理器执行时使得所述至少一个处理器执行所述点云匹配方法、或所述导航方法、或所述定位方法。
为达到上述目的,本申请的第八方面提供一种计算机可读存储介质,其上存储有程序指令,所述程序指令当被计算机执行时使得所述计算机执行点云匹配方法、或所述导航方法、或所述定位方法。
为达到上述目的,本申请的第九方面提供一种计算机程序产品,其包括有程序指令,所述程序指令当被计算机执行时使得所述计算机执行点云匹配方法、或所述导航方法、或所述定位方法。
本申请的这些和其它方面在以下(多个)实施例的描述中会更加简明易懂。
附图说明
以下参照附图来进一步说明本申请的各个特征和各个特征之间的联系。附图均为示例性的,一些特征并不以实际比例示出,并且一些附图中可能省略了本申请所涉及领域的惯常的且对于本申请非必要的特征,或是额外示出了对于本申请非必要的特征,附图所示的各个特征的组合并不用以限制本申请。另外,在本说明书全文中,相同的附图标记所指代的内容也是相同的。具体的附图说明如下:
图1为现有的一种视觉SLAM方法的流程图;
图2为现有的一种激光SLAM方法的流程图;
图3为两帧激光点云之间的距离约束的示意图;
图4为本申请实施例提供的一种SLAM方法的流程图;
图5为本申请实施例的两帧点云中的多个点云簇之间建立关联的示意图;
图6为本申请实施例提供的一种动态环境中的SLAM方法的流程图;
图7为本申请实施例的两帧点云中的多个点云簇之间建立关联的示意图;
图8为本申请实施例提供的一种静态环境中的SLAM方法的流程图;
图9为本申请实施例提供的一种SLAM装置的模块图;
图10为本申请实施例提供的一种计算设备的架构图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。通常在此处附图中描述和示出的本申请实施例的组件可以以各种不同的配置来布置和设计。因此,以下对在附图中提供的本申请的实施例的详细描述并非旨在限制要求保护的本申请的范围,而是仅仅表示本申请的选定实施例。基于本申请的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本申请保护的范围。
说明书和权利要求书中的词语“第一、第二、第三等”或模块A、模块B、模块C等类似用语,仅用于区别类似的对象,不代表针对对象的特定排序,可以理解地,在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
在以下的描述中,所涉及的表示步骤的标号,如S110、S120……等,并不表示一定会按此步骤执行,在允许的情况下可以互换前后步骤的顺序,或同时执行。
说明书和权利要求书中使用的术语“包括”不应解释为限制于其后列出的内容;它 不排除其它的元件或步骤。因此,其应当诠释为指定所提到的所述特征、整体、步骤或部件的存在,但并不排除存在或添加一个或更多其它特征、整体、步骤或部件及其组群。因此,表述“包括装置A和B的设备”不应局限为仅由部件A和B组成的设备。
本说明书中提到的“一个实施例”或“实施例”意味着与该实施例结合描述的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在本说明书各处出现的用语“在一个实施例中”或“在实施例中”并不一定都指同一实施例,但可以指同一实施例。此外,在一个或多个实施例中,能够以任何适当的方式组合各特定特征、结构或特性,如从本公开对本领域的普通技术人员显而易见的那样。
对本申请具体实施方式进行进一步详细说明之前,对本申请实施例中涉及的技术用语进行说明。除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。如有不一致,以本说明书中所说明的含义或者根据本说明书中记载的内容得出的含义为准。另外,本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
为了准确地对本申请中的技术内容进行叙述,以及为了准确地理解本申请,在对具体实施方式进行说明之前先对本说明书中所使用的术语给出如下的解释说明或定义。
同步定位与建图,Simultaneous Localization and Mapping,SLAM,或称为即时定位与地图构建,或并发建图与定位。本申请中,同步定位与建图技术的原理是利用激光雷达或视觉传感器观测信息进行位置估计,同时将点云或者图像拼接到一起构成完整的地图。
迭代最近点,Iterative Closest Point,ICP。一种迭代计算方法,分别在待匹配的目标点云P和源点云Q中,按照一定的约束条件,找到最邻近点(pi,qi),然后计算出最优匹配参数(旋转参数Rotate,平移参数Translate),使得误差函数最小的一种方法。本申请中,最优匹配参数就是两个点云的位姿变换参数。
激光雷达里程计和建图,Lidar Odometry and Mapping,LOAM。一种利用激光雷达实现SLAM的算法,通过激光雷达的点云匹配,计算机器人相对于初始时刻的位置,称为激光雷达里程计,根据每个位置把点云结合到一起构成地图,称为建图。
点云库,Point Cloud Library,PCL。是在吸收了前人点云相关研究基础上建立起来的大型跨平台开源C++编程库,它实现了大量点云相关的通用算法和高效数据结构,涉及到点云获取、滤波、分割、配准、检索、特征提取、识别、追踪、曲面重建、可视化等。支持多种操作系统平台,可在Windows、Linux、Android、Mac OS X、部分嵌入式实时系统上运行。
位姿,Position and Orientation,Pose。表示位置和姿态,位置是机器人所处的相对于参考坐标系的坐标,姿态是机器人相对于参考坐标系的朝向,包括俯仰角,偏航角和翻滚角。
下面,首先对现有技术进行分析:
技术一:如图1所示为现有的一种视觉SLAM方法的流程图,该方法通过视觉传 感器(摄像头)采集视觉图像,并提取图像特征点,利用惯性传感器的预积分作为初值进行图像特征点匹配,估计出相机的位姿,得到视觉里程计。并在闭环检测之后将所有帧的误差通过非线性优化的技术来进行优化,得到优化后的位姿,利用采集的视觉图像以及优化后的位姿构建全局地图。
技术一缺陷:由于视觉SLAM是提取图像特征点,图像的亮度或灰度容易对特征点的提取效果产生影响,当视觉传感器朝向地面或单独的墙壁时,容易造成图像特征点丢失,导致无法提取到足够多的图像特征点;或者当视觉传感器处于强光线的室外环境中,光线的剧烈变化也容易对图像特征点的提取产生较大影响,从而导致最终的建图效果不理想。基于此,现有的视觉SLAM方法容易受到多种因素的影响,存在适用环境范围较小,鲁棒性较差的缺陷。
技术二:如图2所示为现有的一激光SLAM方法的流程图,该方法通过激光雷达传感器获取点云,将前后两帧的点云进行点云匹配,在点云匹配过程中,采用迭代最近点ICP算法,根据源点云中的每个点查找其变换到目标点云后在目标点云中的最近点,利用每个点与最近点之间的误差求和,并最小化误差求和的方式,来估计两帧点云之间的位姿变换参数。同时还通过在帧数据库中查找帧点云执行闭环检测,在检测到闭环后利用非线性优化技术来优化位姿,并将所有帧的点云根据对应的位姿变换参数拼接到一起构建点云地图。
技术二缺陷:如图3所示,由于该激光SLAM方法在点云匹配过程中,只考虑了源点云和目标点云中点的距离最近,将其距离误差作为距离约束来求解位姿变换参数,源点云中的部分点云簇的边缘的点,其在目标点云中距离最近的点属于其他的点云簇,导致前后帧的匹配结果容易产生较大误差,同时,点云匹配的过程耗时较长,位姿变换参数的精度较低,导致构建的地图容易产生重叠的问题。并且在室外动态环境中使用时,由于动态物体的影响,源点云中的点难以在目标点云中找到准确的对应点,导致点云匹配失败,无法收敛到正确的位姿,从而导致最终的建图效果不理想,同样存在鲁棒性较差的问题。
本申请实施例提供了一种SLAM方法,在两帧点云之间的距离约束的基础上,增加两帧点云之间的物体类别的关联约束,根据该距离约束和关联约束,进行点云匹配,提高了点云匹配的速度和精度,以得到高精度的位姿变换参数,从而提高最终构建的点云地图的精度。本申请可以应用于机器人、车辆自动驾驶领域。下面对本申请进行详细介绍。
实施例一
如图4所示,本申请实施例提供的一种SLAM方法中,通过激光雷达进行激光点云的采集,并通过对采集的多帧点云进行聚类分割、关联计算、动态滤除、点云匹配和闭环优化等过程的处理,构建点云地图。其中激光雷达可以安装在车辆、机器人等可移动的设备上,后文描述激光雷达的移动时,可以理解为所述设备的移动而导致激光雷达的空间位置发生了变化。具体的,该方法包括:
S401:获取激光雷达采集到的两帧点云数据;
本实施例中,通过激光雷达进行点云采集时,以间隔一设定的行走距离d(例如 d=0.3m)的方式,采集多帧点云,本实施例中,以其中的两帧点云数据为例进行详细介绍,该两帧点云数据为激光雷达前后采集的T1帧点云和T2帧点云,在一些实施例中,该T1帧点云和T2帧点云可以为前后相邻关系,也可以为不相邻关系。
然而通过激光雷达采集的点云,往往会较为密集,过多的点云数量会对后续分割工作带来困难。本实施例通过点云库(PCL)中的体素栅格滤波器对采集的每帧点云进行降采样,体素栅格滤波器可以达到降采样的同时不破坏点云本身几何结构的功能,经过降采样,减少每帧点云中的点的数量,并同时保持点云的形状特征。
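下面给出一个体素栅格降采样的示意性代码草图（基于numpy的简化实现，用于说明"以体素中心的点代替该体素内所有点"的思路；实施例中实际使用的是PCL点云库中的体素栅格滤波器，此处的函数名与分辨率取值均为示例性假设）：

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float = 0.2) -> np.ndarray:
    """体素栅格降采样：将点划分到边长为 voxel_size 的三维体素中，
    并用体素中心的点代替该体素内的所有点。points 为 N x 3 数组。"""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)   # 每个点所属体素的整数索引
    unique_voxels = np.unique(voxel_idx, axis=0)                 # 去重，得到被占据的体素
    return (unique_voxels + 0.5) * voxel_size                    # 以体素中心点作为降采样结果
```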
S402:对所述T1帧点云、T2帧点云分别进行聚类,得到所述T1帧点云中各物体的点云簇、所述T2帧点云中各物体的点云簇;
本实施例中,所述聚类具体包括下述子步骤:
基于欧氏距离的聚类算法对所述采集的T1帧点云、T2帧点云分别进行聚类;在其他一些实施例中,也可以采用K-means算法,马氏距离的聚类算法等。
将所述聚类得到的T1帧点云中各物体的点云簇从T1帧点云中分割出来,构建各物体的点云簇的包围框,并获取各物体的点云簇的包围框的参数;
将所述聚类得到的T2帧点云中各物体的点云簇从T2帧点云中分割出来,构建各物体的点云簇的包围框,并获取各物体的点云簇的包围框的参数。
其中,点云簇的包围框的参数具体可以包括包围框的位置坐标,包围框的水平航向角朝向,包围框的长,宽,高等参数。
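以下为基于欧氏距离的聚类与包围框参数提取的示意性草图（暴力搜索实现，包围框采用轴对齐近似，航向角用xy平面主成分方向近似，均为便于理解的简化假设，并非实施例的限定实现）：

```python
import numpy as np

def euclidean_cluster(points: np.ndarray, dist_thresh: float = 0.5) -> np.ndarray:
    """基于欧氏距离阈值的简易聚类，返回每个点的簇编号（-1表示未分配）。"""
    n = points.shape[0]
    labels = -np.ones(n, dtype=int)
    cluster_id = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack = [i]
        labels[i] = cluster_id
        while stack:
            p = stack.pop()
            # 找到与当前点距离小于阈值且尚未分配簇的点，并归入当前簇
            d = np.linalg.norm(points - points[p], axis=1)
            neighbors = np.where((d < dist_thresh) & (labels < 0))[0]
            labels[neighbors] = cluster_id
            stack.extend(neighbors.tolist())
        cluster_id += 1
    return labels

def bounding_box_params(cluster: np.ndarray):
    """计算点云簇包围框参数：中心坐标(x,y,z)、水平航向角theta、长宽高(L,W,H)。"""
    center = cluster.mean(axis=0)
    xy = cluster[:, :2] - center[:2]
    cov = xy.T @ xy
    eigvals, eigvecs = np.linalg.eigh(cov)
    main_axis = eigvecs[:, np.argmax(eigvals)]        # xy平面主成分方向近似航向
    theta = np.arctan2(main_axis[1], main_axis[0])
    dims = cluster.max(axis=0) - cluster.min(axis=0)  # 长、宽、高（轴对齐近似）
    return center, theta, dims
```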
S403:基于点云簇的相似度,将所述T2帧点云中各物体的点云簇与所述T1帧点云中各物体的点云簇进行关联;
本实施例中,该关联过程具体包括:
基于所述T1帧点云中各物体的点云簇的包围框的参数、所述T2帧点云中各物体的点云簇的包围框的参数,计算T2帧点云中各物体的点云簇与T1帧点云中各物体的点云簇的关联距离,该关联距离用于表示T2帧点云中各物体的点云簇与T1帧点云中各物体的点云簇的相似度,两个点云簇的关联距离越小,表示两个点云簇的相似度越高,基于该点云簇的相似度,将T2帧点云中各物体的点云簇与T1帧点云中各物体的点云簇进行两两关联。
如图5所示,通过该关联过程,可得到T2帧点云中各物体的点云簇与所述T1帧点云中各物体的点云簇的两两关联关系。需要说明的是,本实施例以及下述实施例中所述建立两两关联关系的点云簇,具体指的是该T2帧点云中和T1帧点云中包括的对应相同物体的点云簇。由于T1帧点云和T2帧点云是通过移动的激光雷达前后采集的两帧点云,由于激光雷达的移动,T1帧点云中的部分物体的点云簇,并未在T2帧点云中采集到,同理,T2帧点云中的部分物体的点云簇,也未在T1帧点云中采集到,因此在进行关联计算时,仅需将T1帧点云中和T2帧点云中同时包括的对应相同物体的点云簇进行关联。
S404:滤除所述T1帧点云、T2帧点云中的对应动态物体的点云簇;
由于动态环境中的动态物体会对点云匹配产生一定影响,因此在进行点云匹配之前,可采用卡尔曼滤波算法对每帧点云中的每个点云簇在连续的多帧点云中的速度进行观测和估计,通过观测每个点云簇的质心位置,结合匀速运动的模型,从而判断该 点云簇的运动速度,基于运动速度将每帧点云中的多个点云簇划分为动态物体的点云簇和静态物体的点云簇,并将动态物体的点云簇从每帧点云中滤除,以消除动态物体对点云匹配的影响。
S405:基于所述关联的结果及所述T2帧点云与所述T1帧点云中的各个点的距离,将所述T2帧点云与所述T1帧点云进行点云匹配,得到所述T2帧点云与所述T1帧点云之间的位姿变换参数;
本实施例中,基于步骤S403获取的T2帧点云中各物体的点云簇与所述T1帧点云中各物体的点云簇的两两关联关系,建立包括关联约束和距离约束的点云匹配函数,利用该点云匹配函数计算得到T2帧点云与T1帧点云之间的位姿变换参数。具体的,
该关联约束的建立过程包括:考虑到每帧点云中可能会存在多个点云簇交叉重合的可能,以T2帧点云中的某个点的在该T2帧点云的聚类结果为例,通过计算该某个点到该T2帧点云中各物体的点云簇的距离,根据一设定阈值,筛选出该T2帧点云中小于该设定阈值的一个或多个物体的点云簇,作为该某个点可能聚类属于的一个或多个物体的点云簇,然后计算该某个点与该一个或多个点云簇的误差;假设该某个点经过位姿变换后,在T1帧点云中具有一个投影点,基于步骤S403的关联结果,同理可计算该投影点与具有关联关系的T1帧点云中的一个或多个点云簇的误差。通过计算T1帧点云中的所有投影点与可能属于的各物体的点云簇的误差,可得到T2帧点云中各物体的点云簇与T1帧点云中各物体的点云簇的物体类别误差,本实施例中,可以将所述物体类别误差最小作为所述点云匹配过程中的所述T2帧点云中各物体的点云簇与所述T1帧点云中各物体的点云簇之间的关联约束;
该距离约束的建立过程包括:基于迭代最近点ICP算法,计算T2帧点云中的每个点经过位姿变换后,到T1帧点云中的最近的点的几何距离,对T2帧点云中的各个点的所述几何距离进行求和,得到T2帧点云与T1帧点云的距离误差,本实施例中,可以将所述距离误差最小作为所述点云匹配过程中的所述T2帧点云中各个点与所述T1帧点云中各个点之间的距离约束;
通过对所述物体类别误差和距离误差分别设置一权重系数,并进行求和计算,基于所述关联约束和距离约束,将加权求和的结果最小作为最终的约束条件,即可计算T2帧点云与T1帧点云之间的位姿变换参数,该位姿变换参数即为T2帧点云与T1帧点云在点云匹配过程中的最优的位姿变换参数。
S406:对所述T1帧点云、T2帧点云进行闭环检测优化;
本实施例中,基于一查找条件,在帧数据库中分别查找每帧点云最接近的帧,构成闭环后,可利用非线性优化技术对T1帧点云、T2帧点云的位姿分别进行优化,以对T2帧点云与T1帧点云之间的位姿变换参数进行优化。
S407:基于所述T2帧点云与T1帧点云之间的位姿变换参数,构建点云地图;
本实施例中,可以将T1帧点云作为参考帧,基于该T1帧点云的位姿、所述T2帧点云与T1帧点云之间的位姿变换参数,将所述T2帧点云进行位姿变换;
将位姿变换后的T2帧点云与T1帧点云进行融合,得到T2帧点云与T1帧点云组成的点云地图,基于本方法得到的位姿变换参数精度的提高,构建的点云地图的精度也相应的提高。
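以下为将T2帧点云按位姿变换参数变换到T1帧坐标系并与T1帧点云融合的示意性草图（假设位姿变换参数以旋转矩阵R与平移向量T给出，函数名为示例性假设）：

```python
import numpy as np

def fuse_frames(cloud_t1: np.ndarray, cloud_t2: np.ndarray,
                R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """将T2帧点云按(R, T)变换到T1帧（参考帧）坐标系后，与T1帧点云拼接融合。"""
    cloud_t2_in_t1 = cloud_t2 @ R.T + T   # 对每个点施加旋转和平移
    return np.vstack([cloud_t1, cloud_t2_in_t1])
```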
实施例二
如图6所示,本申请实施例提供了一种动态环境中的SLAM方法,该方法中,以激光雷达在动态环境中,采集的前后相邻的两帧点云的处理过程为例,对本方法进行详细描述,设该相邻的两帧点云分别为T1帧点云和T2帧点云,其中T1帧点云为T2帧点云的前一帧。该方法包括:
S601:点云预处理与分割,对输入的T1帧点云和T2帧点云进行预处理与分割;
其中,首先采用点云库中的体素栅格滤波器对该T1帧点云和T2帧点云进行滤波,通过将点云划分到三维体素栅格中,栅格的分辨率可以调整,本实施例中可调整分辨率为0.2m,用每个栅格中心的点代替该栅格内的所有的点,实现降采样,减少点云中的点的数量,并同时保持点云的形状特征;
然后对降采样后的点云进行聚类分割,其中可采用基于欧氏距离的聚类算法对降采样后的T1帧点云和T2帧点云进行聚类,形成一个个不同的点云簇,通过将该不同的点云簇从原始点云中分割出来,并给每个点云簇建立包围框,以表示不同的物体,基于该包围框的参数即可进行后续的关联处理,而无需识别出物体的具体类别,其中该包围框的参数具体可以包括该包围框的中心的位置坐标(x,y,z),包围框的水平航向角朝向角θ,长L,宽W,高H等参数。
S602:物体关联,对T2帧点云和T1帧点云进行关联计算,将T2帧点云和T1帧点云中的对应同一物体的点云簇进行关联;
其中，对于T2帧点云和T1帧点云分割出的多个点云簇，基于其各自的包围框的参数，分别计算T2帧点云中的每个点云簇与T1帧点云中的每个点云簇的关联距离，该关联距离用于表示两个点云簇之间的相似度，关联距离越小，相似度越高，则表示该两个点云簇属于同一物体的可能性越大。具体的，设参与该关联计算的T2帧点云中的点云簇为物体i，物体i的位置坐标为 $(x_2, y_2, z_2)$，水平航向角为 $\theta_2$，物体i的尺寸为 $(L_2, W_2, H_2)$；参与该关联计算的T1帧点云中的点云簇为物体j，物体j的位置坐标为 $(x_1, y_1, z_1)$，水平航向角为 $\theta_1$，物体j的尺寸为 $(L_1, W_1, H_1)$。该物体i和物体j的关联距离为 $d_{i,j}$，则该关联距离 $d_{i,j}$ 具体包括该两个点云簇的包围框之间的位置距离 $d_1$、朝向距离 $d_2$ 和尺寸距离 $d_3$：

$d_1 = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$

$d_2 = \theta_2 - \theta_1$

$d_3 = \sqrt{(L_2 - L_1)^2 + (W_2 - W_1)^2 + (H_2 - H_1)^2}$

基于上述三个不同参数之间的关联距离 $d_1$、$d_2$ 和 $d_3$，即可计算出T2帧点云中的物体i与T1帧点云中的物体j的关联距离 $d_{i,j}$：

$d_{i,j} = \sum_{k=1}^{3} w_k d_k = w_1 d_1 + w_2 d_2 + w_3 d_3$

其中，$w_k$ 用于表示不同参数的关联距离的权重，基于不同参数对物体特征的表述能力及工程经验，可分别选取位置距离 $d_1$、朝向距离 $d_2$ 和尺寸距离 $d_3$ 在关联距离 $d_{i,j}$ 中所占的权重分别为 $w_1 = 0.5$，$w_2 = 0.1$，$w_3 = 0.4$。
如图7所示,通过分别计算T2帧点云中的每个点云簇与T1帧点云中的每个点云 簇的关联距离,通过设定一关联距离的阈值,可将该两帧点云中的处于该阈值范围内的多个点云簇进行一一对应,建立对应相同物体的两个点云簇之间的关联关系,该关联关系具体可以为点云簇的ID的对应关系,例如T2帧点云中的某个点云簇的ID为C,则其在T1帧点云中具有关联关系的点云簇的ID也可以设置为C。
需要说明的是,由于激光雷达的移动,对于T1帧点云中的部分点云簇,并未在T2帧中采集到,同理,T2帧点云中新采集的部分点云簇也未包含在T1帧点云中,在进行关联计算的过程中,其关联距离不处于筛选的阈值范围内,因此不会建立其关联关系。
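以下为关联距离计算与点云簇贪心关联的示意性草图（其中位置距离与尺寸距离按欧氏距离计算，关联阈值thresh等数值以及贪心匹配方式均为示例性假设，仅用于说明思路）：

```python
import numpy as np

# 假设每个物体用包围框参数描述：center=(x,y,z)、theta=水平航向角、dims=(L,W,H)
def assoc_distance(obj_i: dict, obj_j: dict, w=(0.5, 0.1, 0.4)) -> float:
    """计算两个点云簇包围框之间的关联距离 d_ij = w1*d1 + w2*d2 + w3*d3。"""
    d1 = np.linalg.norm(obj_i["center"] - obj_j["center"])   # 位置距离
    d2 = abs(obj_i["theta"] - obj_j["theta"])                # 朝向距离
    d3 = np.linalg.norm(obj_i["dims"] - obj_j["dims"])       # 尺寸距离
    return w[0] * d1 + w[1] * d2 + w[2] * d3

def associate(objs_t2: list, objs_t1: list, thresh: float = 1.0) -> dict:
    """按关联距离阈值将T2帧与T1帧中的点云簇两两关联，返回 {T2簇编号: T1簇编号}。"""
    matches, used = {}, set()
    for i, oi in enumerate(objs_t2):
        dists = [assoc_distance(oi, oj) if j not in used else np.inf
                 for j, oj in enumerate(objs_t1)]
        j_best = int(np.argmin(dists)) if dists else -1
        if j_best >= 0 and dists[j_best] < thresh:
            matches[i] = j_best      # 超出阈值（新出现或消失的物体）不建立关联
            used.add(j_best)
    return matches
```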
S603:滤除动态物体,基于关联的结果,滤除T1帧点云和T2帧点云中的动态物体;
其中，基于关联的结果，可采用卡尔曼滤波算法，通过测量连续多帧点云中同一物体的点云簇的质心位置，并基于匀速运动的模型，估计同一物体在连续多帧点云中的运动速度，根据运动速度，可将点云簇划分为动态物体的点云簇和静态物体的点云簇，由此可滤除动态物体的点云簇，减少动态物体对后续点云匹配造成的影响。
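以下为基于质心位移估计点云簇运动速度、进而滤除动态物体的示意性草图（为便于说明，此处用相邻帧质心位移的平均速度代替实施例中的卡尔曼滤波估计，帧间隔dt与速度阈值speed_thresh均为假设值）：

```python
import numpy as np

def is_dynamic(centroid_history, dt: float = 0.1, speed_thresh: float = 0.5) -> bool:
    """根据同一物体点云簇在连续多帧中的质心位置估计其运动速度，
    平均速度超过阈值则判定为动态物体（匀速运动假设下的简化估计）。"""
    centroids = np.asarray(centroid_history)              # K x 3，按时间顺序排列
    if centroids.shape[0] < 2:
        return False
    displacements = np.diff(centroids, axis=0)             # 相邻帧之间的质心位移
    speeds = np.linalg.norm(displacements, axis=1) / dt    # 每段位移对应的速度大小
    return float(np.mean(speeds)) > speed_thresh

def filter_dynamic_clusters(clusters, centroid_histories, dt=0.1, speed_thresh=0.5):
    """滤除被判定为动态物体的点云簇，返回静态物体的点云簇列表。"""
    return [c for c, hist in zip(clusters, centroid_histories)
            if not is_dynamic(hist, dt, speed_thresh)]
```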
S604:点云匹配,基于关联计算的结果及T2帧点云与T1帧点云中的各个点的距离,构建多约束的点云匹配,得到T2帧点云与T1帧点云之间的位姿变换参数;
其中，基于关联计算的结果，设T2帧点云中的某个点 $p_i$ 到该T2帧点云中的某个点云簇 $C$ 的质心的距离为 $d_i$，据此定义该点 $p_i$ 在该T2帧点云中分类属于点云簇 $C$ 的误差概率，该误差概率为距离 $d_i$ 的函数。
假设经过位姿变换后，该点 $p_i$ 在T1帧点云中的投影点为 $q_i$，基于关联计算的结果，T2帧点云中的点云簇 $C$ 在T1帧点云中对应的ID同样为 $C$，则投影点 $q_i$ 到T1帧点云中的点云簇 $C$ 中最近的点的距离为 $dis(q_i, C)$，据此可定义投影点 $q_i$ 在T1帧点云中属于点云簇 $C$ 的误差函数 $e(q_i, C)$。当 $dis(q_i, C)$ 为0时，该误差函数的值也为0，表示投影点 $q_i$ 到T1帧点云中的点云簇 $C$ 的误差为0。
由于每帧点云中的多个点云簇之间可能会存在交叉重叠的情况，因此每个点在聚类时可能会聚类属于不止一个点云簇。以点 $p_i$ 在T2帧点云的聚类结果为例，通过计算该点 $p_i$ 到该T2帧点云中每个点云簇的距离，根据一设定阈值，筛选出计算出的距离小于设定阈值的一个或多个点云簇（例如点云簇 $C_1$、$C_2$、$C_3$），然后计算该点 $p_i$ 可能属于该T2帧点云中的点云簇 $C_1$、$C_2$、$C_3$ 的误差；同理可计算该点 $p_i$ 对应的投影点 $q_i$ 在T1帧点云中属于点云簇 $C_1$、$C_2$、$C_3$ 的误差。据此，可对投影点 $q_i$ 可能属于T1帧点云中的点云簇 $C_1$、$C_2$、$C_3$ 的误差进行求和，得到该投影点 $q_i$ 与T1帧点云中可能属于的各物体的点云簇的总的误差：

$e(q_i) = \sum_{k} e(q_i, C_k)$

由此，可对T2帧点云在T1帧点云中的各个投影点的所述总的误差进行求和，得到T2帧点云中各物体的点云簇与T1帧点云中各物体的点云簇的物体类别误差，将物体类别误差最小作为T2帧点云中各物体的点云簇与T1帧点云中各物体的点云簇进行点云匹配的关联约束：

$E(R,T)_{Label} = \sum_{i} e(q_i) = \sum_{i} \sum_{k} e(q_i, C_k)$

其中，投影点 $q_i = R p_i + T$，该 $E(R,T)_{Label}$ 表示物体类别误差与位姿变换参数之间的关联关系。
然后，基于迭代最近点算法，计算T2帧点云中的每个点 $p_i$ 经过位姿变换后，与T1帧点云中距离最近的点 $q_i$ 的几何距离，对T2帧点云中的各个点的所述几何距离进行求和，得到T2帧点云与T1帧点云的距离误差，将所述距离误差最小作为所述点云匹配过程中的所述T2帧点云中各个点与所述T1帧点云中各个点之间的距离约束：

$E(R,T) = \sum_{i} \left\| (R p_i + T) - q_i \right\|^2$
构建关联约束和距离约束的点云匹配函数,通过对所述物体类别误差和距离误差分别设置一权重系数,并进行求和计算,将加权求和的结果最小作为最终的约束条件,即可计算T2帧点云与T1帧点云之间的位姿变换参数:
$E(R,T)_{icp} = \lambda \, E(R,T)_{Label} + (1 - \lambda) \, E(R,T)$
其中,λ为权重系数,根据工程经验,可设置物体类别误差在点云匹配中所占的权重为λ=0.42,则距离误差所占的权重为1-λ=0.58,利用该点云匹配函数,求解使加权求和的结果最小的位姿变换参数(R,T)即为T2帧点云与T1帧点云的位姿变换参数,其中R为T2帧点云相对于T1帧点云的旋转向量,T为T2帧点云相对于T1帧点云的位移向量。
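以下为上述多约束点云匹配目标函数的一个计算示例草图（由于类别误差的具体函数形式在公开文本中以附图给出，此处以投影点到其关联点云簇中最近点的距离平方代替，仅作示意；实际求解时需在迭代中使该目标最小以更新位姿变换参数(R, T)）：

```python
import numpy as np

def nearest_dist_sq(points: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """points 中每个点到 targets 点集中最近点的距离平方（暴力计算，仅作示意）。"""
    d2 = ((points[:, None, :] - targets[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1)

def matching_cost(R, T, cloud_t2, cloud_t1, labels_t2, clusters_t1, assoc, lam=0.42):
    """计算多约束点云匹配目标 E_icp = lam * E_label + (1 - lam) * E_dist。
    labels_t2: T2帧每个点所属簇的编号；clusters_t1: {T1帧簇编号: 该簇点集}；
    assoc: {T2帧簇编号: T1帧簇编号} 的关联结果；lam 为物体类别误差的权重。"""
    proj = cloud_t2 @ R.T + T                        # T2帧点经位姿变换后的投影点
    # 距离约束：投影点到T1帧整体点云中最近点的距离误差
    e_dist = nearest_dist_sq(proj, cloud_t1).sum()
    # 关联（类别）约束：投影点到其关联点云簇中最近点的距离误差
    e_label = 0.0
    for c2, c1 in assoc.items():
        mask = labels_t2 == c2
        if mask.any() and c1 in clusters_t1:
            e_label += nearest_dist_sq(proj[mask], clusters_t1[c1]).sum()
    return lam * e_label + (1.0 - lam) * e_dist
```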
S605:闭环检测,对T2帧点云和T1帧点云分别在帧数据库中查找最接近的帧,构成闭环,分别对其位姿进行优化,以对T2帧点云与T1帧点云之间的位姿变换参数进行优化;
其中,通过一预设的距离阈值和时间戳间隔阈值,在帧数据库中查找最接近的帧,构成闭环后,进行非线性优化,以对T2帧点云和T1帧点云的位姿分别优化,从而对T2帧点云与T1帧点云之间的位姿变换参数进行优化。
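以下为在帧数据库中查找闭环候选帧的示意性草图（距离阈值与时间戳间隔阈值的取值均为示例性假设）：

```python
import numpy as np

def find_loop_candidate(cur_pos, cur_time, frame_db, dist_thresh=3.0, time_gap=30.0):
    """在帧数据库中查找与当前帧位置接近、且时间戳间隔足够大的历史帧作为闭环候选。
    frame_db: [(position(3,), timestamp), ...]，按采集顺序存储。"""
    best_idx, best_dist = -1, np.inf
    for idx, (pos, ts) in enumerate(frame_db):
        if cur_time - ts < time_gap:        # 排除时间上过近的帧，避免误闭环
            continue
        d = np.linalg.norm(np.asarray(cur_pos) - np.asarray(pos))
        if d < dist_thresh and d < best_dist:
            best_idx, best_dist = idx, d
    return best_idx                          # 返回 -1 表示未找到闭环候选
```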
S606,构建地图,基于优化后的位姿变换参数,构建包含T2帧点云和T1帧点云的点云地图。
具体的,可以将T1帧点云作为参考帧,基于该T1帧点云的位姿、所述T2帧点云与T1帧点云之间的位姿变换参数,将所述T2帧点云进行位姿变换;
将位姿变换后的T2帧点云与T1帧点云进行融合,得到T2帧点云与T1帧点云组成的点云地图。
在一些实施例中,当具有多帧点云时,也采用该方法,将每帧点云按照其与前一帧点云的位姿变换参数,进行位姿变换,与前一帧融合,构建包含多帧点云的点云地图。
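以下为将多帧点云按帧间位姿变换参数依次拼接为点云地图的示意性草图（假设第k帧相对第k-1帧的变换为 (R, T)，即 p_{k-1} = R·p_k + T，以第一帧为参考帧）：

```python
import numpy as np

def build_map(frames, relative_poses):
    """frames: 各帧点云(N_k x 3)列表；relative_poses: 第k帧相对第k-1帧的(R, T)列表。
    将每帧点云依次变换到第一帧坐标系并融合，得到点云地图。"""
    R_acc, T_acc = np.eye(3), np.zeros(3)
    global_map = [frames[0]]
    for cloud, (R, T) in zip(frames[1:], relative_poses):
        # 累积位姿：先与帧间变换复合，再将当前帧点云变换到参考帧坐标系
        R_acc, T_acc = R_acc @ R, R_acc @ T + T_acc
        global_map.append(cloud @ R_acc.T + T_acc)
    return np.vstack(global_map)
```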
综上,本实施例通过采用两帧点云的关联约束和距离约束作为约束项,进行点云匹配,可得到精度更高的位姿变换参数,从而使得构建的点云地图的误差更小,精度更高。同时,本方法还考虑了动态环境中动态物体的影响,通过对动态物体进行滤除,消除了动态物体对点云匹配造成的影响,提高了本方法在动态环境中的鲁棒性。
以室外不同场景(A、B、C三种场景,B为人流相对较多的动态场景,A、C为静态场景)为例,比较了本实施例SLAM与前述技术二中的现有激光SLAM得到的轨迹精度,分别与GPS真实值进行了对比,轨迹的效果和误差如下表所示,可以看出本实施例SLAM得到的轨迹与GPS真实值更接近,在动态场景下,本实施例SLAM具有更明显的精度提升。
（表：A、B、C三种室外场景下，本实施例SLAM与现有激光SLAM相对GPS真实值的轨迹误差对比）
实施例三
如图8所示,本申请实施例提供了一种静态环境中的SLAM方法,本实施例相对于图6所示的实施例二,缺少了动态物体的消除过程,其他处理过程与图6所示的实施例二一致,在此不做赘述。
本实施例同样采用两帧点云的关联约束和距离约束作为点云匹配的约束项,进行点云匹配,具有更快的匹配速度,可得到精度更高的位姿变换参数,从而使得构建的点云地图的误差更小,精度更高。
同理,以室外不同场景(A、B、C三种场景,B为人流相对较多的动态场景,A、C为静态场景)为例,比较了本实施例SLAM与前述技术二中的现有激光SLAM得到的轨迹精度,分别与GPS真实值进行了对比,可以看出本实施例SLAM得到的轨迹与GPS真实值更接近,在动态场景下,虽然本实施例没有滤除动态物体,相比于实施例二,误差有所增大,但仍然比现有激光SLAM的精度高。
（表：A、B、C三种室外场景下，本实施例SLAM（未滤除动态物体）与现有激光SLAM相对GPS真实值的轨迹误差对比）
综上,本申请实施例所提供的SLAM方法,具有如下有益效果:
相比于现有的激光SLAM方法仅仅使用两帧点云的点之间的距离约束来求解位姿变换参数，本方法具有更高的精度，并且点云匹配的时间明显降低，收敛更快。本方法采用基于物体包围框参数来关联两帧点云中各物体的点云簇的方式，不需要识别出物体的种类，即可找到两帧点云的物体对应关系；相比于语义SLAM（通过识别物体的语义来关联物体），不需要训练模型，具有更小的计算量和操作难度；
通过将动态物体移除,避免了在点云匹配过程中,由于动态物体影响导致找到错误的匹配点,进而使点云匹配收敛到错误的位姿变换参数的情况,本方法提高了位姿变换参数的精度,同时避免了将动态物体的点云添加到最终的点云地图中,有效地提高了SLAM技术在室外环境中的鲁棒性。
除此之外,本方法还具有广阔的应用范围,例如自动驾驶中的定位技术、自动导航技术,以及根据预测的动态物体的运动轨迹,进行自动驾驶过程中的自动避障技术。
作为本实施例的变形实施例,本申请实施例提供的一种导航方法中,可基于上述实施例中的SLAM方法得到获取的每帧点云与前一帧点云的位姿变换参数;基于所述位姿变换参数,对所述每帧点云进行变换,并与前一帧点云进行融合,得到包含所述每帧点云的点云地图;基于所述点云地图进行路径规划和动态障碍物避障。
作为本实施例的变形实施例,本申请实施例提供的一种定位方法中,可用于对激光雷达的当前位姿进行确定,具体的,基于上述实施例中的SLAM方法,可得到激光雷达的当前时刻与前一时刻的位姿变换参数;基于所述位姿变换参数、所述激光雷达的前一时刻的位姿,确定所述激光雷达的当前时刻的位姿。
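以下为由前一时刻位姿与帧间位姿变换参数推算激光雷达当前时刻位姿的示意性草图（假设位姿以旋转矩阵与平移向量表示，函数名为示例性假设）：

```python
import numpy as np

def update_pose(R_prev, T_prev, R_rel, T_rel):
    """由前一时刻位姿 (R_prev, T_prev) 与当前帧相对前一帧的位姿变换 (R_rel, T_rel)，
    复合得到激光雷达当前时刻在地图坐标系下的位姿。"""
    R_cur = R_prev @ R_rel
    T_cur = R_prev @ T_rel + T_prev
    return R_cur, T_cur
```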
实施例四
如图9所示,本申请实施例提供了一种SLAM装置,该装置包括:
获取模块901,用于获取激光雷达采集到的两帧点云数据,所述两帧点云数据包括前后的第一帧点云和第二帧点云;
聚类模块902,用于对所述第一帧点云进行聚类,得到所述第一帧点云中各物体的点云簇;对所述第二帧点云进行聚类,得到所述第二帧点云中各物体的点云簇;
关联模块903,用于将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联;
动态滤除模块904,用于滤除所述第一帧点云、第二帧点云中对应动态物体的点云簇;
点云匹配模块905,用于基于所述关联的结果,将所述第二帧点云与所述第一帧点云进行点云匹配,得到所述第二帧点云与所述第一帧点云之间的位姿变换参数;
闭环优化模块906,用于对所述第一帧点云、所述第二帧点云进行闭环检测;基于所述闭环检测的结果,对所述第二帧点云与所述第一帧点云之间的位姿变换参数进行优化;
地图构建模块907,用于基于所述第二帧点云与所述第一帧点云的位姿变换参数,将所述第二帧点云进行位姿变换;将位姿变换后的第二帧点云与所述第一帧点云进行融合,得到所述第二帧点云与第一帧点云组成的点云地图。
基于本实施例提供的SLAM装置,可实现上述实施例中的SLAM方法,对激光雷达采集的点云进行点云匹配、构建点云地图等,其实现的技术原理与上述实施例中的SLAM方法一致,此处不再赘述。
实施例五
图10是本申请实施例提供的一种计算设备1000的结构性示意性图。该计算设备1000包括:处理器1010、存储器1020、通信接口1030、总线1040。
应理解,图10所示的计算设备1000中的通信接口1030可以用于与其他设备之间进行通信。
其中,该处理器1010可以与存储器1020连接。该存储器1020可以用于存储该程序代码和数据。因此,该存储器1020可以是处理器1010内部的存储模块,也可以是与处理器1010独立的外部存储模块,还可以是包括处理器1010内部的存储模块和与处理器1010独立的外部存储模块的部件。
其中,计算设备1000还可以包括总线1040。其中,存储器1020、通信接口1030可以通过总线1040与处理器1010连接。总线1040可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。所述总线1040可以分为地址总线、数据总线、控制总线等。为便于表示,图10中仅用一条线表示,但并不表示仅有一根总线或一种类型的总线。
应理解,在本申请实施例中,该处理器1010可以采用中央处理模块(central processing unit,CPU)。该处理器还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate Array,FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。或者该处理器1010采用一个或多个集成电路,用于执行相关程序,以实现本申请实施例所提供的技术方案。
该存储器1020可以包括只读存储器和随机存取存储器,并向处理器1010提供指令和数据。处理器1010的一部分还可以包括非易失性随机存取存储器。例如,处理器1010还可以存储设备类型的信息。
在计算设备1000运行时,所述处理器1010执行所述存储器1020中的计算机执行指令执行上述方法的操作步骤。
应理解,根据本申请实施例的计算设备1000可以对应于执行根据本申请各实施例的方法中的相应主体,并且计算设备1000中的各个模块的上述和其它操作和/或功能分别为了实现本实施例各方法的相应流程,为了简洁,在此不再赘述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如 多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本申请实施例还提供了一种计算机可读存储介质，其上存储有计算机程序，该程序被处理器执行时用于执行一种方法，该方法包括上述各个实施例所描述的方案中的至少之一。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是,但不限于,电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中连接了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括、但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。 程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络,包括局域网(LAN)或广域网(WAN),连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
注意,上述仅为本申请的较佳实施例及所运用的技术原理。本领域技术人员会理解,本申请不限于这里所述的特定实施例,对本领域技术人员来说能够进行各种明显的变化、重新调整和替代而不会脱离本申请的保护范围。因此,虽然通过以上实施例对本申请进行了较为详细的说明,但是本申请不仅仅限于以上实施例,在不脱离本申请的构思的情况下,还可以包括更多其他等效实施例,均属于本申请的保护范畴。

Claims (23)

  1. 一种点云匹配方法,其特征在于,包括:
    获取激光雷达采集到的两帧点云数据,所述两帧点云数据包括前后的第一帧点云和第二帧点云;
    对所述第一帧点云进行聚类,得到所述第一帧点云中各物体的点云簇;
    对所述第二帧点云进行聚类,得到所述第二帧点云中各物体的点云簇;
    基于点云簇的相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联;
    基于所述关联的结果及所述第二帧点云与所述第一帧点云中的各个点的距离,将所述第二帧点云与所述第一帧点云进行点云匹配,得到所述第二帧点云与所述第一帧点云之间的位姿变换参数。
  2. 根据权利要求1所述的方法,其特征在于,还包括:
    基于所述第二帧点云与所述第一帧点云的位姿变换参数,将所述第二帧点云进行位姿变换;
    将位姿变换后的第二帧点云与所述第一帧点云进行融合,得到所述第二帧点云与第一帧点云组成的点云地图。
  3. 根据权利要求1所述的方法,其特征在于,所述基于点云簇的相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联,包括:
    获取所述第一帧点云中各物体的点云簇的特征向量;
    获取所述第二帧点云中各物体的点云簇的特征向量;
    基于所述第二帧点云中各物体的点云簇的特征向量与所述第一帧点云中各物体的点云簇的特征向量计算相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行基于所述相似度的关联。
  4. 根据权利要求3所述的方法,其特征在于,所述点云簇的特征向量包括至少以下之一:
    所述点云簇的位置参数、朝向参数或尺寸参数。
  5. 根据权利要求1至4任意一项所述的方法,其特征在于,所述基于所述关联的结果及所述第二帧点云与所述第一帧点云中的各个点的距离,将所述第二帧点云与所述第一帧点云进行点云匹配,包括:
    基于所述关联的结果,计算经过位姿变换后的所述第二帧点云中的各物体的点云簇与所述第一帧点云中各物体的点云簇的物体类别误差;
    计算经过所述位姿变换后的所述第二帧点云中各个点与所述第一帧点云中距离最近的点的距离误差;
    对所述物体类别误差和所述距离误差进行加权求和,将所述加权求和的结果最小作为约束条件,计算所述第二帧点云与所述第一帧点云之间的位姿变换参数。
  6. 根据权利要求1所述的方法,其特征在于,所述进行点云匹配之前,还包括:
    滤除所述第一帧点云中对应动态物体的点云簇;和/或
    滤除所述第二帧点云中对应动态物体的点云簇。
  7. 根据权利要求1所述的方法,其特征在于,还包括:
    对所述第一帧点云进行闭环检测;和/或
    对所述第二帧点云进行闭环检测;
    基于所述闭环检测的结果,对所述第二帧点云与所述第一帧点云之间的位姿变换参数进行优化。
  8. 根据权利要求1所述的方法,其特征在于,所述进行聚类之前,还包括:
    滤除所述第一帧点云中的噪声点;和/或
    滤除所述第二帧点云中的噪声点。
  9. 一种点云匹配装置,其特征在于,包括:
    获取模块,用于获取激光雷达采集到的两帧点云数据,所述两帧点云数据包括前后的第一帧点云和第二帧点云;
    聚类模块,用于对所述第一帧点云进行聚类,得到所述第一帧点云中各物体的点云簇;对所述第二帧点云进行聚类,得到所述第二帧点云中各物体的点云簇;
    关联模块,用于基于点云簇的相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行关联;
    点云匹配模块,用于基于所述关联的结果及所述第二帧点云与所述第一帧点云中的各个点的距离,将所述第二帧点云与所述第一帧点云进行点云匹配,得到所述第二帧点云与所述第一帧点云之间的位姿变换参数。
  10. 根据权利要求9所述的装置,其特征在于,还包括:
    地图构建模块,用于基于所述第二帧点云与所述第一帧点云的位姿变换参数,将所述第二帧点云进行位姿变换;将位姿变换后的第二帧点云与所述第一帧点云进行融合,得到所述第二帧点云与第一帧点云组成的点云地图。
  11. 根据权利要求9所述的装置,其特征在于,所述关联模块具体用于:
    获取所述第一帧点云中各物体的点云簇的特征向量;
    获取所述第二帧点云中各物体的点云簇的特征向量;
    基于所述第二帧点云中各物体的点云簇的特征向量与所述第一帧点云中各物体的点云簇的特征向量计算相似度,将所述第二帧点云中各物体的点云簇与所述第一帧点云中各物体的点云簇进行基于所述相似度的关联。
  12. 根据权利要求11所述的装置,其特征在于,所述点云簇的特征向量包括至少以下之一:
    所述点云簇的位置参数、朝向参数或尺寸参数。
  13. 根据权利要求9至12任意一项所述的装置,其特征在于,所述点云匹配模块具体用于:
    基于所述关联的结果,计算经过位姿变换后的所述第二帧点云中的各物体的点云簇与所述第一帧点云中各物体的点云簇的物体类别误差;
    计算经过所述位姿变换后的所述第二帧点云中各个点与所述第一帧点云中距离最近的点的距离误差;
    对所述物体类别误差和所述距离误差进行加权求和,将所述加权求和的结果最小作为约束条件,计算所述第二帧点云与所述第一帧点云之间的位姿变换参数。
  14. 根据权利要求9所述的装置,其特征在于,所述进行点云匹配之前,还包括动态滤除模块:
    用于滤除所述第一帧点云中对应动态物体的点云簇;和/或
    用于滤除所述第二帧点云中对应动态物体的点云簇。
  15. 根据权利要求9所述的装置,其特征在于,还包括闭环优化模块:
    用于对所述第一帧点云进行闭环检测;和/或
    用于对所述第二帧点云进行闭环检测;
    基于所述闭环检测的结果,对所述第二帧点云与所述第一帧点云之间的位姿变换参数进行优化。
  16. 根据权利要求9所述的装置,其特征在于,所述获取模块还用于:
    滤除所述第一帧点云中的噪声点;和/或
    滤除所述第二帧点云中的噪声点。
  17. 一种导航方法,其特征在于,包括:
    基于权利要求1至8任意一项所述的点云匹配方法,得到获取的每帧点云与前一帧点云的位姿变换参数;
    基于所述位姿变换参数,对所述每帧点云进行变换,并与前一帧点云进行融合,得到包含所述每帧点云的点云地图;
    基于所述点云地图进行路径规划和动态障碍物避障。
  18. 一种导航设备,其特征在于,包括:
    权利要求9至16任意一项所述的点云匹配装置,用于得到获取的每帧点云与前一帧点云的位姿变换参数;基于所述位姿变换参数,对所述每帧点云进行变换,并与前一帧点云进行融合,得到包含所述每帧点云的点云地图;
    导航模块,用于基于所述点云地图进行路径规划和动态障碍物避障。
  19. 一种定位方法,其特征在于,包括:
    基于权利要求1至8任意一项所述的点云匹配方法,得到激光雷达的当前时刻与前一时刻的位姿变换参数;
    基于所述位姿变换参数、所述激光雷达的前一时刻的位姿,确定所述激光雷达的当前时刻的位姿。
  20. 一种激光雷达,其特征在于,包括:
    权利要求9至16任意一项所述的点云匹配装置,用于得到激光雷达的当前时刻与前一时刻的位姿变换参数;
    定位模块,用于基于所述位姿变换参数、所述激光雷达的前一时刻的位姿,确定所述激光雷达的当前时刻的位姿。
  21. 一种计算设备,其特征在于,包括:
    总线;
    通信接口,其与所述总线连接;
    至少一个处理器,其与所述总线连接;以及
    至少一个存储器,其与所述总线连接并存储有程序指令,所述程序指令当被所述至少一个处理器执行时使得所述至少一个处理器执行权利要求1至8任意一项所述的 点云匹配方法、或权利要求17所述的导航方法、或权利要求19所述的定位方法。
  22. 一种计算机可读存储介质,其特征在于,其上存储有程序指令,所述程序指令当被计算机执行时使得所述计算机执行权利要求1至8任意一项所述的点云匹配方法、或权利要求17所述的导航方法、或权利要求19所述的定位方法。
  23. 一种计算机程序产品,其特征在于,其包括有程序指令,所述程序指令当被计算机执行时使得所述计算机执行权利要求1至8任意一项所述的点云匹配方法、或权利要求17所述的导航方法、或权利要求19所述的定位方法。
PCT/CN2021/080212 2021-03-11 2021-03-11 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达 WO2022188094A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/080212 WO2022188094A1 (zh) 2021-03-11 2021-03-11 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达
CN202180000478.2A CN113168717B (zh) 2021-03-11 2021-03-11 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/080212 WO2022188094A1 (zh) 2021-03-11 2021-03-11 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达

Publications (1)

Publication Number Publication Date
WO2022188094A1 true WO2022188094A1 (zh) 2022-09-15

Family

ID=76875976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080212 WO2022188094A1 (zh) 2021-03-11 2021-03-11 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达

Country Status (2)

Country Link
CN (1) CN113168717B (zh)
WO (1) WO2022188094A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937320A (zh) * 2023-02-21 2023-04-07 深圳市华亿明投资发展有限公司 一种手机壳打磨用视觉定位方法
CN116485887A (zh) * 2023-01-16 2023-07-25 湖北普罗格科技股份有限公司 一种无监督3d纸箱检测方法及系统
CN116977226A (zh) * 2023-09-22 2023-10-31 天津云圣智能科技有限责任公司 点云数据分层的处理方法、装置、电子设备及存储介质
CN117368902A (zh) * 2023-10-18 2024-01-09 广州易而达科技股份有限公司 一种轨迹跟踪方法、装置、设备及存储介质
CN117523021A (zh) * 2023-11-20 2024-02-06 友上智能科技(苏州)有限公司 基于agv控制系统的快速扫描地图方法
CN117649495A (zh) * 2024-01-30 2024-03-05 山东大学 基于点云描述符匹配的室内三维点云地图生成方法及系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113639745B (zh) * 2021-08-03 2023-10-20 北京航空航天大学 一种点云地图的构建方法、装置及存储介质
CN113850995B (zh) * 2021-09-14 2022-12-27 华设设计集团股份有限公司 一种基于隧道雷视数据融合的事件检测方法、装置及系统
CN113894050B (zh) * 2021-09-14 2023-05-23 深圳玩智商科技有限公司 物流件分拣方法、分拣设备及存储介质
CN114373002B (zh) * 2022-01-04 2023-08-18 杭州三坛医疗科技有限公司 一种点云配准方法及装置
CN114563000B (zh) * 2022-02-23 2024-05-07 南京理工大学 一种基于改进型激光雷达里程计的室内外slam方法
CN114371485B (zh) * 2022-03-21 2022-06-10 中汽研(天津)汽车工程研究院有限公司 一种基于icp与多特征数据关联的障碍物预测与跟踪方法
CN114913330B (zh) * 2022-07-18 2022-12-06 中科视语(北京)科技有限公司 点云部件分割方法、装置、电子设备与存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110142354A1 (en) * 2009-12-11 2011-06-16 Electronics And Telecommunications Research Institute Apparatus and method for detecting pose in motion capture data
CN109165680A (zh) * 2018-08-01 2019-01-08 东南大学 基于视觉slam的室内场景下单一目标物体字典模型改进方法
CN111563442A (zh) * 2020-04-29 2020-08-21 上海交通大学 基于激光雷达的点云和相机图像数据融合的slam方法及系统

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109940606B (zh) * 2019-01-29 2021-12-03 中国工程物理研究院激光聚变研究中心 基于点云数据的机器人引导系统及方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110142354A1 (en) * 2009-12-11 2011-06-16 Electronics And Telecommunications Research Institute Apparatus and method for detecting pose in motion capture data
CN109165680A (zh) * 2018-08-01 2019-01-08 东南大学 基于视觉slam的室内场景下单一目标物体字典模型改进方法
CN111563442A (zh) * 2020-04-29 2020-08-21 上海交通大学 基于激光雷达的点云和相机图像数据融合的slam方法及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO XIAOWEN; WANG HESHENG: "Self-localization Using Point Cloud Matching at the Object Level in Outdoor Environment", 2019 IEEE 9TH ANNUAL INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (CYBER), IEEE, 29 July 2019 (2019-07-29), pages 1447 - 1452, XP033756720, DOI: 10.1109/CYBER46603.2019.9066619 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116485887A (zh) * 2023-01-16 2023-07-25 湖北普罗格科技股份有限公司 一种无监督3d纸箱检测方法及系统
CN116485887B (zh) * 2023-01-16 2024-02-02 湖北普罗格科技股份有限公司 一种无监督3d纸箱检测方法及系统
CN115937320A (zh) * 2023-02-21 2023-04-07 深圳市华亿明投资发展有限公司 一种手机壳打磨用视觉定位方法
CN115937320B (zh) * 2023-02-21 2023-05-05 深圳市华亿明投资发展有限公司 一种手机壳打磨用视觉定位方法
CN116977226A (zh) * 2023-09-22 2023-10-31 天津云圣智能科技有限责任公司 点云数据分层的处理方法、装置、电子设备及存储介质
CN116977226B (zh) * 2023-09-22 2024-01-19 天津云圣智能科技有限责任公司 点云数据分层的处理方法、装置、电子设备及存储介质
CN117368902A (zh) * 2023-10-18 2024-01-09 广州易而达科技股份有限公司 一种轨迹跟踪方法、装置、设备及存储介质
CN117368902B (zh) * 2023-10-18 2024-04-23 广州易而达科技股份有限公司 一种轨迹跟踪方法、装置、设备及存储介质
CN117523021A (zh) * 2023-11-20 2024-02-06 友上智能科技(苏州)有限公司 基于agv控制系统的快速扫描地图方法
CN117523021B (zh) * 2023-11-20 2024-05-28 友上智能科技(苏州)有限公司 基于agv控制系统的快速扫描地图方法
CN117649495A (zh) * 2024-01-30 2024-03-05 山东大学 基于点云描述符匹配的室内三维点云地图生成方法及系统
CN117649495B (zh) * 2024-01-30 2024-05-28 山东大学 基于点云描述符匹配的室内三维点云地图生成方法及系统

Also Published As

Publication number Publication date
CN113168717B (zh) 2023-03-03
CN113168717A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
WO2022188094A1 (zh) 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达
US11668571B2 (en) Simultaneous localization and mapping (SLAM) using dual event cameras
CN112634451B (zh) 一种融合多传感器的室外大场景三维建图方法
US9990726B2 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN109186606B (zh) 一种基于slam和图像信息的机器人构图及导航方法
WO2019057179A1 (zh) 一种基于点线特征的视觉slam方法和装置
CN108229416B (zh) 基于语义分割技术的机器人slam方法
CN113985445A (zh) 一种基于相机与激光雷达数据融合的3d目标检测算法
Ding et al. Vehicle pose and shape estimation through multiple monocular vision
CN110070578B (zh) 一种回环检测方法
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
Zhang et al. Vision-aided localization for ground robots
Zhang et al. Hand-held monocular SLAM based on line segments
Fang et al. Multi-sensor based real-time 6-DoF pose tracking for wearable augmented reality
Theodorou et al. Visual SLAM algorithms and their application for AR, mapping, localization and wayfinding
Zhang LILO: A Novel Lidar–IMU SLAM System With Loop Optimization
Tang et al. Fmd stereo slam: Fusing mvg and direct formulation towards accurate and fast stereo slam
CN114187418A (zh) 回环检测方法、点云地图构建方法、电子设备及存储介质
Wei et al. Novel robust simultaneous localization and mapping for long-term autonomous robots
Liu et al. Accurate real-time visual SLAM combining building models and GPS for mobile robot
CN112731503A (zh) 一种基于前端紧耦合的位姿估计方法及系统
Rosinol Densifying sparse vio: a mesh-based approach using structural regularities
Li-Chee-Ming et al. Augmenting visp’s 3d model-based tracker with rgb-d slam for 3d pose estimation in indoor environments
Liu et al. Laser 3D tightly coupled mapping method based on visual information
Herceg et al. Real-time detection of moving objects by a mobile robot with an omnidirectional camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21929572

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21929572

Country of ref document: EP

Kind code of ref document: A1