CN116358525A - Laser radar-based map building and positioning method, system and engineering vehicle - Google Patents

Info

Publication number: CN116358525A
Application number: CN202310340583.1A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 董洋, 高乐, 邓加成
Original and current assignee: Sany Heavy Machinery Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Prior art keywords: information, pose, point cloud, laser, initial value
Application filed by Sany Heavy Machinery Ltd
Priority to CN202310340583.1A
Classifications

    • G PHYSICS; G01 Measuring, testing; G01C Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry
    • G01C 21/3841: Electronic maps specially adapted for navigation; creation or updating of map data from two or more sources, e.g. probe vehicles
    • G01C 21/165: Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
    • G01C 21/28: Navigation specially adapted for a road network, with correlation of data from several navigational instruments
    • G01C 21/30: Map- or contour-matching
    • G01S 7/4802: Details of lidar systems using analysis of the echo signal for target characterisation
    • Y02T 10/40: Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to the field of engineering vehicles, and in particular to a laser radar-based map building and positioning method and system and an engineering vehicle. The method includes: acquiring multi-sensor information; performing coordinate transformation on the multi-sensor information to generate a pose initial value; determining custom point cloud information from the laser point cloud information; calculating constraint factors from the custom point cloud information and optimizing a factor graph model with those constraint factors; and outputting trajectory information and a global map from the custom point cloud information, the multi-sensor information, the pose initial value and the factor graph model. By improving the map optimization algorithm, the laser radar-based map building and positioning method can obtain pose information from a 6-axis inertial sensor without degrading positioning accuracy, improving the accuracy and robustness of the LIO-SAM algorithm when applied to autonomous vehicles and datasets.

Description

Laser radar-based map building and positioning method, system and engineering vehicle
Technical Field
The application relates to the field of engineering vehicles, in particular to a laser radar-based map building and positioning method and system and an engineering vehicle.
Background
SLAM (Simultaneous Localization and Mapping) is currently widely used in the field of robotics. A SLAM algorithm enables a robot placed in an unfamiliar environment to perceive the environment and build an environment map using its on-board sensors while estimating its own pose, so that an unmanned vehicle can move through an unknown environment.
LIO-SAM (Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping) is a lidar-based simultaneous localization and mapping algorithm for mobile robots, developed on top of the Robot Operating System. Compared with other widely used algorithms such as LOAM (Lidar Odometry and Mapping in Real-time), LIO-SAM replaces scan-to-global-map matching with scan-to-local-map matching and adds a key frame selection strategy, which improves the algorithm's performance.
However, when the existing LIO-SAM algorithm is applied to autonomous vehicles, its simultaneous localization and mapping pipeline is incompatible with common sensor configurations, and improvement is needed.
Disclosure of Invention
In view of the above, the present application provides a laser radar-based map building and positioning method, system and engineering vehicle, which solve or mitigate the technical problem in the prior art that the LIO-SAM algorithm, when applied to autonomous vehicles, is incompatible with the simultaneous localization and mapping pipeline.
According to a first aspect of the present application, a laser radar-based map building and positioning method is provided, which includes: acquiring multi-sensor information, where the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information; performing coordinate transformation on the multi-sensor information to generate a pose initial value; determining custom point cloud information from the laser point cloud information; calculating constraint factors from the custom point cloud information and optimizing a factor graph model with those constraint factors; and outputting trajectory information and a global map from the custom point cloud information, the multi-sensor information, the pose initial value and the factor graph model.
In one possible implementation, acquiring IMU pose information includes: acquiring initial IMU information; pre-integrating the initial IMU information to generate IMU integral information and publishing inertial odometry; adding the IMU integral information and the laser odometry to the factor graph model to obtain an optimized factor graph model; and outputting the optimized IMU pose information from the optimized factor graph model.
In one possible implementation, determining custom point cloud information from the laser point cloud information includes: acquiring initial laser point cloud data output by the lidar; performing distortion correction on the initial point cloud data while fusing the inertial odometry to generate intermediate point cloud information; performing feature extraction on the initial laser point cloud data to generate feature point cloud data; and adding the feature point cloud data to the intermediate point cloud information to generate the custom point cloud information.
In one possible implementation, outputting trajectory information and a global map from the custom point cloud information, the multi-sensor information, the pose initial value and the factor graph model includes: initializing the pose initial value to obtain a current-frame laser pose initial value; acquiring the key frame set corresponding to the current-frame laser pose initial value; constructing a local map from that key frame set; calculating the laser odometry from the custom point cloud information and the multi-sensor information; optimizing the local map and the original global map according to the factor graph model and the laser odometry; and outputting trajectory information and a global map from the optimized local map and the original global map.
In a possible implementation, initializing the pose initial value to obtain the current-frame laser pose initial value includes: when the pose initial value is the first-frame pose, optimizing it according to preset attitude angle information to obtain the current-frame laser pose initial value; or, when the pose initial value is a subsequent-frame pose, obtaining the incremental pose transformation between the subsequent frame and its adjacent frame from the inertial odometry, and deriving the current-frame laser pose initial value from the adjacent-frame pose and that incremental transformation.
In one possible implementation, after outputting the trajectory information and global map from the optimized local map and the original global map, the method further includes: updating the poses of the historical key frames; and publishing the laser odometry.
In one possible implementation, calculating the laser odometry from the custom point cloud information and the multi-sensor information includes: acquiring a feature point set from the custom point cloud information and the multi-sensor information; optimizing the feature point set to generate an optimized pose; and performing weighted fusion of the current-frame laser pose initial value and the optimized pose to calculate the laser odometry.
In one possible implementation, performing coordinate transformation on the multi-sensor information to generate the pose initial value includes: deriving custom GPS information from the GPS information; recording the initial coordinate values of the custom GPS information and transforming them into the radar coordinate system to generate GPS information in the radar coordinate system; deriving attitude angle information from the GPS information in the radar coordinate system; and initializing the attitude angle information to obtain the pose initial value.
According to a second aspect of the present application, a laser radar-based map building and positioning system is also provided, which includes: an information acquisition module for acquiring multi-sensor information, where the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information; a pose initial value generation module for performing coordinate transformation on the multi-sensor information to generate a pose initial value; a custom point cloud information determination module for determining custom point cloud information from the laser point cloud information; a factor graph model optimization module for calculating constraint factors from the custom point cloud information and optimizing a factor graph model with those factors; a map optimization module for outputting trajectory information and a global map from the point cloud information, the multi-sensor information, the pose initial value and the factor graph model; and a sensor unit comprising a GPS, a lidar and an IMU, the sensor unit being communicatively connected to the information acquisition module.
According to a third aspect of the present application, an engineering vehicle is also provided, comprising the laser radar-based map building and positioning system described above.
By improving the map optimization algorithm, the laser radar-based map building and positioning method performs coordinate transformation on the GPS information and IMU information without degrading positioning accuracy, converting this information from its original coordinates into the lidar coordinate system. This solves the problem that a 6-axis inertial sensor cannot fully acquire the vehicle body attitude angle, so a system applying the method can obtain pose information from a 6-axis inertial sensor, further improving the accuracy and robustness of the LIO-SAM algorithm when applied to autonomous vehicles and datasets. In addition, the coordinate conversion algorithm based on the different sensors' information addresses the fact that, in actual use, the mounting position of the GPS positioning device may differ from that of the IMU.
In addition, by replacing global map matching with local map matching, the method markedly improves the real-time performance of the SLAM system.
Furthermore, the method decides whether the current laser frame should be set as a key frame, i.e. the historical key frames can be updated, so that selectively chosen new key frames are registered into a fixed-size prior set of sub-key-frames, further improving real-time performance.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and do not constitute a limitation on it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a schematic flow chart of a laser radar-based mapping and positioning method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a laser radar-based mapping and positioning method according to another embodiment of the present application.
Fig. 3 is a flow chart of a method for determining user-defined point cloud information in a mapping and positioning method based on a laser radar according to another embodiment of the present application.
Fig. 4 is a schematic flow chart of a track information and global map output method in a mapping and positioning method based on a laser radar according to another embodiment of the present application.
Fig. 5 is a schematic flow chart of a laser odometer calculation method in a laser radar-based mapping and positioning method according to another embodiment of the present application.
Fig. 6 is a block diagram of a mapping and positioning system based on a lidar according to an embodiment of the present application.
Fig. 7 is a block diagram illustrating a construction of a laser radar-based mapping and positioning system according to another embodiment of the present application.
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the description of the present application, "plurality" means at least two, for example two or three, unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back, top, bottom, etc.) in the embodiments of the present application are merely used to explain the relative positional relationship, movement, etc. between components in a particular posture (as shown in the drawings); if that posture changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, article or apparatus.
Furthermore, references herein to "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Summary of the application
In view of the technical problem in the prior art that the LIO-SAM algorithm, when applied to autonomous vehicles, is incompatible with the simultaneous localization and mapping pipeline, the causes can be analyzed as follows:
(1) The LIO-SAM algorithm defaults to a 9-axis inertial sensor, whereas most datasets and autonomous vehicles use 6-axis inertial sensors, which cannot fully acquire the attitude angle;
(2) In actual use, the mounting position of the GPS positioning device may differ from that of the IMU, which the LIO-SAM algorithm does not take into account.
The application provides a laser radar-based map building and positioning method, system and engineering vehicle. The laser radar-based map building and positioning method specifically includes: acquiring multi-sensor information, where the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information; performing coordinate transformation on the multi-sensor information to generate a pose initial value; determining custom point cloud information from the laser point cloud information; calculating constraint factors from the custom point cloud information and optimizing a factor graph model with those constraint factors; and outputting trajectory information and a global map from the point cloud information, the multi-sensor information, the pose initial value and the factor graph model. By improving the map optimization algorithm, the method can obtain pose information from a 6-axis inertial sensor without degrading positioning accuracy, further improving the accuracy and robustness of the LIO-SAM algorithm when applied to autonomous vehicles and datasets.
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Exemplary method
Fig. 1 is a schematic flow chart of a laser radar-based mapping and positioning method provided in the present application. As shown in fig. 1, the mapping and positioning method based on the laser radar specifically includes the following steps:
step 100: and acquiring multi-sensor information, wherein the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information.
The multi-sensor information includes GPS (Global Positioning System) information, IMU (Inertial Measurement Unit) pose information and laser point cloud information. The GPS information is the position (longitude and latitude) and heading angle (the angle between the vehicle's forward direction and true north) collected and transmitted by the GPS positioning system; the IMU pose information is vehicle body pose information detected and transmitted by an inertial sensor; the laser point cloud information is point cloud data detected and transmitted by a lidar ("light detection and ranging"). After the multi-sensor information is acquired, these three kinds of raw data can be processed; once the corresponding initial data are generated, the LIO-SAM (Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping) algorithm can build an accurate digital three-dimensional model, yielding a three-dimensional representation of the surroundings of an autonomous vehicle or robot.
Step 200: and carrying out coordinate transformation on the multi-sensor information to generate a pose initial value.
The pose initial value refers to attitude angle data generated by performing coordinate conversion on the raw data of several sensors, such as the global positioning system and the inertial sensor. Performing coordinate transformation on the GPS information and the IMU information converts this information from its original coordinates into the lidar coordinate system, which solves the problem that a 6-axis inertial sensor cannot fully acquire the vehicle body attitude angle.
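As an illustration of this coordinate transformation, the sketch below (an assumption for illustration, not the patent's actual implementation) converts a GPS fix plus compass heading into a planar pose initial value expressed at the lidar, using a hypothetical GPS-to-lidar lever arm:

```python
import math

def gps_to_local(lat, lon, lat0, lon0):
    """Equirectangular approximation: degrees to metres in a local east-north frame."""
    R = 6378137.0  # WGS-84 equatorial radius, metres
    east = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * R
    return east, north

def pose_initial_value(lat, lon, heading_deg, lat0, lon0, lever_arm):
    """Turn a GPS fix plus heading into a planar pose initial value at the lidar.

    heading_deg: clockwise angle from north to the vehicle's forward axis.
    lever_arm:   (forward, left) offset from the GPS antenna to the lidar in
                 metres; a hypothetical extrinsic that would come from calibration.
    """
    east, north = gps_to_local(lat, lon, lat0, lon0)
    yaw = math.radians(90.0 - heading_deg)  # compass heading to east-north yaw
    fwd, left = lever_arm
    # rotate the lever arm into the east-north frame and add it to the antenna fix
    x = east + fwd * math.cos(yaw) - left * math.sin(yaw)
    y = north + fwd * math.sin(yaw) + left * math.cos(yaw)
    return x, y, yaw
```

A full implementation would also transform the IMU attitude into the same lidar frame; the function names and the equirectangular projection are illustrative simplifications.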
Step 300: and determining custom point cloud information according to the laser point cloud information.
The custom point cloud information is point cloud data that has undergone distortion correction, and also refers to point cloud information to which feature point cloud data has been added after feature extraction. This custom point cloud information can be used to optimize a local map and thereby output the global map, trajectory information, and so on.
Step 400: and calculating constraint factors according to the self-defined point cloud information and optimizing a factor graph model according to the constraint factors.
The constraint factors include laser odometry factors, GPS factors, loop closure factors, etc. The corresponding constraint factors are calculated from the custom point cloud information, and the variables of the initial factor graph are optimized using these factors. By introducing a factor graph model and multiple kinds of constraint factors for robot trajectory estimation, the fused measurements of the lidar and inertial sensor can be used effectively for place recognition between robot poses; absolute measurements, i.e. GPS information, are introduced, and factor sets from different sources are used for joint graph optimization, improving the real-time performance and accuracy of the map.
The computation thread for the loop closure factor includes: searching for the historical key frame that is spatially closest to the current key frame while having the largest time separation; extracting the feature point cloud of the current key frame and the local map of the historical key frame according to the strategy of the scan-to-local-map matching method; and using ICP (Iterative Closest Point) to compute the pose change between the current and historical key frames, recording it in a loop closure factor queue. ICP solves for corresponding point pairs between a source point cloud and a target point cloud, builds a rotation-translation matrix from those pairs, transforms the source cloud into the target cloud's coordinate system with the solved matrix, and evaluates an error function between the transformed source cloud and the target cloud; if the error exceeds a threshold, the procedure iterates until the given error requirement is met.
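For intuition, here is a minimal 2-D analogue of the alignment step inside ICP, assuming the point correspondences are already known (real ICP re-estimates correspondences every iteration; the function name and 2-D restriction are illustrative):

```python
import math

def align_2d(src, dst):
    """One alignment step of point-to-point ICP in 2-D with known pairs.

    Returns (theta, tx, ty) such that rotating src by theta and then
    translating by (tx, ty) maps it onto dst (least-squares, closed form
    via centroids)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    s_cos = s_sin = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy      # centred source point
        bx, by = dx - cdx, dy - cdy      # centred target point
        s_cos += ax * bx + ay * by       # dot product: cos component
        s_sin += ax * by - ay * bx       # cross product: sin component
    theta = math.atan2(s_sin, s_cos)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

With known correspondences this closed form recovers the exact rigid transform; the iteration in ICP exists only because correspondences must be re-guessed from nearest neighbours.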
The GPS factor computation thread mainly receives GPS information and, if the distance interval is large enough, constructs a GPS factor and adds it to the queue.
Step 500: and outputting track information and a global map according to the point cloud information, the multi-sensor information, the pose initial value and the factor graph model.
The trajectory information is the route information of a robot or autonomous vehicle as it moves over time. The attitude angle is calculated from GPS information acquired through other channels or from the autonomous vehicle, and the pose initial value is set accordingly, so that the LIO-SAM algorithm becomes compatible with 6-axis inertial sensors. The method also replaces global map matching with local map matching, markedly improving the real-time performance and accuracy of the SLAM system.
Specifically, in an embodiment, fig. 2 is a schematic flow chart of a laser radar-based mapping and positioning method according to another embodiment of the present application. As shown in fig. 2, step 100 (acquiring multi-sensor information) may further include the steps of:
step 101: initial IMU information is obtained.
The initial IMU information is attitude angle data directly detected by an inertial sensor.
Step 102: and pre-integrating the initial IMU information to generate IMU integral information and issuing a inertial odometer.
In practical applications, the LIO-SAM algorithm builds two IMU information queues, one for optimization and one for integration. The optimization queue adds the IMU integral information between two laser odometry frames, together with the laser odometry, to the factor graph, computes the optimized pose, and resets the pose at the corresponding moment in the integration queue; the integration queue is used to publish the odometry.
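The pre-integration idea can be sketched in one dimension: accumulating the relative position and velocity change between two laser odometry frames, independent of the absolute start state (a toy model, not LIO-SAM's actual on-manifold pre-integration with bias terms):

```python
def preintegrate(imu_samples):
    """Accumulate relative motion between two lidar keyframes (1-D sketch).

    imu_samples: list of (accel_m_s2, dt_s) pairs in the body frame.
    Returns (delta_p, delta_v): the position and velocity change over the
    interval, which can later be attached to the factor graph as a single
    pre-integration factor."""
    dp = dv = 0.0
    for a, dt in imu_samples:
        dp += dv * dt + 0.5 * a * dt * dt  # integrate position first
        dv += a * dt                       # then update velocity
    return dp, dv
```

Because the result depends only on the IMU samples between the two frames, it does not need to be recomputed when the absolute poses are re-optimized, which is what makes pre-integration cheap inside a factor graph.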
Step 103: and adding the IMU integral information and the laser odometer into the factor graph model to obtain an optimized factor graph model.
Odometry, velocity and bias prior factors, among others, are added to the factor graph model as constraints, and the factor graph model is reset. The IMU pre-integration between the previous laser odometry frame and the current one is computed, pre-integration and bias factors are constructed and added to the factor graph, and factor graph optimization is executed. After optimization, re-propagation is performed from the timestamp of the current laser odometry, and the inertial odometry topic is published.
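As a toy stand-in for this factor graph optimization, the sketch below fuses 1-D odometry factors with absolute (GPS-like) factors by gradient descent on the total squared residual (illustrative only; LIO-SAM actually uses an incremental smoother such as iSAM2 over full 6-DoF poses):

```python
def optimize_chain(odom, gps, gps_weight=1.0, iters=200, lr=0.1):
    """Least-squares over a 1-D pose chain.

    odom: list where odom[i] constrains x[i+1] - x[i] (relative factor).
    gps:  dict mapping pose index to an absolute measurement (unary factor).
    """
    n = len(odom) + 1
    x = [0.0] * n
    for _ in range(iters):
        g = [0.0] * n                     # gradient of the total squared error
        for i, d in enumerate(odom):
            r = (x[i + 1] - x[i]) - d     # odometry residual
            g[i + 1] += 2 * r
            g[i] -= 2 * r
        for i, z in gps.items():
            g[i] += 2 * gps_weight * (x[i] - z)  # absolute (GPS) residual
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

The point of the structure is the same as in the patent: relative factors (odometry, pre-integration) chain consecutive poses together, while absolute factors (GPS) pin the chain down and remove drift.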
Step 104: and outputting the optimized pose information by the optimized factor graph model.
Therefore, the output pose information has better real-time performance and accuracy.
Optionally, fig. 3 is a schematic flow chart of a method for determining information of a user-defined point cloud in a mapping and positioning method based on a laser radar according to another embodiment of the present application. As shown in fig. 3, step 300 (determining custom point cloud information from laser point cloud information) may further include the steps of:
step 301: and acquiring initial laser point cloud data output by the laser radar.
Step 302: and carrying out distortion correction on the initial point cloud data, and simultaneously fusing the inertial odometer to generate self-defined point cloud information.
After the initial laser point cloud data are acquired, the start and end times of the point cloud data are computed and the validity of the data is verified. The IMU information is traversed, the IMU measurements falling within the point cloud's start-end interval are selected, and the corresponding pose increments, rotation amounts, etc. are computed. The point cloud data are then traversed and their motion distortion corrected, and finally the custom point cloud information is constructed from the combined data and published.
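A minimal sketch of the motion-distortion correction, assuming a constant yaw rate over the scan (the real pipeline interpolates the full IMU rotation amounts rather than assuming a constant rate):

```python
import math

def deskew(points, yaw_rate):
    """Correct motion distortion in a 2-D scan (constant-yaw-rate sketch).

    points: list of (x, y, t_rel): coordinates in the sensor frame at the
    moment the point was measured, t_rel seconds after scan start.
    Each point is rotated by the yaw the platform accumulated since scan
    start, expressing every point in the scan-start frame."""
    out = []
    for x, y, t in points:
        yaw = yaw_rate * t                 # interpolated attitude increment
        c, s = math.cos(yaw), math.sin(yaw)
        out.append((x * c - y * s, x * s + y * c))
    return out
```

Without this step, points measured late in the sweep would be placed as if the sensor had not rotated, smearing straight walls into arcs.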
Step 303: and extracting features of the initial laser point cloud data to generate feature point cloud data.
Step 304: and adding characteristic point cloud data into the self-defined point cloud information to generate the self-defined point cloud information.
After feature extraction is performed on the initial laser point cloud data, the corner points and surface points of the point cloud are obtained. The corner points and surface points are downsampled to generate feature point cloud data, and finally the feature point cloud data is added to the custom point cloud information to generate the final custom point cloud information.
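The downsampling of the corner and surface points can be sketched as a simple voxel-grid filter that keeps one representative point per voxel (a common choice; the actual filter and voxel size are not specified by the method):

```python
def voxel_downsample(points, voxel=0.2):
    """Keep the first point that falls into each cell of a regular 3-D
    grid; nearby points collapse to a single representative."""
    seen = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        seen.setdefault(key, (x, y, z))
    return list(seen.values())
```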
Specifically, in an embodiment, fig. 4 is a schematic flow chart of a track information and global map output method in a map building and positioning method based on a laser radar according to another embodiment of the present application. As shown in fig. 4, step 500 (outputting trajectory information and global map according to the point cloud information, the multi-sensor information, the pose initial value and the factor map model) may further include the steps of:
step 510: and initializing the pose initial value to obtain the laser pose initial value of the current frame.
The current frame refers to the current laser frame. After the current frame's laser pose initial value is determined, it can be used to obtain the key frames corresponding to the current frame's laser pose.
Step 520: and acquiring a corresponding key frame set corresponding to the laser pose initial value of the current frame.
A key frame is a frame that captures a key change in the motion of the carrier. Correctly acquiring the key frame set corresponding to the current frame's laser pose initial value is the key to constructing the local map.
Step 530: and constructing a local map according to the corresponding keyframe set.
The local map is a portion of the map, distinct from the global map. Using local-map matching instead of global-map matching significantly improves the real-time performance of the SLAM system, and key frames can be introduced selectively with a sliding-window method: new key frames are registered into a fixed-size window of prior "sub-key frames". At the same time, the set of key frames adjacent to the current laser frame in the spatial and temporal dimensions is searched, including the key frame with the smallest time interval, and downsampled. For each key frame in the key frame set, the corresponding corner points and plane points are extracted and added to the local map, thereby constructing the current local map.
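The key-frame selection for the local map can be sketched as follows: spatial neighbours of the current pose are collected, then capped to a fixed-size recent window. The radius and window size are illustrative values, not ones given by the method:

```python
import math

def select_keyframes(keyframes, cur_pos, radius=30.0, window=10):
    """Pick the keyframes used to build the local map: those within
    `radius` metres of the current pose, keeping only the most recent
    `window` of them (sliding-window style)."""
    near = [kf for kf in keyframes if math.dist(kf["pos"], cur_pos) <= radius]
    near.sort(key=lambda kf: kf["t"])
    return near[-window:]
```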
Step 540: and calculating a laser odometer according to the self-defined point cloud information and the multi-sensor information.
The laser odometer refers to a method that uses laser radar (LiDAR) for recursive pose-state estimation.
Step 550: and optimizing the local map and the original global map according to the factor graph model and the laser odometer.
The original global map is the global map as it exists before the current frame is positioned and the map is updated in real time. The map construction process builds a tightly coupled laser radar-inertial odometry framework on top of the factor graph model, which is well suited to multi-sensor fusion and global optimization, improves real-time performance, and makes the output map more accurate.
Step 560: and outputting track information and a global map according to the optimized local map and the original global map.
This process improves the map optimization algorithm so that a 6-axis IMU can replace the 9-axis IMU originally required, reducing hardware cost and computational complexity while still maintaining good positioning accuracy.
Optionally, when the pose initial value is the first frame pose, the step 510 (initializing the pose initial value to obtain the current frame laser pose initial value) may further include:
step 5101: and when the pose initial value is the first frame pose, acquiring preset pose angle information, optimizing the pose initial value, and acquiring the laser pose initial value of the current frame.
If the current laser frame is the first frame, its pose is initialized with attitude angle information acquired through other channels, for example an attitude angle calculated from RTK information, thereby obtaining the pose of the current laser frame.
When the pose initial value is the pose of the subsequent frame other than the first frame, the step 510 (initializing the pose initial value to obtain the laser pose initial value of the current frame) may include the following steps:
step 5102: when the pose initial value is the pose of the subsequent frame, the incremental pose transformation between the pose of the subsequent frame and the pose of the adjacent frame is obtained through the inertial odometer, and the laser pose initial value of the current frame is obtained according to the pose of the adjacent frame and the incremental pose transformation.
If the current laser frame is a subsequent frame, i.e. not the first frame, the incremental pose transformation between the current laser frame and the adjacent previous laser frame is calculated by the inertial odometer, and this incremental transformation is applied to the laser pose of the previous laser frame to obtain the laser pose initial value of the current frame.
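For a planar pose (x, y, yaw), applying the inertial-odometry increment to the previous laser pose can be sketched as below (an illustrative simplification; the method operates on full 6-DoF poses):

```python
import numpy as np

def compose(prev, delta):
    """Apply the increment `delta` = (dx, dy, dyaw), expressed in the
    previous frame's coordinates, to the previous pose (x, y, yaw),
    yielding the current frame's laser pose initial value."""
    x, y, yaw = prev
    dx, dy, dyaw = delta
    return (x + np.cos(yaw) * dx - np.sin(yaw) * dy,
            y + np.sin(yaw) * dx + np.cos(yaw) * dy,
            yaw + dyaw)
```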
Specifically, in another embodiment, as shown in fig. 4, after step 560 (outputting the track information and the global map according to the optimized local map and the original global map), the method may further include the following steps:
step 570: and updating the pose of the historical key frame.
Step 580: a laser odometer is issued.
First, it is judged whether the current laser frame needs to be set as a key frame, i.e. whether the historical key frames need to be updated, so that selectively chosen new key frames are registered into a fixed-size set of prior sub-key frames, which maintains real-time performance.
In one possible implementation, as shown in fig. 2, step 200 (transforming coordinates of the multi-sensor information to generate the pose initial value) may further include:
step 210: and acquiring custom GPS information according to the GPS information.
The custom GPS information is generated by parsing the raw GPS data and correcting its coordinate system; obtaining the custom GPS information makes it convenient to fuse the GPS data with the IMU data.
Step 220: and recording initial coordinate values of the custom GPS information, and transforming the initial coordinate values of the custom GPS information into a radar coordinate system to generate GPS information under the radar coordinate system.
Step 230: and acquiring attitude angle information according to GPS information under a radar coordinate system.
Attitude angle information is data describing the attitude of the autonomous vehicle's body. If the custom GPS information already contains attitude angle data, it is output and stored directly; if the custom GPS information does not contain attitude angle information, the attitude angle is calculated from the custom GPS information and the attitude angle information is then generated.
Step 240: initializing attitude angle information and obtaining an initial value of the pose.
After the attitude angle information obtained in the steps is initialized, a pose initial value can be generated, and then the custom GPS information and the pose initial value can be issued, so that the factor graph model is combined with the GPS information and the pose initial value to generate and output a map in real time.
It should be noted that the above GPS information is preferably RTK (Real-Time Kinematic) information. RTK is a real-time dynamic positioning technology based on carrier-phase observations; it can provide the three-dimensional position of the measuring station in a specified coordinate system in real time, further improving the positioning accuracy of the global positioning system.
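The coordinate transformation of steps 210 to 230 can be sketched as: subtract the recorded initial coordinate from a fix already expressed in local ENU metres, then apply the GPS-to-radar extrinsic. The extrinsic (R, t) is an assumed calibration input, not a value given by the method:

```python
import numpy as np

def gps_to_radar(enu, origin, R, t):
    """Shift a GPS fix (local ENU metres) by the recorded initial
    coordinate, then map it into the radar coordinate system with the
    rotation R and translation t of the GPS-to-radar extrinsic."""
    local = np.asarray(enu, float) - np.asarray(origin, float)
    return R @ local + np.asarray(t, float)
```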
In another possible implementation manner, fig. 5 is a schematic flow chart of a laser odometer calculation method in a laser radar-based mapping and positioning method according to another embodiment of the present application. As shown in fig. 5, step 540 (calculating a laser odometer based on the custom point cloud information, the multi-sensor information) may further include the steps of:
step 5401: and acquiring a characteristic point set according to the self-defined point cloud information and the multi-sensor information.
The curvature of each point in the custom point cloud information is calculated; points that are partially occluded or lie on surfaces parallel to the laser beam are marked, and feature extraction is then performed on the unmarked corner points and surface points.
Specifically, the scan lines of the current laser frame are traversed; the point cloud covered by one sweep of each scan line is divided into 6 sectors, each sector is sorted by curvature, and corner points and surface points are selected. The selected corner points and surface points are downsampled, and the feature point cloud data is added to the custom point cloud information to form the feature point set, which is then published.
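The curvature used to rank points inside each sector can be sketched with the LOAM-style smoothness measure, here over ranges along one scan line (k = 5 neighbours on each side is an illustrative choice):

```python
import numpy as np

def curvature(ranges, k=5):
    """Smoothness of each point on a scan line: squared sum of range
    differences to its k neighbours on either side. Large values mark
    corner candidates, small values mark surface-point candidates."""
    r = np.asarray(ranges, float)
    c = np.full(r.shape, np.inf)   # border points are never selected
    for i in range(k, len(r) - k):
        diff = r[i - k:i + k + 1].sum() - (2 * k + 1) * r[i]
        c[i] = diff * diff
    return c
```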
Step 5402: and optimizing the feature point set to generate the pose of the feature point set after optimization.
The feature point set is optimized by Gauss-Newton iteration. Specifically, the corner-related parameters and surface-point-related parameters are calculated, a Jacobian matrix is constructed from them, and the Gauss-Newton equation is solved to generate the optimized pose.
The corner-related parameters are calculated as follows: the current pose is updated with the result of the previous iteration, and the current frame's corner points are transformed into the global map coordinate system; the corner points of the current frame are traversed, and for each one the 5 nearest neighboring points are searched in the local map; if these points can form a straight line, the distance from the current frame's corner point to the line and the unit vector of the perpendicular are calculated and stored as the corner-related parameters.
The surface-point-related parameters are calculated as follows: the current pose is updated with the result of the previous iteration, and the current frame's surface points are transformed into the global map coordinate system; the surface points of the current frame are traversed, and for each one the 5 nearest neighboring points are searched in the local map; if these points can form a plane, the distance from the current frame's surface point to the plane and the unit vector of the perpendicular are calculated and stored as the surface-point-related parameters.
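The corner-related parameters described above (the distance to the fitted line and the unit vector of the perpendicular) can be sketched as follows; fitting the line to the 5 nearest neighbours is omitted, and the line is given directly by two points:

```python
import numpy as np

def point_to_line(p, a, b):
    """Distance from corner point p to the line through a and b, and
    the unit vector of the perpendicular from the line to p (the two
    quantities stored as the corner-related parameters)."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d = (b - a) / np.linalg.norm(b - a)   # line direction
    foot = a + np.dot(p - a, d) * d       # foot of the perpendicular
    perp = p - foot
    dist = np.linalg.norm(perp)
    return dist, perp / dist if dist > 0 else perp
```

The surface-point parameters are computed analogously with a point-to-plane distance.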
Step 5403: and carrying out weighted fusion on the initial value of the laser pose of the current frame and the optimized pose, and calculating a laser odometer.
The laser odometer is calculated by a frame-to-local-map matching method: the current frame's laser pose initial value and the optimized pose are fused with weights, and the pitch angle, roll angle and z coordinate are constrained, thereby calculating the laser odometer.
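The weighted fusion that constrains pitch, roll and z can be sketched as below; the weight 0.05 on the initial value is an illustrative assumption, not a value given by the method:

```python
def fuse(initial, optimized, w=0.05):
    """Blend the current frame's laser pose initial value with the
    Gauss-Newton result: pitch, roll and z are pulled toward the
    initial value to constrain drift in those dimensions; all other
    components are kept from the optimized pose."""
    out = dict(optimized)
    for key in ("pitch", "roll", "z"):
        out[key] = w * initial[key] + (1.0 - w) * optimized[key]
    return out
```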
Corresponding to the laser radar-based mapping and positioning method, the application also provides a laser radar-based mapping and positioning system. The laser radar based mapping and positioning system will be described in detail below with reference to fig. 6.
Fig. 6 is a block diagram of a mapping and positioning system based on a lidar according to an embodiment of the present application. As shown in fig. 6, the mapping and positioning system based on the laser radar provided in the present application may specifically include: an information acquisition module 101, a pose initial value generation module 102, a custom point cloud information determination module 103, a factor graph model optimization module 104, a map optimization module 105 and a sensor unit 200. The information acquisition module 101 is configured to acquire multi-sensor information, where the multi-sensor information includes GPS information, IMU pose information, and laser point cloud information; the pose initial value generation module 102 is used for performing coordinate transformation on the multi-sensor information to generate a pose initial value; the custom point cloud information determination module 103 is configured to determine custom point cloud information according to the laser point cloud information; the factor graph model optimization module 104 is used for calculating constraint factors according to the custom point cloud information and optimizing the factor graph model according to the constraint factors; the map optimization module 105 is used for outputting track information and a global map according to the point cloud information, the multi-sensor information, the pose initial value and the factor graph model; and the sensor unit 200, which includes a GPS, a lidar and an IMU, is communicatively connected to the information acquisition module 101.
The laser radar-based mapping and positioning system provided by the application can be used for performing the laser radar-based mapping and positioning method in the embodiment, namely the laser radar-based mapping and positioning system can acquire multi-sensor information; performing coordinate transformation on the multi-sensor information to generate a pose initial value; acquiring self-defined point cloud information; calculating constraint factors according to the self-defined point cloud information and optimizing a factor graph model according to the constraint factors; outputting track information and a global map according to the point cloud information, the multi-sensor information, the pose initial value and the factor graph model; the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information. The map construction and positioning method based on the laser radar improves the map optimization algorithm, so that the map construction and positioning method can acquire pose information by using a 6-axis inertial sensor without affecting positioning accuracy, and accuracy and robustness of the LIO-SAM algorithm when the LIO-SAM algorithm is applied to automatic driving vehicles and data sets are further improved.
In one possible implementation, fig. 7 is a block diagram of a mapping and positioning system based on lidar according to another embodiment of the present application. As shown in fig. 7, the mapping and positioning system based on the laser radar may also include: a map optimization unit 100, an IMU pre-integration unit 400, an image projection unit 500, a feature extraction unit 600, and a GPS information processing unit 700. The information acquisition module 101, the pose initial value generation module 102, the custom point cloud information determination module 103, the factor graph model optimization module 104, the map optimization module 105 and the like are integrated in the map optimization unit 100. That is, the map optimization unit 100 receives the point cloud information, completes calculation of constraint factors such as a laser odometer factor, a GPS factor, a closed loop factor, and the like, adds the constraint factors to the factor graph model, and outputs the optimized trajectory information and the global map based on the factor graph model. The IMU pre-integration unit 400 subscribes to the topic of the laser odometer, builds a factor graph model by using IMU pose information and the laser odometer, and outputs an optimized inertial odometer. The image projection unit 500 subscribes to the point cloud information topic, corrects distortion thereof, and outputs custom point cloud information. The feature extraction unit 600 calculates the surface features and line features of the point cloud, and adds the surface features and line features of the point cloud to the custom point cloud information. The GPS information processing unit 700 subscribes to GPS information and performs coordinate transformation on the received GPS information to acquire attitude angle information.
In addition, corresponding to the above laser radar-based mapping and positioning method and system, the present application further provides an engineering vehicle that includes the above laser radar-based mapping and positioning system. The engineering vehicle may be driven automatically or autonomously and may apply the above laser radar-based mapping and positioning method; its effects are similar to those of the method and system described above and are not repeated here.
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 8.
Fig. 8 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 8, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that may be executed by the processor 11 to implement the lidar-based mapping and positioning methods of the various embodiments of the present application described above, and/or other desired functions.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
When the electronic device is a stand-alone device, the input means 13 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
In addition, the input device 13 may also include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information to the outside, including the determined distance information, direction information, and the like. The output device 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 10 that are relevant to the present application are shown in fig. 8 for simplicity, components such as buses, input/output interfaces, etc. are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
As a third aspect of the present application, there is provided a computer-readable storage medium storing a computer program for executing the steps of:
Acquiring multi-sensor information, wherein the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information; performing coordinate transformation on the multi-sensor information to generate a pose initial value; determining custom point cloud information according to the laser point cloud information; calculating constraint factors according to the self-defined point cloud information and optimizing a factor graph model according to the constraint factors; and outputting track information and a global map according to the self-defined point cloud information, the multi-sensor information, the pose initial value and the factor graph model.
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program information which, when executed by a processor, causes the processor to perform the steps in the lidar-based mapping and positioning method described in the present specification according to various embodiments of the present application.
The computer program product may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, on which computer program information is stored, which, when being executed by a processor, causes the processor to perform the steps in the lidar-based mapping and positioning method according to various embodiments of the present application.
A computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended and mean "including but not limited to," and may be used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It should also be noted that in the apparatus, devices and methods of the present application, the components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be considered equivalent solutions of the present application.

Claims (10)

1. The mapping and positioning method based on the laser radar is characterized by comprising the following steps of:
acquiring multi-sensor information, wherein the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information;
Performing coordinate transformation on the multi-sensor information to generate a pose initial value;
determining custom point cloud information according to the laser point cloud information;
calculating constraint factors according to the self-defined point cloud information and optimizing a factor graph model according to the constraint factors; and
and outputting track information and a global map according to the self-defined point cloud information, the multi-sensor information, the pose initial value and the factor graph model.
2. The lidar-based mapping and positioning method of claim 1, wherein the obtaining of IMU pose information comprises:
acquiring initial IMU information;
pre-integrating the initial IMU information to generate IMU integrated information and issuing an inertial odometer;
adding the IMU integral information and the laser odometer into the factor graph model to obtain an optimized factor graph model; and
and outputting the optimized IMU pose information by the optimized factor graph model.
3. The laser radar-based mapping and positioning method according to claim 2, wherein the determining custom point cloud information according to the laser point cloud information includes:
acquiring initial laser point cloud data output by a laser radar;
Distortion correction is carried out on the initial point cloud data, and meanwhile, the inertial odometer is fused to generate self-defined point cloud information;
extracting features of the initial laser point cloud data to generate feature point cloud data; and
and adding characteristic point cloud data into the self-defined point cloud information to generate self-defined point cloud information.
4. The method for mapping and positioning based on the lidar of claim 2, wherein outputting the trajectory information and the global map according to the custom point cloud information, the multi-sensor information, the pose initial value and the factor map model comprises:
initializing the pose initial value to obtain a current frame laser pose initial value;
acquiring a corresponding key frame set corresponding to the initial value of the laser pose of the current frame;
constructing a local map according to the corresponding key frame set;
calculating the laser odometer according to the self-defined point cloud information and the multi-sensor information;
optimizing the local map and the original global map according to the factor graph model and the laser odometer;
and outputting track information and a global map according to the optimized local map and the original global map.
5. The method for mapping and positioning based on laser radar according to claim 4, wherein initializing the initial pose value to obtain the initial pose value of the current frame laser comprises:
when the pose initial value is the first frame pose, optimizing the pose initial value according to preset pose angle information to obtain a current frame laser pose initial value; or
And when the pose initial value is the pose of the subsequent frame, acquiring incremental pose transformation between the pose of the subsequent frame and the pose of the adjacent frame through the inertial odometer, and acquiring the laser pose initial value of the current frame according to the pose of the adjacent frame and the incremental pose transformation.
6. The lidar-based mapping and localization method of claim 4, wherein after outputting the trajectory information and the global map from the optimized local map and the original global map, the method further comprises:
updating the pose of the historical key frame;
and issuing the laser odometer.
7. The method for mapping and positioning based on laser radar according to claim 4, wherein the calculating a laser odometer according to the custom point cloud information and the multi-sensor information comprises:
Acquiring a characteristic point set according to the self-defined point cloud information and the multi-sensor information;
optimizing the feature point set to generate a pose of the feature point set after optimization;
and carrying out weighted fusion on the initial value of the laser pose of the current frame and the optimized pose, and calculating the laser odometer.
8. The method for mapping and positioning based on lidar of claim 1, wherein the performing coordinate transformation on the multi-sensor information to generate the initial pose value comprises:
acquiring custom GPS information according to the GPS information;
recording initial coordinate values of the custom GPS information, and transforming the initial coordinate values of the custom GPS information into a radar coordinate system to generate GPS information under the radar coordinate system;
acquiring attitude angle information according to GPS information under the radar coordinate system;
initializing the attitude angle information to obtain an initial value of the pose.
9. A laser radar-based mapping and positioning system, comprising:
the information acquisition module is used for acquiring multi-sensor information, wherein the multi-sensor information comprises GPS information, IMU pose information and laser point cloud information;
The pose initial value generation module is used for carrying out coordinate transformation on the multi-sensor information to generate a pose initial value;
the self-defining point cloud information determining module is used for determining self-defining point cloud information according to the laser point cloud information;
the factor graph model optimization module is used for calculating constraint factors according to the self-defined point cloud information and optimizing a factor graph model according to the constraint factors;
the map optimization module is used for outputting track information and a global map according to the point cloud information, the multi-sensor information, the pose initial value and the factor map model;
the sensor unit comprises a GPS, a laser radar and an IMU, and the sensor unit is in communication connection with the information acquisition module.
10. An engineering vehicle comprising the lidar-based mapping and positioning system of claim 9.
CN202310340583.1A 2023-03-31 2023-03-31 Laser radar-based map building and positioning method, system and engineering vehicle Pending CN116358525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310340583.1A CN116358525A (en) 2023-03-31 2023-03-31 Laser radar-based map building and positioning method, system and engineering vehicle


Publications (1)

Publication Number Publication Date
CN116358525A true CN116358525A (en) 2023-06-30

Family

ID=86920617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310340583.1A Pending CN116358525A (en) 2023-03-31 2023-03-31 Laser radar-based map building and positioning method, system and engineering vehicle

Country Status (1)

Country Link
CN (1) CN116358525A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117269977A (en) * 2023-11-23 2023-12-22 湖南大学 Laser SLAM implementation method and system based on vertical optimization

Similar Documents

Publication Publication Date Title
US11002840B2 (en) Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
CN110178048B (en) Method and system for generating and updating vehicle environment map
CN110312912B (en) Automatic vehicle parking system and method
CN113376650B (en) Mobile robot positioning method and device, electronic equipment and storage medium
JP2021139898A (en) Positioning method, apparatus, computing device, computer-readable storage medium, and computer program
CN111338383B (en) GAAS-based autonomous flight method and system, and storage medium
CN111427026A (en) Laser radar calibration method and device, storage medium and self-moving equipment
US11158065B2 (en) Localization of a mobile unit by means of a multi hypothesis kalman filter method
CN112444246B (en) Laser fusion positioning method in high-precision digital twin scene
CN113324542B (en) Positioning method, device, equipment and storage medium
CN114636414A (en) High definition city map drawing
CN116358525A (en) Laser radar-based map building and positioning method, system and engineering vehicle
Demim et al. Simultaneous localisation and mapping for autonomous underwater vehicle using a combined smooth variable structure filter and extended kalman filter
Li et al. A novel edge-enabled SLAM solution using projected depth image information
CN117075158A (en) Pose estimation method and system of unmanned deformation motion platform based on laser radar
CN116577801A (en) Positioning and mapping method and system based on laser radar and IMU
CN114047766B (en) Mobile robot data acquisition system and method for long-term application of indoor and outdoor scenes
CN117015719A (en) Method for determining the movement state of a rigid body
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
Szaj et al. Vehicle localization using laser scanner
KR102526710B1 (en) Method for measuring sensormileage and computing device for executing the same
Peiris et al. Alternative platform for vision based robot navigation
CN114440882B (en) Multi-intelligent home mobile device and collaborative road-finding anti-collision method thereof
CN115930943A (en) SLAM method and system based on monocular vision and IMU fusion under graph optimization and EKF framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination