WO2023000294A1 - Pose Estimation Method, Lidar-Inertial Odometer, Movable Platform, and Storage Medium - Google Patents
Pose estimation method, lidar-inertial odometer, movable platform, and storage medium
- Publication number: WO2023000294A1
- Application number: PCT/CN2021/108093
- Authority: WIPO (PCT)
- Prior art keywords: point cloud, lidar, pose, preset, IMU
Classifications
- G01S17/89: Lidar systems specially adapted for specific applications, for mapping or imaging
- G01C21/16: Navigation by integrating acceleration or speed, i.e. inertial navigation (dead reckoning aboard the navigated object)
- G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S7/48: Details of systems according to group G01S17/00
Definitions
- The present application relates to the technical field of positioning and map construction, and specifically to a pose estimation method, a lidar-inertial odometer, a movable platform, and a storage medium.
- Simultaneous Localization and Mapping (SLAM) is a technique by which a robot estimates its own motion in an unknown environment and builds a map of the surrounding environment. It is widely used in areas such as drones, autonomous driving, mobile robot navigation, virtual reality, and augmented reality.
- Pose estimation plays a very important role in the field of SLAM.
- Sensors such as lidar, RGB-D cameras, IMUs, and GPS can be mounted on a movable platform (such as a robot or a self-driving vehicle) for positioning and navigation, autonomous exploration in space, map surveying and mapping, and so on.
- One of the purposes of this application is to provide a pose estimation method, a lidar-inertial odometer, a movable platform, and a storage medium.
- In a first aspect, an embodiment of the present application provides a pose estimation method for a lidar-inertial odometer, including:
- The point cloud can first be clustered to determine at least two clusters, one of which contains the foreground points representing dynamic objects, and another of which contains the non-foreground points other than the foreground points.
- The foreground points representing dynamic objects can then be removed, and pose estimation is performed using only the non-foreground points, thereby effectively eliminating the influence of dynamic objects and improving the localization robustness and accuracy of the lidar-inertial odometer.
- In a second aspect, an embodiment of the present application provides a pose estimation method for a lidar-inertial odometer, including:
- With prior constraints, the IMU parameters can converge to their prior values, so that the IMU parameters can be successfully initialized in any state.
- The scene is not limited, and both the uncertainty (or noise) of the lidar-inertial odometer and the correlations among the IMU parameters are comprehensively considered.
- The lidar pose, the pre-integration amount, and the preset prior values of the IMU parameters are combined to solve for the IMU parameters jointly in one step, obtaining the optimal solution in the sense of the maximum posterior probability and further improving the accuracy and robustness of subsequent pose estimation.
- In a third aspect, an embodiment of the present application provides a pose estimation method for a lidar-inertial odometer, including:
- In a degraded scene, the feature points are mainly distributed nearby: it is difficult to extract features from distant street signs, road poles, and walls that form a relatively small angle with the incident light, so the distribution area of the feature points is relatively small, some degrees of freedom are insufficiently constrained, and degeneration occurs easily.
- Therefore, in this embodiment, when performing feature extraction on the point cloud, a third type of feature is extracted in addition to line features and surface features. For the third type of feature, the first eigenvalue along the normal-vector direction is basically the same as the two second eigenvalues perpendicular to the normal-vector direction, which increases the distribution of feature points.
- In this way, some irregular points can provide useful features (the third type of feature), increasing the distribution of feature points; this provides stronger constraints for the pose estimation process in degraded scenes and helps improve the accuracy of pose estimation.
- In another aspect, an embodiment of the present application provides a lidar-inertial odometer, including:
- a lidar for collecting point clouds, an IMU for collecting motion data, a memory for storing executable instructions, and one or more processors;
- when the one or more processors execute the executable instructions, they are individually or collectively configured to execute the method described in any one of the first, second, or third aspects.
- In another aspect, an embodiment of the present application provides a movable platform, including:
- a body, and a power system provided on the body for powering the movable platform; and
- the lidar-inertial odometer described above, provided on the body.
- In another aspect, an embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method described in any one of the first, second, or third aspects.
- FIG. 1 is a schematic diagram of a pose estimation system provided by an embodiment of the present application;
- FIG. 2 is a schematic flowchart of the first pose estimation method provided by an embodiment of the present application;
- FIG. 3 is a schematic flowchart of the second pose estimation method provided by an embodiment of the present application;
- FIG. 4 is a schematic structural diagram of the factor graph model provided by an embodiment of the present application;
- FIG. 5 is a schematic flowchart of the third pose estimation method provided by an embodiment of the present application;
- FIG. 6 is a schematic structural diagram of the lidar-inertial odometer provided by an embodiment of the present application;
- FIG. 7 is a schematic structural diagram of a movable platform provided by an embodiment of the present application.
- Simultaneous Localization and Mapping (SLAM) is a technique by which a mobile platform (such as a robot or an autonomous vehicle) estimates its own motion in an unknown environment and builds a map of the surrounding environment. It is widely used in areas such as drones, autonomous driving, mobile robot navigation, virtual reality, and augmented reality.
- Pose estimation plays a very important role in the field of SLAM.
- A lidar can be carried on a mobile platform and used for simultaneous localization and mapping.
- The lidar emits a laser pulse sequence toward the target scene, receives the pulse sequence reflected from the target, and generates a point cloud based on the reflected pulses.
- The lidar can determine the receiving time of the reflected laser pulse sequence, for example by detecting the rising-edge time and/or falling-edge time of the electrical signal pulse.
- Using the receiving time and the emission time of the laser pulse sequence, the lidar can calculate the time of flight (TOF) and thereby determine the distance from the detected object to the lidar.
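As a minimal sketch of the TOF range computation just described (function and variable names are illustrative, not from the patent): the emitted pulse travels to the target and back, so the one-way distance is half the round-trip path.

```python
# Illustrative TOF-to-range sketch; names are assumptions, not the patent's.
C = 299_792_458.0  # speed of light in m/s

def tof_range(emit_time_s, receive_time_s):
    """Distance to the target: the pulse travels out and back, so halve the path."""
    tof = receive_time_s - emit_time_s
    return C * tof / 2.0

# A round trip of about 667 ns corresponds to roughly 100 m.
d = tof_range(0.0, 667e-9)
```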
- The lidar is an active light-emitting sensor that does not depend on an external light source: it is insensitive to illumination, is little disturbed by ambient light, works normally even in a closed environment without light or at night, and has wide applicability. It also measures distance more accurately than cameras.
- When using lidar for simultaneous localization and mapping, the geometric differences between the lidar's observations of the environment at different positions can be used to estimate the pose of the lidar, and an environmental map can be constructed based on that pose.
- An IMU (inertial measurement unit) includes three single-axis accelerometers and three single-axis gyroscopes.
- The IMU outputs acceleration and angular velocity, and integrating these quantities yields the IMU pose, which can provide motion information when the lidar cannot collect enough effective data. Fusing the point cloud collected by the lidar with the motion data collected by the IMU can effectively improve positioning accuracy and robustness.
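The idea of integrating acceleration and angular velocity into a pose can be sketched as simple planar dead reckoning. This is a toy illustration under strong assumptions not made by the patent (gravity and biases already removed, fixed sample period, 2D motion); all names are illustrative.

```python
import math

# Toy planar dead-reckoning sketch: gyroscope -> heading, accelerometer ->
# velocity -> position. Assumes bias-free, gravity-compensated samples.
def integrate_imu(samples, dt):
    """samples: iterable of (forward_accel, yaw_rate); returns (x, y, yaw, v)."""
    x = y = yaw = v = 0.0
    for accel, yaw_rate in samples:
        yaw += yaw_rate * dt          # integrate angular velocity to attitude
        v += accel * dt               # integrate acceleration to speed
        x += v * math.cos(yaw) * dt   # integrate speed to position
        y += v * math.sin(yaw) * dt
    return x, y, yaw, v

# One second of 1 m/s^2 forward acceleration, straight ahead.
x, y, yaw, v = integrate_imu([(1.0, 0.0)] * 100, dt=0.01)
```

In a real lidar-inertial odometer this integration runs in 3D on SO(3)/SE(3) with bias and gravity terms; the sketch only shows why the initial velocity, biases, and gravity direction must be known before integration is meaningful.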
- The lidar and the IMU can collect data at the same frame rate; alternatively, they can collect data at different frame rates, as long as the point cloud collected by the lidar and the motion data collected by the IMU within the same preset time period can both be obtained.
- FIG. 1 shows a schematic diagram of a system using a lidar-inertial odometry for simultaneous positioning and map building.
- The lidar-inertial odometer can be mounted on a mobile platform, including but not limited to drones, self-driving cars, logistics robots, or cleaning robots.
- The lidar collects point clouds at a preset frame rate, and the feature extraction module performs feature extraction on each frame of point cloud to obtain the features of its three-dimensional points (such as line features, surface features, or other features), which are then transmitted to the pose estimation module.
- While the lidar collects point clouds at the preset frame rate, the IMU synchronously collects motion data over the same time period. Because the orientation of the current lidar coordinate system is unknown and the IMU accelerometer readings are affected by gravity, the IMU cannot be integrated to obtain pose information while the gravity direction is unknown; the initial velocity of the IMU and the accelerometer and gyroscope biases are also unknown. Therefore, the motion data collected by the IMU is pre-integrated to obtain the pre-integration amount, and the IMU state (initial velocity, accelerometer and gyroscope biases) and the gravity direction are then initialized based on the pre-integration amount.
- The self-motion distortion of the point cloud needs to be removed before registering the point clouds of adjacent frames.
- The distortion arises because the points within one frame of point cloud are not collected at the same time.
- The lidar moves with its carrier (such as a movable platform) while collecting a frame, and the depth of a three-dimensional point is the distance between the object and the lidar at the moment that point is measured, so different three-dimensional points effectively correspond to different, independent coordinate systems, each accurate only in its own frame of reference.
- To place all three-dimensional points of the frame in the same coordinate system, the point cloud must be reprojected: the motion of the lidar during the acquisition is computed, and this motion is compensated on the corresponding points.
- IMU pre-integration or a uniform motion model can be selected according to actual needs to eliminate the self-motion distortion of the point cloud.
- With IMU pre-integration, the motion data collected by the IMU is pre-integrated to obtain the pre-integration amount; the pre-integration amount and the positional transformation between the IMU and the lidar are then used to determine the motion of the lidar during the acquisition of one frame, and this motion is used to compensate the point cloud distortion.
- With the uniform motion model, the point cloud can be assumed to move uniformly within one acquisition period (such as 100 ms), and rotation-plus-translation compensation can be applied. After the self-motion distortion is removed, the pose estimation process can be performed.
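A hedged sketch of the uniform motion model: each point is de-skewed by the fraction of the scan period elapsed at its timestamp. Only the translation part is shown; the rotation would be interpolated the same way. Function names and the tuple layout are illustrative assumptions.

```python
# De-skew sketch under the uniform-motion assumption: a point captured at
# time t has accrued t/period of the scan's total motion, so remove it.
def deskew_points(points, scan_translation, period=0.1):
    """points: (x, y, z, t) with t in [0, period); scan_translation: (dx, dy, dz)
    is the lidar's motion over the full scan period."""
    dx, dy, dz = scan_translation
    out = []
    for x, y, z, t in points:
        s = t / period  # fraction of the scan elapsed when this point was taken
        out.append((x - s * dx, y - s * dy, z - s * dz))
    return out

# A point measured halfway through a scan that translated 0.2 m along x.
deskewed = deskew_points([(1.0, 0.0, 0.0, 0.05)], (0.2, 0.0, 0.0))
```

With IMU pre-integration, `scan_translation` would instead come from the integrated IMU motion mapped through the lidar-IMU extrinsics, and the interpolation would not assume constant velocity.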
- In the pose estimation process, if the IMU is initialized successfully, the system enters LIO mode: the relevant data of the previous frame point cloud (such as the previous frame point cloud and its extracted features) is obtained from the local feature map; the current frame point cloud is registered against the previous frame point cloud to determine their relative pose; and the relative pose is then coupled with the motion data collected by the IMU to determine the pose of the lidar-inertial odometer.
- For example, a graph optimization method can be used to establish a pose graph model, and the relative pose and the motion data collected by the IMU are used to optimize the pose graph model to obtain the pose of the lidar-inertial odometer.
- If the IMU initialization fails, the system enters LO mode, that is, pose estimation using the lidar alone.
- In LO mode, the frame-to-model point cloud registration method can be used to estimate the pose of the lidar-inertial odometer, while the IMU state quantities (initial velocity, accelerometer and gyroscope biases) and the gravity direction continue to be initialized until initialization succeeds.
- Feature map management is also required, covering a global feature map and a local feature map. The local feature map maintains the features of the point clouds of several frames before the current frame; during point cloud registration, the features of the previous frame can be obtained from the local feature map.
- The global feature map maintains the features of all point clouds collected by the lidar.
- There are usually many dynamic objects along the path of a movable platform equipped with a lidar-inertial odometer. For example, when a self-driving vehicle moves through an urban scene, other vehicles or pedestrians appear in its path, and such dynamic objects affect the localization robustness and accuracy of the lidar-inertial odometer.
- To address this, an embodiment of the present application provides a method for estimating the pose of a lidar-inertial odometer, including:
- Step S101: acquire the point cloud collected by the lidar;
- Step S102: cluster the point cloud to determine the foreground points representing dynamic objects;
- Step S103: perform pose estimation using the non-foreground points in the point cloud other than the foreground points.
- The point cloud can first be clustered to determine at least two clusters, one of which contains the foreground points representing dynamic objects, and another of which contains the non-foreground points other than the foreground points.
- The foreground points representing dynamic objects can then be removed, and pose estimation is performed using only the non-foreground points, thereby effectively eliminating the influence of dynamic objects and improving the localization robustness and accuracy of the lidar-inertial odometer.
- That is, the point cloud is clustered to determine the foreground points representing dynamic objects; these are removed, and feature extraction and subsequent pose estimation are performed on the non-foreground points.
- This embodiment does not impose any restrictions on the clustering algorithm used; it can be chosen according to the actual application scenario.
- For example, the Euclidean clustering method can be used to divide the point cloud into three clusters: foreground points representing dynamic objects, ground points, and background points. The foreground points do not participate in the subsequent pose estimation process, eliminating the influence of the dynamic objects.
- Other clustering methods, such as an SOM neural network clustering algorithm, can also be used.
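The Euclidean clustering idea above can be sketched as a toy region-growing pass: points closer than a distance threshold `eps` are merged into one cluster. This is an O(n²) illustration for clarity, not a production implementation (real pipelines use a k-d tree for the neighbor search); all names are illustrative.

```python
# Toy Euclidean clustering: grow each cluster by repeatedly absorbing any
# unvisited point within `eps` of a point already in the cluster.
def euclidean_cluster(points, eps=0.5):
    clusters, visited = [], set()
    for i in range(len(points)):
        if i in visited:
            continue
        stack, cluster = [i], []
        while stack:
            j = stack.pop()
            if j in visited:
                continue
            visited.add(j)
            cluster.append(points[j])
            for k in range(len(points)):
                if k not in visited:
                    d2 = sum((a - b) ** 2 for a, b in zip(points[j], points[k]))
                    if d2 <= eps * eps:
                        stack.append(k)
        clusters.append(cluster)
    return clusters

# Two nearby points form one cluster; the distant point forms another.
clusters = euclidean_cluster([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0)])
```

Deciding which resulting cluster is "foreground" (dynamic object) versus ground or background would require additional cues (height, size, motion), which the sketch does not model.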
- The lidar and the IMU can be fused to effectively improve positioning accuracy and robustness.
- Specifically, the non-foreground points in the point cloud and the motion data collected by the IMU over the same time period can be used together for pose estimation to determine the pose of the lidar-inertial odometer. For example, the non-foreground points of two adjacent frames of point clouds are used to estimate the relative pose of the two frames, and the pose of the lidar-inertial odometer is then determined from this relative pose and the motion data collected by the IMU over the same period. This tight coupling of the lidar and the IMU helps improve positioning accuracy and robustness.
- In some related schemes, the initialization of the IMU parameters (initial velocity, accelerometer and gyroscope biases, and gravity direction) is carried out step by step, decomposing the IMU initialization into multiple sub-problems: estimate the gyroscope bias while ignoring the accelerometer bias → estimate the scale and gravity vector → estimate the accelerometer bias and further refine the scale and gravity → estimate the initial velocity.
- The inventors found that this step-by-step initialization ignores the correlations among the parameters, since each parameter is solved independently, which makes the IMU parameter estimates inaccurate; and because there is no prior constraint, sufficient acceleration change is required before initialization can start.
- Therefore, an embodiment of the present application provides a method for estimating the pose of a lidar-inertial odometer, including:
- Step S201: determine the pose of the lidar when it collects the point cloud, and obtain the pre-integration amount from pre-integrating the motion data collected by the IMU, where the point cloud and the motion data are collected over the same time period;
- Step S202: initialize the IMU parameters according to the pose of the lidar when it collects the point cloud, the pre-integration amount, and the preset prior values of the IMU parameters;
- Step S203: perform pose estimation according to the motion data collected by the initialized IMU and the point cloud collected by the lidar.
- With prior constraints, the IMU parameters can converge to their prior values, so that the IMU parameters can be successfully initialized in any state.
- The scene is not limited, and both the uncertainty (or noise) of the lidar-inertial odometer and the correlations among the IMU parameters are comprehensively considered.
- The lidar pose, the pre-integration amount, and the preset prior values of the IMU parameters are combined to solve for the IMU parameters jointly in one step, obtaining the optimal solution in the sense of the maximum posterior probability and further improving the accuracy and robustness of subsequent pose estimation.
- The pose of the lidar when collecting the point cloud can be transformed according to the positional transformation between the lidar and the IMU, and the transformed pose is then used to solve for the IMU parameters.
- The pose of the lidar can be estimated from the geometric differences between its observations of the environment at different positions, yielding the pose at which each point cloud was collected.
- Taking the determination of the current frame as an example: the current frame point cloud collected by the lidar is obtained; the matching three-dimensional point pairs between the current frame and the previous frame are determined; the relative pose of the two frames is determined from the matched point pairs; and finally, the pose at which the lidar collected the current frame is determined from this relative pose and the pose at which the previous frame was collected.
- The initial pose of the lidar is the pose at which the first frame of point cloud was collected.
- The ICP algorithm can be used to iteratively compute the relative pose of the current frame point cloud and the previous frame point cloud. For the two frames, the following steps are repeated until a preset iteration requirement is met: determine the matching three-dimensional point pairs between the current frame and the previous frame, and determine the relative pose of the two frames from the matched pairs.
- The preset iteration requirement can be set according to the actual application scenario and is not limited in this embodiment; for example, it can be reaching a preset number of iterations, reaching a preset iteration time, or the distance between matched point pairs falling to a minimum.
- To find matches, the three-dimensional points of the previous frame are transformed into the coordinate system of the current frame, and for each transformed point the nearest three-dimensional point in the current frame is searched within a preset search range, yielding a matching three-dimensional point pair.
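One correspondence step of this registration can be sketched as follows: transform the previous frame into the current frame's coordinates, then take the nearest current-frame point within the preset search range as the match. `transform` stands in for the current relative-pose estimate; a real implementation would use a k-d tree rather than a linear scan, and all names here are illustrative.

```python
# Sketch of one nearest-neighbor matching pass within a bounded search range.
def match_point_pairs(prev_pts, curr_pts, transform, search_range):
    pairs = []
    for p in prev_pts:
        q = transform(p)  # previous-frame point in current-frame coordinates
        best, best_d2 = None, search_range ** 2
        for c in curr_pts:
            d2 = sum((a - b) ** 2 for a, b in zip(q, c))
            if d2 <= best_d2:
                best, best_d2 = c, d2
        if best is not None:  # no match if nothing lies within the range
            pairs.append((p, best))
    return pairs

# With an identity transform, the point matches its near neighbor only.
pairs = match_point_pairs([(0.0, 0.0, 0.0)],
                          [(0.1, 0.0, 0.0), (5.0, 5.0, 5.0)],
                          lambda p: p, search_range=1.0)
```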
- In the first iteration, an initial pose can be used to transform the three-dimensional points of the previous frame into the coordinate system of the current frame; this initial pose is determined from the pose at which the lidar collected the previous frame and the motion data collected by the IMU.
- In subsequent iterations, the relative pose of the two frames determined in the previous iteration is used to perform the transformation.
- In a non-high-motion scene, the displacement between two adjacent frames of point clouds collected by the lidar is small, and the estimated lidar pose is considered accurate enough to be used in the IMU parameter initialization process; in this case, the preset search range used to determine matching point pairs is the same in every iteration.
- In a high-motion scene, however, the estimated lidar pose may be inaccurate; the displacement between two adjacent frames of point clouds may be two to three meters.
- When the lidar performs pose estimation in a high-motion scene, the preset search range used to determine matching point pairs in the first iteration can be enlarged, that is, made larger than the preset search range used in a non-high-motion scene, so that matching three-dimensional point pairs can still be found and the pose estimation is improved.
- The preset search range can then be gradually reduced as the number of iterations increases; that is, the size of the preset search range is negatively correlated with the iteration count.
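One way to realize a search range that is negatively correlated with the iteration count is a decaying schedule with a floor. The specific schedule below (geometric decay, the particular constants) is an assumption for illustration; the patent only requires that the range shrink as iterations proceed.

```python
# Illustrative search-range schedule: start wide for a high-motion scene,
# decay geometrically per iteration, never drop below the nominal range.
def search_range(iteration, r_first=3.0, r_floor=0.5, decay=0.6):
    return max(r_first * decay ** iteration, r_floor)

radii = [search_range(i) for i in range(10)]
```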
- The estimated pose of the lidar can thus be used to improve the success rate of IMU parameter initialization.
- The motion data collected by the IMU during the same time period in which the lidar collects the point cloud is pre-integrated to obtain the pre-integration amount.
- The lidar pose, the pre-integration amount, and the preset prior values of the IMU parameters are then combined to optimize and solve for the IMU parameters in one step.
- The IMU state quantities and the gravity direction can be solved at once by constructing a factor graph problem.
- The factor graph problem is essentially a maximum a posteriori (MAP) estimation problem: based on MAP estimation, a one-step joint optimization solves for the IMU parameters and obtains the optimal solution in the sense of the maximum posterior probability.
- Specifically, a factor graph model can be established from the pose at which the lidar collects the point cloud, the pre-integration amount, and the preset prior values of the IMU parameters, and this model is used to initialize the IMU parameters.
- In this way, the uncertainty (or noise) of the lidar-inertial odometer and the correlations among the IMU parameters are comprehensively considered, and the factor graph model jointly optimizes and solves for the IMU parameters in one step, which helps improve the accuracy of the initialized IMU parameters.
- the factor graph model includes variable nodes (ellipses in FIG. 4) and factor nodes (rectangles in FIG. 4); the variable nodes indicate the quantities to be solved of the IMU parameters, and the IMU parameters include the accelerometer bias, the gyroscope bias, the gravity direction and/or the initial velocity.
- the three IMU velocity variable nodes shown in FIG. 4 are only illustrative and do not limit the number of IMU velocity variable nodes.
- the factor nodes indicate the first residual and the second residual.
- the first residual is determined according to the preset prior values of the IMU parameters and the quantities to be solved of the IMU parameters; the second residual (triangles in FIG. 4) is determined according to the pose when the lidar collects the point cloud, the pre-integration amount, and the quantities to be solved of the IMU parameters.
- r_p is the first residual, and the pre-integration residual (composed of the rotation, translation and velocity residuals described below) is the second residual.
- the optimization goal of this objective function is to find the quantities to be solved of the IMU parameters corresponding to the minimum sum of the first residual and the second residual; that is, the initialized values of the IMU parameters are the quantities to be solved corresponding to that minimum sum.
- Σ_p is the covariance matrix of the first residual and is used to determine the weight of the first residual in the IMU parameter initialization process; it can be estimated from the uncertainty of the given prior values, i.e., the weight of the first residual is determined according to the noise of the preset prior values. Σ_pre is the covariance matrix of the second residual and is used to determine the weight of the second residual in the IMU parameter initialization process; it can be propagated during the pre-integration process, i.e., the weight of the second residual is determined according to the noise accumulated by the IMU during pre-integration. The weight of the second residual is negatively correlated with the noise: the greater the noise, the smaller the weight.
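Under a common notation (the labels Σ_p, Σ_pre and r_pre are chosen here for illustration; the patent's own symbols are not fully reproduced in this text), the one-step joint optimization described above can be written as a MAP objective:

```latex
\hat{\mathcal{X}}
  = \arg\min_{\mathcal{X}}
    \;\left\| r_p(\mathcal{X}) \right\|^{2}_{\Sigma_p}
    + \sum_{(i,j)} \left\| r_{pre,ij}(\mathcal{X}) \right\|^{2}_{\Sigma_{pre,ij}},
\qquad
\left\| r \right\|^{2}_{\Sigma} = r^{T} \Sigma^{-1} r
```

where the state X collects the quantities to be solved (biases, gravity direction, initial velocity). Because each norm is weighted by the inverse covariance, a residual with larger covariance (more noise) receives a smaller weight, matching the negative correlation described above.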
- the first residual is determined according to a difference between a preset prior value of the IMU parameter and a quantity to be solved for the IMU parameter.
- the accelerometer bias is difficult to observe without sufficient motion; it tends to remain small, and once it becomes observable it converges to the true value, so the preset prior value of the accelerometer bias can be set to 0.
- the preset prior value of the bias of the gyroscope may also be set to 0.
- the gravity direction can be determined according to the direction of the mean of N accelerations collected by the IMU, where N is an integer greater than 1 (for example, N is 20). Because the motion at the initial moment is not large, the motion acceleration is much smaller than the gravitational acceleration, so this approach gives the approximate direction of gravity.
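This gravity-direction prior amounts to normalizing the mean of N accelerometer samples; a minimal sketch (hypothetical function name):

```python
import numpy as np

def gravity_direction(accel_samples):
    """Estimate the gravity direction as the direction of the mean of N
    accelerometer samples (valid while motion acceleration << gravity)."""
    mean_a = np.mean(np.asarray(accel_samples, dtype=float), axis=0)
    return mean_a / np.linalg.norm(mean_a)
```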
- the preset prior value of the initial velocity can be obtained according to the pose estimation when the lidar collects the point cloud.
- the second residual includes a rotation residual, a translation residual and a velocity residual, denoted r_ΔR_ij, r_Δp_ij and r_Δv_ij, respectively.
- the pre-integration amount includes a velocity pre-integration amount, a translation pre-integration amount, and a rotation pre-integration amount; and/or, the pose when the lidar collects point clouds includes a position amount and an attitude amount.
- the rotation residual represents the difference between two relative rotations between two adjacent moments: one relative rotation is determined according to the attitude quantities of the lidar at the two adjacent moments, and the other relative rotation is determined according to the rotation pre-integration amount and the gyroscope bias to be solved.
- ΔR_ij represents the rotation pre-integration amount between time i and time j
- b_g represents the quantity to be solved for the gyroscope bias
- the velocity residual characterizes the difference between two relative velocities between two adjacent moments: one relative velocity is determined according to the velocity quantities to be solved of the IMU at the two adjacent moments, one of the attitude quantities of the lidar, and the quantity to be solved of the gravity direction; the other relative velocity is determined according to the velocity pre-integration amount, the quantity to be solved of the gyroscope bias, and the quantity to be solved of the accelerometer bias.
- the translation residual represents the difference between two relative translations between two adjacent moments: one relative translation is determined according to the position quantities of the lidar at the two adjacent moments, one of the attitude quantities, and the quantity to be solved of the gravity direction; the other relative translation is determined according to the translation pre-integration amount, the quantity to be solved of the gyroscope bias, and the quantity to be solved of the accelerometer bias.
- p_i and p_j represent the position quantities of the lidar at time i and time j, respectively
- Δp_ij represents the translation pre-integration amount between time i and time j.
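For reference, a common concrete form of these three residuals, following the widely used on-manifold IMU pre-integration formulation (e.g. Forster et al.); the patent's own equations are not reproduced in this text, so the symbols below are this sketch's assumptions:

```latex
\begin{aligned}
r_{\Delta R_{ij}} &= \operatorname{Log}\!\left( \Delta R_{ij}(b_g)^{T}\, R_i^{T} R_j \right) \\
r_{\Delta v_{ij}} &= R_i^{T}\!\left( v_j - v_i - g\,\Delta t_{ij} \right) - \Delta v_{ij}(b_g, b_a) \\
r_{\Delta p_{ij}} &= R_i^{T}\!\left( p_j - p_i - v_i\,\Delta t_{ij} - \tfrac{1}{2}\, g\,\Delta t_{ij}^{2} \right) - \Delta p_{ij}(b_g, b_a)
\end{aligned}
```

where R_i, v_i and p_i are the attitude, velocity and position at time i, g is the gravity vector, Δt_ij is the time interval between i and j, and b_g, b_a are the gyroscope and accelerometer biases to be solved.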
- the initialization process of the IMU parameters provided in the embodiment of the present application may be applied to the IMU initialization stage shown in FIG. 1 .
- One of the pain points in the SLAM field is that degradation occurs in some scenarios: for example, the pose in a tunnel degrades along the direction of the tunnel, and degradation also occurs on highways where feature points are scarce.
- the highway problem arises because the environment is relatively open: the number of feature points that the lidar can extract is reduced, and the spatial extent of the feature points is small, resulting in insufficient constraints in some directions and hence degradation.
- the embodiment of the present application provides a method for estimating the pose of a lidar-inertial odometer, the method comprising:
- step S301: the current frame point cloud collected by the lidar is obtained.
- step S302: the line features, the surface features and the third-type features in the current frame point cloud are extracted; in the third-type features, the first eigenvalue along the normal vector direction and the two second eigenvalues perpendicular to the normal vector direction are substantially the same.
- step S303: registration is performed according to the line features, the surface features and the third-type features in both the current frame point cloud and the previous frame point cloud, and the relative pose of the current frame point cloud and the previous frame point cloud is determined.
- step S304: pose estimation is performed according to the relative pose of the current frame point cloud and the previous frame point cloud.
- because the feature points in a degraded scene are mainly distributed nearby, it is difficult to extract features from distant street signs, road poles, and walls that form relatively small angles with the incident light, so the distribution area of the feature points is relatively small, the constraints on some degrees of freedom are insufficient, and degradation occurs easily. Therefore, in this embodiment, when performing feature extraction on the point cloud, in addition to line features and surface features, a third type of feature is also extracted, in which the first eigenvalue along the normal vector direction is substantially the same as the two second eigenvalues perpendicular to the normal vector direction, thereby broadening the distribution of the feature points.
- some irregular points can provide useful features (that is, the third type of feature), thereby broadening the distribution of feature points; this provides stronger constraints for the pose estimation process in degraded scenes and helps improve the accuracy of pose estimation.
- the above three types of features can be extracted in any scene; alternatively, considering that the line features and surface features extracted in non-degraded scenes are sufficient for pose estimation, the third type of feature may be extracted only in degraded scenes and omitted in non-degraded scenes.
- This embodiment does not impose any limitation on this.
- the lidar is mounted on a movable platform
- the third type of feature can be extracted when the movable platform is in a preset degradation scenario
- the preset degradation scenario includes but is not limited to a tunnel scene, a highway scene and/or an open scene. From another perspective, in a preset degradation scene the number of line features extracted from the point cloud collected by the lidar is less than a preset number and/or the plane indicated by the surface features is substantially parallel to the forward direction of the movable platform; that is, the third type of features may be extracted when the number of line features is less than the preset number and/or the plane indicated by the surface features is substantially parallel to the forward direction of the movable platform.
- the line features may be extracted from three-dimensional points in the point cloud whose curvature is greater than a preset value and/or from isolated three-dimensional points.
- principal component analysis can be used to distinguish surface features and third-type features.
- the first eigenvalue along the normal vector direction in the surface features is much smaller than the two second eigenvalues perpendicular to the normal vector direction; in the third-type features, the first eigenvalue along the normal vector direction is substantially the same as the two second eigenvalues perpendicular to the normal vector direction.
- the first eigenvalue and the two second eigenvalues of a three-dimensional point in the current frame point cloud are determined according to the covariance of the current frame point cloud. If the ratio of the first eigenvalue to the second eigenvalues is less than a preset threshold, the three-dimensional point is determined to be a surface feature; if the ratio of the first eigenvalue to the second eigenvalues is greater than or equal to the preset threshold, the three-dimensional point is determined to be a third-type feature. Since the first eigenvalue along the normal vector direction in a surface feature is much smaller than the two second eigenvalues perpendicular to the normal vector direction, the smaller the ratio of the first eigenvalue to the second eigenvalues, the more likely a surface feature is extracted.
- to distribute the feature points (that is, the line features, surface features or third-type features) as evenly as possible in space, three-dimensional points at different depths can correspond to different preset thresholds; the different preset thresholds are used to keep the difference in the number of surface features and/or third-type features obtained from three-dimensional points at different depths within a preset difference. The depth of a three-dimensional point is positively correlated with the preset threshold; that is, the greater the depth of the three-dimensional point, the larger the preset threshold.
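The eigenvalue-ratio test described above can be sketched with a plain PCA of the neighborhood covariance (hypothetical function name and threshold; a real implementation would also apply the per-depth thresholds just described):

```python
import numpy as np

def classify_neighborhood(points, ratio_threshold=0.1):
    """Classify a local neighborhood by PCA of its covariance.

    lam1 (smallest eigenvalue, along the normal) vs lam2 (middle eigenvalue,
    perpendicular to the normal):
      - surface feature:    lam1 << lam2  (ratio below the threshold)
      - third-type feature: lam1 ~= lam2  (ratio at or above the threshold)
    """
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))  # ascending order
    lam1, lam2 = eigvals[0], eigvals[1]
    return "surface" if lam1 / lam2 < ratio_threshold else "third_type"
```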
- registration can be performed according to the line features, surface features and third-type features in both the current frame point cloud and the previous frame point cloud to determine the relative pose of the current frame point cloud and the previous frame point cloud.
- the above steps can be repeated until a preset iteration requirement is reached; for example, the preset iteration requirement includes reaching a preset number of iterations, reaching a preset iteration duration, or an iteration requirement described below.
- the three-dimensional points in the point cloud of the previous frame may be transformed into the coordinate system where the point cloud of the current frame is located.
- in the first iteration, an initial pose is used to transform the 3D points of the previous frame point cloud into the coordinate system of the current frame point cloud; the initial pose is determined according to the pose when the lidar collected the previous frame point cloud and the motion data collected by the IMU.
- in subsequent iterations, the relative pose of the current frame point cloud and the previous frame point cloud determined in the previous iteration is used to transform the 3D points of the previous frame point cloud into the coordinate system of the current frame point cloud.
- the point-to-line matching method can be used for line features, and the point-to-plane matching method can be used for surface features and irregular (third-type) features.
- for a converted three-dimensional point belonging to a line feature, a matching straight line can be determined in the current frame point cloud; the matching straight line is the straight line formed by the multiple three-dimensional points in the current frame point cloud closest to the converted three-dimensional point. A first distance from the converted three-dimensional point to the matching straight line is then obtained.
- iterative optimization can be performed by minimizing the point-to-line distance; the optimization goal (or iteration requirement) of the iterative optimization process is to solve the relative pose of the two adjacent frame point clouds corresponding to the minimum first distance.
- for a converted three-dimensional point belonging to a surface feature, a matching first plane is determined in the current frame point cloud; the first plane is the plane composed of the multiple three-dimensional points closest to the converted three-dimensional point. The projection point of the converted three-dimensional point on the first plane is then determined, and finally the second distance between the converted three-dimensional point and the projection point is obtained.
- iterative optimization is performed by minimizing the second distance; the optimization goal of the iterative optimization process is to solve the relative pose of the two adjacent frame point clouds corresponding to the minimum second distance.
- for the third-type features, a small variance is set for the first eigenvalue along the normal vector direction and a large variance is set for the two second eigenvalues perpendicular to the normal vector direction; in other words, the variance of the two second eigenvalues perpendicular to the normal vector direction is much larger than the variance of the first eigenvalue along the normal vector direction, yet not infinite, so that a constraint in the degenerate direction can still be provided.
- for a converted three-dimensional point belonging to a third-type feature, a matching second plane can be determined in the current frame point cloud; the second plane is the plane formed by the multiple three-dimensional points closest to the converted three-dimensional point. A third distance from the converted three-dimensional point to the second plane is then obtained.
- iterative optimization can be performed by minimizing the point-to-plane distance; the optimization goal of the iterative optimization process is to solve the relative pose of the two adjacent frame point clouds corresponding to the minimum third distance.
- the relative pose of the two adjacent frame point clouds can be determined according to the first distance, the second distance and/or the third distance. At this point, one iteration of determining the relative pose of the two adjacent frame point clouds ends.
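The first and second/third distances can be sketched as standard point-to-line and point-to-plane distances, with the line direction and plane normal taken from an SVD of the matched neighbor set (hypothetical function names; a sketch, not the patent's exact formulation):

```python
import numpy as np

def point_to_line_distance(p, line_pts):
    """First distance: from a converted line-feature point to the straight
    line through its nearest neighbors (direction = principal SVD axis)."""
    q = np.mean(line_pts, axis=0)
    _, _, vt = np.linalg.svd(line_pts - q)
    d = vt[0]                       # dominant direction of the neighbor set
    r = p - q
    return np.linalg.norm(r - np.dot(r, d) * d)

def point_to_plane_distance(p, plane_pts):
    """Second/third distance: from a converted surface or third-type point to
    the plane through its nearest neighbors (normal = smallest SVD axis)."""
    q = np.mean(plane_pts, axis=0)
    _, _, vt = np.linalg.svd(plane_pts - q)
    n = vt[-1]                      # direction of least spread = plane normal
    return abs(np.dot(p - q, n))
```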
- the iteration requirements to be satisfied in the iterative optimization process of determining the relative pose of the two adjacent frame point clouds may be: reaching a preset number of iterations, reaching a preset iteration duration, or the first distance, the second distance and/or the third distance being minimal. Specific settings can be made according to actual application scenarios.
- the following takes two adjacent frames of point clouds, namely point cloud A and point cloud B, as an example.
- the optimization function minimizes the distance between T·p_sel and p_proj; the optimization goal is to solve the relative pose T of the lidar corresponding to the minimum second distance between T·p_sel and p_proj.
- Σ is the covariance matrix of the first plane, which provides different weights for the direction along the normal vector and the two directions perpendicular to the normal vector: the variance is set small along the normal vector direction and large in the two directions perpendicular to the normal vector, thus providing a constraint on the degradation direction.
- σ is the standard deviation of the radar measurement and s is a coefficient with a large value.
- R_n = V·U^T.
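One way to realize the anisotropic weighting described above is to assemble the plane covariance directly from the normal direction: variance σ² along the normal and (sσ)² (large but finite) in the two perpendicular directions, i.e. Σ = σ²nnᵀ + (sσ)²(I − nnᵀ). A minimal sketch under these assumptions (hypothetical function name):

```python
import numpy as np

def plane_covariance(normal, sigma=0.05, s=100.0):
    """Anisotropic covariance for a point-to-plane residual: small variance
    sigma^2 along the plane normal, large but finite variance (s*sigma)^2
    perpendicular to it, so degenerate directions are weakly but not
    infinitely constrained. `sigma` plays the role of the radar measurement
    standard deviation, `s` the large coefficient."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    along = sigma ** 2            # tight along the normal
    perp = (s * sigma) ** 2       # loose, but finite, in the plane
    return along * np.outer(n, n) + perp * (np.eye(3) - np.outer(n, n))
```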
- pose estimation can be performed according to the relative pose of the current frame point cloud and the previous frame point cloud and the motion data collected by the IMU, and the pose of the lidar-inertial odometer is thereby determined.
- the weight of the lidar is reduced and the weight of the IMU is increased to adjust the tightly coupled weights of the lidar and the IMU, so that the IMU can more fully constrain the optimization problem.
- take the lidar-inertial odometer mounted on a self-driving vehicle as an example:
- in tunnel scenes, laser SLAM degrades very easily; based on the feature extraction method of this application, all features in the scene, such as scattered signs in the tunnel and grooves in the walls, can be fully used. Tightly coupling this information with the IMU provides constraints for the directions that are prone to degradation and achieves resistance to degradation.
- in highway scenes, the environment is relatively empty, the degree of structure is low, little information is available, and vehicles move fast, so degradation occurs easily. Based on the feature extraction method of this application, light poles, road signs, vegetation and other information are fully utilized, achieving excellent robustness in highway scenes and avoiding degradation.
- the method of excluding foreground points described in the embodiment of FIG. 2 can be performed after the lidar collects the point cloud in the system shown in FIG. 1; the IMU initialization method described in the embodiment of FIG. 3 can be performed during the IMU initialization process in the system shown in FIG. 1; the feature extraction method described in the embodiment of FIG. 5 can be used in the point cloud feature extraction stage of the system shown in FIG. 1; and the registration method described in the embodiment of FIG. 5 can be used in LIO mode or LO mode.
- some or all of the methods mentioned in the embodiments in Fig. 2 , Fig. 3 and Fig. 5 can be used in the pose estimation process of the lidar-inertial odometer according to the actual situation.
- the embodiment of the present application also provides a laser radar-inertial odometer 40, including:
- when the one or more processors 44 execute the executable instructions, they are individually or collectively configured to perform any one of the above-mentioned methods.
- when the one or more processors 44 execute the executable instructions, they are individually or jointly configured to: acquire the point cloud collected by the lidar 42; cluster the point cloud to determine foreground points characterizing a dynamic object; and perform pose estimation using the non-foreground points in the point cloud other than the foreground points.
- the non-foreground points include ground points and/or background points.
- the clustering comprises Euclidean clustering.
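A minimal sketch of Euclidean clustering (greedy region growing on a distance threshold; hypothetical function name, brute-force neighbor search for clarity) of the kind that could separate compact foreground clusters from the rest of the point cloud:

```python
import numpy as np

def euclidean_cluster(points, tol=0.5):
    """Greedy Euclidean clustering: points closer than `tol` (transitively)
    end up in the same cluster. Small compact clusters could then be treated
    as candidate foreground (dynamic) objects and excluded from matching."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    labels = -np.ones(n, dtype=int)  # -1 = unassigned
    cluster_id = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cluster_id
        while stack:
            i = stack.pop()
            d = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.flatnonzero((d < tol) & (labels == -1)):
                labels[j] = cluster_id
                stack.append(j)
        cluster_id += 1
    return labels
```

A production system would use a k-d tree or voxel grid for the neighbor query instead of the O(n²) scan shown here.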
- the processor 44 is specifically configured to: perform pose estimation using the non-foreground points in the point cloud and the motion data collected by the initialized IMU 41 in the same time period, and determine the pose of the lidar-inertial odometer 40.
- when the one or more processors 44 execute the executable instructions, they are individually or jointly configured to:
- initialize the parameters of the IMU 41;
- perform pose estimation according to the motion data collected by the initialized IMU 41 and the point cloud collected by the lidar 42.
- the processor 44 is also configured to: establish a factor graph model according to the pose when the lidar 42 collects the point cloud, the pre-integration amount, and the preset prior values of the IMU 41 parameters; and use the factor graph model to initialize the parameters of the IMU 41.
- the factor graph model includes a variable node and a factor node; the variable node indicates the quantity to be solved of the IMU41 parameter, and the factor node indicates the first residual and the second residual.
- the first residual is determined according to the preset prior values of the IMU 41 parameters and the quantities to be solved; the second residual is determined according to the pose when the lidar 42 collects the point cloud, the pre-integration amount and the quantities to be solved of the IMU 41 parameters.
- the value of the IMU41 parameter after initialization includes: the amount to be solved corresponding to the IMU41 parameter when the sum of the first residual error and the second residual error is minimum.
- the IMU 41 parameters include the accelerometer bias, the gyroscope bias, the gravity direction and/or the initial velocity; and/or, the pre-integration amount includes a velocity pre-integration amount, a translation pre-integration amount and a rotation pre-integration amount; and/or, the pose when the lidar 42 collects the point cloud includes a position quantity and an attitude quantity.
- the second residual includes a rotation residual; the rotation residual represents the difference between two relative rotations between two adjacent moments; one relative rotation is determined according to the attitude quantities of the lidar 42 at the two adjacent moments, and the other relative rotation is determined according to the rotation pre-integration amount and the gyroscope bias to be solved.
- the second residual includes a translation residual; the translation residual represents the difference between two relative translations between two adjacent moments; one relative translation is determined according to the position quantities of the lidar 42 at the two adjacent moments, one of the attitude quantities and the quantity to be solved of the gravity direction, and the other relative translation is determined according to the translation pre-integration amount, the quantity to be solved of the gyroscope bias and the quantity to be solved of the accelerometer bias.
- the second residual includes a velocity residual; the velocity residual characterizes the difference between two relative velocities between two adjacent moments; one relative velocity is determined according to the velocity quantities to be solved of the IMU 41 at the two adjacent moments, one of the attitude quantities of the lidar 42 and the quantity to be solved of the gravity direction, and the other relative velocity is determined according to the velocity pre-integration amount, the quantity to be solved of the gyroscope bias and the quantity to be solved of the accelerometer bias.
- the preset prior value of the accelerometer bias is 0; the preset prior value of the gyroscope bias is 0; the gravity direction is determined according to the direction of the mean of N accelerations collected by the IMU 41, where N is an integer greater than 1; and/or, the preset prior value of the initial velocity is obtained according to the pose estimation when the lidar 42 collects the point cloud.
- the weight of the first residual is determined according to the noise of the preset prior value; and/or, the weight of the second residual is determined according to the noise accumulated by the IMU41 in the pre-integration process .
- the weight of the second residual is negatively correlated with the noise.
- the processor 44 is also configured to: acquire the current frame point cloud collected by the lidar 42; determine the matching three-dimensional point pairs in the current frame point cloud and the previous frame point cloud, and determine the relative pose of the current frame point cloud and the previous frame point cloud according to the matched three-dimensional point pairs; and determine the pose of the lidar 42 when collecting the current frame point cloud according to the relative pose of the current frame point cloud and the previous frame point cloud.
- the processor 44 is also configured to: transform the 3D points in the previous frame point cloud into the coordinate system of the current frame point cloud; and, for each transformed 3D point, search for the nearest 3D point in the current frame point cloud within the preset search range to obtain matching 3D point pairs.
- the processor 44 repeatedly executes the following steps until a preset iteration requirement is reached: determine the matching 3D point pairs in the current frame point cloud and the previous frame point cloud, and determine the relative pose of the current frame point cloud and the previous frame point cloud; the preset iteration requirement includes reaching a preset number of iterations and/or the distance of the matched three-dimensional point pairs being minimal.
- the size of the preset search range is negatively correlated with the number of iterations.
- in the first iteration, the initial pose is used to transform the 3D points of the previous frame point cloud into the coordinate system of the current frame point cloud; the initial pose is determined according to the pose when the lidar 42 collected the previous frame point cloud and the motion data collected by the IMU 41.
- when the one or more processors 44 execute the executable instructions, they are individually or jointly configured to: acquire the current frame point cloud collected by the lidar 42; and extract the line features, surface features and third-type features in the current frame point cloud, where in the third-type features the first eigenvalue along the normal vector direction and the two second eigenvalues perpendicular to the normal vector direction are basically the same, and in the surface features the first eigenvalue along the normal vector direction is much smaller than the two second eigenvalues perpendicular to the normal vector direction.
- the lidar 42 is mounted on a movable platform; the third type of feature is extracted when the movable platform is in a preset degradation scenario.
- the preset degradation scenarios include tunnel scenarios, highway scenarios and/or open spaces.
- the lidar 42 is mounted on a movable platform; the third type of features is extracted when the number of the line features is less than a preset number and/or the plane indicated by the surface features is substantially parallel to the forward direction of the movable platform.
- the line feature is determined according to three-dimensional points and/or isolated three-dimensional points in the point cloud whose curvature is greater than a preset value.
- the processor 44 is further configured to: determine the first eigenvalue and the two second eigenvalues of the three-dimensional points in the current frame point cloud according to the covariance of the current frame point cloud; if the ratio of the first eigenvalue to the second eigenvalues is less than a preset threshold, determine that the three-dimensional point is a surface feature; and if the ratio of the first eigenvalue to the second eigenvalues is greater than or equal to the preset threshold, determine that the three-dimensional point is a third-type feature.
- different depths of the three-dimensional points correspond to different preset thresholds; the different preset thresholds are used to keep the difference in the number of surface features and/or third-type features obtained from three-dimensional points at different depths within a preset difference.
- the depth of the three-dimensional point is positively correlated with the preset threshold.
- the processor 44 is also configured to:
- for a converted three-dimensional point belonging to a line feature, determine a matching straight line in the current frame point cloud, and obtain a first distance from the converted three-dimensional point to the matching straight line;
- for a converted three-dimensional point belonging to a surface feature, determine a matching first plane in the current frame point cloud, and obtain a second distance between the converted three-dimensional point and its projection point on the first plane;
- for a converted three-dimensional point belonging to a third-type feature, determine a matching second plane in the current frame point cloud, and obtain a third distance from the converted three-dimensional point to the second plane;
- determine the relative pose of the two adjacent frame point clouds according to the first distance, the second distance and/or the third distance.
- the matching straight line includes: a straight line formed by a plurality of three-dimensional points closest to the converted three-dimensional point belonging to the line feature in the point cloud of the current frame.
- the matched first plane includes: a plane composed of a plurality of three-dimensional points closest to the converted three-dimensional point belonging to the surface feature in the point cloud of the current frame.
- the matched second plane includes: a plane composed of a plurality of three-dimensional points in the point cloud of the current frame that are closest to the converted three-dimensional point belonging to the third type of feature.
- the variance of the two second eigenvalues perpendicular to the normal vector direction is much larger than the variance of the first eigenvalue along the normal vector direction, and the variance of the two second eigenvalues perpendicular to the normal vector direction is not infinite.
- the processor 44 repeatedly executes the following steps until a preset iteration requirement is reached: perform registration according to the line features, surface features and third-type features in both the current frame point cloud and the previous frame point cloud, and determine the relative pose of the current frame point cloud and the previous frame point cloud; the preset iteration requirement includes reaching a preset number of iterations, or the first distance, the second distance and/or the third distance being minimal.
- in the first iteration, the initial pose is used to transform the 3D points of the previous frame point cloud into the coordinate system of the current frame point cloud; the initial pose is determined according to the pose when the lidar 42 collected the previous frame point cloud and the motion data collected by the IMU 41;
- the processor 44 is also configured to: perform pose estimation according to the relative pose of the current frame point cloud and the previous frame point cloud and the motion data collected by the initialized IMU 41 in the same time period, and determine the pose of the lidar-inertial odometer 40.
- the lidar 42 is mounted on a movable platform; when the movable platform is in a preset degradation scenario, in the process of determining the pose of the lidar-inertial odometer 40, the weight of the lidar 42 is reduced and the weight of the IMU 41 is increased.
- the embodiment of the present application also provides a movable platform 50, including:
- a power system 52 is arranged on the fuselage 51 and is used to provide power for the movable platform;
- the aforementioned lidar-inertial odometer 40 arranged on the fuselage 51 .
- non-transitory computer-readable storage medium including instructions, such as a memory including instructions, which are executable by a processor of an apparatus to perform the above method.
- the non-transitory computer readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
- a non-transitory computer-readable storage medium enabling the terminal to execute the above method when instructions in the storage medium are executed by a processor of the terminal.
Abstract
一种位姿估计方法、激光雷达-惯性里程计、可移动平台及存储介质,所述方法包括:获取激光雷达采集的点云;对所述点云进行聚类,确定表征动态物体的前景点;使用所述点云中除所述前景点之外的非前景点进行位姿估计。本实施例有利于消除动态物体的影响。
Description
本申请涉及定位和地图构建技术领域,具体而言,涉及一种位姿估计方法、激光雷达-惯性里程计、可移动平台及存储介质。
同步定位与建图(Simultaneously Localization and Mapping,SLAM)是一种机器人在未知环境中估计自身运动并且建立周围环境地图的技术。它在无人机、自动驾驶、移动机器人导航、虚拟现实和增强现实等领域有着广泛的应用。
其中,位姿估计(Pose estimation)在同时定位和地图构建(Simultaneous Localization And Mapping,SLAM)领域扮演着十分重要的角色。例如利用搭载于可移动平台上的各类传感器(如激光雷达、RGBD相机、IMU、GPS)估计可移动平台(如机器人、自动驾驶车辆等)在运动过程中的位姿,进而可实现可移动平台的定位导航、在空间中的自主探索或者地图测绘等。
发明内容
有鉴于此,本申请的目的之一是提供一种位姿估计方法、激光雷达-惯性里程计、可移动平台及存储介质。
第一方面,本申请实施例提供了一种激光雷达-惯性里程计的位姿估计方法,包括:
获取激光雷达采集的点云;
对所述点云进行聚类,确定表征动态物体的前景点;
使用所述点云中除所述前景点之外的非前景点进行位姿估计。
本实施例中，在激光雷达采集点云之后，可以先对所述点云进行聚类处理，确定所述点云的至少两个类簇，其中一个类簇为表征动态物体的前景点，另一个类簇为除所述前景点之外的非前景点。在进行位姿估计的过程中，可以去除表征动态物体的前景点，使用所述点云中除所述前景点之外的非前景点进行位姿估计，从而能够有效消除动态物体的影响，提高激光雷达-惯性里程计的定位鲁棒性和精度。
第二方面,本申请实施例提供了一种激光雷达-惯性里程计的位姿估计方法,包括:
确定激光雷达采集点云时的位姿,以及,获取对IMU采集的运动数据进行预积分得到的预积分量;其中,所述点云和所述运动数据在同一时间段内采集得到;
根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,对所述IMU参数进行初始化;
根据初始化后的IMU采集的运动数据以及所述激光雷达采集的点云进行位姿估计。
本实施例中，由于有IMU参数的预设先验值的约束，即使IMU采集的运动数据不足，也可以让IMU参数收敛于先验值，实现在任意状态下均能成功初始化IMU参数，初始化场景不受限，而且综合考虑了激光雷达-惯性里程计的不确定性（或者说噪声）以及各个IMU参数之间的相关性，根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值，一步联合优化求解所述IMU参数，从而获得最大后验概率意义下的最优解，进一步提高后续进行位姿估计的准确性和鲁棒性。
第三方面,本申请实施例提供了一种激光雷达-惯性里程计的位姿估计方法,包括:
获取激光雷达采集的当前帧点云;
提取所述当前帧点云中的线特征、面特征和第三类特征;其中,所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同;
根据所述当前帧点云和上一帧点云两者中的所述线特征、所述面特征和所述第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿;
根据所述当前帧点云与上一帧点云的相对位姿进行位姿估计。
本实施例中，考虑到在退化场景中特征点主要分布在近处，一些远处的路牌，路杆，以及与入射光线夹角比较小的墙面都很难提取出特征，这使特征点的分布区域比较小，某些自由度的约束不够，容易发生退化。因此本实施例在对点云进行特征提取时，除了提取线特征和面特征之外，还另外从点云中提取了第三类特征，所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同，比如在车辆驾驶场景中，一些不规则的点（如树冠）可以提供一些有用的特征（即第三类特征），从而增大了特征点的分布，能够在退化场景中为位姿估计过程提供更强的约束，有利于提高位姿估计的准确性。
第四方面,本申请实施例提供了一种激光雷达-惯性里程计,包括:
用于采集点云的激光雷达、用于采集运动数据的IMU、用于存储可执行指令的存储器以及一个或多个处理器;
其中,所述一个或多个处理器执行所述可执行指令时,被单独地或共同地配置成执行第一方面、第二方面或第三方面任意一项所述的方法。
第五方面,本申请实施例提供了一种可移动平台,包括:
机身;
动力系统,设于所述机身上,用于为所述可移动平台提供动力;以及,
如第五方面所述的设于所述机身上的激光雷达-惯性里程计。
第六方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有可执行指令,所述可执行指令被处理器执行时实现如第一方面、第二方面或第三方面任意一项所述的方法。
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种位姿估计的系统示意图;
图2是本申请实施例提供的第一种位姿估计方法的流程示意图;
图3是本申请实施例提供的第二种位姿估计方法的流程示意图；
图4是本申请实施例提供的因子图模型的结构示意图;
图5是本申请实施例提供的第三种位姿估计方法的流程示意图；
图6是本申请实施例提供的激光雷达-惯性里程计的结构示意图;
图7是本申请实施例提供的可移动平台的结构示意图。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
同步定位与建图(Simultaneously Localization and Mapping,SLAM)是一种可移动平台(如机器人、自动驾驶车辆等)在未知环境中估计自身运动并且建立周围环境地图的技术。它在无人机、自动驾驶、移动机器人导航、虚拟现实和增强现实等领域有着广泛的应用。而位姿估计(Pose estimation)在同时定位和地图构建(Simultaneous Localization And Mapping,SLAM)领域扮演着十分重要的角色。
在一些实施例中,可以在可移动平台中搭载激光雷达,使用激光雷达进行同时定位和地图构建。激光雷达用于向目标场景发射激光脉冲序列,然后接收从目标反射回来的激光脉冲序列,并根据反射回来的激光脉冲序列生成点云。在一个例子中,所述激光雷达可以确定反射回来的激光脉冲序列的接收时间,例如,通过探测电信号脉冲的上升沿时间和/或下降沿时间确定激光脉冲序列的接收时间。如此,所述激光雷达可以利用激光脉冲序列的接收时间信息和发射时间计算TOF(Time of flight,飞行时间),从而确定探测物到所述激光雷达的距离。所述激光雷达属于自主发光的传感器,不依赖于光源光照,对光照不敏感,受环境光干扰比较小,即使在无光封闭环境或者夜间环境也可以正常工作,具有广泛的适用性,且激光雷达比相机的测量精度更高。
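以下用一个简单的代码示意上述"由接收时间与发射时间计算TOF并换算距离"的原理（函数名与数值均为示意，非所述激光雷达的具体实现）：

```python
# 由飞行时间(TOF)换算探测物到激光雷达的距离的简单示意
C = 299792458.0  # 光速，单位 m/s

def tof_to_distance(t_emit, t_receive):
    """由发射时间与接收时间计算距离：激光往返一次，故除以2。"""
    tof = t_receive - t_emit
    return C * tof / 2.0

# 例如：往返飞行时间为1微秒时，对应距离约149.9米
d = tof_to_distance(0.0, 1e-6)
```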
因此,比起使用相机进行同时定位和地图构建,使用激光雷达进行同时定位和地图构建有着显著优势。在使用激光雷达进行同时定位和地图构建的过程中,通过激光雷达在不同位置对环境观测的几何差异,可以估计出激光雷达的位姿,并根据激光雷达的位姿构建出环境地图。
进一步地,考虑到仅依赖激光雷达定位,容易受外界环境影响,比如雷达被遮挡,或者进入了空旷地带等场景下,激光雷达无法采集到足够的有效数据,所以需要寻找在特性上和它互补的传感器做融合。为了提高定位精度,可以在激光雷达中搭载IMU(惯性测量单元),得到激光雷达-惯性里程计,一般的,一个惯性测量单元包含了三个单轴的加速度计和三个单轴的陀螺,IMU可以输出加速度和角速度信息,对这些量(加速度和角速度信息)积分可以算得IMU位姿,可以在激光雷达无法采集到足够的有效数据时提供运动信息;融合激光雷达采集的点云和IMU采集的运动数据,可以有效提高定位精度和鲁棒性。
其中,所述激光雷达和所述IMU可以以相同的帧率进行数据采集;或者,所述激光雷达和所述IMU也可以以不同的帧率进行数据采集,但满足在预设的时间周期内能够获取到激光雷达采集的点云以及IMU采集的运动数据。
在一示例性的实施例中,请参阅图1,图1示出了使用激光雷达-惯性里程计进行同时定位和地图构建的系统示意图。其中,所述激光雷达-惯性里程计可搭载于可移动平台中,所述可移动平台包括但不限于无人机、自动驾驶汽车、物流机器人或者清洁机器人等。
在图1所示的系统中,包括特征提取模块和位姿估计模块,在所述特征提取模块中,所述激光雷达按照预设帧率采集点云,然后对每帧点云进行特征提取以获得点云中的三维点的特征(比如线特征、面特征或者其他特征),进而将获取的每帧点云中的三维点的特征传输给所述位姿估计模块。
在所述位姿估计模块中,在所述激光雷达按照预设帧率采集点云的过程中,在同一时间段内所述IMU也同步采集运动数据;并且由于在系统的初始时刻,重力相对于当前激光雷达坐标系的方向未知,且IMU加速度计的读数是受重力影响的,如果重力 方向未知,IMU则无法积分获取位姿信息,同时IMU的初始速度,加速度计和陀螺仪偏置也是未知的,因此可以先对IMU采集的运动数据进行预积分处理获得预积分量;进而可以基于所述预积分量对IMU的状态量(初始速度,加速度计和陀螺仪偏置)和重力方向进行初始化处理。
其中,在对相邻帧点云配准之前需要解决点云的自运动畸变问题,畸变产生的原因是一帧点云中的点不是在同一时刻采集的,在采集过程中,在同一时刻可能采集到一帧点云中的一个或数个三维点的数据,激光雷达在采集一帧点云的过程中随着载体(如可移动平台)在运动,但是三维点的深度指的是物体和激光雷达之间的距离,所以不同三维点所在的坐标系就不一样了,即点云中的每个点均对应有独立的坐标系,从而保证点在对应的坐标系下的三维坐标的准确性,但在后续利用三维点云进行位姿估计时,需要点云中的所有三维点在同一坐标系下,因此需要对所述点云进行重投影,把采集过程中激光雷达的运动计算出来,并在对应的三维点上补偿这个运动量,从而将一帧点云中的所有三维点都转换成在同一坐标系下的三维点。
在一些实施例中,可以根据实际需要选择IMU预积分或者匀速运动模型来消除点云的自运动畸变。例如在IMU预积分方式中,可以使用IMU采集的运动数据进行预积分得到预积分量,进而利用预积分量、所述IMU和所述激光雷达之间的位置转换关系,确定采集一帧点云的过程中激光雷达的运动量,并使用该运动量进行点云畸变补偿。例如在匀速运动模型中,可假设点云的一个采集周期(比如100ms)内为匀速运动,可以采用旋转+平移补偿。在消除了点云的自畸变后,可以进行位姿估计过程。
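下面给出匀速运动模型下点云自运动畸变补偿的一个极简示意（仅演示平移补偿，旋转补偿同理；接口与参数均为假设，非本申请的具体实现）：

```python
import numpy as np

def undistort_constant_velocity(points, timestamps, frame_motion, period):
    """按匀速运动模型把一帧内不同时刻采集的点统一到帧末坐标系。

    points: (N,3) 各三维点坐标; timestamps: (N,) 各点采集时刻(0~period);
    frame_motion: (3,) 整个采集周期内载体的平移量; period: 采集周期。
    每个点补偿掉其采集时刻之后载体还将发生的那部分运动量。
    """
    points = np.asarray(points, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    # 剩余运动比例：采集越早的点，需要补偿的运动越多
    remaining = (period - t) / period
    return points - remaining[:, None] * np.asarray(frame_motion, dtype=float)
```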
在位姿估计过程中,如果所述IMU初始化成功,可以进入LIO模式,所述激光雷达从局部特征地图中获取上一帧点云的相关数据(如上一帧点云及其提取的特征),然后将当前帧点云和上一帧点云进行配准,确定当前帧点云和上一帧点云的相对位姿,进而可以耦合所述当前帧点云和上一帧点云的相对位姿和所述IMU采集的运动数据,确定激光雷达-惯性里程计的位姿;例如可以采用图优化方式,建立位姿图模型,并利用所述当前帧点云和上一帧点云的相对位姿和所述IMU采集的运动数据优化所述位姿图模型,得到激光雷达-惯性里程计的位姿。
如果所述IMU初始化失败,则进入LO模式,即使用激光雷达进行位姿估计,比如可以使用帧到模型的点云配准方法来估计激光雷达-惯性里程计的位姿,同时再次对IMU的状态量(初始速度,加速度计和陀螺仪偏置)和重力方向进行初始化直到初始化成功。
在所述位姿估计模块中,还需进行特征地图管理,包括全局特征地图和局部特征地图;所述局部特征地图维护有在当前帧之前的几帧点云的特征,在点云配准过程中,可以从所述局部特征地图中获取上一帧点云的特征进行配准。所述全局特征地图维护有所述激光雷达采集的所有点云的特征。
在一些场景中,在搭载有激光雷达-惯性里程计的可移动平台在运动过程中,通常会有很多动态物体,如自动驾驶车辆在城市场景中移动时,在其运动路径中会出现有其他车辆或者行人等,而有些动态物体会影响激光雷达-惯性里程计的定位鲁棒性和精度。
基于此,请参阅图2,本申请实施例提供了一种激光雷达-惯性里程计的位姿估计方法,包括:
在步骤S101中,获取激光雷达采集的点云。
在步骤S102中,对所述点云进行聚类,确定表征动态物体的前景点。
在步骤S103中,使用所述点云中除所述前景点之外的非前景点进行位姿估计。
本实施例中，在激光雷达采集点云之后，可以先对所述点云进行聚类处理，确定所述点云的至少两个类簇，其中一个类簇为表征动态物体的前景点，另一个类簇为除所述前景点之外的非前景点。在进行位姿估计的过程中，可以去除表征动态物体的前景点，使用所述点云中除所述前景点之外的非前景点进行位姿估计，从而能够有效消除动态物体的影响，提高激光雷达-惯性里程计的定位鲁棒性和精度。
比如在图1所示的系统中,可以在激光雷达采集点云之后,对所述点云进行聚类处理,确定表征动态物体的前景点,然后去除表征动态物体的前景点,使用所述点云中除所述前景点之外的非前景点进行特征提取,进而进行位姿估计。
示例性的，本实施例对于聚类处理过程中使用的聚类算法不做任何限制，可依据实际应用场景进行具体设置，例如可以使用欧式聚类方法将所述点云分割成3个类簇，分别为表征动态物体的前景点、地面点和背景点；其中表征动态物体的前景点不参与后续的位姿估计过程，以消除动态物体的影响。当然，也可以采用其他聚类方法如SOM神经网络聚类算法等。
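欧式聚类可以用"半径邻域+广度优先扩张"简单实现，下面是一个纯演示性的最小示例（实际应用中通常配合KD树等空间索引加速，且还需按前景/地面/背景进一步区分类簇）：

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius):
    """将点云按欧氏距离阈值聚成若干类簇，返回每个点的簇标签。"""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = np.full(n, -1, dtype=int)
    cluster_id = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = cluster_id
        while queue:  # 广度优先扩张：把半径内的未标记点并入当前簇
            i = queue.popleft()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.nonzero(dists <= radius)[0]:
                if labels[j] == -1:
                    labels[j] = cluster_id
                    queue.append(j)
        cluster_id += 1
    return labels
```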
在一些实施例中,在激光雷达-惯性里程计中,可以融合激光雷达和惯性里程计,以有效提高定位精度和鲁棒性。则可以使用所述点云中的非前景点和IMU采集的运动数据进行位姿估计,确定激光雷达-惯性里程计的位姿;所述点云和所述运动数据在同一时间段内采集得到;比如使用相邻两帧点云中的非前景点进行位姿估计,确定相邻两帧点云的相对位姿,进而根据相邻两帧点云的相对位姿和IMU在同一时间内采集的运动数据估计激光雷达-惯性里程计的位姿;激光雷达和IMU两者紧密耦合,有利于提高定位精度和鲁棒性。
相关技术中针对于IMU参数(初始速度,加速度计和陀螺仪偏置和重力方向)是分步进行的,如将IMU初始化分解为多个子问题:估计陀螺仪偏置→忽略加速度计偏置,估计尺度和重力矢量→估计加速度计偏置,进一步优化尺度和重力→估计初始速度。发明人发现,上述IMU参数初始化过程忽略参数相关性,各参数独立求解,使IMU参数估计不准确;由于没有先验约束,因此要求加速度的变化足够才会开始初始化且在初始化的过程中IMU要保持持续运动,初始化场景受限;而且因没有先验约束,在IMU采集的运动数据不足时,IMU的状态参数(如初始速度,加速度计和陀螺仪偏置)能观性很差,很容易发散。
针对于相关技术中的问题,请参阅图3,本申请实施例提供了一种激光雷达-惯性里程计的位姿估计方法,包括:
在步骤S201中,确定激光雷达采集点云时的位姿,以及,获取对IMU采集的运动数据进行预积分得到的预积分量;其中,所述点云和所述运动数据在同一时间段内采集得到。
在步骤S202中,根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,对所述IMU参数进行初始化。
在步骤S203中,根据初始化后的IMU采集的运动数据以及所述激光雷达采集的点云进行位姿估计。
本实施例中,由于有IMU参数的预设先验值的约束,即使IMU采集的运动数据不足,也可以让IMU参数收敛于先验值,实现在任意状态下均能成功初始化IMU参数,初始化场景不受限,而且综合考虑了激光雷达-惯性里程计的不确定性(或者说噪声)以及各个IMU参数之间的相关性,根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,一步联合优化求解所述IMU参数,从而获得最大后验概率意义下的最优解,进一步提高后续进行位姿估计的准确性和鲁棒性。
其中,对IMU参数进行初始化的过程中可以借助激光雷达在采集点云时的位姿,可以根据所述激光雷达与IMU之间的位置变换关系,将激光雷达采集点云时的位姿变换至IMU所在的惯性坐标系下,进而使用变换后的激光雷达的位姿参与求解IMU参数。
在一些实施例中，通过激光雷达在不同位置对环境观测的几何差异，可以估计出激光雷达的位姿，获得所述激光雷达在采集点云时的位姿。示例性的，以确定所述激光雷达采集当前帧点云时的位姿进行举例说明：可以获取所述激光雷达采集的当前帧点云，然后确定所述当前帧点云和上一帧点云中匹配的三维点对，并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿；最后根据所述当前帧点云和上一帧点云的相对位姿，确定所述激光雷达采集当前帧点云时的位姿。示例性的，所述激光雷达的初始位姿（即采集首帧点云时的位姿）可以通过其他传感器（如GNSS模块）测量得到，或者通过标定得到。
在一个例子中，可以使用ICP算法来迭代计算所述当前帧点云和上一帧点云的相对位姿，对于所述当前帧点云和上一帧点云，重复执行以下步骤直到达到预设的迭代要求：确定所述当前帧点云和上一帧点云中匹配的三维点对，并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿。可以理解的是，所述预设的迭代要求可依据实际应用场景进行设置，本实施例对此不做任何限制，比如所述预设的迭代要求可以是达到预设的迭代次数，或者达到预设的迭代时长，或者匹配的三维点对之间的距离最小。
在每次迭代过程中确定匹配的三维点对时,可以将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中,然后对于转换后的三维点,在预设搜索范围内搜索所述当前帧点云中与其最邻近的三维点,获得匹配的三维点对。其中,在首次迭代过程中,可以使用初始位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;其中,所述初始位姿根据所述激光雷达采集上一帧点云时的位姿以及所述IMU采集的运动数据确定。在非首次迭代过程中,可以使用上一次迭代过程确定的所述当前帧点云和上一帧点云的相对位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中。
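上述"最近邻搜索—求解相对位姿—再搜索"的迭代过程，可以用一个简化的点到点ICP示意（用SVD闭式求解刚体变换，最近邻用暴力搜索代替，纯演示，非本申请的具体实现）：

```python
import numpy as np

def best_rigid_transform(src, dst):
    """SVD(Kabsch)法求使 R@src+t 最贴近 dst 的刚体变换。"""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # 修正反射，保证是旋转
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(prev_cloud, curr_cloud, iters=10):
    """把上一帧点云向当前帧点云配准，返回估计的相对位姿 (R, t)。"""
    src = np.asarray(prev_cloud, dtype=float).copy()
    dst = np.asarray(curr_cloud, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 最近邻匹配（暴力搜索；实际中会限定预设搜索范围）
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```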
其中，在非高速运动场景中，即所述激光雷达的运动速度小于或等于预设速度值时，所述激光雷达采集的相邻两帧点云的位移较小，认为在该场景下估计的所述激光雷达的位姿足够准确，可以用于IMU参数的初始化过程；其中，在非高速运动场景中，在每次迭代过程中确定匹配的三维点对时，所述预设搜索范围相同。
在高速运动场景中，即所述激光雷达的运动速度大于预设速度值时，所述激光雷达采集的相邻两帧点云的位移较大，使点云配准过程容易陷入局部最优解，在该场景下估计的所述激光雷达的位姿可能不准，具体原因在于，在高速运动时，以自动驾驶车辆举例，相邻两帧点云的位移可能有两到三米，在每次迭代过程中确定匹配的三维点对时，如果三维点的搜索范围太小，估计的所述激光雷达的位姿可能不准，而如果搜索范围太大，会出现很多错误的特征关联，引入很多杂点。因此，在所述激光雷达处于高速运动场景下进行位姿估计时，在迭代计算所述相邻两帧点云的相对位姿的过程中，在首次迭代过程确定匹配的三维点对时，可以将所述预设搜索范围增大，即首次确定的所述预设搜索范围比在所述激光雷达处于非高速运动场景下的预设搜索范围大，从而可以找到匹配的三维点对，提高位姿估计的准确性，且为了避免搜索范围增大引入的过多杂点的影响，可以随着迭代次数的增多，所述预设搜索范围逐渐减小，即所述预设搜索范围的尺寸与迭代次数呈负相关关系。通过对迭代过程中的预设搜索范围的调整，可以在杂点过多以及无法获得匹配的三维点对两者之间取得平衡，可以提高在高速运动场景下估计的激光雷达的位姿的准确性，进而可以借助估计的激光雷达的位姿提高IMU参数初始化的成功率。
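"首次增大预设搜索范围、随迭代次数递减"的策略可以用如下示意函数表达（其中的半径与衰减系数均为假设值）：

```python
def search_radius(iteration, high_speed, base_radius=0.5,
                  initial_scale=4.0, decay=0.5, min_radius=None):
    """返回第 iteration 次迭代(从0计)的最近邻搜索半径。

    非高速场景：各次迭代使用相同的 base_radius；
    高速场景：首次放大 initial_scale 倍，之后按 decay 逐次衰减，
    不低于 base_radius（即搜索范围尺寸与迭代次数呈负相关）。
    """
    if not high_speed:
        return base_radius
    if min_radius is None:
        min_radius = base_radius
    return max(min_radius, base_radius * initial_scale * decay ** iteration)
```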
其中,对IMU参数进行初始化的过程中还可以借助在所述激光雷达采集所述点云的同一时间段内所述IMU采集的运动数据,可以对所述IMU采集的运动数据进行预积分得到预积分量。
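IMU预积分的离散递推可以示意如下（忽略偏置与噪声修正项，so3指数映射用Rodrigues公式实现，纯演示）：

```python
import numpy as np

def so3_exp(w):
    """旋转向量 -> 旋转矩阵（Rodrigues公式）。"""
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3)
    a = w / theta
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """对一段时间内的角速度/加速度测量做预积分。

    返回旋转预积分量 dR、速度预积分量 dv、平移预积分量 dp。
    """
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_rot = dR @ a
        dp = dp + dv * dt + 0.5 * a_rot * dt * dt
        dv = dv + a_rot * dt
        dR = dR @ so3_exp(np.asarray(w, dtype=float) * dt)
    return dR, dv, dp
```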
在一些实施例中,综合考虑了激光雷达-惯性里程计的不确定性(或者说噪声)以及各个IMU参数之间的相关性,可以根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,一步联合优化求解所述IMU参数。示例性,可以 通过构建一个因子图问题一次性求解IMU状态量和重力方向,该因子图问题实质上是一个最大后验估计问题,基于最大后验估计的初始化方式,一步联合优化求解所述IMU参数,得到最大后验概率意义下的最优解。
在一示例性的实施例中,请参阅图4,可以根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,建立因子图模型;使用所述因子图模型对所述IMU参数进行初始化。本实施例综合考虑了激光雷达-惯性里程计的不确定性(或者说噪声)以及各个IMU参数之间的相关性,使用因子图模型一步联合优化求解所述IMU参数,有利于提高初始化的IMU参数的准确性。
其中，所述因子图模型包括变量节点（图4中的椭圆形）和因子节点（图4中的矩形）；所述变量节点指示所述IMU参数的待求解量，所述IMU参数包括加速度计的偏置、陀螺仪的偏置、重力方向（重力相对于激光雷达坐标系的方向）和/或初始速度，其中，v_i、v_{i+1}和v_{i+2}表示不同时刻下所述IMU的速度，b_g表示陀螺仪的偏置，b_a表示加速度计的偏置，R_wg表示重力加速度的方向；以及图4中的三角形表示激光雷达在T_i、T_{i+1}和T_{i+2}三个时刻的位姿。可以理解的是，图4所示的3个IMU速度的变量节点仅为举例说明，并不构成对IMU速度的变量节点数量的限制。
请参阅图4,所述因子节点指示第一残差和第二残差。所述第一残差根据所述IMU参数的预设先验值和所述IMU参数的待求解量确定;所述第二残差根据所述激光雷达采集点云时的位姿(图4中的三角形)、所述预积分量和所述IMU参数的待求解量确定。
所述因子图模型所表示的最大后验估计问题可以转换为最小二乘问题，其目标函数为：

Θ* = argmin_Θ { ||r_p||²_{Σ_p} + ∑_i ||r_{I_{i,i+1}}||²_{Σ_{I_{i,i+1}}} }

其中，Θ为要优化的IMU参数，Θ={R_wg, b_a, b_g, v_{0:k}}，即重力方向、加速度计和陀螺仪偏置、以及IMU速度；r_p为第一残差，r_{I_{i,i+1}}为第二残差。该目标函数的优化目标为获得所述第一残差和所述第二残差之和最小时对应的所述IMU参数的待求解量，即初始化后所述IMU参数的值包括：所述第一残差和所述第二残差之和最小时对应的所述IMU参数的待求解量。

其中，Σ_p为第一残差的协方差矩阵，其用于确定所述第一残差在IMU参数初始化过程中的权重，所述第一残差的协方差可以根据给定先验值的不确定性来估计，或者说所述第一残差的权重根据所述预设先验值的噪声所确定；Σ_{I_{i,i+1}}为第二残差的协方差矩阵，其用于确定所述第二残差在IMU参数初始化过程中的权重，所述第二残差的协方差可以在预积分的过程中进行传递，或者说，所述第二残差的权重根据所述IMU在预积分过程中累积的噪声确定，所述第二残差的权重与所述噪声呈负相关关系，即噪声越大，第二残差的权重越小。
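先验残差与预积分残差按协方差加权联合求解的机制，可以用一个一维玩具例子演示（闭式解即信息加权平均；变量均为示意，与具体IMU参数无关）：

```python
def map_estimate(prior_value, prior_var, measurements, meas_vars):
    """最小化 加权先验残差平方 + 各加权测量残差平方 的闭式解。

    目标: J(x) = (x - prior)^2 / prior_var + sum_i (x - z_i)^2 / var_i
    噪声越大(方差越大)，对应残差的权重越小。
    """
    info = 1.0 / prior_var + sum(1.0 / v for v in meas_vars)
    mean = prior_value / prior_var + sum(z / v for z, v in zip(measurements, meas_vars))
    return mean / info
```

当测量方差极大（数据不足）时，解收敛于先验值，对应"让IMU参数收敛于先验值"的行为；当测量足够准确时，解由测量主导。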
所述第一残差根据所述IMU参数的预设先验值和所述IMU参数的待求解量之间的差值确定。在一个例子中,由于重力和加速度计偏置耦合在一起,在没有足够的运动时加速度计的偏置很难被观测到,在这里如果运动不够大时,加速度计偏置会倾向于维持在先验值附近,当它可以被观测到时会收敛于真值,因此所述加速度计的偏置的预设先验值可以设置为0。所述陀螺仪的偏置的预设先验值也可以设置为0。另外,由于重力和加速度计偏置耦合在一起,所述重力方向可以根据所述IMU采集的N个加速度的平均值的方向确定,N为大于1的整数,比如N为20,因为在初始时刻运动不够大时运动加速度远小于重力加速度,所以这种做法可以给出重力的大概方向。所述初始速度的预设先验值可以根据所述激光雷达采集点云时的位姿估计得到。
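由N个加速度读数的平均值给出重力大致方向先验的做法，可以示意如下（纯演示）：

```python
import numpy as np

def gravity_direction_prior(accel_samples):
    """取N个加速度计读数的均值方向作为重力方向先验(单位向量)。

    假设初始时刻运动加速度远小于重力加速度，均值方向即近似重力方向。
    """
    mean = np.asarray(accel_samples, dtype=float).mean(axis=0)
    return mean / np.linalg.norm(mean)
```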
所述第二残差包括旋转残差、平移残差和速度残差，即 r_I = [r_{ΔR_ij}, r_{Δv_ij}, r_{Δp_ij}]，其中，r_{ΔR_ij}表示旋转残差，r_{Δp_ij}表示平移残差，r_{Δv_ij}表示速度残差。所述预积分量包括速度预积分量、平移预积分量和旋转预积分量；和/或，所述激光雷达采集点云时的位姿包括位置量和姿态量。

针对于所述旋转残差，所述旋转残差表征相邻两个时刻之间两个相对旋转量的差异，其中一个相对旋转量根据所述激光雷达在所述相邻两个时刻分别对应的姿态量确定，另一个相对旋转量根据所述旋转预积分量和所述陀螺仪的偏置的待求解量确定。即

r_{ΔR_ij} = Log( (ΔR_ij(b_g))^T · R_i^T R_j )

其中，ΔR_ij表示在i时刻和j时刻之间的旋转预积分量，其为所述陀螺仪的偏置的待求解量b_g的函数，R_i^T R_j表示所述激光雷达在i时刻和j时刻的相对姿态量。

针对于所述速度残差，所述速度残差表征相邻两个时刻之间两个相对速度的差异；其中一个相对速度根据所述IMU在所述相邻两个时刻分别对应的速度的待求解量、所述激光雷达其中一个姿态量以及重力方向的待求解量确定，另一个相对速度根据所述速度预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。即

r_{Δv_ij} = R_i^T ( v_j − v_i − R_wg·g_I·Δt_ij ) − Δv_ij(b_g, b_a)

其中，R_i为所述激光雷达在i时刻的姿态量，v_i、v_j表示IMU在i时刻和j时刻分别对应的速度，R_wg表示重力加速度的方向，g_I=[0 0 −9.805]^T为重力在东北天坐标系下的坐标，Δt_ij表示i时刻和j时刻之间的时间间隔，Δv_ij表示在i时刻和j时刻之间的速度预积分量，b_g表示所述陀螺仪的偏置的待求解量，b_a表示所述加速度计的偏置的待求解量。

针对于所述平移残差，所述平移残差表征相邻两个时刻之间两个相对平移量的差异；其中一个相对平移量根据所述激光雷达在所述相邻两个时刻分别对应的位置量、其中一个姿态量以及重力方向的待求解量确定，另一个相对平移量根据所述平移预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。即

r_{Δp_ij} = R_i^T ( p_j − p_i − v_i·Δt_ij − (1/2)·R_wg·g_I·Δt_ij² ) − Δp_ij(b_g, b_a)

其中，R_i为所述激光雷达在i时刻的姿态量，p_i、p_j表示所述激光雷达在i时刻和j时刻分别对应的位置量，Δp_ij表示在i时刻和j时刻之间的平移预积分量。
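上述三个残差可以按公式直接实现，如下示意（将R_wg·g_I合并记为重力向量g，忽略噪声与雅可比；仅验证"状态与预积分量一致时残差为零"的性质，纯演示）：

```python
import numpy as np

def so3_log(R):
    """旋转矩阵 -> 旋转向量（对数映射）。"""
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def imu_residuals(Ri, pi, vi, Rj, pj, vj, dR, dv, dp, g, dt):
    """旋转残差、速度残差、平移残差（偏置修正视为已并入dR、dv、dp）。"""
    r_rot = so3_log(dR.T @ Ri.T @ Rj)                            # Log(ΔR^T·Ri^T·Rj)
    r_vel = Ri.T @ (vj - vi - g * dt) - dv                       # 两个相对速度之差
    r_pos = Ri.T @ (pj - pi - vi * dt - 0.5 * g * dt * dt) - dp  # 两个相对平移之差
    return r_rot, r_vel, r_pos
```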
示例性的,在如图1所示的系统中,本申请实施例提供的IMU参数的初始化过程可以应用于图1所示的IMU初始化阶段。
SLAM领域的一个痛点是在部分场景下会发生退化,比如在隧道中位姿会沿着隧道方向退化,以及在高速公路上特征点比较少的时候也会退化。发明人发现,退化问题往往都是由于特征不足引起的。例如在隧道中,几乎没有线特征,而平面特征也几乎都是平行于前进方向的,这就导致优化问题在前进方向上没有约束,使系统沿该方向发生退化。而在高速公路问题中,由于环境比较空旷,激光雷达能提取的特征点变少,特征点的范围较小,导致一些方向约束不够,发生退化。
基于上述问题，请参阅图5，本申请实施例提供了一种激光雷达-惯性里程计的位姿估计方法，所述方法包括：
在步骤S301中,获取激光雷达采集的当前帧点云。
在步骤S302中,提取当前帧点云中的线特征、面特征和第三类特征;其中,所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同。
在步骤S303中,根据所述当前帧点云和上一帧点云两者中的所述线特征、所述面特征和所述第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿。
在步骤S304中,根据所述当前帧点云与上一帧点云的相对位姿进行位姿估计。
本实施例中，考虑到在退化场景中特征点主要分布在近处，一些远处的路牌，路杆，以及与入射光线夹角比较小的墙面都很难提取出特征，这使特征点的分布区域比较小，某些自由度的约束不够，容易发生退化。因此本实施例在对点云进行特征提取时，除了提取线特征和面特征之外，还另外从点云中提取了第三类特征，所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同，比如在车辆驾驶场景中，一些不规则的点（如树冠）可以提供一些有用的特征（即第三类特征），从而增大了特征点的分布，能够在退化场景中为位姿估计过程提供更强的约束，有利于提高位姿估计的准确性。
可理解的是,可以在任意场景中均提取上述三类特征(线特征、面特征和第三类特征);或者,考虑到非退化场景中提取的线特征和面特征已足够用于位姿估计,则可以在退化场景中提取所述第三类特征,在非退化场景中不提取所述第三类特征。本实施例对此不做任何限制。
在一个例子中,所述激光雷达搭载于可移动平台中,所述第三类特征可以是在所述可移动平台处于预设退化场景下提取的,所述预设退化场景包括但不限于隧道场景、高速公路场景和/或空旷场景等,从另一个角度说,在所述预设退化场景中,从所述激光雷达采集的点云中提取的线特征的数量少于预设数量值和/或所述面特征指示的平面基本平行于所述可移动平台的前进方向,即所述第三类特征可以是在所述线特征的数量少于预设数量值和/或所述面特征指示的平面基本平行于所述可移动平台的前进方向的情况下提取的。
对于提取所述点云中的线特征,可以根据所述点云中曲率大于预设值的三维点和/或孤立的三维点提取线特征。
对于提取所述点云中的面特征和不规则特征,可以使用主成分分析法来区别面特征和第三类特征。所述面特征中沿法向量方向的第一特征值远小于垂直于法向量方向的两个第二特征值;所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同。示例性的,根据所述当前帧点云的协方差确定所述当前帧点云中的三维点的第一特征值和两个第二特征值;若所述第一特征值与所述第二特征值的比值小于预设阈值,确定所述三维点的面特征;若所述第一特征值与所述第二特征值的比值大于或等于所述预设阈值,确定所述三维点的第三类特征;其中,所述面特征中沿法向量方向的第一特征值远小于垂直于法向量方向的两个第二特征值,则所述第一特征值与所述第二特征值的比值越小,越有可能提取出面特征。
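基于主成分分析区分面特征与第三类特征的判据可以示意如下（阈值为假设值；实际中阈值还可随三维点深度调整，纯演示）：

```python
import numpy as np

def classify_neighborhood(points, ratio_threshold=0.1):
    """对邻域点做PCA，返回 'surface'（面特征）或 'third'（第三类特征）。"""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))  # 升序：最小值对应法向量方向
    first = eigvals[0]            # 沿法向量方向的第一特征值
    second = eigvals[1:].mean()   # 垂直于法向量方向的两个第二特征值
    # 面特征：第一特征值远小于第二特征值；否则视为第三类特征
    return 'surface' if first / second < ratio_threshold else 'third'
```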
在一些实施例中,考虑到随着所述三维点的深度越大,所能提取的特征的数量越少,因此为了使特征点(所述特征点指可以提取出线特征、面特征或者第三类特征的三维点)在空间中的分布尽可能均匀,所述三维点的不同深度可以对应不同的预设阈值;所述不同的预设阈值用于使从不同深度的三维点中获取的面特征和/或第三类特征的数量差值在预设差值内,示例性的,所述三维点的深度与所述预设阈值呈正相关关系,即所述三维点的深度越大,所述预设阈值越大。
在提取出当前帧点云的线特征、面特征和第三类特征之后,对于所述多帧点云中的相邻两帧点云,可以根据所述当前帧点云和上一帧点云两者中的线特征、面特征和第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿。其中,针对于所述当前帧点云与上一帧点云,可以重复执行上述步骤(确定所述相邻两帧点云的相对位姿的步骤)直到达到预设的迭代要求,比如所述预设的迭代要求包括:达到预设的迭代次数,或者,达到预设的迭代时长,或者还可以是以下说明的迭代要求。
在每一次迭代过程中,针对于所述当前帧点云与上一帧点云,可以将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中。其中,在首次迭代过程中,使用初始位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;其中,所述初始位姿根据所述激光雷达采集上一帧点云时的位姿以及所述IMU采集的运动数据确定。在非首次迭代过程中,使用上一次迭代过程确定的所述当前帧点云与上一帧点云的相对位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中。
在将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中之后，针对于线特征可以使用点到线的匹配方式，针对于面特征和不规则特征可以使用点到面的匹配方式。
比如对于属于线特征的转换后的三维点,可以在所述当前帧点云中确定与其匹配的直线,所述与其匹配的直线包括:所述当前帧点云中与所述属于线特征的转换后的三维点距离最近的多个三维点组成的直线;然后获取所述转换后的三维点到匹配的直线的第一距离。其中,可以使用最小化点到线的距离的方式进行迭代优化,该迭代优化过程的优化目标(或者说迭代要求)为求解所述第一距离最小时对应的所述相邻两帧点云的相对位姿。
在退化场景中,面特征只能提供沿法向量方向的约束,因此沿前进方向几乎没有约束。发明人发现,针对于面特征使用点到面的匹配方式,如果使用最小化点到面的距离的迭代优化方式,相当于认为匹配点在平面法向量方向的不确定性(或者说噪声)很小,而垂直于法向量方向的不确定性(或者说噪声)无限大;而实际上点在垂直于平面法向量方向是有一定确定性的,因此本实施例将这个不确定性设置为一个比较大的值而不是无穷,从而提供退化方向的约束。
因此对于属于面特征的转换后的三维点,在所述当前帧点云中确定与其匹配的第一平面,所述与其匹配的第一平面包括:所述当前帧点云中与所述属于面特征的转换后的三维点距离最近的多个三维点组成的平面,然后确定所述转换后的三维点在所述第一平面上的投影点,最后获取所述转换后的三维点与所述投影点之间的第二距离。本实施例中,使用最小化第二距离的方式进行迭代优化,该迭代优化过程的优化目标为求解所述第二距离最小时对应的所述相邻两帧点云的相对位姿;并且在所述第一平面中,为沿法向量方向的第一特征值设置小方差,为垂直于法向量方向的两个第二特征值的设置大方差,换句话说,垂直于法向量方向的两个第二特征值的方差远大于沿法向量方向的第一特征值的方差,且所述垂直于法向量方向的两个第二特征值的方差非无穷大,从而可以提供退化方向约束。
而对于属于第三类特征的转换后的三维点,可以在所述当前帧点云中确定与其匹配的第二平面,所述与其匹配的第二平面包括:所述当前帧点云中与所述属于第三类特征的转换后的三维点距离最近的多个三维点组成的平面;然后获取所述转换后的三维点到所述第二平面的第三距离。其中,可以使用最小化点到面的距离的方式进行迭代优化,该迭代优化过程的优化目标为求解所述第三距离最小时对应的所述相邻两帧点云的相对位姿。
在获取所述第一距离、第二距离和第三距离之后,可以根据所述第一距离、所述第二距离和/或所述第三距离,确定所述相邻两帧点云的相对位姿。至此,确定所述相邻两帧点云的相对位姿的一次迭代过程结束。
示例性的,确定所述相邻两帧点云的相对位姿的优化迭代过程所需满足的迭代要求可以是:达到预设的迭代次数,或者,达到预设的迭代时长,或者,所述第一距离、所述第二距离和/或所述第三距离最小。可依据实际应用场景进行具体设置。
在一示例性的实施例中,针对于相邻两帧点云,即点云A和点云B。
对于线特征之间的匹配过程：针对于点云A中的属于线特征的三维点p_sel，转换为点云B坐标系下的Tp_sel，T为激光雷达采集点云B时的位姿；进而确定点云B中与三维点Tp_sel最近的多个三维点组成的直线以及直线上任意两点p_a、p_b，计算三维点Tp_sel到该直线的第一距离

d_1 = |(Tp_sel − p_a) × (Tp_sel − p_b)| / |p_a − p_b|

其中，×为向量叉乘。迭代优化目标是：找到使Tp_sel与直线之间的第一距离最小时对应的激光雷达的相对位姿。
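上述第一距离（点到直线）的叉乘计算可以示意如下：

```python
import numpy as np

def point_to_line_distance(p, pa, pb):
    """点 p 到经过 pa、pb 两点的直线的距离（向量叉乘法）。"""
    p, pa, pb = (np.asarray(x, dtype=float) for x in (p, pa, pb))
    return np.linalg.norm(np.cross(p - pa, p - pb)) / np.linalg.norm(pa - pb)
```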
对于面特征之间的匹配过程：针对于点云A中的属于平面特征的三维点p_sel，转换为点云B坐标系下的Tp_sel，T为激光雷达采集点云B时的位姿；进而确定点云B中与三维点Tp_sel最近的一个或多个三维点组成的第一平面，找到三维点Tp_sel在该平面上的投影点p_proj：

p_proj = Tp_sel − (w^T·Tp_sel + d)·w

确定Tp_sel和p_proj之间的第二距离。其中，优化函数为

(Tp_sel − p_proj)^T · Σ^{-1} · (Tp_sel − p_proj)

其优化目标是：求解使Tp_sel和p_proj之间的第二距离最小时对应的激光雷达的相对位姿。其中，Σ为第一平面的协方差矩阵，其为沿法向量的方向和垂直于法向量的两个方向提供不同的权重，设置方差在沿法向量的方向上很小，且方差在垂直于法向量的两个方向上很大，从而提供退化方向的约束。假定三维坐标系中的X轴为所述第一平面沿法向量的方向，则

Σ = R_n · diag(σ², (s·σ)², (s·σ)²) · R_n^T

其中，σ为雷达测量标准差，s为一个值很大的系数。对于旋转矩阵R_n，存在一个旋转矩阵将e_1旋转到平面法向量方向：R_n·e_1 = w，R_n可以通过SVD分解的方法来求解，对e_1·w^T进行SVD分解 e_1·w^T = U·S·V^T，则有：R_n = V·U^T。
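构造该各向异性协方差矩阵的过程可以示意如下（对e_1·wᵀ做SVD得到R_n，再组合对角方差；对角形式为依据上下文的合理假设，纯演示）：

```python
import numpy as np

def rotation_e1_to_normal(w):
    """求旋转矩阵 R_n 使 R_n @ e1 = w（对 e1·wᵀ 做SVD: R_n = V·Uᵀ）。"""
    e1 = np.array([1.0, 0.0, 0.0])
    U, _, Vt = np.linalg.svd(np.outer(e1, w))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # 修正反射（翻转零奇异值对应的方向），保证是旋转
        Vt = Vt.copy()
        Vt[2] *= -1
        R = Vt.T @ U.T
    return R

def plane_covariance(w, sigma=0.02, s=1000.0):
    """法向方向方差小、切向方向方差大(但非无穷)的平面协方差矩阵。"""
    Rn = rotation_e1_to_normal(w)
    D = np.diag([sigma ** 2, (s * sigma) ** 2, (s * sigma) ** 2])
    return Rn @ D @ Rn.T
```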
针对于第三类特征之间的匹配：针对于点云A中的属于不规则特征的三维点p_sel，转换为点云B坐标系下的Tp_sel，T为激光雷达采集点云B时的位姿；确定点云B中与三维点Tp_sel最近的多个三维点组成的第二平面，计算三维点Tp_sel到该第二平面的第三距离为

d_3 = w^T·Tp_sel + d

其中，w、d为平面参数，优化目标是：找到使Tp_sel到平面的第三距离最小时对应的激光雷达的相对位姿。
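点到第二平面的第三距离 wᵀ·Tp_sel+d 及平面参数的拟合可以示意如下（平面参数用"质心+最小特征向量"拟合，纯演示）：

```python
import numpy as np

def fit_plane(points):
    """由一组近似共面的点拟合平面参数 (w, d)，满足 wᵀx + d ≈ 0。"""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs[:, 0]              # 最小特征值对应的特征向量即法向量
    return w, -w @ centroid

def point_to_plane_distance(p, w, d):
    """点到平面的有符号距离 wᵀp + d（w 为单位法向量）。"""
    return float(np.asarray(w) @ np.asarray(p, dtype=float) + d)
```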
在一些实施例中,在确定所述当前帧点云与上一帧点云的相对位姿之后,可以根据所述当前帧点云与上一帧点云的相对位姿和IMU在同一时间段内采集的运动数据进行位姿估计,确定激光雷达-惯性里程计的位姿。其中,在退化场景中,减小所述激光雷达的权重以及增大所述IMU的权重,实现调整激光雷达和IMU的紧耦合权重,使IMU更充分地约束优化问题。
以激光雷达-惯性里程计搭载于自动驾驶车辆上为例:在隧道场景中,由于场景单一,激光SLAM极易发生退化,而基于本申请的特征提取方式,可以充分地使用到场景中的各类信息,比如隧道里零散的指示牌和墙壁上的凹槽,这些信息与IMU的紧耦合为容易发生退化的方向提供了约束,实现了对退化的抵御。在高速公路场景中,场景较空旷,结构化程度较低,可利用的信息不多,而且高速公路中车辆运动速度快,也很容易发生退化,而基于本申请的特征提取方法,灯杆,路牌,植被等信息都得到了充分的利用,实现在高速公路场景中也能得到出色的鲁棒性,避免了退化的发生。
示例性的,图2实施例中提及的排除前景点的方式可以在图1所示的系统中,在激光雷达采集点云之后进行;图3实施例中提及的IMU初始化的方式可以在图1所示的系统中,在IMU初始化过程中进行;图5实施例中提及的特征提取的方式,可以在图1所示的系统中,在点云特征提取阶段采用,以及图5实施例中提及的配准方式,可以在LIO模式或者LO模式中采用。在实际应用场景中,可以根据实际情况,在激光雷达-惯性里程计的位姿估计过程中采用图2、图3以及图5实施例中提及的部分或全部方法。
相应的,请参阅图6,本申请实施例还提供了一种激光雷达-惯性里程计40,包括:
用于采集点云的激光雷达42、用于采集运动数据的IMU41、用于存储可执行指令的存储器43以及一个或多个处理器44;
其中,所述一个或多个处理器44执行所述可执行指令时,被单独地或共同地配置成执行上述任意一项方法。
示例性的,所述一个或多个处理器44执行所述可执行指令时,被单独地或共同地配置成:获取激光雷达42采集的点云;对所述点云进行聚类,确定表征动态物体的前景点;使用所述点云中除所述前景点之外的非前景点进行位姿估计。
在一实施例中,所述非前景点包括地面点和/或背景点。
在一实施例中,所述聚类包括欧式聚类。
在一实施例中，所述处理器44具体用于：使用所述点云中的非前景点和初始化后的IMU41在同一时间段内采集的运动数据进行位姿估计，确定激光雷达-惯性里程计40的位姿。
示例性的,所述一个或多个处理器44执行所述可执行指令时,被单独地或共同地配置成:
确定激光雷达42采集点云时的位姿,以及,获取对IMU41采集的运动数据进行预积分得到的预积分量;其中,所述点云和所述运动数据在同一时间段内采集得到;
根据所述激光雷达42采集点云时的位姿、所述预积分量以及IMU41参数的预设先验值,对所述IMU41参数进行初始化;
根据初始化后的IMU41采集的运动数据以及所述激光雷达42采集的点云进行位姿估计。
可选地,所述处理器44还用于:根据所述激光雷达42采集点云时的位姿、所述预积分量以及IMU41参数的预设先验值,建立因子图模型;使用所述因子图模型对所述IMU41参数进行初始化。
可选地,所述因子图模型包括变量节点和因子节点;所述变量节点指示所述IMU41参数的待求解量,所述因子节点指示第一残差和第二残差。
其中,所述第一残差根据所述IMU41参数的预设先验值和所述待求解量确定;所述第二残差根据所述激光雷达42采集点云时的位姿、所述预积分量和所述IMU41参数的待求解量确定。
可选地,初始化后所述IMU41参数的值包括:所述第一残差和所述第二残差之和最小时对应的所述IMU41参数的待求解量。
可选地,所述IMU41参数包括加速度计的偏置、陀螺仪的偏置、重力方向和/或初始速度;和/或,所述预积分量包括速度预积分量、平移预积分量和旋转预积分量;和/或,所述激光雷达42采集点云时的位姿包括位置量和姿态量。
可选地,所述第二残差包括旋转残差;所述旋转残差表征相邻两个时刻之间两个相对旋转量的差异;其中一个相对旋转量根据所述激光雷达42在所述相邻两个时刻分别对应的姿态量确定,另一个相对旋转量根据所述旋转预积分量和所述陀螺仪的偏置的待求解量确定。
可选地,所述第二残差包括平移残差;所述平移残差表征相邻两个时刻之间两个相对平移量的差异;其中一个相对平移量根据所述激光雷达42在所述相邻两个时刻分别对应的位置量、其中一个姿态量以及重力方向的待求解量确定,另一个相对平移量根据所述平移预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
可选地,所述第二残差包括速度残差;所述速度残差表征相邻两个时刻之间两个相对速度的差异;其中一个相对速度根据所述IMU41在所述相邻两个时刻分别对应的速度的待求解量、所述激光雷达42其中一个姿态量以及重力方向的待求解量确定,另一个相对速度根据所述速度预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
可选地,所述加速度计的偏置的预设先验值为0;所述陀螺仪的偏置的预设先验值为0;所述重力方向根据所述IMU41采集的N个加速度的平均值的方向确定,N为大于1的整数;和/或,所述初始速度的预设先验值根据所述激光雷达42采集点云时的位姿估计得到。
可选地,所述第一残差的权重根据所述预设先验值的噪声所确定;和/或,所述第二残差的权重根据所述IMU41在预积分过程中累积的噪声确定。
可选地,所述第二残差的权重与所述噪声呈负相关关系。
可选地，所述处理器44还用于：获取所述激光雷达42采集的当前帧点云；确定所述当前帧点云和上一帧点云中匹配的三维点对，并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿；根据所述当前帧点云和上一帧点云的相对位姿，确定所述激光雷达42采集当前帧点云时的位姿。
可选地,所述处理器44还用于:将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;对于转换后的三维点,在预设搜索范围内搜索所述当前帧点云中与其最邻近的三维点,获得匹配的三维点对。
可选地,所述处理器44重复执行以下步骤直到达到预设的迭代要求:确定所述当前帧点云和上一帧点云中匹配的三维点对,并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿;其中,所述预设的迭代要求包括:达到预设的迭代次数,和/或,所述匹配的三维点对之间的距离最小。
可选地,在所述激光雷达42的运动速度大于预设速度值时,所述预设搜索范围的尺寸与迭代次数呈负相关关系。
可选地,在首次迭代过程中,使用初始位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;其中,所述初始位姿根据所述激光雷达42采集上一帧点云时的位姿以及所述IMU41采集的运动数据确定。
在非首次迭代过程中,使用上一次迭代过程确定的所述当前帧点云和上一帧点云的相对位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中。
示例性的,所述一个或多个处理器44执行所述可执行指令时,被单独地或共同地配置成:获取激光雷达42采集的当前帧点云;
提取所述当前帧点云中的线特征、面特征和第三类特征;其中,所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同;
根据所述当前帧点云和上一帧点云两者中的所述线特征、所述面特征和所述第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿;
根据所述当前帧点云与上一帧点云的相对位姿进行位姿估计。
可选地,所述面特征中,沿法向量方向的第一特征值远小于垂直于法向量方向的两个第二特征值。
可选地,所述激光雷达42搭载于可移动平台中;所述第三类特征是在所述可移动平台处于预设退化场景下提取的。
可选地,所述预设退化场景包括隧道场景、高速公路场景和/或空旷场景。
可选地,所述激光雷达42搭载于可移动平台中;所述第三类特征是在所述线特征的数量少于预设数量值和/或所述面特征指示的平面基本平行于所述可移动平台的前进方向的情况下提取的。
可选地,所述线特征根据所述点云中曲率大于预设值的三维点和/或孤立的三维点确定。
可选地,所述处理器44还用于:根据所述当前帧点云的协方差确定所述当前帧点云中的三维点的第一特征值和两个第二特征值;若所述第一特征值与所述第二特征值的比值小于预设阈值,确定所述三维点的面特征;若所述第一特征值与所述第二特征值的比值大于或等于所述预设阈值,确定所述三维点的第三类特征。
可选地,所述三维点的不同深度对应不同的预设阈值;所述不同的预设阈值用于使从不同深度的三维点中获取的面特征和/或第三类特征的数量差值在预设差值内。
可选地,所述三维点的深度与所述预设阈值呈正相关关系。
可选地,所述处理器44还用于:
将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;
对于属于线特征的转换后的三维点,在所述当前帧点云中确定与其匹配的直线,并获取所述转换后的三维点到匹配的直线的第一距离;
对于属于面特征的转换后的三维点,在所述当前帧点云中确定与其匹配的第一平面,并获取所述转换后的三维点与其在所述第一平面上的投影点之间的第二距离;
对于属于第三类特征的转换后的三维点,在所述当前帧点云中确定与其匹配的第二平面,并获取所述转换后的三维点到所述第二平面的第三距离;
根据所述第一距离、所述第二距离和/或所述第三距离,确定所述相邻两帧点云的相对位姿。
可选地,所述与其匹配的直线包括:所述当前帧点云中与所述属于线特征的转换后的三维点距离最近的多个三维点组成的直线。
可选地,所述与其匹配的第一平面包括:所述当前帧点云中与所述属于面特征的转换后的三维点距离最近的多个三维点组成的平面。
可选地,所述与其匹配的第二平面包括:所述当前帧点云中与所述属于第三类特征的转换后的三维点距离最近的多个三维点组成的平面。
可选地,在所述第一平面中,垂直于法向量方向的两个第二特征值的方差远大于沿法向量方向的第一特征值的方差,且所述垂直于法向量方向的两个第二特征值的方差非无穷大。
可选地,所述处理器44重复执行以下步骤直到达到预设的迭代要求:根据所述当前帧点云和上一帧点云两者中的线特征、面特征和第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿;其中,所述预设的迭代要求包括:达到预设的迭代次数,或者,所述第一距离、所述第二距离和/或所述第三距离最小。
可选地,在首次迭代过程中,使用初始位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;其中,所述初始位姿根据所述激光雷达42采集上一帧点云时的位姿以及所述IMU41采集的运动数据确定;
在非首次迭代过程中,使用上一次迭代过程确定的所述当前帧点云与上一帧点云的相对位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中。
可选地,所述处理器44还用于:根据所述当前帧点云与上一帧点云的相对位姿和初始化后的IMU41在同一时间段内采集的运动数据进行位姿估计,确定激光雷达-惯性里程计40的位姿。
可选地,所述激光雷达42搭载于可移动平台中;在所述可移动平台处于预设退化场景的情况下,在确定激光雷达-惯性里程计40的位姿的过程中,减小所述激光雷达42的权重以及增大所述IMU41的权重。
相应的,请参阅图7,本申请实施例还提供了一种可移动平台50,包括:
机身51;
动力系统52,设于所述机身51上,用于为所述可移动平台提供动力;以及,
上述的设于所述机身51上的激光雷达-惯性里程计40。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器,上述指令可由装置的处理器执行以完成上述方法。例如,非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
一种非临时性计算机可读存储介质,当存储介质中的指令由终端的处理器执行时,使得终端能够执行上述方法。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本申请实施例所提供的方法和装置进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。
Claims (103)
- 一种激光雷达-惯性里程计的位姿估计方法,其特征在于,包括:获取激光雷达采集的点云;对所述点云进行聚类,确定表征动态物体的前景点;使用所述点云中除所述前景点之外的非前景点进行位姿估计。
- 根据权利要求1所述的方法,其特征在于,所述非前景点包括地面点和/或背景点。
- 根据权利要求1所述的方法,其特征在于,所述聚类包括欧式聚类。
- 根据权利要求1所述的方法,其特征在于,所述使用所述点云中除所述前景点之外的非前景点进行位姿估计,包括:使用所述点云中的非前景点和初始化后的IMU在同一时间段内采集的运动数据进行位姿估计,确定激光雷达-惯性里程计的位姿。
- 根据权利要求4所述的方法,其特征在于,所述IMU通过以下方式进行初始化:确定激光雷达采集点云时的位姿,以及,获取对IMU在同一时间段内采集的运动数据进行预积分得到的预积分量;根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,对所述IMU参数进行初始化。
- 根据权利要求5所述的方法,其特征在于,所述根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,对所述IMU参数进行初始化,包括:根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,建立因子图模型;使用所述因子图模型对所述IMU参数进行初始化。
- 根据权利要求6所述的方法,其特征在于,所述因子图模型包括变量节点和因子节点;所述变量节点指示所述IMU参数的待求解量,所述因子节点指示第一残差和第二残差;其中,所述第一残差根据所述IMU参数的预设先验值和所述待求解量确定;所述第二残差根据所述激光雷达采集点云时的位姿、所述预积分量和所述IMU参数的待求解量确定。
- 根据权利要求7所述的方法,其特征在于,初始化后所述IMU参数的值包括:所述第一残差和所述第二残差之和最小时对应的所述IMU参数的待求解量。
- 根据权利要求7所述的方法,其特征在于,所述IMU参数包括加速度计的偏置、陀螺仪的偏置、重力方向和/或初始速度;和/或,所述预积分量包括速度预积分量、平移预积分量和旋转预积分量;和/或,所述激光雷达采集点云时的位姿包括位置量和姿态量。
- 根据权利要求9所述的方法,其特征在于,所述第二残差包括旋转残差;所述旋转残差表征相邻两个时刻之间两个相对旋转量的差异;其中一个相对旋转量根据所述激光雷达在所述相邻两个时刻分别对应的姿态量确定,另一个相对旋转量根据所述旋转预积分量和所述陀螺仪的偏置的待求解量确定。
- 根据权利要求9所述的方法,其特征在于,所述第二残差包括平移残差;所述平移残差表征相邻两个时刻之间两个相对平移量的差异;其中一个相对平移量根据所述激光雷达在所述相邻两个时刻分别对应的位置量、其中一个姿态量以及重力方向的待求解量确定,另一个相对平移量根据所述平移预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
- 根据权利要求9所述的方法,其特征在于,所述第二残差包括速度残差;所述速度残差表征相邻两个时刻之间两个相对速度的差异;其中一个相对速度根据所述IMU在所述相邻两个时刻分别对应的速度的待求解量、所述激光雷达其中一个姿态量以及重力方向的待求解量确定,另一个相对速度根据所述速度预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
- 根据权利要求9所述的方法,其特征在于,所述加速度计的偏置的预设先验值为0;所述陀螺仪的偏置的预设先验值为0;所述重力方向根据所述IMU采集的N个加速度的平均值的方向确定,N为大于1的整数;和/或,所述初始速度的预设先验值根据所述激光雷达采集点云时的位姿估计得到。
- 根据权利要求7所述的方法,其特征在于,所述第一残差的权重根据所述预设先验值的噪声所确定;和/或,所述第二残差的权重根据所述IMU在预积分过程中累积的噪声确定;其中,所述第二残差的权重与所述噪声呈负相关关系。
- 根据权利要求5所述的方法,其特征在于,所述确定激光雷达采集点云时的位姿,包括:获取所述激光雷达采集的当前帧点云;确定所述当前帧点云和上一帧点云中匹配的三维点对,并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿;根据所述当前帧点云和上一帧点云的相对位姿,确定所述激光雷达采集当前帧点云时的位姿。
- 根据权利要求15所述的方法,其特征在于,所述确定当前帧点云和上一帧点云中匹配的三维点对,包括:将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;对于转换后的三维点,在预设搜索范围内搜索所述当前帧点云中与其最邻近的三维点,获得匹配的三维点对。
- 根据权利要求16所述的方法,其特征在于,重复执行以下步骤直到达到预设的迭代要求:确定所述当前帧点云和上一帧点云中匹配的三维点对,并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿;其中,所述预设的迭代要求包括:达到预设的迭代次数,和/或,所述匹配的三维点对之间的距离最小。
- 根据权利要求17所述的方法,其特征在于,在所述激光雷达的运动速度大于预设速度值时,所述预设搜索范围的尺寸与迭代次数呈负相关关系。
- 根据权利要求1所述的方法,其特征在于,所述使用所述点云中除所述前景点之外的非前景点进行位姿估计,包括:从所述点云的非前景点中提取线特征、面特征和第三类特征;其中,所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同;根据相邻两帧点云中的所述线特征、所述面特征和所述第三类特征进行配准,确定所述相邻两帧点云的相对位姿;根据所述相邻两帧点云的相对位姿进行位姿估计。
- 根据权利要求19所述的方法,其特征在于,所述面特征中,沿法向量方向的第一特征值远小于垂直于法向量方向的两个第二特征值。
- 根据权利要求19所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;所述第三类特征是在所述可移动平台处于预设退化场景下提取的。
- 根据权利要求21所述的方法,其特征在于,所述预设退化场景包括隧道场景、高速公路场景和/或空旷场景。
- 根据权利要求19至22任意一项所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;所述第三类特征是在所述线特征的数量少于预设数量值和/或所述面特征指示的平面基本平行于所述可移动平台的前进方向的情况下提取的。
- 根据权利要求19所述的方法,其特征在于,所述线特征根据所述点云中曲率大于预设值的非前景点和/或孤立的非前景点确定。
- 根据权利要求19所述的方法,其特征在于,所述从所述点云的非前景点中提取面特征和第三类特征,包括:根据所述点云的协方差确定所述点云中的非前景点的第一特征值和两个第二特征值;若所述第一特征值与所述第二特征值的比值小于预设阈值,确定所述非前景点的面特征;若所述第一特征值与所述第二特征值的比值大于或等于所述预设阈值,确定所述非前景点的第三类特征。
- 根据权利要求25所述的方法,其特征在于,所述非前景点的不同深度对应不同的预设阈值;所述不同的预设阈值用于使从不同深度的非前景点中获取的面特征和/或第三类特征的数量差值在预设差值内。
- 根据权利要求19所述的方法,其特征在于,所述根据所述相邻两帧点云两者中的线特征、面特征和第三类特征进行配准,确定所述相邻两帧点云的相对位姿,包括:将其中一帧点云中的非前景点转换到另一帧点云所在坐标系中;对于属于线特征的转换后的非前景点,在所述另一帧点云中确定与其匹配的直线,并获取所述转换后的非前景点到匹配的直线的第一距离;对于属于面特征的转换后的非前景点,在所述另一帧点云中确定与其匹配的第一平面,并获取所述转换后的非前景点与其在所述第一平面上的投影点之间的第二距离;对于属于第三类特征的转换后的非前景点,在所述另一帧点云中确定与其匹配的第二平面,并获取所述转换后的非前景点到所述第二平面的第三距离;根据所述第一距离、所述第二距离和/或所述第三距离,确定所述相邻两帧点云的相对位姿。
- 根据权利要求27所述的方法,其特征在于,所述与其匹配的直线包括:所述另一帧点云中与所述属于线特征的转换后的非前景点距离最近的多个三维点组成的直线。
- 根据权利要求27所述的方法,其特征在于,所述与其匹配的第一平面包括:所述另一帧点云中与所述属于面特征的转换后的非前景点距离最近的多个三维点组成的平面。
- 根据权利要求27所述的方法,其特征在于,所述与其匹配的第二平面包括:所述另一帧点云中与所述属于第三类特征的转换后的非前景点距离最近的多个三维点组成的平面。
- 根据权利要求27所述的方法,其特征在于,在所述第一平面中,垂直于法向量方向的两个第二特征值的方差远大于沿法向量方向的第一特征值的方差,且所述垂直于法向量方向的两个第二特征值的方差非无穷大。
- 根据权利要求27所述的方法，其特征在于，重复执行以下步骤直到达到预设的迭代要求：根据所述相邻两帧点云两者中的线特征、面特征和第三类特征进行配准，确定所述相邻两帧点云的相对位姿；其中，所述预设的迭代要求包括：达到预设的迭代次数，或者，所述第一距离、所述第二距离和/或所述第三距离最小。
- 根据权利要求19至32任意一项所述的方法,其特征在于,所述根据所述相邻两帧点云的相对位姿进行位姿估计,包括:根据所述相邻两帧点云的相对位姿和初始化的IMU在同一时间段内采集的运动数据进行位姿估计,确定激光雷达-惯性里程计的位姿。
- 根据权利要求33所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;在所述可移动平台处于预设退化场景的情况下,在确定激光雷达-惯性里程计的位姿的过程中,减小所述激光雷达的权重以及增大所述IMU的权重。
- 一种激光雷达-惯性里程计的位姿估计方法,其特征在于,包括:确定激光雷达采集点云时的位姿,以及,获取对IMU采集的运动数据进行预积分得到的预积分量;其中,所述点云和所述运动数据在同一时间段内采集得到;根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,对所述IMU参数进行初始化;根据初始化后的IMU采集的运动数据以及所述激光雷达采集的点云进行位姿估计。
- 根据权利要求35所述的方法,其特征在于,所述根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,对所述IMU参数进行初始化,包括:根据所述激光雷达采集点云时的位姿、所述预积分量以及IMU参数的预设先验值,建立因子图模型;使用所述因子图模型对所述IMU参数进行初始化。
- 根据权利要求36所述的方法,其特征在于,所述因子图模型包括变量节点和因子节点;所述变量节点指示所述IMU参数的待求解量,所述因子节点指示第一残差和第二残差;其中,所述第一残差根据所述IMU参数的预设先验值和所述待求解量确定;所述第二残差根据所述激光雷达采集点云时的位姿、所述预积分量和所述IMU参数的待求解量确定。
- 根据权利要求37所述的方法,其特征在于,初始化后所述IMU参数的值包括:所述第一残差和所述第二残差之和最小时对应的所述IMU参数的待求解量。
- 根据权利要求37所述的方法,其特征在于,所述IMU参数包括加速度计的偏置、陀螺仪的偏置、重力方向和/或初始速度;和/或,所述预积分量包括速度预积分量、平移预积分量和旋转预积分量;和/或,所述激光雷达采集点云时的位姿包括位置量和姿态量。
- 根据权利要求39所述的方法,其特征在于,所述第二残差包括旋转残差;所述旋转残差表征相邻两个时刻之间两个相对旋转量的差异;其中一个相对旋转量根据所述激光雷达在所述相邻两个时刻分别对应的姿态量确定,另一个相对旋转量根据所述旋转预积分量和所述陀螺仪的偏置的待求解量确定。
- 根据权利要求39所述的方法,其特征在于,所述第二残差包括平移残差;所述平移残差表征相邻两个时刻之间两个相对平移量的差异;其中一个相对平移量根据所述激光雷达在所述相邻两个时刻分别对应的位置量、其中一个姿态量以及重力方向的待求解量确定,另一个相对平移量根据所述平移预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
- 根据权利要求39所述的方法,其特征在于,所述第二残差包括速度残差;所述速度残差表征相邻两个时刻之间两个相对速度的差异;其中一个相对速度根据所述IMU在所述相邻两个时刻分别对应的速度的待求解量、所述激光雷达其中一个姿态量以及重力方向的待求解量确定,另一个相对速度根据所述速度预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
- 根据权利要求39所述的方法,其特征在于,所述加速度计的偏置的预设先验值为0;所述陀螺仪的偏置的预设先验值为0;所述重力方向根据所述IMU采集的N个加速度的平均值的方向确定,N为大于1的整数;和/或,所述初始速度的预设先验值根据所述激光雷达采集点云时的位姿估计得到。
- 根据权利要求37所述的方法,其特征在于,所述第一残差的权重根据所述预设先验值的噪声所确定;和/或,所述第二残差的权重根据所述IMU在预积分过程中累积的噪声确定。
- 根据权利要求44所述的方法,其特征在于,所述第二残差的权重与所述噪声呈负相关关系。
- 根据权利要求35所述的方法,其特征在于,所述确定激光雷达采集点云时的位姿,包括:获取所述激光雷达采集的当前帧点云;确定所述当前帧点云和上一帧点云中匹配的三维点对,并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿;根据所述当前帧点云和上一帧点云的相对位姿,确定所述激光雷达采集当前帧点云时的位姿。
- 根据权利要求46所述的方法,其特征在于,所述确定当前帧点云和上一帧点云中匹配的三维点对,包括:将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;对于转换后的三维点,在预设搜索范围内搜索所述当前帧点云中与其最邻近的三维点,获得匹配的三维点对。
- 根据权利要求47所述的方法,其特征在于,重复执行以下步骤直到达到预设的迭代要求:确定所述当前帧点云和上一帧点云中匹配的三维点对,并根据所述匹配的三维点对确定所述当前帧点云和上一帧点云的相对位姿;其中,所述预设的迭代要求包括:达到预设的迭代次数,和/或,所述匹配的三维点对之间的距离最小。
- 根据权利要求48所述的方法,其特征在于,在所述激光雷达的运动速度大于预设速度值时,所述预设搜索范围的尺寸与迭代次数呈负相关关系。
- 根据权利要求48所述的方法,其特征在于,在首次迭代过程中,使用初始位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;其中,所述初始位姿根据所述激光雷达采集上一帧点云时的位姿以及所述IMU采集的运动数据确定;在非首次迭代过程中,使用上一次迭代过程确定的所述当前帧点云和上一帧点云的相对位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中。
- 根据权利要求35所述的方法，其特征在于，所述根据初始化后的IMU采集的运动数据以及所述激光雷达采集的点云进行位姿估计，包括：对所述点云进行聚类，确定表征动态物体的前景点；使用所述点云中除所述前景点之外的非前景点和初始化后的IMU采集的运动数据进行位姿估计。
- 根据权利要求51所述的方法,其特征在于,所述非前景点包括地面点和/或背景点。
- 根据权利要求35所述的方法,其特征在于,所述根据初始化后的IMU采集的运动数据以及所述激光雷达采集的点云进行位姿估计,包括:提取所述点云中的线特征、面特征和第三类特征;其中,所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同;根据相邻两帧点云中的所述线特征、所述面特征和所述第三类特征进行配准,确定所述相邻两帧点云的相对位姿;根据所述相邻两帧点云的相对位姿和初始化后的IMU采集的运动数据进行位姿估计。
- 根据权利要求53所述的方法,其特征在于,所述面特征中,沿法向量方向的第一特征值远小于垂直于法向量方向的两个第二特征值。
- 根据权利要求53所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;所述第三类特征是在所述可移动平台处于预设退化场景下提取的。
- 根据权利要求55所述的方法,其特征在于,所述预设退化场景包括隧道场景、高速公路场景和/或空旷场景。
- 根据权利要求53至56任意一项所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;所述第三类特征是在所述线特征的数量少于预设数量值和/或所述面特征指示的平面基本平行于所述可移动平台的前进方向的情况下提取的。
- 根据权利要求53所述的方法,其特征在于,所述线特征根据所述点云中曲率大于预设值的三维点和/或孤立的三维点确定。
- 根据权利要求53所述的方法,其特征在于,所述提取所述点云中的面特征和第三类特征,包括:根据所述点云的协方差确定所述点云中的三维点的第一特征值和两个第二特征值;若所述第一特征值与所述第二特征值的比值小于预设阈值,确定所述三维点的面特征;若所述第一特征值与所述第二特征值的比值大于或等于所述预设阈值,确定所述三维点的第三类特征。
- 根据权利要求59所述的方法,其特征在于,所述三维点的不同深度对应不同的预设阈值;所述不同的预设阈值用于使从不同深度的三维点中获取的面特征和/或第三类特征的数量差值在预设差值内。
- 根据权利要求53所述的方法,其特征在于,所述根据所述相邻两帧点云中的线特征、面特征和第三类特征进行配准,确定所述相邻两帧点云的相对位姿,包括:将所述其中一帧点云中的三维点转换到另一帧点云所在坐标系中;对于属于线特征的转换后的三维点,在所述另一帧点云中确定与其匹配的直线,并获取所述转换后的三维点到匹配的直线的第一距离;对于属于面特征的转换后的三维点,在所述另一帧点云中确定与其匹配的第一平面,并获取所述转换后的三维点与其在所述第一平面上的投影点之间的第二距离;对于属于第三类特征的转换后的三维点,在所述另一帧点云中确定与其匹配的第二平面,并获取所述转换后的三维点到所述第二平面的第三距离;根据所述第一距离、所述第二距离和/或所述第三距离,确定所述相邻两帧点云的相对位姿。
- 根据权利要求61所述的方法,其特征在于,所述与其匹配的直线包括:所述 另一帧点云中与所述属于线特征的转换后的三维点距离最近的多个三维点组成的直线。
- 根据权利要求61所述的方法,其特征在于,所述与其匹配的第一平面包括:所述另一帧点云中与所述属于面特征的转换后的三维点距离最近的多个三维点组成的平面。
- 根据权利要求61所述的方法,其特征在于,所述与其匹配的第二平面包括:所述另一帧点云中与所述属于第三类特征的转换后的三维点距离最近的多个三维点组成的平面。
- 根据权利要求61所述的方法,其特征在于,在所述第一平面中,垂直于法向量方向的两个第二特征值的方差远大于沿法向量方向的第一特征值的方差,且所述垂直于法向量方向的两个第二特征值的方差非无穷大。
- 根据权利要求53所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;在所述可移动平台处于预设退化场景的情况下,在确定激光雷达-惯性里程计的位姿的过程中,减小所述激光雷达的权重以及增大所述IMU的权重。
- 一种激光雷达-惯性里程计的位姿估计方法,其特征在于,包括:获取激光雷达采集的当前帧点云;提取所述当前帧点云中的线特征、面特征和第三类特征;其中,所述第三类特征中的沿法向量方向的第一特征值与垂直于法向量方向的两个第二特征值基本相同;根据所述当前帧点云和上一帧点云两者中的所述线特征、所述面特征和所述第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿;根据所述当前帧点云与上一帧点云的相对位姿进行位姿估计。
- 根据权利要求67所述的方法,其特征在于,所述面特征中,沿法向量方向的第一特征值远小于垂直于法向量方向的两个第二特征值。
- 根据权利要求67所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;所述第三类特征是在所述可移动平台处于预设退化场景下提取的。
- 根据权利要求69所述的方法,其特征在于,所述预设退化场景包括隧道场景、高速公路场景和/或空旷场景。
- 根据权利要求67至70任意一项所述的方法,其特征在于,所述激光雷达搭载于可移动平台中;所述第三类特征是在所述线特征的数量少于预设数量值和/或所述面特征指示的平面基本平行于所述可移动平台的前进方向的情况下提取的。
- 根据权利要求67所述的方法,其特征在于,所述线特征根据所述点云中曲率大于预设值的三维点和/或孤立的三维点确定。
- 根据权利要求67所述的方法,其特征在于,所述提取所述当前帧点云中的面特征和第三类特征,包括:根据所述当前帧点云的协方差确定所述当前帧点云中的三维点的第一特征值和两个第二特征值;若所述第一特征值与所述第二特征值的比值小于预设阈值,确定所述三维点的面特征;若所述第一特征值与所述第二特征值的比值大于或等于所述预设阈值,确定所述三维点的第三类特征。
- 根据权利要求73所述的方法,其特征在于,所述三维点的不同深度对应不同的预设阈值;所述不同的预设阈值用于使从不同深度的三维点中获取的面特征和/或第三类特征的数量差值在预设差值内。
- 根据权利要求74所述的方法,其特征在于,所述三维点的深度与所述预设阈值呈正相关关系。
- 根据权利要求67所述的方法,其特征在于,所述根据所述当前帧点云和上一帧点云两者中的线特征、面特征和第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿,包括:将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;对于属于线特征的转换后的三维点,在所述当前帧点云中确定与其匹配的直线,并获取所述转换后的三维点到匹配的直线的第一距离;对于属于面特征的转换后的三维点,在所述当前帧点云中确定与其匹配的第一平面,并获取所述转换后的三维点与其在所述第一平面上的投影点之间的第二距离;对于属于第三类特征的转换后的三维点,在所述当前帧点云中确定与其匹配的第二平面,并获取所述转换后的三维点到所述第二平面的第三距离;根据所述第一距离、所述第二距离和/或所述第三距离,确定所述当前帧点云与上一帧点云的相对位姿。
- 根据权利要求76所述的方法,其特征在于,所述与其匹配的直线包括:所述当前帧点云中与所述属于线特征的转换后的三维点距离最近的多个三维点组成的直线。
- 根据权利要求76所述的方法,其特征在于,所述与其匹配的第一平面包括:所述当前帧点云中与所述属于面特征的转换后的三维点距离最近的多个三维点组成的平面。
- 根据权利要求76所述的方法,其特征在于,所述与其匹配的第二平面包括:所述当前帧点云中与所述属于第三类特征的转换后的三维点距离最近的多个三维点组成的平面。
- 根据权利要求76所述的方法,其特征在于,在所述第一平面中,垂直于法向量方向的两个第二特征值的方差远大于沿法向量方向的第一特征值的方差,且所述垂直于法向量方向的两个第二特征值的方差非无穷大。
- 根据权利要求76所述的方法,其特征在于,重复执行以下步骤直到达到预设的迭代要求:根据所述当前帧点云和上一帧点云两者中的线特征、面特征和第三类特征进行配准,确定所述当前帧点云与上一帧点云的相对位姿;其中,所述预设的迭代要求包括:达到预设的迭代次数,或者,所述第一距离、所述第二距离和/或所述第三距离最小。
- 根据权利要求81所述的方法,其特征在于,在首次迭代过程中,使用初始位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中;其中,所述初始位姿根据所述激光雷达采集上一帧点云时的位姿以及IMU采集的运动数据确定;在非首次迭代过程中,使用上一次迭代过程确定的所述当前帧点云与上一帧点云的相对位姿将所述上一帧点云中的三维点转换到所述当前帧点云所在坐标系中。
- The method according to any one of claims 67 to 82, wherein said performing pose estimation according to the relative pose between the current point cloud frame and the previous point cloud frame comprises: performing pose estimation according to the relative pose between the current point cloud frame and the previous point cloud frame and the motion data acquired by the initialized IMU over the same time period, to determine the pose of the lidar-inertial odometry.
- The method according to claim 83, wherein the lidar is mounted on a movable platform; when the movable platform is in a preset degradation scenario, the weight of the lidar is decreased and the weight of the IMU is increased in the process of determining the pose of the lidar-inertial odometry.
- The method according to claim 67, wherein said extracting line features, planar features and third-class features from the current point cloud frame comprises: clustering the current point cloud frame to determine foreground points representing dynamic objects; and extracting line features, planar features and third-class features from the non-foreground points of the current point cloud frame other than the foreground points.
- The method according to claim 85, wherein the non-foreground points comprise ground points and/or background points.
- The method according to claim 84, wherein the IMU is initialized as follows: determining the pose of the lidar when it acquired a point cloud, and obtaining a pre-integration quantity obtained by pre-integrating the motion data acquired by the IMU over the same time period; and initializing IMU parameters according to the pose of the lidar when it acquired the point cloud, the pre-integration quantity and preset prior values of the IMU parameters.
- The method according to claim 87, wherein said initializing the IMU parameters according to the pose of the lidar when it acquired the point cloud, the pre-integration quantity and the preset prior values of the IMU parameters comprises: building a factor graph model according to the pose of the lidar when it acquired the point cloud, the pre-integration quantity and the preset prior values of the IMU parameters; and initializing the IMU parameters using the factor graph model.
- The method according to claim 88, wherein the factor graph model comprises variable nodes and factor nodes; the variable nodes indicate the quantities of the IMU parameters to be solved, and the factor nodes indicate a first residual and a second residual; wherein the first residual is determined according to the preset prior values of the IMU parameters and the quantities to be solved, and the second residual is determined according to the pose of the lidar when it acquired the point cloud, the pre-integration quantity and the quantities of the IMU parameters to be solved.
- The method according to claim 89, wherein the values of the IMU parameters after initialization comprise: the quantities of the IMU parameters to be solved corresponding to the minimum of the sum of the first residual and the second residual.
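The "prior residual plus measurement residual" minimization of claims 89 and 90 has a closed form in the scalar least-squares case. A toy sketch (the weighting mirrors claims 90 and 96; all names and numbers are illustrative, not from the source):

```python
import numpy as np

def initialize_bias(prior_value, prior_weight, measurements, meas_weight):
    """Pick the parameter value minimizing the weighted residual sum.

    Minimizes  w_p * (b - prior)^2  +  w_m * sum_i (b - m_i)^2  over b,
    where the first term plays the role of the first (prior) residual and
    the second term the second (measurement) residual.
    """
    m = np.asarray(measurements, dtype=float)
    # Setting the derivative to zero gives a weighted average in closed form.
    return (prior_weight * prior_value + meas_weight * m.sum()) / (
        prior_weight + meas_weight * len(m))
```

In the actual method the unknowns are vector-valued IMU parameters and the minimization is performed over a factor graph, but the same trade-off applies: a heavier prior weight pulls the solution toward the preset prior value.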
- 根据权利要求89所述的方法,其特征在于,所述IMU参数包括加速度计的偏置、陀螺仪的偏置、重力方向和/或初始速度;和/或,所述预积分量包括速度预积分量、平移预积分量和旋转预积分量;和/或,所述激光雷达采集点云时的位姿包括位置量和姿态量。
- 根据权利要求91所述的方法,其特征在于,所述第二残差包括旋转残差;所述旋转残差表征相邻两个时刻之间两个相对旋转量的差异;其中一个相对旋转量根据所述激光雷达在所述相邻两个时刻分别对应的姿态量确定,另一个相对旋转量根据所述旋转预积分量和所述陀螺仪的偏置的待求解量确定。
- 根据权利要求91所述的方法,其特征在于,所述第二残差包括平移残差;所述平移残差表征相邻两个时刻之间两个相对平移量的差异;其中一个相对平移量根据所述激光雷达在所述相邻两个时刻分别对应的位置量、其中一个姿态量以及重力方向的待求解量确定,另一个相对平移量根据所述平移预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
- 根据权利要求91所述的方法,其特征在于,所述第二残差包括速度残差;所述速度残差表征相邻两个时刻之间两个相对速度的差异;其中一个相对速度根据所述IMU在所述相邻两个时刻分别对应的速度的待求解量、所述激光雷达其中一个姿态量以及重力方向的待求解量确定,另一个相对速度根据所述速度预积分量、所述陀螺仪的偏置的待求解量和所述加速度计的偏置的待求解量确定。
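The rotation, translation and velocity residuals of claims 92 to 94 follow the standard form of IMU preintegration residuals (in the style of Forster et al.). A hedged reconstruction, with $R_i, p_i$ the lidar attitude and position quantities at time instant $i$, $v_i$ the velocities to be solved, $g$ the gravity direction to be solved, $b^{g}, b^{a}$ the gyroscope and accelerometer biases to be solved, and $\Delta\tilde{R}_{ij}, \Delta\tilde{v}_{ij}, \Delta\tilde{p}_{ij}$ the rotation, velocity and translation pre-integration quantities:

```latex
r_{\Delta R_{ij}} = \operatorname{Log}\!\Big(\big(\Delta\tilde{R}_{ij}(b^{g})\big)^{\top} R_i^{\top} R_j\Big)

r_{\Delta v_{ij}} = R_i^{\top}\big(v_j - v_i - g\,\Delta t_{ij}\big) - \Delta\tilde{v}_{ij}(b^{g}, b^{a})

r_{\Delta p_{ij}} = R_i^{\top}\big(p_j - p_i - v_i\,\Delta t_{ij} - \tfrac{1}{2}\, g\,\Delta t_{ij}^{2}\big) - \Delta\tilde{p}_{ij}(b^{g}, b^{a})
```

In each residual, the first term is the relative quantity computed from the lidar poses (and the gravity/velocity unknowns), and the second term is the bias-corrected pre-integration quantity, matching the two-quantity difference described in the claims.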
- The method according to claim 91, wherein the preset prior value of the accelerometer bias is 0; the preset prior value of the gyroscope bias is 0; the gravity direction is determined according to the direction of the average of N accelerations acquired by the IMU, N being an integer greater than 1; and/or the preset prior value of the initial velocity is estimated from the pose of the lidar when it acquired the point cloud.
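The gravity-direction prior of this claim, the direction of the mean of N accelerometer samples, can be sketched directly (function name is illustrative):

```python
import numpy as np

def gravity_prior(accel_samples):
    """Preset prior for the gravity direction: the unit vector along the
    mean of N accelerometer samples, per the claimed initialization."""
    mean_acc = np.mean(np.asarray(accel_samples, dtype=float), axis=0)
    return mean_acc / np.linalg.norm(mean_acc)  # normalize to a direction
```

Averaging over N > 1 samples suppresses the accelerometer noise before the direction is taken.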
- The method according to claim 89, wherein the weight of the first residual is determined according to the noise of the preset prior values; and/or the weight of the second residual is determined according to the noise accumulated by the IMU during pre-integration; wherein the weight of the second residual is negatively correlated with the noise.
- The method according to claim 87, wherein said determining the pose of the lidar when it acquired a point cloud comprises: obtaining the current point cloud frame acquired by the lidar; determining matched three-dimensional point pairs between the current point cloud frame and the previous point cloud frame, and determining the relative pose between the current point cloud frame and the previous point cloud frame according to the matched three-dimensional point pairs; and determining the pose of the lidar when it acquired the current point cloud frame according to the relative pose between the current point cloud frame and the previous point cloud frame.
- The method according to claim 97, wherein said determining matched three-dimensional point pairs between the current point cloud frame and the previous point cloud frame comprises: transforming the three-dimensional points of the previous point cloud frame into the coordinate system of the current point cloud frame; and, for each transformed three-dimensional point, searching within a preset search range for its nearest neighboring three-dimensional point in the current point cloud frame to obtain a matched three-dimensional point pair.
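The range-limited nearest-neighbor matching of claim 98 can be sketched with a brute-force search (a KD-tree would be used in practice; names are illustrative):

```python
import math

def match_point_pairs(transformed_prev, curr_cloud, search_range):
    """For each transformed point of the previous frame, take its nearest
    neighbor in the current frame if it lies within the preset search range;
    points with no neighbor in range produce no pair."""
    pairs = []
    for p in transformed_prev:
        best, best_d = None, search_range      # only accept neighbors within range
        for q in curr_cloud:
            d = math.dist(p, q)
            if d <= best_d:
                best, best_d = q, d            # keep the closest candidate so far
        if best is not None:
            pairs.append((p, best))
    return pairs
```

Bounding the search range both limits cost and rejects spurious matches between frames.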
- The method according to claim 98, wherein the following steps are repeated until a preset iteration requirement is met: determining matched three-dimensional point pairs between the current point cloud frame and the previous point cloud frame, and determining the relative pose between the current point cloud frame and the previous point cloud frame according to the matched three-dimensional point pairs; wherein the preset iteration requirement comprises: reaching a preset number of iterations, and/or the distances between the matched three-dimensional point pairs being minimized.
- The method according to claim 99, wherein, when the movement speed of the lidar is greater than a preset speed value, the size of the preset search range is negatively correlated with the number of iterations.
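One simple way to realize the negative correlation between search range and iteration count in claim 100 is a geometric decay; the decay factor here is an assumed illustrative value, not from the source:

```python
def search_radius(base_radius, iteration, decay=0.7):
    """Preset search range that shrinks as iterations proceed: a large range
    tolerates the coarse initial alignment at high speed, and a smaller range
    tightens the matching as the estimate converges."""
    return base_radius * decay ** iteration
```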
- A lidar-inertial odometry, comprising: a lidar for acquiring point clouds, an IMU for acquiring motion data, a memory for storing executable instructions, and one or more processors; wherein the one or more processors, when executing the executable instructions, are individually or jointly configured to perform the method according to any one of claims 1 to 100.
- A movable platform, comprising: a body; a power system provided on the body and configured to power the movable platform; and a lidar-inertial odometry according to claim 101 provided on the body.
- A computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method according to any one of claims 1 to 101.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/108093 WO2023000294A1 (zh) | 2021-07-23 | 2021-07-23 | 位姿估计方法、激光雷达-惯性里程计、可移动平台及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023000294A1 true WO2023000294A1 (zh) | 2023-01-26 |
Family
ID=84980368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/108093 WO2023000294A1 (zh) | 2021-07-23 | 2021-07-23 | 位姿估计方法、激光雷达-惯性里程计、可移动平台及存储介质 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023000294A1 (zh) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107991680A (zh) * | 2017-11-21 | 2018-05-04 | 南京航空航天大学 | 动态环境下基于激光雷达的slam方法 |
CN110879400A (zh) * | 2019-11-27 | 2020-03-13 | 炬星科技(深圳)有限公司 | 激光雷达与imu融合定位的方法、设备及存储介质 |
CN111156984A (zh) * | 2019-12-18 | 2020-05-15 | 东南大学 | 一种面向动态场景的单目视觉惯性slam方法 |
CN111443359A (zh) * | 2020-03-26 | 2020-07-24 | 达闼科技成都有限公司 | 定位方法、装置及设备 |
CN111476812A (zh) * | 2020-04-03 | 2020-07-31 | 浙江大学 | 地图分割方法、装置、位姿估计方法和设备终端 |
CN111531546A (zh) * | 2020-05-22 | 2020-08-14 | 济南浪潮高新科技投资发展有限公司 | 一种机器人位姿估计方法、装置、设备及存储介质 |
CN111583369A (zh) * | 2020-04-21 | 2020-08-25 | 天津大学 | 一种基于面线角点特征提取的激光slam方法 |
EP3715785A1 (en) * | 2019-03-29 | 2020-09-30 | Trimble Inc. | Slam assisted ins |
CN111929699A (zh) * | 2020-07-21 | 2020-11-13 | 北京建筑大学 | 一种顾及动态障碍物的激光雷达惯导里程计与建图方法及系统 |
CN112348897A (zh) * | 2020-11-30 | 2021-02-09 | 上海商汤临港智能科技有限公司 | 位姿确定方法及装置、电子设备、计算机可读存储介质 |
CN112363158A (zh) * | 2020-10-23 | 2021-02-12 | 浙江华睿科技有限公司 | 机器人的位姿估计方法、机器人和计算机存储介质 |
CN112966542A (zh) * | 2020-12-10 | 2021-06-15 | 武汉工程大学 | 一种基于激光雷达的slam系统和方法 |
CN113126115A (zh) * | 2021-04-06 | 2021-07-16 | 北京航空航天大学杭州创新研究院 | 基于点云的语义slam方法、装置、电子设备和存储介质 |
2021
- 2021-07-23: WO PCT/CN2021/108093 patent WO2023000294A1 (zh), active Application Filing
Non-Patent Citations (3)
Title |
---|
PANG FAN, WEI SHUANGFENG;SHI XIANJIE;CHEN KAI: "Odometry and mapping method for coupling LiDAR and IMU", APPLICATION RESEARCH OF COMPUTERS, CHENGDU, CN, vol. 38, no. 7, 5 July 2021 (2021-07-05), CN , XP093027162, ISSN: 1001-3695, DOI: 10.19734/j.issn.1001-3695.2020.06.0259 * |
PANG FAN: "Research on Odometry and Mapping Method of Lidar Inertial Coupled", CHINA MASTER'S' THESES FULL-TEXT DATABASE (ELECTRONIC JOURNAL)-INFORMATION & TECHNOLOGY), TIANJIN POLYTECHNIC UNIVERSITY, CN, no. 8, 15 August 2020 (2020-08-15), CN , XP093027174, ISSN: 1674-0246 * |
SHANG TIANXIANG, WANG JINGCHUAN; DONG LINGFENG; CHEN WEIDONG: "3Dlidar SLAM technology in lunar environment", ACTA AERONAUTICA ET ASTRONAUTICA SINICA, vol. 42, no. 1, 25 January 2021 (2021-01-25), pages 296 - 304, XP093027160, ISSN: 1000-6893, DOI: 10.7527/S1000-6893.2020.24166 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116222543A (zh) * | 2023-04-26 | 2023-06-06 | 齐鲁工业大学(山东省科学院) | 用于机器人环境感知的多传感器融合地图构建方法及系统 |
CN116295507A (zh) * | 2023-05-26 | 2023-06-23 | 南京师范大学 | 一种基于深度学习的激光惯性里程计优化方法、系统 |
CN116295507B (zh) * | 2023-05-26 | 2023-08-15 | 南京师范大学 | 一种基于深度学习的激光惯性里程计优化方法、系统 |
CN117278941A (zh) * | 2023-09-15 | 2023-12-22 | 广东省机场管理集团有限公司工程建设指挥部 | 基于5g网络和数据融合的车辆驾驶辅助定位方法及装置 |
CN118067101A (zh) * | 2023-12-25 | 2024-05-24 | 江苏金陵智造研究院有限公司 | 一种反光标记匹配方法及激光雷达位姿纠正方法 |
CN117689698A (zh) * | 2024-02-04 | 2024-03-12 | 安徽蔚来智驾科技有限公司 | 点云配准方法、智能设备及存储介质 |
CN117689698B (zh) * | 2024-02-04 | 2024-04-19 | 安徽蔚来智驾科技有限公司 | 点云配准方法、智能设备及存储介质 |
CN118031983A (zh) * | 2024-04-11 | 2024-05-14 | 江苏集萃清联智控科技有限公司 | 自动驾驶融合定位方法及系统 |
CN118334259A (zh) * | 2024-04-11 | 2024-07-12 | 西安工业大学 | 融合增强多线激光雷达与imu的建图方法及系统 |
CN118031976A (zh) * | 2024-04-15 | 2024-05-14 | 中国科学院国家空间科学中心 | 一种探索未知环境的人机协同系统 |
CN118226401A (zh) * | 2024-05-22 | 2024-06-21 | 南京航空航天大学 | 基于激光雷达三维点云的固定翼飞机位姿测量方法及装置 |
CN118537606A (zh) * | 2024-07-26 | 2024-08-23 | 集美大学 | 一种三维点云匹配退化的处理方法、装置及程序产品 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023000294A1 (zh) | 位姿估计方法、激光雷达-惯性里程计、可移动平台及存储介质 | |
CN112347840B (zh) | 视觉传感器激光雷达融合无人机定位与建图装置和方法 | |
KR102063534B1 (ko) | 라이다를 이용한 지도 생성 방법 | |
TWI705263B (zh) | 使用光達的運輸工具定位系統 | |
Chiang et al. | Seamless navigation and mapping using an INS/GNSS/grid-based SLAM semi-tightly coupled integration scheme | |
US10295365B2 (en) | State estimation for aerial vehicles using multi-sensor fusion | |
CN111156998B (zh) | 一种基于rgb-d相机与imu信息融合的移动机器人定位方法 | |
US10762673B2 (en) | 3D submap reconstruction system and method for centimeter precision localization using camera-based submap and LiDAR-based global map | |
CN112639502B (zh) | 机器人位姿估计 | |
CN111288989B (zh) | 一种小型无人机视觉定位方法 | |
CN112304307A (zh) | 一种基于多传感器融合的定位方法、装置和存储介质 | |
KR102083911B1 (ko) | 점군을 포함하는 라이다 지도 생성 방법 | |
JP2020064056A (ja) | 位置推定装置及び方法 | |
Steiner et al. | A vision-aided inertial navigation system for agile high-speed flight in unmapped environments: Distribution statement a: Approved for public release, distribution unlimited | |
CN110764531B (zh) | 基于激光雷达与人工势场法的无人机编队飞行避障方法 | |
CN113741503B (zh) | 一种自主定位式无人机及其室内路径自主规划方法 | |
KR20200032776A (ko) | 다중 센서 플랫폼 간 정보 융합을 위한 시스템 | |
Van Dalen et al. | Absolute localization using image alignment and particle filtering | |
Andert et al. | Optical-aided aircraft navigation using decoupled visual SLAM with range sensor augmentation | |
Zhang et al. | Online ground multitarget geolocation based on 3-D map construction using a UAV platform | |
Srinara et al. | Performance analysis of 3D NDT scan matching for autonomous vehicles using INS/GNSS/3D LiDAR-SLAM integration scheme | |
Li et al. | Aerial-triangulation aided boresight calibration for a low-cost UAV-LiDAR system | |
KR102083913B1 (ko) | 라이다를 이용한 지도 생성 장치 | |
Wang et al. | Micro aerial vehicle navigation with visual-inertial integration aided by structured light | |
CN117330052A (zh) | 基于红外视觉、毫米波雷达和imu融合的定位与建图方法及系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21950543; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21950543; Country of ref document: EP; Kind code of ref document: A1 |