CN112083725B - Structure-shared multi-sensor fusion positioning system for automatic driving vehicle - Google Patents

Structure-shared multi-sensor fusion positioning system for automatic driving vehicle

Info

Publication number
CN112083725B
CN112083725B (application CN202010923469.8A)
Authority
CN
China
Prior art keywords
vehicle
pose
module
laser
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010923469.8A
Other languages
Chinese (zh)
Other versions
CN112083725A (en)
Inventor
秦晓辉
庞涛
徐彪
谢国涛
胡满江
秦兆博
秦洪懋
王晓伟
边有钢
丁荣军
常灯祥
王哲文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University
Priority to CN202010923469.8A
Publication of CN112083725A
Application granted
Publication of CN112083725B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a structure-shared multi-sensor fusion positioning system for an autonomous vehicle, comprising: a vehicle kinematics model module, which predicts the vehicle motion state from a vehicle kinematic model; a laser SLAM algorithm module; and an extended Kalman filter module, which fuses the predicted vehicle pose provided by the vehicle kinematics model with the optimized vehicle pose provided by the laser SLAM algorithm using extended Kalman filtering, so as to obtain accurate vehicle pose information. Through the combination of the vehicle kinematics model module, the laser SLAM algorithm module, and the extended Kalman filter module, the system achieves effective positioning.

Description

Structure-shared multi-sensor fusion positioning system for automatic driving vehicle
Technical Field
The invention relates to a positioning method of an automatic driving vehicle, in particular to a structure-shared multi-sensor fusion positioning system for the automatic driving vehicle.
Background
In recent years, research on automated driving has become a hotspot. Automated driving technology comprises environment perception, positioning and navigation, path planning, and decision and control, among which positioning and navigation is key. Accurate and robust real-time positioning is the basis for safe driving of an autonomous vehicle: determining the vehicle's position in the global coordinate system provides the basis for the planning and control modules. The autonomous vehicle obtains its own motion state and information about the surrounding environment through sensors, from which an accurate estimate of the vehicle pose is realized.
Currently, common positioning techniques for autonomous vehicles fall into two categories: 1) positioning based on Global Navigation Satellite Systems (GNSS); 2) positioning based on autonomous sensors. GNSS positioning is accurate, but it easily fails under occlusion in environments such as high-rise districts, tunnels, elevated roads, and underground garages. Positioning based on autonomous sensors mainly uses lidar, inertial measurement units (IMU), or odometers. Dead reckoning with an odometer or IMU is a low-cost, autonomous positioning mode with strong resistance to interference, and it can provide high-precision vehicle positioning over short periods from the sensor data; however, the dead-reckoning error accumulates over time, so it is unsuitable for long-term standalone positioning. SLAM algorithms use lidar or cameras to achieve real-time positioning of an autonomous vehicle. In most cases the vehicle moves essentially in a plane, yet SLAM algorithms do not account for the constraint imposed on the lidar by its mounting platform, so the optimized vehicle pose drifts. In summary, it is difficult to meet the positioning accuracy requirements of autonomous vehicles with a single sensor. The structure-shared multi-sensor fusion positioning system for autonomous vehicles is therefore designed to improve positioning accuracy and robustness by fusing data from multiple sensors.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide a structure-shared multi-sensor fusion positioning system for an autonomous vehicle, which predicts the vehicle motion state with a vehicle kinematic model and optimizes the vehicle pose with a laser SLAM algorithm. The poses from the vehicle kinematics model and the laser SLAM algorithm are fused with extended Kalman filtering to improve the positioning accuracy and robustness of the autonomous vehicle. To this end, the invention provides the following technical scheme: a structure-shared multi-sensor fusion positioning system for autonomous vehicles, comprising:
the vehicle kinematic model module, which, before a new frame of laser point cloud data arrives, constructs a vehicle kinematic model from wheel speed sensor and steering-wheel angle sensor data and predicts the vehicle motion state from it;
the laser SLAM algorithm module, divided into a front-end matching part and a back-end graph-optimization part. Front-end matching: based on the predicted vehicle state, correct the motion distortion of the laser point cloud, compute the local smoothness of the point cloud from neighborhood point information, extract corner and plane-point features with a dual-neighborhood strategy, and match them against the previous point cloud frame. Back-end optimization: construct a global descriptor from point cloud intensity and local smoothness, compute scan similarity with the chi-square test and verify it with feature points, optimize the vehicle pose by frame-to-local-map matching, and build a pose-graph optimization from the pose transforms between the current frame and historical loop frames to reduce accumulated vehicle pose error;
and the extended Kalman filter module, which fuses the predicted vehicle pose provided by the vehicle kinematics model with the optimized vehicle pose provided by the laser SLAM algorithm using extended Kalman filtering, so as to obtain accurate vehicle pose information.
As a further improvement of the invention, the vehicle kinematics model module constructs the vehicle kinematic model and predicts the vehicle state in the following steps:
step 1, directly providing a vehicle longitudinal speed v through a wheel speed sensor and providing a steering wheel corner delta required by calculating a vehicle yaw angular speed omega through a steering wheel corner sensors
Step 2: from the velocity v, the steering-wheel angle δ_s, the steering gear angular transmission ratio η, and the wheelbase L, compute the vehicle yaw rate ω:

$$\omega = \frac{v \tan(\delta_s / \eta)}{L}$$
Step 3: taking the vehicle coordinate system of the previous instant as the reference frame, and with the vehicle travelling in a plane between the two instants, predict the pose increment between the instants with the vehicle kinematic model:

$$\Delta\theta_{k+1} = \Delta\theta_k + \omega_k\,\delta t$$
$$\Delta x_{k+1} = \Delta x_k + v_k \cos(\Delta\theta_{k+1})\,\delta t$$
$$\Delta y_{k+1} = \Delta y_k + v_k \sin(\Delta\theta_{k+1})\,\delta t$$
$$\Delta z_{k+1} = 0$$

where δt denotes the time interval between adjacent low-level data samples, Δθ denotes the yaw-angle increment accumulated since the instant of the last laser frame, and Δz = 0 expresses that the vehicle remains in planar motion between adjacent instants, i.e., with the vehicle frame of the previous instant as reference there is no motion along the Z axis;
Step 4: based on the optimized vehicle pose (R_wb^(j-1), p_wb^(j-1)) at the previous instant, compute the pose (R_wb^j, p_wb^j) of the autonomous vehicle at instant j in the world coordinate system:

$$p_{wb}^{j} = p_{wb}^{j-1} + R_{wb}^{j-1}\,\Delta p$$
$$R_{wb}^{j} = R_{wb}^{j-1}\,\Delta R$$

where Δp = [Δx, Δy, Δz]^T and ΔR is the planar rotation matrix built from Δθ.
as a further improvement of the invention, the rotation matrix on the plane in the step 4
Figure BDA00026675173000000314
The calculation formula is as follows:
Figure BDA00026675173000000315
as a further improvement of the present invention, the laser SLAM algorithm module includes a front end matching module and a back end optimization module, wherein the front end matching module includes:
the point cloud distortion correction module receives the latest laser point cloud data and corrects the point cloud motion distortion according to the vehicle prediction state;
the point cloud feature extraction module is used for calculating the local smoothness of the point cloud according to the neighborhood point information and extracting the characteristics of the angular points and the plane points by adopting a double neighborhood strategy;
the inter-frame matching module is used for acquiring an optimized pose of the vehicle by matching the current frame point cloud with the previous frame point cloud based on the initial estimation of the pose of the vehicle;
the back-end optimization module comprises:
the descriptor construction module is used for constructing a global descriptor by utilizing the point cloud intensity and the local smoothness;
the similarity calculation module is used for calculating scanning similarity by using chi-square test;
the feature point verification module, which verifies the correctness of the loop using the number of matched feature points;
the loop-frame matching module, which builds a local map from the poses corresponding to the loop frames and matches the frame against the local map to obtain the optimized vehicle pose;
and the pose-graph optimization module, which builds the vehicle pose-graph optimization from the pose transform between the current frame and its corresponding loop frame and the pose transforms between adjacent frames within the loop, reducing the accumulated vehicle pose error.
As a further improvement of the invention, the pose optimization module constructs a cost function according to the laser odometer residual error and the laser loop residual error, and performs combined nonlinear optimization by using a gradient descent method to obtain the vehicle optimization pose based on the laser SLAM algorithm.
As a further improvement of the invention, the algorithm in the extended Kalman filter module is divided into two stages. Prediction stage: predict the vehicle pose and covariance matrix at the current instant from the optimized state and system covariance matrix of the previous instant, namely:

$$\hat{x}_j = g(x_{j-1}, u_j)$$
$$\hat{\Sigma}_j = G\,\Sigma_{j-1}\,G^{T} + M_j$$
where Σ denotes the system covariance matrix and M_j the system noise covariance matrix; g(·) denotes the state equation and G the state-transition matrix, obtained by differentiating g(·) with respect to the state;
where the inverse of the right BCH-approximation Jacobian J_r^(-1) is calculated as:

$$J_r^{-1} = \frac{\theta}{2}\cot\frac{\theta}{2}\,I + \left(1 - \frac{\theta}{2}\cot\frac{\theta}{2}\right) a a^{T} + \frac{\theta}{2}\,a^{\wedge}$$

where θa denotes the rotation vector corresponding to the rotation matrix R, θ the rotation angle, and a the rotation axis; ∧ denotes the skew-symmetric (hat) operator and ∨ its inverse (vee) operator;
Update stage: compute the Kalman gain and update the current vehicle pose and covariance matrix from the measured and predicted values, namely:

$$K_j = \hat{\Sigma}_j H_j^{T}\left(H_j \hat{\Sigma}_j H_j^{T} + N_j\right)^{-1}$$
$$x_j = \hat{x}_j + K_j\left(z_j - H_j \hat{x}_j\right)$$
$$\Sigma_j = \left(I - K_j H_j\right)\hat{\Sigma}_j$$

where K_j denotes the Kalman gain, N_j the observation noise covariance matrix, and H_j the system observation matrix, a 6 × 6 identity matrix.
The beneficial effects of the invention are as follows:
1) The invention provides a structure-shared multi-sensor fusion positioning method for autonomous vehicles that fuses data from a lidar, a wheel speed sensor, and a steering-wheel angle sensor, addressing the problem that the accuracy and robustness of single-sensor positioning cannot meet the safety requirements of autonomous driving.
2) The method predicts the vehicle motion state with a vehicle kinematics model, optimizes the vehicle pose with a laser SLAM algorithm, and fuses the two with extended Kalman filtering, improving positioning accuracy and robustness.
3) By accounting for the spatial orientation of the vehicle, the method applies the vehicle kinematic model in three-dimensional space, broadening the applicability of the structure-shared multi-sensor fusion positioning method.
4) State prediction from the vehicle kinematic model provides the basis for correcting point cloud motion distortion; accurate laser point clouds are obtained by linear interpolation, which benefits the accuracy of laser point cloud data association.
5) The method computes the local smoothness of the point cloud from neighborhood point information and extracts corner and plane-point features with a dual-neighborhood strategy, reducing the influence of neighborhood size on the smoothness computation and improving the accuracy and robustness of feature extraction.
6) The vehicle state predicted by the kinematics model serves as the prediction, the optimized pose from the laser SLAM algorithm serves as the measurement, and the two poses are fused by extended Kalman filtering. The system has low computational complexity and strong real-time performance while ensuring positioning accuracy and robustness.
7) The method constructs a global descriptor from point cloud intensity and local smoothness, computes descriptor similarity with the chi-square test, extracts loop frames accurately, and builds a pose graph for optimization, reducing accumulated positioning error. Direct use of intensity information and the dual use of local smoothness raise feature utilization efficiency and reduce computation, ensuring real-time relocalization.
8) The invention shares the vehicle's existing sensors without adding extra ones, saving cost and reducing complexity while improving reliability. Data fusion is completed with few sensors, without complex and expensive sensors such as IMU and GNSS, so the system is easy to deploy and to pass automotive-grade testing.
9) The method synchronizes data from multiple sources with different frame rates using a pre-integration technique, avoiding repeated integration during optimization and saving computation.
Drawings
FIG. 1 is an architecture diagram of the structure-shared multi-sensor fusion positioning algorithm for autonomous vehicles according to the present invention;
FIG. 2 is a schematic diagram of the vehicle kinematics model pre-integration of the present invention;
FIG. 3 is a schematic representation of a kinematic model of a vehicle according to the present invention;
fig. 4 is a diagram of the laser SLAM algorithm architecture of the present invention.
Detailed Description
The invention will be further described in detail with reference to the following examples, which are given in the accompanying drawings.
As shown in fig. 1, the architecture of the structure-shared multi-sensor fusion positioning algorithm for autonomous vehicles comprises three modules: the vehicle kinematics model module, the laser SLAM algorithm module, and the extended Kalman filter module.
The input to the positioning system comes from three sensors: a multi-line lidar, a wheel speed sensor, and a steering-wheel angle sensor. Starting from the optimized vehicle pose provided by the extended Kalman filter module at the previous instant, the autonomous vehicle pose is optimized using the laser point cloud data together with the wheel speed and steering-wheel angle data between the instants of the two point cloud frames. At instant j, the system state to be optimized is defined as follows:
$$X_j = \left[\left(p_{wb}^{j}\right)^{T},\ \left(\varphi_{wb}^{j}\right)^{T}\right]^{T}$$

where subscript w denotes the world coordinate system and b the vehicle coordinate system; p_wb^j denotes the translation from the vehicle frame to the world frame at instant j, and φ_wb^j denotes the Lie algebra element corresponding to the rotation matrix between the vehicle frame and the world frame at instant j. The rotation matrix and the Lie algebra are related by:

$$R = \exp\!\left(\varphi^{\wedge}\right)$$
1. vehicle kinematics model module
Before a new frame of laser point cloud data arrives, a vehicle kinematic model is built from the wheel speed sensor and steering-wheel angle sensor data, and the vehicle motion state is predicted from it. The vehicle kinematics model has two inputs: 1) the wheel speed sensor directly provides the vehicle longitudinal velocity v; 2) the steering-wheel angle sensor provides the steering-wheel angle δ_s required to compute the vehicle yaw rate ω.
The vehicle yaw rate ω is jointly determined by the velocity v, the steering-wheel angle δ_s, the steering gear angular transmission ratio η, and the wheelbase L, namely:

$$\omega = \frac{v \tan(\delta_s / \eta)}{L}$$
the vehicle kinematics model is based on a plane motion hypothesis, so that the vehicle coordinate system at the previous moment is used as a reference coordinate system, the vehicle runs on a plane all the time between two moments, and the vehicle pose variation quantity between the two moments is predicted according to the vehicle kinematics model:
Figure BDA0002667517300000081
Figure BDA0002667517300000082
Figure BDA0002667517300000083
Figure BDA0002667517300000084
Figure BDA0002667517300000085
where δ t represents the time interval of the adjacent underlying data.
Figure BDA0002667517300000086
Indicating the amount of yaw angle change between adjacent laser frames.
Figure BDA0002667517300000087
Indicating that the vehicle is always in plane motion between adjacent moments, i.e.: the vehicle coordinate system at the previous moment is taken as a reference coordinate system, and no movement exists in the Z-axis direction.
Based on the optimized vehicle pose (R_wb^(j-1), p_wb^(j-1)) at the previous instant, the pose (R_wb^j, p_wb^j) of the autonomous vehicle at instant j in the world coordinate system is computed:

$$p_{wb}^{j} = p_{wb}^{j-1} + R_{wb}^{j-1}\,\Delta p$$
$$R_{wb}^{j} = R_{wb}^{j-1}\,\Delta R$$

where Δp = [Δx, Δy, Δz]^T.
rotation matrix on a plane
Figure BDA00026675173000000813
The calculation formula is as follows:
Figure BDA00026675173000000814
the state prediction based on the vehicle kinematic model provides a predicted value for an extended Kalman filtering algorithm, and the predicted value is also used as an initial value for solving the problem by optimizing the laser SLAM.
2. Laser SLAM algorithm module
The laser SLAM algorithm module comprises three parts: the system comprises a laser odometer module, a laser loop detection module and a pose optimization module.
The laser odometry module comprises four parts: point cloud distortion correction, point cloud feature extraction, local map construction, and frame-to-map matching. These four parts respectively: 1) receive the laser point cloud data of the latest instant and correct the point cloud motion distortion from the predicted vehicle state; 2) compute the local smoothness of the point cloud from neighborhood point information and extract corner and plane-point features with a dual-neighborhood strategy; 3) build a local point cloud map based on the optimized vehicle pose provided by the filter module at the previous instant; 4) construct the laser odometry residual for joint optimization by matching the frame against the local map, starting from the initial estimate of the vehicle pose.
The laser loop detection module comprises four parts: descriptor construction, similarity calculation, feature point verification, and loop-frame matching. These four parts respectively: 1) construct a global descriptor from point cloud intensity and local smoothness; 2) compute scan similarity with the chi-square test; 3) verify loop correctness with the number of matched feature points; 4) build a local map from the poses corresponding to the loop frames and match the frame against it.
The pose optimization module constructs a cost function from the laser odometry residual and the laser loop residual and performs joint nonlinear optimization with a gradient descent method to obtain the laser-SLAM-based optimized vehicle pose. The jointly optimized pose of the autonomous vehicle is passed to the extended Kalman filter module as the measurement.
3. Extended Kalman filtering module
Three kinds of measurement data (from the lidar, the wheel speed sensor, and the steering-wheel angle sensor) are fused, and the vehicle kinematics model and the laser SLAM algorithm are combined with extended Kalman filtering to improve the positioning accuracy and robustness of the autonomous vehicle. The extended Kalman filter module uses the vehicle state predicted by the kinematics model as the prediction and the optimized pose from the laser SLAM module as the observation.
The extended Kalman filtering algorithm is divided into two stages: and (4) predicting and updating.
Prediction stage: predict the vehicle pose and covariance matrix at the current instant from the optimized state and system covariance matrix of the previous instant, namely:

$$\hat{x}_j = g(x_{j-1}, u_j)$$
$$\hat{\Sigma}_j = G\,\Sigma_{j-1}\,G^{T} + M_j$$
where Σ denotes the system covariance matrix and M_j the system noise covariance matrix; g(·) denotes the state equation and G the state-transition matrix, obtained by differentiating g(·) with respect to the state;
where the inverse of the right BCH-approximation Jacobian J_r^(-1) is calculated as:

$$J_r^{-1} = \frac{\theta}{2}\cot\frac{\theta}{2}\,I + \left(1 - \frac{\theta}{2}\cot\frac{\theta}{2}\right) a a^{T} + \frac{\theta}{2}\,a^{\wedge}$$

where θa denotes the rotation vector corresponding to the rotation matrix R, θ the rotation angle, and a the rotation axis; ∧ denotes the skew-symmetric (hat) operator and ∨ its inverse (vee) operator.
Update stage: compute the Kalman gain and update the current vehicle pose and covariance matrix from the measured and predicted values, namely:

$$K_j = \hat{\Sigma}_j H_j^{T}\left(H_j \hat{\Sigma}_j H_j^{T} + N_j\right)^{-1}$$
$$x_j = \hat{x}_j + K_j\left(z_j - H_j \hat{x}_j\right)$$
$$\Sigma_j = \left(I - K_j H_j\right)\hat{\Sigma}_j$$

where K_j denotes the Kalman gain, N_j the observation noise covariance matrix, and H_j the system observation matrix, a 6 × 6 identity matrix.
The accurate vehicle pose obtained through successive iterations of the filter module is used for local map construction in the laser SLAM module and for vehicle state prediction in the vehicle kinematics model module at the next instant.
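The predict/update cycle above can be sketched as follows, under the simplifying assumption that the 6-D pose state is handled as a plain vector (the patent's state actually lives partly on the rotation manifold) and that H is the 6×6 identity as stated; function names and the externally supplied Jacobian G are illustrative.

```python
import numpy as np

def ekf_predict(x_prev, P_prev, u, g, G, M):
    """Prediction: propagate the state with the kinematic model g(.) and
    the covariance with the state-transition Jacobian G."""
    x_pred = g(x_prev, u)                # u: wheel-speed / steering input
    P_pred = G @ P_prev @ G.T + M        # M: system noise covariance
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, N):
    """Update with the laser-SLAM optimized pose z as the measurement.
    H is the 6x6 identity, as stated in the text."""
    H = np.eye(6)
    S = H @ P_pred @ H.T + N             # innovation covariance, N: observation noise
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(6) - K @ H) @ P_pred
    return x, P
```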
When the structure-shared multi-sensor fusion positioning method for autonomous vehicles is used to estimate the vehicle pose for the first time, an initialization is required, divided into four parts: 1) align the timestamps of the lidar, wheel speed sensor, and steering-wheel angle sensor data by linear interpolation; 2) update the translation p_bl between the lidar coordinate system and the vehicle coordinate system by measurement; 3) optimize the rotation q_bl between the lidar coordinate system and the vehicle coordinate system using rotation constraints; 4) set the initial system noise covariance matrix and observation noise covariance matrix.
there are two adjacent moments i, j: vehicle kinematics model rotation constraints
Figure BDA0002667517300000113
Laser odometer rotational constraint
Figure BDA0002667517300000114
Then there are:
Figure BDA0002667517300000115
wherein q represents a quaternion.
The derivation can be found as follows:
Figure BDA0002667517300000116
wherein [ ·]L,[·]RRepresenting the left and right multiplication of the quaternion, respectively.
Accumulating the linear equations of multiple instants and applying robust kernel weights w_k yields the joint linear system:

$$Q\,q_{bl} = \begin{bmatrix} w_1\left([q_{b_1 b_2}]_L - [q_{l_1 l_2}]_R\right) \\ \vdots \\ w_K\left([q_{b_K b_{K+1}}]_L - [q_{l_K l_{K+1}}]_R\right) \end{bmatrix} q_{bl} = 0$$

where the weight w_k is determined from the angle error of the corresponding relative rotation.
from the rotation matrix and the inter-axis-angle transformation relationship tr (r) ═ 1+2cos θ, a calculation formula of the angle error r is obtained:
Figure BDA0002667517300000119
solving a joint linear equation by using SVD (singular value decomposition), wherein a singular vector corresponding to a minimum singular value is a rotation transformation q between a laser radar coordinate system and a vehicle coordinate systembl
Fig. 2 is a schematic diagram of the vehicle kinematics model pre-integration. Relative to the frame rate of the laser radar, the frequency of the wheel speed sensor and the steering wheel angle sensor is high, so that the vehicle kinematic model pre-integration is required to obtain the vehicle motion state prediction between the adjacent laser frames.
Several low-level vehicle data samples between adjacent laser frames are pre-integrated to obtain the kinematic-model pre-integrated value at the corresponding instant. The pre-integration uses the median (midpoint) method, i.e., relative pose change is computed from the average of adjacent low-level measurements:

$$\bar{v}_k = \frac{v_k + v_{k+1}}{2}, \qquad \bar{\omega}_k = \frac{\omega_k + \omega_{k+1}}{2}$$
based on a calculation formula given by a vehicle kinematics model module according to
Figure BDA0002667517300000124
And
Figure BDA0002667517300000125
calculating a vehicle kinematic model pre-integral quantity
Figure BDA0002667517300000126
And
Figure BDA0002667517300000127
and the vehicle kinematic model pre-integration quantity is used as initial motion estimation between two adjacent frame point clouds and is transmitted to a laser SLAM algorithm module.
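A sketch of the median-method pre-integration between two laser frames, assuming low-level samples arrive as time-ordered (v, δ_s, t) triples; names and the sample format are assumptions for illustration.

```python
import numpy as np

def yaw_rate(v, delta_s, eta, L):
    return v * np.tan(delta_s / eta) / L

def preintegrate_median(samples, eta, L):
    """Median (midpoint) pre-integration between two laser frames.

    samples: time-ordered list of (v, delta_s, t) between the two frames.
    Adjacent measurements are averaged, then the planar recursion is run."""
    dx = dy = dtheta = 0.0
    for (v0, d0, t0), (v1, d1, t1) in zip(samples, samples[1:]):
        dt = t1 - t0
        v_mid = 0.5 * (v0 + v1)                        # midpoint velocity
        w_mid = 0.5 * (yaw_rate(v0, d0, eta, L)
                       + yaw_rate(v1, d1, eta, L))     # midpoint yaw rate
        dtheta += w_mid * dt
        dx += v_mid * np.cos(dtheta) * dt
        dy += v_mid * np.sin(dtheta) * dt
    return dx, dy, dtheta
```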
Fig. 3 is a schematic diagram of the vehicle kinematic model. The model is simplified to a two-degree-of-freedom bicycle model, with the front and rear wheel pairs each replaced by a single wheel. A vehicle coordinate system is established with the center B of the rear axle as origin: the X_b axis points in the direction of travel, and the Y_b axis is perpendicular to X_b, pointing to the left of the vehicle body. Δθ denotes the yaw-angle increment of the vehicle between adjacent instants, L the wheelbase, and δ_f the front-wheel steering angle. For safety the automated driving system does not enter limit handling conditions, so the sideslip angle at the center of mass is small and can be neglected. The rear wheels of the vehicle are generally not steerable, so the rear-wheel steering input of the bicycle model is taken as δ_r = 0. The front-wheel angle δ_f is obtained from the steering-wheel angle δ_s and the steering angular transmission ratio η, namely: δ_f = δ_s / η.
The principle in building the vehicle kinematic model is to keep the model simple while reflecting the real motion characteristics of the vehicle as far as possible; an overly detailed kinematic model is detrimental to real-time performance. Based on the vehicle structure and motion characteristics, the front-wheel-steered four-wheel vehicle is simplified to a two-wheel bicycle model, which rests on the following assumptions: 1) motion along the Z axis is ignored and only motion in the XY plane is considered; 2) the left and right tires are merged into a single tire with the same steering angle; 3) vehicle steering is controlled by the front wheels only.
The vehicle kinematics model has two inputs: the vehicle velocity v provided by the wheel speed sensor and the front-wheel steering angle δ_f provided by the steering-wheel angle sensor. With the vehicle coordinate system of the previous instant as the reference frame, the vehicle kinematic model at the current instant is expressed as:

$$v_x = v \cos\Delta\theta$$
$$v_y = v \sin\Delta\theta$$
$$\omega = \frac{v \tan\delta_f}{L}$$

where v_x and v_y denote the velocities of the autonomous vehicle along the X and Y axes of the reference frame, respectively.
As shown in fig. 4, the architecture diagram of the laser SLAM algorithm contains three modules: the system comprises a laser odometer module, a laser loop detection module and a pose optimization module.
1. Laser odometer module
The laser odometer module comprises four parts: point cloud distortion correction, point cloud feature extraction, local map construction and frame-to-map matching.
(1) Point cloud distortion correction
The point cloud distortion must be corrected to ensure the accuracy of point cloud matching. The motion distortion of the laser point cloud is corrected by linear interpolation under a constant-velocity assumption. The undistorted point cloud is used for the subsequent local smoothness computation and point cloud matching.
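A sketch of per-point distortion correction under the constant-velocity assumption: each point is re-expressed at the scan start by interpolating the predicted planar motion with the fraction of the scan period elapsed at that point's timestamp. Names, array layouts, and the choice of scan start as the reference instant are assumptions.

```python
import numpy as np

def undistort_scan(points, rel_times, dx, dy, dtheta):
    """Correct lidar motion distortion by linear interpolation.

    points:    (N, 3) raw points in the lidar frame.
    rel_times: (N,) values in [0, 1], fraction of the scan period per point.
    (dx, dy, dtheta): predicted planar vehicle motion over the full scan."""
    out = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, rel_times)):
        th = s * dtheta                                  # interpolated yaw
        c, sn = np.cos(th), np.sin(th)
        R = np.array([[c, -sn, 0.0], [sn, c, 0.0], [0.0, 0.0, 1.0]])
        t = np.array([s * dx, s * dy, 0.0])              # interpolated translation
        out[i] = R @ p + t                               # re-express at scan start
    return out
```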
(2) Point cloud feature extraction
The undistorted point cloud is partitioned into scan lines using its angle information. For the points on each laser scan line, the local smoothness is computed as:

$$s_j = \frac{1}{|A| \cdot \lVert x_j \rVert} \left\lVert \sum_{k \in A,\, k \neq j} \left(x_j - x_k\right) \right\rVert$$

where s_j denotes the local smoothness of point j, x_j denotes the point cloud measurement, and A denotes the neighborhood point set.
The local smoothness threshold in this example is set to 0.1. The point cloud smoothness values are sorted, and two types of feature points are extracted from the local smoothness and the neighborhood point distribution: 1) corner points: local smoothness greater than the threshold, with no abrupt change among the neighborhood points; 2) plane points: local smoothness smaller than the threshold, with no abrupt change among the neighborhood points. Local smoothness is computed over a dual neighborhood, and corner and plane-point features are extracted only where both neighborhoods satisfy the threshold. This dual-neighborhood feature extraction avoids the influence of neighborhood size on local smoothness and improves the accuracy and robustness of feature extraction.
To distribute the feature point cloud uniformly, each scan line is divided equally into four independent regions, and each region contributes at most 20 corner points and 40 plane points, forming the corner point set and the plane point set.
The following categories of points should be avoided when selecting feature points: 1) points that may be occluded; 2) points in the neighborhood of already selected points; 3) plane points where the laser beam is nearly parallel to the surface.
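A sketch of the smoothness computation and threshold classification on one scan line. The threshold 0.1 follows the example in the text; the concrete neighborhood half-widths, and the reading of the dual-neighborhood strategy as "both neighborhood sizes must agree," are assumptions.

```python
import numpy as np

def local_smoothness(line_pts, j, half_width):
    """Smoothness of point j on a scan line:
    s_j = || sum_{k in A, k != j} (x_j - x_k) || / (|A| * ||x_j||)."""
    lo, hi = max(0, j - half_width), min(len(line_pts), j + half_width + 1)
    neigh = [line_pts[k] for k in range(lo, hi) if k != j]
    diff = np.sum([line_pts[j] - x for x in neigh], axis=0)
    return np.linalg.norm(diff) / (len(neigh) * np.linalg.norm(line_pts[j]))

def classify_features(line_pts, threshold=0.1, widths=(5, 10)):
    """Dual-neighborhood extraction: keep a point only if both neighborhood
    sizes agree on its classification."""
    corners, planes = [], []
    for j in range(len(line_pts)):
        s = [local_smoothness(line_pts, j, w) for w in widths]
        if all(v > threshold for v in s):
            corners.append(j)          # high smoothness value -> corner point
        elif all(v < threshold for v in s):
            planes.append(j)           # low smoothness value -> plane point
    return corners, planes
```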
(3) Local map construction
The local point cloud map is used to determine accurate feature correspondences. Based on the previously optimized vehicle poses, a dense local point cloud map is built from a fixed number of historical laser frames: the corner features and plane-point features of the period (j − m, …, j − 1) are transformed, using the relative poses of the different instants, into the lidar coordinate system of keyframe k, yielding the corner local map and the plane-point local map, where m is an even number.
(4) Frame-to-map matching
Based on the constructed local map and the initial estimate of the vehicle pose, the laser odometry residual is built by matching the frame against the local map. The corner and plane feature point sets of the current instant are projected according to the relative pose relation, i.e., the feature point clouds are transformed into the lidar coordinate system of instant k. kd-trees are built over the two types of local-map feature point clouds to find the feature line corresponding to each corner point and the feature plane corresponding to each plane point: 1) point-to-line ICP: the kd-tree quickly retrieves the two closest points of each corner point; a line is constructed through them and the coordinates of the foot of the perpendicular from the point to the line are computed; 2) point-to-plane ICP: the kd-tree quickly retrieves the three closest points of each plane point; a plane is constructed through them and the coordinates of the foot of the perpendicular from the point to the plane are computed.
For the feature points in the laser point cloud of instant j, projection into the lidar coordinate system of instant k gives:

$$\tilde{p}_{l_k} = T_{l_k l_j}\, p_{l_j}$$

where subscript l denotes the lidar coordinate system, p_(l_j) denotes the three-dimensional coordinates of the laser feature point of instant j in its lidar frame, and p̃_(l_k) denotes the theoretical value of that feature point transformed into the lidar frame of instant k. T denotes the transformation matrix between the two coordinate systems, comprising a rotation matrix and a translation vector, i.e.:

$$T = \begin{bmatrix} R & p \\ 0^{T} & 1 \end{bmatrix}$$

In three-dimensional coordinate form:

$$\tilde{p}_{l_k} = R_{l_k l_j}\, p_{l_j} + p_{l_k l_j}$$
and constructing a laser odometer residual error by constraining the measurement value of the laser radar at the same moment. Laser odometer residual r expressed by point-to-line and point-to-plane distancesl
Figure BDA0002667517300000156
Wherein the content of the first and second substances,
Figure BDA0002667517300000157
and the method is used for converting the laser measuring point at the time j into a projection point in a laser radar coordinate system at the time k.
Figure BDA0002667517300000158
Representing the corresponding point of the projection point in the lidar coordinate system at time k.
Namely:
Figure BDA0002667517300000159
wherein c represents the characteristic point cloud corresponding relation determined by matching the frame and the local map.
The Jacobian matrix J_l is constructed by taking the partial derivatives of the laser odometry residual with respect to the system state:

$$J_l = \frac{\partial r_l}{\partial X}$$
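The correspondence search and residual stacking can be sketched as follows, using scipy's cKDTree for the kd-tree; the explicit foot-of-perpendicular helpers and all function names are illustrative, and inputs are assumed to be (N, 3) NumPy arrays already expressed in the frame of instant k.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_line_foot(p, a, b):
    """Foot of the perpendicular from p onto the line through a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    return a + np.dot(p - a, d) * d

def point_to_plane_foot(p, a, b, c):
    """Foot of the perpendicular from p onto the plane through a, b, c."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    return p - np.dot(p - a, n) * n

def odometry_residuals(corners_k, proj_corners, planes_k, proj_planes):
    """Stack point-to-line and point-to-plane residuals r_l = p_tilde - p_hat."""
    res = []
    tree_c, tree_p = cKDTree(corners_k), cKDTree(planes_k)
    for p in proj_corners:
        _, idx = tree_c.query(p, k=2)             # two closest map corner points
        res.append(p - point_to_line_foot(p, corners_k[idx[0]], corners_k[idx[1]]))
    for p in proj_planes:
        _, idx = tree_p.query(p, k=3)             # three closest map plane points
        a, b, c = (planes_k[i] for i in idx)
        res.append(p - point_to_plane_foot(p, a, b, c))
    return np.concatenate(res)
```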
2. laser loop detection module
The laser loop detection module comprises four parts: descriptor construction, similarity calculation, feature point verification and loop frame matching.
(1) Descriptor construction
The division of the coordinate axes uses principal component analysis. After the reference frame is obtained, the boundaries are aligned with the coordinate axes, giving the corresponding division axes. The global descriptor based on point cloud intensity and local smoothness consists of m non-overlapping regions: with the lidar at the center, an outer sphere radius and an inner sphere radius are defined, the sphere is split into an upper part and a lower part, and each hemisphere is divided into four regions, giving m = 16. Each region is described by a histogram: the intensity and local smoothness of the k laser points falling in the region are accumulated into a histogram with n bins.
(2) Similarity calculation
With the initial estimate of the vehicle pose as the center point, the candidate area is set to 40 × 10 m. Subject to the time-interval condition, descriptor matching finds the laser frame that satisfies the similarity threshold with the lowest similarity score. The similarity measure between the i-th region of descriptor A and the i-th region of descriptor B is determined by the chi-square test, namely:

$$s_i = \chi^2\!\left(h_{A_i}, h_{B_i}\right) = \sum_{k=1}^{n} \frac{\left(h_{A_i}(k) - h_{B_i}(k)\right)^2}{h_{A_i}(k) + h_{B_i}(k)}$$
the similarity of the two descriptors is calculated using the region similarity, as follows:
Figure BDA0002667517300000162
and (3) determining ambiguity of the coordinate axes by using a principal component analysis method, and realizing correct alignment of the coordinate axes through classification discussion to eliminate interference caused by the process. There are 4 cases in total according to the division of the x-axis and z-axis. When the similarity of a and B is calculated using the above formula, four different values occur, and the minimum value is taken as the descriptor matching result. Namely:
SAB=min{SAB1,SAB2,SAB3,SAB4}
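A sketch of the descriptor comparison: per-region n-bin histograms compared with the chi-square statistic, aggregated over the m = 16 regions, with the minimum taken over the four axis-orientation hypotheses. The summation used for aggregation and the small epsilon guarding empty bins are assumptions.

```python
import numpy as np

def chi_square(h_a, h_b, eps=1e-12):
    """Chi-square distance between two histograms (smaller = more similar)."""
    return np.sum((h_a - h_b) ** 2 / (h_a + h_b + eps))

def descriptor_similarity(desc_a, desc_b):
    """desc_*: (m, n) arrays, one n-bin histogram per region.
    Region scores aggregated by summation (an assumption)."""
    return sum(chi_square(a, b) for a, b in zip(desc_a, desc_b))

def matching_score(desc_a, desc_b_variants):
    """Minimum over the four axis-orientation hypotheses:
    S_AB = min{S_AB1, S_AB2, S_AB3, S_AB4}."""
    return min(descriptor_similarity(desc_a, v) for v in desc_b_variants)
```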
(3) feature point verification
The laser frame with the highest similarity that satisfies the threshold is selected as the loop candidate frame. To ensure the accuracy of loop detection, the feature points are used for further verification. Based on the vehicle pose corresponding to the loop candidate frame, k historical frames are retrieved with a k-nearest-neighbor search, and their feature point clouds are projected into the lidar coordinate system of the loop candidate frame, building a local map. The feature point cloud of the current frame is projected onto the loop candidate frame using the initial estimate of the vehicle pose. A k-nearest-neighbor search finds the line corresponding to each corner point and the plane corresponding to each plane point, and the point-to-line and point-to-plane distances are computed. Points satisfying a minimum distance threshold are taken as matched points; their number is counted and its ratio to the number of feature points is computed. If the ratio satisfies the set threshold, the loop detection is correct; otherwise the loop detection is wrong and no laser loop residual is constructed.
(4) Loop frame matching
For the laser feature point cloud of instant j, its projection into the lidar coordinate system of instant o (the instant corresponding to the loop frame) is:

$$\tilde{p}_{l_o} = T_{l_o l_j}\, p_{l_j}$$

where p_(l_j) denotes the three-dimensional coordinates of the laser measurement point of instant j in its lidar frame and p̃_(l_o) denotes its theoretical value transformed into the lidar coordinate system of instant o. In three-dimensional coordinate form:

$$\tilde{p}_{l_o} = R_{l_o l_j}\, p_{l_j} + p_{l_o l_j}$$
for corner features, two closest points of each feature point are quickly found by using a kd-Tree algorithm. Constructing a straight line using the closest points and calculating the coordinates of the drop foot from point to straight line
Figure BDA0002667517300000175
For plane point features, a kd tree algorithm is used for quickly finding three closest points of each feature point, a plane is constructed by using the closest points, and the foot coordinate from the point to the plane is calculated
Figure BDA0002667517300000176
For corner points, the laser loop residual is expressed by the point-to-line distance; for plane points, by the point-to-plane distance. The laser loop residual r_o is constructed by constraining the lidar measurements of the same instant:

$$r_o(d, X) = \tilde{p}_{l_o} - \hat{p}_{l_o}$$

where p̃_(l_o) denotes the projection of the laser measurement point of instant j into the lidar coordinate system of instant o, p̂_(l_o) denotes the point corresponding to the projected point in the lidar frame of instant o, and d denotes the feature correspondence determined by frame-to-local-map matching.
The Jacobian matrix J_o is constructed by differentiating the laser loop residual with respect to the system state:

$$J_o = \frac{\partial r_o}{\partial X}$$
3. pose optimization module
The pose optimization module constructs a cost function from the laser odometry residual and the laser loop residual and performs joint nonlinear optimization with a gradient descent method to obtain the laser-SLAM-based optimized vehicle pose. The Jacobian of the cost function required by the gradient descent method has been derived and explained above and is not repeated here. The jointly optimized pose of the autonomous vehicle is passed to the extended Kalman filter module as the measurement.
The maximum a posteriori estimate of the system state X to be optimized is obtained by minimizing the system cost function. The cost function of the structure-shared multi-sensor fusion positioning system for autonomous vehicles is constructed as:

$$X^{*} = \arg\min_{X} \left\{ \sum_{c} \left\lVert r_l(c, X) \right\rVert^{2}_{\Sigma_L} + \sum_{d} \left\lVert r_o(d, X) \right\rVert^{2}_{\Sigma_L} \right\}$$

where both residuals are weighted by Mahalanobis distance: r_l(c, X) denotes the laser odometry residual, r_o(d, X) the laser loop residual, and the covariance matrix Σ_L is determined by the lidar accuracy.
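The joint objective can be sketched as a Mahalanobis-weighted least-squares cost with a plain gradient step; the callables supplying per-correspondence residuals and Jacobians, the learning rate, and the vector treatment of the state are illustrative assumptions.

```python
import numpy as np

def joint_cost(x, odom_terms, loop_terms, Sigma_L_inv):
    """Sum of squared Mahalanobis norms of r_l(c, X) and r_o(d, X)."""
    total = 0.0
    for r in odom_terms + loop_terms:
        e = r(x)                      # residual vector for one correspondence
        total += e @ Sigma_L_inv @ e
    return total

def gradient_step(x, terms, jacobians, Sigma_L_inv, lr=1e-3):
    """One gradient-descent step: grad = 2 * sum_i J_i^T Sigma^-1 r_i."""
    grad = np.zeros_like(x)
    for r, J in zip(terms, jacobians):
        grad += 2.0 * J(x).T @ Sigma_L_inv @ r(x)
    return x - lr * grad
```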
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (5)

1. A structure-shared multi-sensor fusion positioning system for an autonomous vehicle, characterized by comprising:
the vehicle kinematic model module, which, before a new frame of laser point cloud data arrives, constructs a vehicle kinematic model from wheel speed sensor and steering-wheel angle sensor data and predicts the vehicle motion state from it;
the laser SLAM algorithm module, divided into a front-end matching part and a back-end optimization part. Front-end matching: based on the predicted vehicle state, correct the motion distortion of the laser point cloud, compute the local smoothness of the point cloud from neighborhood point information, extract corner and plane-point features with a dual-neighborhood strategy, and match them against the previous point cloud frame. Back-end optimization: construct a global descriptor from point cloud intensity and local smoothness, compute scan similarity with the chi-square test and verify it with feature points, optimize the vehicle pose by frame-to-local-map matching, and build a pose-graph optimization from the pose transforms between the current frame and historical loop frames to reduce accumulated vehicle pose error;
the extended Kalman filter module, which fuses the predicted vehicle pose provided by the vehicle kinematics model with the optimized vehicle pose provided by the laser SLAM algorithm using extended Kalman filtering, so as to obtain accurate vehicle pose information; wherein the vehicle kinematics model module constructs the vehicle kinematic model and predicts the vehicle motion state in the following steps:
step 1: the wheel speed sensor directly provides the vehicle longitudinal velocity v, and the steering-wheel angle sensor provides the steering-wheel angle δ_s required to compute the vehicle yaw rate ω;
step 2: from the velocity v, the steering-wheel angle δ_s, the steering gear angular transmission ratio η, and the wheelbase L, compute the vehicle yaw rate ω:

$$\omega = \frac{v \tan(\delta_s / \eta)}{L}$$
step 3: taking the vehicle coordinate system of the previous instant as the reference frame, and with the vehicle travelling in a plane between the two instants, predict the pose increment between the instants with the vehicle kinematic model:

$$\Delta\theta_{k+1} = \Delta\theta_k + \omega_k\,\delta t$$
$$\Delta x_{k+1} = \Delta x_k + v_k \cos(\Delta\theta_{k+1})\,\delta t$$
$$\Delta y_{k+1} = \Delta y_k + v_k \sin(\Delta\theta_{k+1})\,\delta t$$
$$\Delta z_{k+1} = 0$$

where δt denotes the time interval between adjacent low-level data samples, Δθ denotes the yaw-angle increment accumulated since the instant of the last laser frame, and Δz = 0 expresses that the vehicle remains in planar motion between adjacent instants, i.e., with the vehicle frame of the previous instant as reference there is no motion along the Z axis;
step 4: based on the optimized vehicle pose (R_wb^(j-1), p_wb^(j-1)) at the previous instant, compute the pose (R_wb^j, p_wb^j) of the autonomous vehicle at instant j in the world coordinate system:

$$p_{wb}^{j} = p_{wb}^{j-1} + R_{wb}^{j-1}\,\Delta p$$
$$R_{wb}^{j} = R_{wb}^{j-1}\,\Delta R$$

where Δp = [Δx, Δy, Δz]^T and ΔR is the planar rotation matrix built from Δθ.
2. The structure-shared multi-sensor fusion positioning system for an autonomous vehicle according to claim 1, characterized in that the planar rotation matrix ΔR in step 4 is computed as:

$$\Delta R = \begin{bmatrix} \cos\Delta\theta & -\sin\Delta\theta & 0 \\ \sin\Delta\theta & \cos\Delta\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
3. The structure-shared multi-sensor fusion positioning system for an autonomous vehicle according to claim 1 or 2, characterized in that the laser SLAM algorithm module comprises a front-end matching module and a back-end optimization module, wherein the front-end matching module comprises:
the point cloud distortion correction module receives the latest laser point cloud data and corrects the point cloud motion distortion according to the vehicle prediction state;
the point cloud feature extraction module is used for calculating the local smoothness of the point cloud according to the neighborhood point information and extracting the characteristics of the angular points and the plane points by adopting a double neighborhood strategy;
the inter-frame matching module is used for acquiring an optimized pose of the vehicle by matching the current frame point cloud with the previous frame point cloud based on the initial estimation of the pose of the vehicle;
the back-end optimization module comprises:
the descriptor construction module is used for constructing a global descriptor by utilizing the point cloud intensity and the local smoothness;
the similarity calculation module is used for calculating scanning similarity by using chi-square test;
the feature point verification module, which verifies the correctness of the loop using the number of matched feature points;
the loop-frame matching module, which builds a local map from the poses corresponding to the loop frames and matches the frame against the local map to obtain the optimized vehicle pose; and
the pose-graph optimization module, which builds the vehicle pose-graph optimization from the pose transform between the current frame and its corresponding loop frame and the pose transforms between adjacent frames within the loop, reducing the accumulated vehicle pose error.
4. The structure-shared multi-sensor fusion positioning system for an autonomous vehicle according to claim 3, characterized in that the pose-graph optimization constructs a cost function from the laser odometry residual and the laser loop residual and performs joint nonlinear optimization with a gradient descent method to obtain the laser-SLAM-based optimized vehicle pose.
5. The structure-shared multi-sensor fusion positioning system for an autonomous vehicle according to any of claims 1 to 3, characterized in that the algorithm in the extended Kalman filter module is divided into two stages. Prediction stage: predict the vehicle pose and covariance matrix at the current instant from the optimized state and system covariance matrix of the previous instant, namely:

$$\hat{x}_j = g(x_{j-1}, u_j)$$
$$\hat{\Sigma}_j = G\,\Sigma_{j-1}\,G^{T} + M_j$$
where Σ denotes the system covariance matrix and M_j the system noise covariance matrix; g(·) denotes the state equation and G the state-transition matrix, obtained by differentiating g(·) with respect to the state;
where the inverse of the right BCH-approximation Jacobian J_r^(-1) is calculated as:

$$J_r^{-1} = \frac{\theta}{2}\cot\frac{\theta}{2}\,I + \left(1 - \frac{\theta}{2}\cot\frac{\theta}{2}\right) a a^{T} + \frac{\theta}{2}\,a^{\wedge}$$

where θa denotes the rotation vector corresponding to the rotation matrix R, θ the rotation angle, and a the rotation axis; ∧ denotes the skew-symmetric (hat) operator and ∨ its inverse (vee) operator;
update stage: compute the Kalman gain and update the current vehicle pose and covariance matrix from the measured and predicted values, namely:

$$K_j = \hat{\Sigma}_j H_j^{T}\left(H_j \hat{\Sigma}_j H_j^{T} + N_j\right)^{-1}$$
$$x_j = \hat{x}_j + K_j\left(z_j - H_j \hat{x}_j\right)$$
$$\Sigma_j = \left(I - K_j H_j\right)\hat{\Sigma}_j$$

where K_j denotes the Kalman gain, N_j the observation noise covariance matrix, and H_j the system observation matrix, a 6 × 6 identity matrix.
CN202010923469.8A 2020-09-04 2020-09-04 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle Active CN112083725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010923469.8A CN112083725B (en) 2020-09-04 2020-09-04 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010923469.8A CN112083725B (en) 2020-09-04 2020-09-04 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle

Publications (2)

Publication Number Publication Date
CN112083725A CN112083725A (en) 2020-12-15
CN112083725B true CN112083725B (en) 2021-11-30

Family

ID=73732396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010923469.8A Active CN112083725B (en) 2020-09-04 2020-09-04 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN112083725B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712549A (en) * 2020-12-31 2021-04-27 上海商汤临港智能科技有限公司 Data processing method, data processing device, electronic equipment and storage medium
CN113781569B (en) * 2021-01-04 2024-04-05 北京京东乾石科技有限公司 Loop detection method and device
CN112596089B (en) * 2021-03-02 2021-05-18 腾讯科技(深圳)有限公司 Fusion positioning method and device, electronic equipment and storage medium
CN113267178B (en) * 2021-03-25 2023-07-07 浙江大学 Model pose measurement system and method based on multi-sensor fusion
CN113063414A (en) * 2021-03-27 2021-07-02 上海智能新能源汽车科创功能平台有限公司 Vehicle dynamics pre-integration construction method for visual inertia SLAM
CN113299158B (en) * 2021-05-10 2023-04-18 易显智能科技有限责任公司 Steady teaching method and system for driving
CN113758491B (en) * 2021-08-05 2024-02-23 重庆长安汽车股份有限公司 Relative positioning method and system based on multi-sensor fusion unmanned vehicle and vehicle
CN113654555A (en) * 2021-09-14 2021-11-16 上海智驾汽车科技有限公司 Automatic driving vehicle high-precision positioning method based on multi-sensor data fusion
CN113804183B (en) * 2021-09-17 2023-12-22 广东汇天航空航天科技有限公司 Real-time topographic mapping method and system
CN113933858A (en) * 2021-09-28 2022-01-14 中国科学院深圳先进技术研究院 Abnormal detection method and device of positioning sensor and terminal equipment
CN114120252B (en) * 2021-10-21 2023-09-01 阿波罗智能技术(北京)有限公司 Automatic driving vehicle state identification method and device, electronic equipment and vehicle
CN114170320B (en) * 2021-10-29 2022-10-28 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion
CN114742884B (en) * 2022-06-09 2022-11-22 杭州迦智科技有限公司 Texture-based mapping, mileage calculation and positioning method and system
CN117115759B (en) * 2023-04-12 2024-04-09 盐城工学院 Road side traffic target detection system and method based on category guidance

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105172793B (en) * 2015-09-25 2017-08-22 广州小鹏汽车科技有限公司 The pose evaluation method of autonomous driving vehicle
CN106272423A (en) * 2016-08-31 2017-01-04 哈尔滨工业大学深圳研究生院 A kind of multirobot for large scale environment works in coordination with the method for drawing and location
CN106840179B (en) * 2017-03-07 2019-12-10 中国科学院合肥物质科学研究院 Intelligent vehicle positioning method based on multi-sensor information fusion
CN109781120B (en) * 2019-01-25 2023-04-14 长安大学 Vehicle combination positioning method based on synchronous positioning composition
CN110320912A (en) * 2019-07-10 2019-10-11 苏州欧博智慧机器人有限公司 The AGV positioning navigation device and method that laser is merged with vision SLAM
CN110967011B (en) * 2019-12-25 2022-06-28 苏州智加科技有限公司 Positioning method, device, equipment and storage medium
CN111240331A (en) * 2020-01-17 2020-06-05 仲恺农业工程学院 Intelligent trolley positioning and navigation method and system based on laser radar and odometer SLAM
CN111220154A (en) * 2020-01-22 2020-06-02 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and medium
CN111540013B (en) * 2020-04-22 2023-08-22 深圳市启灵图像科技有限公司 Indoor AGV trolley positioning method based on multi-camera visual slam

Also Published As

Publication number Publication date
CN112083725A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
CN112083725B (en) Structure-shared multi-sensor fusion positioning system for automatic driving vehicle
CN111707272B (en) Underground garage automatic driving laser positioning system
CN113945206B (en) Positioning method and device based on multi-sensor fusion
CN112083726B (en) Park-oriented automatic driving double-filter fusion positioning system
CN111272165B (en) Intelligent vehicle positioning method based on characteristic point calibration
KR102581263B1 (en) Method, apparatus, computing device and computer-readable storage medium for positioning
CN113819914A (en) Map construction method and device
CN112347840A (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN104180818B (en) A kind of monocular vision mileage calculation device
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN114018248B (en) Mileage metering method and image building method integrating code wheel and laser radar
CN113223161B (en) Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling
CN113819905A (en) Multi-sensor fusion-based odometer method and device
CN114966734A (en) Bidirectional depth vision inertial pose estimation method combined with multi-line laser radar
CN113188557B (en) Visual inertial integrated navigation method integrating semantic features
CN113252051A (en) Map construction method and device
CN113129377B (en) Three-dimensional laser radar rapid robust SLAM method and device
Parra-Tsunekawa et al. A kalman-filtering-based approach for improving terrain mapping in off-road autonomous vehicles
Campa et al. A comparison of pose estimation algorithms for machine vision based aerial refueling for UAVs
CN112923934A (en) Laser SLAM technology suitable for combining inertial navigation in unstructured scene
CN116202509A (en) Passable map generation method for indoor multi-layer building
Emter et al. Simultaneous localization and mapping for exploration with stochastic cloning EKF
CN115562076A (en) Simulation system, method and storage medium for unmanned mine car
CN114690230A (en) Automatic driving vehicle navigation method based on visual inertia SLAM
CN110749327B (en) Vehicle navigation method in cooperative environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant